How to determine breakpoints locally in a continuous piecewise linear regression, so that the identified breakpoints are not affected by series truncation?

Asked by Ken C (Guest)
I am trying to find a method to do piecewise linear curve-fitting where the breakpoints are determined by some purely local method (i.e., so that breakpoint identification is not affected by a distal truncation of the data set).

The idea is that if a breakpoint in a time-series data set represents a property of the real world, its identification should not be affected by something that happens far in the future or happened far in the past.

Right now I am using some standard Python libraries, and I am finding that breakpoint identification is sensitive to truncation of the data series.
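
To make the kind of sensitivity I mean concrete, here is a minimal sketch (pwlf is just one illustrative library choice; the synthetic data and segment counts are made up for the example):

```python
import numpy as np
import pwlf  # illustrative choice of piecewise-linear fitting library

# Synthetic continuous piecewise-linear series with true breaks at x = 30 and x = 70
rng = np.random.default_rng(0)
x = np.arange(100, dtype=float)
y = np.piecewise(
    x,
    [x < 30, (x >= 30) & (x < 70), x >= 70],
    [lambda t: t,
     lambda t: 30 + 0.2 * (t - 30),
     lambda t: 38 - 1.5 * (t - 70)],
)
y += rng.normal(0.0, 1.0, x.size)

# Global least-squares fit on the full series (3 segments)
breaks_full = pwlf.PiecewiseLinFit(x, y).fit(3)

# The same kind of fit on a truncated series (only the first break remains)
breaks_trunc = pwlf.PiecewiseLinFit(x[:60], y[:60]).fit(2)

print("full series breakpoints:     ", breaks_full)
print("truncated series breakpoints:", breaks_trunc)
```

Because the breakpoints are chosen by a global least-squares criterion, the estimate of the early breakpoint can shift when the tail of the series is removed, even though the early data are unchanged.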

I can imagine a method along these lines: take a moving window of an a priori specified bandwidth, fit both a straight line and a continuous two-piece line within that window, apply some procedure to decide when the two-piece fit is sufficiently better than the straight line to declare a breakpoint, and then filter the breakpoints this produces.
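
Concretely, I imagine something like the sketch below (the function name, window width, threshold, and filtering rule are placeholders of my own, not an established method):

```python
import numpy as np

def _sse(A, y):
    """Sum of squared residuals of the least-squares fit y ~ A."""
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return float(r @ r)

def local_breakpoints(x, y, width=41, ratio=2.0, min_sep=10.0):
    """Sliding-window sketch; all default values are arbitrary placeholders.

    In each window, compare a straight-line fit against the best continuous
    two-piece ('hinge') fit.  A window proposes a breakpoint when the hinge
    fit reduces the SSE by at least `ratio`; proposals closer than `min_sep`
    are thinned by keeping the strongest.
    """
    half = width // 2
    proposals = []  # (breakpoint location, SSE improvement ratio)
    for c in range(half, len(x) - half):
        xs, ys = x[c - half:c + half + 1], y[c - half:c + half + 1]
        ones = np.ones_like(xs)

        # One-piece fit: ordinary least squares on [1, x]
        sse_line = _sse(np.column_stack([ones, xs]), ys)

        # Two-piece continuous fit: the hinge term max(0, x - b) lets the
        # slope change at b while the two pieces stay joined there.
        best_sse, best_b = np.inf, None
        for b in xs[2:-2]:  # keep a few points on each side of the break
            A = np.column_stack([ones, xs, np.maximum(0.0, xs - b)])
            s = _sse(A, ys)
            if s < best_sse:
                best_sse, best_b = s, b

        score = sse_line / max(best_sse, 1e-12)
        if score >= ratio:
            proposals.append((best_b, score))

    # Adjacent windows see the same break, so keep only the strongest
    # proposal within each `min_sep` neighbourhood.
    kept = []
    for b, score in sorted(proposals, key=lambda p: -p[1]):
        if all(abs(b - k) >= min_sep for k in kept):
            kept.append(b)
    return sorted(kept)
```

Because each window sees only `width` points, truncating the series outside a window cannot change what that window proposes, which is the locality property I am after.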

However, I am doing this for an academic study, and I would ideally like to cite someone else's method rather than make up a method of my own. Any suggestions? [I am working in Python.]