Karya, built on Mon Jul 24 11:39:07 PDT 2017 (patch 33511aca01257b76b88de7c7a2763b7a965c084e)

Safe Haskell: None




A Stream is a collection of LEvent.LEvents which is hopefully sorted in time order.



data Stream a Source #

A list seems inefficient, since each call appends to the stream. A block call will then append a bunch of events, which must be copied, and then recopied as part of the larger chunk for the next block call up. It's possible the head of the stream is also copied every time something is appended to it, but I'm not sure about that. It could also be that the number of events is low enough that none of this inefficiency actually matters, but I'm not sure; I'd need to profile to tell.

TODO one possibility is a MergeList:

data MergeList a = Chunk [a] | Merge (MergeList a) (MergeList a)

This way I don't need to copy large chunks multiple times. Also, if I make sure there is no data dependency between the merge branches, I can evaluate them in parallel.
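A minimal sketch of that idea, assuming only the declaration above: appending is O(1) because it just builds a Merge node, and each element is consed exactly once when the whole thing is finally flattened. The Semigroup instance and flatten are illustrative additions, not part of the source.

```haskell
-- Sketch of the proposed MergeList.  Chunks are joined in O(1) by
-- building a Merge node; the copying happens once, in 'flatten'.
data MergeList a = Chunk [a] | Merge (MergeList a) (MergeList a)

instance Semigroup (MergeList a) where
    (<>) = Merge

-- Flatten with an accumulator so each element is consed exactly once.
flatten :: MergeList a -> [a]
flatten ml = go ml []
  where
    go (Chunk xs) rest = xs ++ rest
    go (Merge a b) rest = go a (go b rest)
```

Since the two branches of a Merge have no data dependency on each other, the two recursive calls in flatten could in principle be evaluated in parallel, as the note above suggests.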

Each call generates a chunk [Event], and the chunks are then joined with (<>). This means every cons is copied once, but I think this is hard to avoid if I want to merge streams.

TODO the Functor and Traversable instances can destroy the order, but this isn't checked. Maybe I shouldn't have them?

Currently I don't actually track order, and just trust the callers.
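To make the functions below concrete, here is a toy model of the type. This is an assumption for illustration only: the real Stream and LEvent in Karya carry more structure (e.g. Log.Msg rather than String), and the constructors may differ.

```haskell
-- Toy model: a Stream interleaves events with log messages.
data LEvent a = Event a | Log String
    deriving (Show, Eq)

newtype Stream a = Stream [LEvent a]
    deriving (Show, Eq)

-- Callers promise the events are in time order; nothing checks it,
-- which matches the note above about trusting the callers.
from_sorted_events :: [a] -> Stream a
from_sorted_events = Stream . map Event

-- Split a stream back into its events and its logs, in order.
partition :: Stream a -> ([a], [String])
partition (Stream les) = foldr split ([], []) les
  where
    split (Event a) (es, ls) = (a : es, ls)
    split (Log m) (es, ls) = (es, m : ls)
```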


Functor Stream # 


fmap :: (a -> b) -> Stream a -> Stream b #

(<$) :: a -> Stream b -> Stream a #

Show InstrumentCalls # 
Monoid NoteDeriver # 
Show a => Show (Stream a) # 


showsPrec :: Int -> Stream a -> ShowS #

show :: Stream a -> String #

showList :: [Stream a] -> ShowS #

Show (CallMaps d) # 


showsPrec :: Int -> CallMaps d -> ShowS #

show :: CallMaps d -> String #

showList :: [CallMaps d] -> ShowS #

Monoid (Stream Signal.Control) #

Signal.Control streams don't need sorted order.

Monoid (Stream PSignal) # 
Monoid (Stream Score.Event) # 
DeepSeq.NFData a => DeepSeq.NFData (Stream a) # 


rnf :: Stream a -> () #

Pretty.Pretty a => Pretty.Pretty (Stream a) # 


from_sorted_events :: [a] -> Stream a Source #

Promise that the stream is really sorted.


partition :: Stream a -> ([a], [Log.Msg]) Source #

events_of :: Stream a -> [a] Source #


take_while :: (a -> Bool) -> Stream a -> Stream a Source #

drop_while :: (a -> Bool) -> Stream a -> Stream a Source #

cat_maybes :: Stream (Maybe a) -> Stream a Source #

catMaybes for Stream.
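A sketch of what cat_maybes plausibly does, over a toy Stream-over-LEvent model (an assumed representation, not the real one): Nothing events are dropped, while logs pass through untouched.

```haskell
-- Toy model of the stream type, for illustration.
data LEvent a = Event a | Log String deriving (Show, Eq)
newtype Stream a = Stream [LEvent a] deriving (Show, Eq)

-- Keep Just events and all logs; drop Nothing events.
cat_maybes :: Stream (Maybe a) -> Stream a
cat_maybes (Stream les) = Stream (concatMap keep les)
  where
    keep (Event (Just a)) = [Event a]
    keep (Event Nothing) = []
    keep (Log m) = [Log m]
```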

merge_asc_lists :: [Stream Score.Event] -> Stream Score.Event Source #

Merge sorted lists of events. If the lists themselves are also sorted, I can produce output without scanning the entire input, so this should be more efficient than merge for a large input list.

This assumes all the streams are sorted. I could check first, but that would destroy the laziness. Instead, let it be out of order, and Convert will complain about it.
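The laziness argument can be sketched on plain lists. The key function and list-of-lists framing here are assumptions for illustration; the real function works on Stream Score.Event and compares event start times. The point is that when each successive list is known to start at or after the previous one, the head of each list can be emitted before anything later is examined.

```haskell
-- Merge lists that are individually sorted, where the lists themselves
-- are also sorted: each list starts at or after the previous one.
merge_asc_lists :: Ord k => (a -> k) -> [[a]] -> [a]
merge_asc_lists key = foldr go []
  where
    -- Thanks to the ascending-lists assumption, the head of each list
    -- can be emitted without looking at the merged tail at all.
    go [] rest = rest
    go (x : xs) rest = x : merge xs rest
    -- Ordinary two-way merge of sorted lists.
    merge [] ys = ys
    merge xs [] = xs
    merge (x : xs) (y : ys)
        | key x <= key y = x : merge xs (y : ys)
        | otherwise = y : merge (x : xs) ys
```

Because go emits the head before touching its second argument, taking the first few elements forces only the first few input lists, which is what "produce output without scanning the entire input" means above.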


zip :: [a] -> Stream x -> Stream (a, x) Source #

zip_on :: ([a] -> [b]) -> Stream a -> Stream (b, a) Source #

zip3 :: [a] -> [b] -> Stream x -> Stream (a, b, x) Source #

zip3_on :: ([a] -> [b]) -> ([a] -> [c]) -> Stream a -> Stream (b, c, a) Source #

zip4 :: [a] -> [b] -> [c] -> Stream x -> Stream (a, b, c, x) Source #
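The zip family pairs an ordinary list against a stream's events while letting logs pass through, and zip_on derives that list from the stream's own events. A sketch over a toy Stream-over-LEvent model (an assumed representation; zip3, zip3_on, and zip4 generalize the same pattern to more lists):

```haskell
import Prelude hiding (zip)

-- Toy model of the stream type, for illustration.
data LEvent a = Event a | Log String deriving (Show, Eq)
newtype Stream a = Stream [LEvent a] deriving (Show, Eq)

events_of :: Stream a -> [a]
events_of (Stream les) = [a | Event a <- les]

-- Pair the list elements against the events, in order; logs pass through.
zip :: [a] -> Stream x -> Stream (a, x)
zip as0 (Stream les) = Stream (go as0 les)
  where
    go _ [] = []
    go as (Log m : rest) = Log m : go as rest
    go (a : as) (Event x : rest) = Event (a, x) : go as rest
    go [] (Event _ : _) = []

-- Compute the paired list from the stream's own events.
zip_on :: ([a] -> [b]) -> Stream a -> Stream (b, a)
zip_on f stream = zip (f (events_of stream)) stream
```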

misc util