Wednesday, July 17, 2019

2019-07-17: Bathsheba Farrow (Computer Science PhD Student)

My name is Bathsheba Farrow.  I joined Old Dominion University as a PhD student in the fall of 2016.  My PhD advisor is Dr. Sampath Jayarathna. I am currently researching various technologies for reliable data collection in individuals suffering from Post-Traumatic Stress Disorder (PTSD).  I intend to use machine learning algorithms to identify patterns in their physiological data to support rapid, reliable PTSD diagnosis.  However, diagnosis is only one side of the equation.  I also plan to investigate eye movement desensitization and reprocessing, brainwave technology, and other methods that may actually alleviate or eliminate PTSD symptoms.  I am working with partners at Eastern Virginia Medical School (EVMS) to discover more ways technology can be used to diagnose and treat PTSD patients.

In May 2019, I wrote and submitted my first paper related to my PTSD research to the IEEE 20th International Conference on Information Reuse and Integration for Data Science (IRI): "Technological Advancements in Post-Traumatic Stress Disorder Detection: A Survey."  The conference committee accepted the paper in June 2019 as a short paper.  I am currently scheduled to present it at the conference on 30 July 2019.  The paper describes brain structural irregularities and psychophysiological characteristics that can be used to diagnose PTSD.  It identifies technologies and methodologies used in past research to measure symptoms and biomarkers associated with PTSD that have aided, or could aid, in diagnosis.  The paper also describes some of the shortcomings of past research and other technologies that could be utilized in future studies.

While working on my PhD, I also work full-time as a manager of a branch of military, civilian, and contractor personnel within Naval Surface Warfare Center Dahlgren Division Dam Neck Activity (NSWCDD DNA).  I originally started my professional career with internships at the Bonneville Power Administration and Lucent Technologies.  Since 2000, I have worked as a full-time software engineer developing applications for Verizon, the National Aeronautics and Space Administration (NASA), the Defense Logistics Agency (DLA), Space and Naval Warfare (SPAWAR) Systems Command, and Naval Sea Systems Command (NAVSEA).  I have used a number of programming languages and technologies during my career including, but not limited to, Smalltalk, Java, C++, Hibernate, Enterprise Architect, SonarQube, and HP Fortify.

I completed a Master’s degree in Information Technology through Virginia Tech in 2007 and a Bachelor of Science degree in Computer Science at Norfolk State University in 2000.  I also completed other training courses through my employers including, but not limited to, Capability Maturity Model Integration (CMMI), Ethical Hacking, and other Defense Acquisition University courses.

I am originally from the Hampton Roads area.  I have two children, with my oldest beginning her undergraduate computer science degree program in the fall 2019 semester.

--Bathsheba Farrow

Monday, July 15, 2019

2019-07-15: Lab Streaming Layer (LSL) Tutorial for Windows

First of all, I would like to give credit to Matt Gray for going through the major hassle of figuring out the prerequisites and for the awesome documentation provided on how to Install and Use Lab Streaming Layer on Windows.
In this blog, I will guide you through installing the open-source Lab Streaming Layer (LSL) and streaming data (an eye tracking example using the PupilLabs eye tracker) to the NeuroPype Academic Edition. Though a basic version of LSL ships with NeuroPype, you will still need to complete the following prerequisites before installing LSL.
You can find installation instructions for LSL at https://github.com/sccn/labstreaminglayer/blob/master/doc/BUILD.md. The intention of this blog is to provide an easier and more streamlined step-by-step guide for installing LSL and NeuroPype.
LSL is a low-level technology for exchanging time series data between programs and computers.
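To make the idea concrete, below is a minimal sketch of an LSL producer written with pylsl (the Python interface to LSL, which we build later in this tutorial); the stream name, type, and sample values are illustrative only:

# A minimal LSL producer: declares a 2-channel stream and pushes samples.
# Assumes pylsl is available; the stream metadata here is illustrative.
import time
from pylsl import StreamInfo, StreamOutlet

info = StreamInfo(name='ExampleStream', type='Gaze', channel_count=2,
                  nominal_srate=100, channel_format='float32',
                  source_id='example-uid-1234')
outlet = StreamOutlet(info)

while True:
    outlet.push_sample([0.5, 0.5])  # e.g., normalized x/y gaze coordinates
    time.sleep(0.01)                # ~100 samples per second

Any LSL consumer on the network (LabRecorder, NeuroPype, or another pylsl script) can then discover and record this stream.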

Figure: LSL core components
Source: ftp://sccn.ucsd.edu/pub/bcilab/lectures/Demo_1_The_Lab_Streaming_Layer.pdf


Christian A. Kothe, one of the developers of LSL, has a YouTube video in which he explains the structure and function of LSL.
Figure: LSL network overview
Source: ftp://sccn.ucsd.edu/pub/bcilab/lectures/Demo_1_The_Lab_Streaming_Layer.pdf
Installing Dependencies for LSL: LSL needs to be built and installed manually using CMake. We will need a C++ compiler to build LSL; we can use Visual Studio 15 2017. In addition to CMake and Visual Studio, you must install Git, Qt, and Boost prior to installing LSL. Though Qt and Boost are not required for the core liblsl library, they are required for some of the apps used to connect to the actual devices.

Installing Visual Studio 15 2017: Visual Studio can be downloaded and installed from https://visualstudio.microsoft.com/vs/older-downloads/. You must download Visual Studio 2017, since other versions (including the latest, 2019) do not work when building some of the dependencies.  You can select the Community edition, as it is free.


The installation process will ask which additional Workloads you want to install. Select the following Workloads:
        1. .NET desktop development
        2. Desktop development with C++
        3. Universal Windows Platform development

Figure: Workloads that need to be installed additionally
Installing Git: Git is an open-source distributed version control system. We will use Git to download the LSL Git repository. Download Git for Windows from https://git-scm.com/download/win. Continue the installation with the default settings, except feel free to choose your own default text editor (vim, Notepad++, Sublime, etc.) to use with Git. In addition, when you reach the Adjusting your PATH environment page, make sure to choose the Git from the command line and also from 3rd-party software option so that you can execute git commands from the command prompt, Python prompts, and other third-party software.

Installing CMake:
Figure: First interface of CMake Installer
CMake is a program for building/installing other programs onto an OS. 
You can download CMake from https://cmake.org/download/. Choose the cmake-3.14.3-win64-x64.msi file under Binary distributions.
When installing, feel free to choose the default selections, except, when prompted, choose Add CMake to the system PATH for all users.

Installing Qt:
Qt is a framework mostly used to create graphical user interfaces. Some of the LSL apps use it to create the user interfaces that the end user interacts with when connecting to the device. 
The open-source version can be downloaded and installed from https://www.qt.io/download. An executable installer is provided for Qt, so installation should be easy. 

You will be asked to enter the details of a Qt account in the install wizard. You can either create an account or log in if you already have one. 
Figure: Qt Account creation step

During the installation process, select the defaults for all options except on the Select Components page, where you should select the following to install under Qt 5.12.3:
1. MSVC 2017 64-bit
2. MinGW 7.3.0 64-bit
3. UWP ARMv7 (MSVC 2017)
4. UWP x64 (MSVC 2017)
5. UWP x86 (MSVC 2017)
Figure: Select Components to be installed in Qt

The directory that you will need when installing LSL is C:\Qt\5.12.3\msvc2017_64\lib\cmake\Qt5

Installing Boost
Boost is a set of C++ libraries that provides additional functionality for C++ coding. Boost also needs to be compiled/installed manually. The online instructions for doing this are at https://www.boost.org/doc/libs/1_69_0/more/getting_started/windows.html.
You can download Boost from https://www.boost.org/users/history/version_1_67_0.html. Download the boost_1_67_0.zip file and extract it directly into your C:\ drive. Then, open a command prompt window and navigate to the C:\boost_1_67_0 folder using the cd C:\boost_1_67_0 command.

Then execute the following commands, one after the other:
1. bootstrap
2. .\b2
Figure: Executing bootstrap and .\b2 commands

Figure: After Executing bootstrap and .\b2 commands

The directory that you need for installing LSL is C:\boost_1_67_0\stage\lib

Installing Lab Streaming Layer: Clone the lab streaming layer repository from GitHub into your C:\ drive.

In a command prompt, execute the following commands.
1. cd C:\
2. git clone https://github.com/sccn/labstreaminglayer.git --recursive
Make a build directory in the labstreaminglayer folder
3. cd labstreaminglayer
4. mkdir build && cd build

Configure lab streaming layer using CMake

5. cmake C:\labstreaminglayer -G "Visual Studio 15 2017 Win64"  
-DLSL_LSLBOOST_PATH=C:\labstreaminglayer\LSL\liblsl\lslboost 
-DQt5_DIR=C:\Qt\5.12.3\msvc2017_64\lib\cmake\Qt5 
-DBOOST_ROOT=C:\boost_1_67_0\stage\lib 
-DLSLAPPS_LabRecorder=ON 
-DLSLAPPS_XDFBrowser=ON
-DLSLAPPS_Examples=ON
-DLSLAPPS_Benchmarks=ON 
-DLSLAPPS_BestPracticesGUI=ON

The above command (entered as a single line) configures LSL, defines which apps are installed, and tells CMake where Qt, Boost, and the other dependencies are installed.
i. C:\labstreaminglayer is the path to the lab streaming layer root directory (where you cloned LSL from GitHub).
ii. The -G option defines the compiler used to compile LSL (we use Visual Studio 15 2017 Win64).
iii. -D is the prefix for additional options:
1. -DLSL_LSLBOOST_PATH → the path to the LSL Boost directory
2. -DQt5_DIR → the path to the Qt CMake files
3. -DBOOST_ROOT → the path to the installed Boost libraries
4. -DLSLAPPS_<App Name>=ON → the apps located in the Apps folder (C:\labstreaminglayer\Apps) that you want installed. Just add the name of the folder within the Apps folder that you want installed directly after -DLSLAPPS_, with no spaces.
Build (install) lab streaming layer using CMake
6. cd ..
7. mkdir install


      8. cmake --build C:\labstreaminglayer\build --config Release --target install

The install target places the built apps into the C:\labstreaminglayer\install folder created above.



Now that the LSL installation is complete, we will have a look at the LabRecorder. LabRecorder is the main LSL program for interacting with all the streams. You can find the LabRecorder program at C:\labstreaminglayer\install\LabRecorder\LabRecorder.exe.

The interface of LabRecorder looks like the following.
Figure: LabRecorder interface when PupilLabs is streaming

The green checkbox entries below Record from Streams are the streams from PupilLabs (the eye tracking device). When the programs for each respective device are installed and running, the devices' streams will appear under Record from Streams as shown above. You can check the required data streams from the devices listed, then just press Start to begin recording data from all of the devices. The value under Saving to on the right specifies where the data files (in XDF format) will be saved.
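If you want to inspect a recorded XDF file programmatically, the pyxdf package (a separate pip install, not one of this tutorial's prerequisites) can load it; the file name below is hypothetical:

# Load a LabRecorder XDF file and list the streams it contains.
import pyxdf

streams, header = pyxdf.load_xdf('my_recording.xdf')
for stream in streams:
    print(stream['info']['name'][0], len(stream['time_stamps']), 'samples')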

Installing the PupilLabs LSL connection: There are many devices that can be connected to LSL; the Muse EEG device, the Emotiv EPOC EEG device, and the PupilLabs Core eye tracker are some of them. The example below shows how to use the PupilLabs Core eye tracker with LSL to stream data to NeuroPype.

Figure: PupilLabs Core eye tracker, Source - https://pupil-labs.com/products/core/

Let us first begin with setting up the PupilLabs Core eye tracker. You can find instructions for using and developing with PupilLabs here. Below, I provide the steps to set everything up, from start to finish, to work with LSL. The LSL install instructions for PupilLabs are at https://github.com/labstreaminglayer/App-PupilLabs/tree/9c7223c8e4b8298702e4df614e7a1e6526716bcc

To set up the PupilLabs eye tracker, first download the PupilLabs software from https://github.com/pupil-labs/pupil/releases/tag/v1.11. Choose the pupil_v1.11-4-gb8870a2_windows_x64.7z file and unzip it into your C:\ drive. You may need the 7-Zip program for unzipping. Then, just plug the PupilLabs eye tracker into your computer. It will automatically begin to install drivers for the hardware.

After that, run the Pupil Capture program located at C:\pupil_v1.11-4-gb8870a2_windows_x64\pupil_capture_windows_x64_v1.11-4-gb8870a2\pupil_capture.exe with administrative privileges so that it can install the necessary drivers. Next, follow the instructions at https://docs.pupil-labs.com/ to set up, calibrate, and use the eye tracker with the Pupil Capture program.

Connect PupilLabs with LSL: Build liblsl-Python in a Python or Anaconda prompt. You can do this in a regular command prompt as well. Execute the following commands:
1. cd C:\labstreaminglayer\LSL\liblsl-Python
2. python setup.py build
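Before copying anything, you can sanity-check the freshly built module by importing it straight from the build folder; a small sketch (the path matches the build directory above):

# Verify the freshly built pylsl can load its native library.
import sys
sys.path.insert(0, r'C:\labstreaminglayer\LSL\liblsl-Python\build\lib')

import pylsl  # this import fails if liblsl64.dll was not built correctly
print(pylsl.__file__)           # should point into the build\lib folder
print(pylsl.library_version())  # prints the liblsl library version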

Then, you have to install LSL as a plugin in the Pupil Capture program. 
a. In the newly created C:\labstreaminglayer\LSL\liblsl-Python\build\lib folder, copy the pylsl folder and all its contents into the C:\Users\<user_profile>\pupil_capture_settings\plugins folder (replace <user_profile> with your Windows user profile).
b. In the C:\labstreaminglayer\Apps\PupilLabs folder, copy pupil_lsl_relay.py into the C:\Users\<user_profile>\pupil_capture_settings\plugins folder.
Figure: Original location of pupil_lsl_relay.py

Figure: After copying pupil_lsl_relay.py and pylsl folder into C:\Users\<user_profile>\pupil_capture_settings\plugins folder

If the pylsl folder does not have a lib folder containing liblsl64.dll, there was a problem with the pylsl build. As an alternative approach, install pylsl via pip by running the pip3 install pylsl command in a command prompt. Make sure you have installed pip on your computer prior to running this command. You can use the pip3 show pylsl command to see where the pylsl module is installed on your computer. This module will include the pre-built library files. Copy this newly installed pylsl module to the C:\Users\<user_profile>\pupil_capture_settings\plugins folder. 
In this example, the pylsl module was installed in the C:\Users\<user_profile>\AppData\Local\Python\Python37\Lib\site-packages\pylsl folder. It includes a lib folder which contains the pre-built liblsl64.dll.
Figure: pylsl module's installation location when installed with the pip3 install pylsl command
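You can perform the same check from Python itself; a quick sketch to confirm where the module lives and that the bundled library files are present (paths will vary by machine):

# Confirm the pip-installed pylsl location and its bundled lib folder.
import os
import pylsl

module_dir = os.path.dirname(pylsl.__file__)
print(module_dir)
print(os.listdir(os.path.join(module_dir, 'lib')))  # expect liblsl64.dll here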
As the next step, launch pupil_capture.exe and enable the Pupil LSL Relay from the Plugin Manager in the Pupil Capture – World window.

Figure: Pupil LSL Relay enabled from the Plugin Manager
Now, when you hit the R button on the left of the World window, you start recording from PupilLabs while streaming to LSL.  In LabRecorder, you should see the streams in green (see the figure LabRecorder interface when PupilLabs is streaming above).
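If you would like to verify the relay without LabRecorder, a minimal pylsl consumer can resolve the stream by the same type string and pull a few samples:

# Resolve the Pupil Capture stream and pull a few gaze samples.
from pylsl import StreamInlet, resolve_byprop

streams = resolve_byprop('type', 'Pupil Capture', timeout=10)
if not streams:
    raise RuntimeError('No Pupil Capture stream found; is the relay enabled?')

inlet = StreamInlet(streams[0])
for _ in range(5):
    sample, timestamp = inlet.pull_sample()
    print(timestamp, sample)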
Now, let's have a look at how to get data from LSL into NeuroPype.

Getting Started with Neuropype and Pipeline Designer:
First, you have to download and install the NeuroPype Academic Edition (neuropype-suite-academic-edition-2017.3.2.exe) from https://www.neuropype.io/academic-edition. The NeuroPype Academic Edition includes a Pipeline Designer application, which you can use to design, edit, and execute NeuroPype pipelines using a visual ‘drag-and-drop’ interface. 

Before launching the NeuroPype Pipeline Designer, make sure that the NeuroPype server is running in the background. If it is not, you can start it by double-clicking the NeuroPype Academic icon. You can also set the NeuroPype server to launch on startup. 
The large white area in the following screenshot is the ‘canvas’ that shows your current pipeline, which you can edit using drag-and-drop and double-click actions. On the left you have the widget panel, which has all the available widgets or nodes that you can drag onto the canvas.


Create an Example Pipeline: From the widget panel, select the LSL Input node from the Network (green) section, Dejitter Timestamp from the Utilities (light blue) section, Assign Channel Locations from the Source Localization (pink) section, and Print To Console from the Diagnostics (pink) section.
After creating the pipeline, the canvas looks like the figure Pipeline created in NeuroPype below. Once the nodes are on the canvas, you can connect them using the dashed curved lines on both sides of them: double-click on either dashed line of one node and drag the connecting line to a dashed curved line of the other node. This creates a connection named Data between the two nodes.
You can hover the mouse over any section or widget, or click on a widget on the canvas, to see a tooltip that briefly summarizes it. 

Figure: Pipeline created in NeuroPype
Start Processing
LSL is not only a way to get data from one computer to another, but also a way to get data from your EEG system, or any other kind of sensor that supports it, into NeuroPype. You can also use it to get data out of NeuroPype into external real-time visualizations, stimulus presentation software, and so on.
Make sure that the LSL Input node has a query string that matches your sensor. For instance, if you use PupilLabs, you need to enter type=’Pupil Capture’ as shown below. NeuroPype will then pick up data from the PupilLabs eye tracker.
Figure: Set up type of LSL Input
To launch the current pipeline, click the blue pause icon in the toolbar (initially engaged) to unpause it. The Pipeline Designer will then ask the NeuroPype server to start executing your pipeline, which will print some output. 

Congratulations! You have successfully set up LSL, PupilLabs, and the NeuroPype Academic Edition. Go ahead and experiment with your EEG system, or any other kind of sensor that supports LSL and NeuroPype.

Feel free to tweet @Gavindya2 if you have any questions about this tutorial or need any help with your installation.

--Gavindya Jayawardena (@Gavindya2)

Thursday, July 11, 2019

2019-07-11: Raintale -- A Storytelling Tool For Web Archives


My work builds upon AlNoamany's efforts to use social media storytelling to summarize web archive collections. AlNoamany employed Storify as a visualization platform. Storify is now gone. I explored alternatives to Storify in 2017 and found many of them to be insufficient for our purposes. In 2018, I developed MementoEmbed to produce surrogates for mementos and we used it in a recent research study. Surrogates summarize individual mementos. They are the building blocks of social media storytelling. Using MementoEmbed, Raintale takes surrogates to the next level, providing social media storytelling for web archives. My goal is to help web archives not only summarize their collections but promote their holdings in new ways.

Raintale is the latest entry in the Dark and Stormy Archives project. Our goal is to provide research studies and tools for combining web archives and social media storytelling. Raintale provides the storytelling capability. It has been designed to visualize a small number of mementos selected from an immense web archive collection, allowing a user to summarize and visualize the whole collection or a specific aspect of it.

Raintale accepts a list of memento URIs (URI-Ms) from the user and produces a story containing surrogates of those URI-Ms. It then publishes this story to an individual file, in a format like HTML (as seen below), or to a service, like Twitter (as seen above). Our goal is to explore and offer different publishing services and file formats to meet a variety of storytelling needs. You can help by finding defects and making suggestions on the directions we should take. The rest of this article highlights some of Raintale's features. For more information, please consult Raintale's website, its documentation, and our GitHub repository.

Raintale provides many customization options for different types of storytelling. In this example, the HTML output contains Bootstrap cards and animated GIFs (MementoEmbed imagereels) of the best five images from each memento.

What Is Possible With Raintale



We created Raintale with several types of users in mind. Web archives can use it as another tool for featuring their holdings in new ways. Collection curators can promote their collections by featuring a small sample. Bloggers and other web page authors can write stories like they previously did with Storify.

When a user supplies the URI-Ms, Raintale supplies the formatted story. The URI-Ms do not even need to be from the same web archive. Raintale uses the concept of a storyteller to allow you to publish content to a variety of different file formats and social media services.

Raintale supports HTML storytelling with MementoEmbed social cards (see below). Story authors can use this HTML for static web sites or paste it into services like Blogger. Web archiving professionals can incorporate it into scripts for curation workflows. Raintale also provides storytellers that generate Jekyll headers for HTML or Markdown, suitable for posting to GitHub pages.

Raintale, by default, generates MementoEmbed social cards via the HTML storyteller.


Seen below, Raintale supports MediaWiki storytelling. It generates MediaWiki markup that story authors can paste into a wiki page. This MediaWiki storyteller can help organizations who employ storytelling with wiki pages as part of ongoing collaboration.

Raintale can generate a story as MediaWiki markup suitable for pasting into MediaWiki pages.


Likewise, Raintale provides a Markdown storyteller with output suitable for GitHub gists. This output is useful for developers providing a list of resources from web archives.

Raintale provides a story as Markdown, rendered here in a GitHub gist available at this link.


For social media sharing, Raintale can also generate a Twitter story. Raintale leverages MementoEmbed's ability to surgically extract specific information from a memento to produce Tweets for each URI-M in a story. These URI-Ms are then bound by an overarching tweet, thus publishing the whole story as a Twitter thread.

Raintale's default Twitter storyteller generates surrogates consisting of the title of the memento, its memento-datetime, its URI-M, a browser thumbnail, and the top 3 images as ranked by MementoEmbed. The Tweet starting the thread contains information about the name of the story, who generated it, and the collection to which it is connected. The Twitter thread shown in this screenshot is available here.


Our Facebook equivalent is still in its experimental phase. We use a Facebook post to contain the story, and Raintale visualizes each URI-M as an individual comment to that post. Our Facebook posts do not yet have image support. The lack of images leads Facebook to generate social cards for the URI-Ms. As noted in a prior blog post, Facebook does not reliably produce surrogates for mementos. Also, Facebook's authentication tokens expire within a short window (sometimes 10 minutes), which requires the user to request new ones continually. We have observed that the comments on the post are not in the order they were submitted. We welcome suggestions on improving Raintale's Facebook storyteller.
We are beginning to explore Raintale's ability to post stories to Facebook.


We are experimenting with producing MPEG videos of collections, as seen below. Raintale generates these videos from the top sentences and top images from the submitted URI-Ms. A story author can then publish the video to Twitter or YouTube to tell their story. Below, we show a tweet containing an example video created with Raintale.



Raintale supports presets, allowing you to alter the look of your story. If you do not like the social card HTML story shown above, a four column thumbnail story may work better (shown below). These presets provide a variety of options for users. Presets are templates that are already included with Raintale. We will continue to add new presets as development continues. To see what is available, visit our Template Gallery.

This story was produced via the HTML storyteller, but with the thumbnails4col preset. Presets are templates included with Raintale. Users can also supply their own templates.
Templates are an easy way to generate different types of surrogates for our research studies. Some of the initial presets come from those studies and are quite vanilla in tone because we wanted to limit what might influence the study participant. Raintale's output does not need to be this way. Raintale provides template support so that you can choose which surrogate features work best for your web archive or blog, as shown in the fictional "My Archive" story below.

Raintale allows users to supply their own templates, such as this one for the fictional "My Archive." Using these templates, curators can create their own stories describing their collections.
In the "My Archive" example, we show how one can brand a story using their own formatting and images. This example displays thumbnails, favicons, text snippets, titles, original resource domains, memento-datetimes, links to other mementos, links to the live web page, and the top four images discovered in each memento. Each of these features is controlled by a template variable and there are more features available than those shown here. We will continue to add new features as development proceeds.

Take a look at our Template Gallery to see what is available. The documentation provides more information on how to build your own templates using the variables and preferences provided by Raintale.

Requirements for Running Raintale



In the Raintale documentation, we discuss the different ways of installing and running Raintale. Raintale is a command-line Python application tightly coupled to MementoEmbed. The easiest way to run Raintale is with docker-compose. We implemented a command-line utility named tellstory so that a user can easily include Raintale in scripts for automation.

For file formats, tellstory requires a -o argument to specify the output file.

# docker-compose run raintale tellstory -i story-mementos.txt --storyteller html -o mystory.html --title "This is My Story Title"


For social media services, tellstory requires a -c argument to specify the file containing your API credentials.

# docker-compose run raintale tellstory -i story_mementos.txt --storyteller twitter --title "This is My Story Title" -c twitter-credentials.yml
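Because tellstory is a command-line tool, it fits easily into scripts. As a hypothetical sketch, a Python wrapper that generates one HTML story per input list might look like this (file names are illustrative):

# Hypothetical automation sketch: build one HTML story per input file.
import subprocess

for story in ['story_mementos.txt']:  # extend with your own lists
    subprocess.run(
        ['docker-compose', 'run', 'raintale', 'tellstory',
         '-i', story,
         '--storyteller', 'html',
         '-o', story.replace('.txt', '.html'),
         '--title', 'This is My Story Title'],
        check=True)  # stop if Raintale reports an error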



A user can supply the content of the story as either a text file, like the story_mementos.txt above, or JSON. The text file is a newline-separated list of URI-Ms. Alternatively, the user can supply a JSON file for more control over the content. See the documentation for more information.
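For example, a minimal story_mementos.txt might contain nothing more than the following (these URI-Ms are illustrative):

https://web.archive.org/web/20190717000000/https://example.com/
https://web.archive.org/web/20190718000000/https://example.org/

The JSON form adds fields such as the story title and per-element metadata; consult the documentation for the authoritative schema.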

Our Reasons for Developing Raintale



As noted in a prior blog post, each surrogate is a visualization of the underlying resource. My research focuses on social media storytelling and web archives. The surrogates, presented together as a group, are visualizations of a sample of the underlying collection. In recent work, we explored how well different surrogates worked for collection understanding. Raintale came out of the lessons learned from generating stories with different types of surrogates. We decided that both we and the community would benefit from a tool fitting in this problem space.

Providing Feedback on Raintale



Development on Raintale is just starting, and we would appreciate feedback at our GitHub issues page. In addition to finding defects, we also want to know where you think Raintale should go. Have you developed a template that you find to be useful and want to share it? Is there a storyteller (file format or service) that you want us to incorporate?

The Dark and Stormy Archives Toolkit



Raintale joins MementoEmbed, the Off-Topic Memento Toolkit, and Archive-It Utilities as another member of the growing Dark and Stormy Archives (DSA) Toolkit. The DSA Toolkit includes tools for summarizing and generating stories from web archive collections. The next tool in development, Hypercane, will use structural features of web archive collections, along with similarity metrics and Natural Language Processing, to select the best mementos from collections for our stories.

We will continue to improve Raintale. What stories will we all tell with it?

-- Shawn M. Jones