Search results for: Alexander P. Eremeev
150 Context Detection in Spreadsheets Based on Automatically Inferred Table Schema
Authors: Alexander Wachtel, Michael T. Franzen, Walter F. Tichy
Abstract:
Programming requires years of training. With natural language and end user development methods, programming could become available to everyone. This approach enables end users to program their own devices and extend the functionality of the existing system without any knowledge of programming languages. In this paper, we describe an Interactive Spreadsheet Processing Module (ISPM), a natural language interface to spreadsheets that allows users to address ranges within the spreadsheet based on inferred table schema. Using the ISPM, end users are able to search for values in the schema of the table and to address the data in spreadsheets implicitly. Furthermore, it enables them to select and sort the spreadsheet data by using natural language. ISPM uses a machine learning technique to automatically infer areas within a spreadsheet, including different kinds of headers and data ranges. Since ranges can be identified from natural language queries, the end users can query the data using natural language. During the evaluation, 12 undergraduate students were asked to perform operations (sum, sort, group and select) using the system and also Excel without the ISPM interface, and the time taken for task completion was compared across the two systems. Only for the selection task did users take less time in Excel (since they directly selected the cells using the mouse) than in ISPM. Overall, the results support using natural language for end user software engineering to overcome the present bottleneck of professional developers. Keywords: natural language processing, natural language interfaces, human computer interaction, end user development, dialog systems, data recognition, spreadsheet
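As an illustration of the kind of table-schema inference the abstract describes (the ISPM itself uses a machine learning technique that is not reproduced here), a minimal heuristic sketch in Python might treat the first row whose cells are mostly non-numeric as the header and the remaining rows as the data range; the function names and the rule are assumptions for illustration only:

def is_numeric(cell):
    """Return True if a cell's text parses as a number."""
    try:
        float(str(cell).replace(",", ""))
        return True
    except ValueError:
        return False

def infer_schema(rows):
    """Naive header/data split: the first row with mostly
    non-numeric cells is treated as the header row."""
    for i, row in enumerate(rows):
        numeric = sum(is_numeric(c) for c in row)
        if numeric < len(row) / 2:      # mostly text -> header candidate
            return {"header": row, "data": rows[i + 1:]}
    return {"header": None, "data": rows}

table = [["Name", "Sales", "Year"],
         ["Alice", 1200, 2021],
         ["Bob", 900, 2021]]
schema = infer_schema(table)
print(schema["header"])   # ['Name', 'Sales', 'Year']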
Procedia PDF Downloads 310
149 The Low-Cost Design and 3D Printing of Structural Knee Orthotics for Athletic Knee Injury Patients
Authors: Alexander Hendricks, Sean Nevin, Clayton Wikoff, Melissa Dougherty, Jacob Orlita, Rafiqul Noorani
Abstract:
Knee orthotics play an important role in aiding in the recovery of those with knee injuries, especially athletes. However, structural knee orthotics are often very expensive, ranging between $300 and $800. The primary reason for this project was to answer the question: can 3D printed orthotics represent a viable and cost-effective alternative to present structural knee orthotics? The primary objective for this research project was to design a knee orthotic for athletes with knee injuries at a low cost of under $100 and evaluate its effectiveness. The initial design for the orthotic was done in SolidWorks, a computer-aided design (CAD) software available at Loyola Marymount University. After this design was completed, finite element analysis (FEA) was utilized to understand how normal stresses placed upon the knee affected the orthotic. The knee orthotic was then adjusted and redesigned to meet a specified factor-of-safety of 3.25 based on the data gathered during FEA and literature sources. Once the FEA was completed and the orthotic was redesigned based on the data gathered, the next step was to move on to 3D-printing the first design of the knee brace. Subsequently, physical therapy movement trials were used to evaluate physical performance. Using the data from these movement trials, the CAD design of the brace was refined to accommodate the design requirements. The final goal of this research is to explore the possibility of replacing high-cost, outsourced knee orthotics with a readily available low-cost alternative. Keywords: 3D printing, knee orthotics, finite element analysis, design for additive manufacturing
Procedia PDF Downloads 178
148 Diagnostic Value of Different Noninvasive Criteria of Latent Myocarditis in Comparison with Myocardial Biopsy
Authors: Olga Blagova, Yuliya Osipova, Evgeniya Kogan, Alexander Nedostup
Abstract:
Purpose: to quantify the value of various clinical, laboratory and instrumental signs in the diagnosis of myocarditis in comparison with morphological studies of the myocardium. Methods: in 100 patients (65 men, 44.7±12.5 years) with 'idiopathic' arrhythmias (n = 20) and dilated cardiomyopathy (DCM, n = 80), 71 endomyocardial biopsies (EMB), 13 intraoperative biopsies, 5 studies of explanted hearts, and 11 autopsies were performed, with virus investigation (real-time PCR) of the blood and myocardium. Anti-heart antibodies (AHA) were also measured, as well as cardiac CT (n = 45), MRI (n = 25), and coronary angiography (n = 47). The comparison group included 50 patients (25 men, 53.7±11.7 years) with non-inflammatory heart diseases who underwent open heart surgery. Results. Active/borderline myocarditis was diagnosed in 76.0% of the study group and in 21.6% of patients of the comparison group (p < 0.001). The myocardial viral genome was observed more frequently in patients of the comparison group than in the study group (65.0% and 40.2%; p < 0.01). The diagnostic value of noninvasive markers of myocarditis was evaluated. The panel of anti-heart antibodies had the greatest importance in identifying myocarditis: sensitivity was 81.5%, and the positive and negative predictive values were 75.0% and 60.5%. The diagnostic value of non-invasive markers of myocarditis was defined, and a diagnostic algorithm providing an individual assessment of the likelihood of myocarditis was developed. Conclusion. AHA have the greatest significance in the diagnosis of latent myocarditis in patients with 'idiopathic' arrhythmias and DCM. The use of a complex of noninvasive criteria allows estimation of the probability of myocarditis and determination of the indications for EMB. Keywords: myocarditis, "idiopathic" arrhythmias, dilated cardiomyopathy, endomyocardial biopsy, viral genome, anti-heart antibodies
Procedia PDF Downloads 172
147 Fashion Performing/Fashioning Performances: Catwalks as Communication Tools between Market, Branding and Performing Art
Authors: V. Linfante
Abstract:
Catwalks are one of the key moments in fashion: the first and most relevant display where brands stage their collections, products, ideas, and style. The garment is 'the star' of the catwalk and must show itself not just as a product but as the result of a design process that lasted several months. All contents developed within this process become ingredients for connecting scenography, music, lights, and direction into a unique fashion narrative. According to the spirit of different ages, fashion shows have been transformed and shaped into peculiar formats: from Pandoras to presentations organized by Parisian couturiers, across the 'marathons' typical of the beginning of the modern fashion system, coming up to the present structure of fashion weeks, with their complex organization and related creative and technical businesses. The paper intends to introduce the evolution of the fashion system through its unique process of seasonally staging and showing its production. The paper intends to analyse the evolution of the fashion shows from the intimacy of ballrooms at the beginning of the 20th century, passing through the enthusiastic attitude typical of the '70s and the '80s, to finally depict our present. In this last scenario, catwalks are no longer a standard presentation of collections but have become one of the most exciting expressions of contemporary culture (and sub-cultures), going from sophisticated performances (such as Karl Lagerfeld's Chanel shows) to real artistic happenings (such as the events of Victor&Rolf, Alexander McQueen, OFF_WHITE, Vetements, and Martin Margiela), often involving contemporary architecture, the digital world, technology, social media, performing art and artists. Keywords: branding, communication, fashion, new media, performing art
Procedia PDF Downloads 149
146 Study of Storms on the Javits Center Green Roof
Authors: Alexander Cho, Harsho Sanyal, Joseph Cataldo
Abstract:
A quantitative analysis of the different variables on both the South and North green roofs of the Jacob K. Javits Convention Center was carried out to find mathematical relationships between net radiation and evapotranspiration (ET), average outside temperature, and the lysimeter weight. Groups of datasets were analyzed, and the relationships were plotted on linear and semi-log graphs to find consistent relationships. Antecedent conditions for each rainstorm were also recorded and plotted against the volumetric water difference within the lysimeter. The first relation was the inverse parabolic relationship between the lysimeter weight and the net radiation and ET. The peaks and valleys of the lysimeter weight corresponded to valleys and peaks in the net radiation and ET respectively, with the 8/22/15 and 1/22/16 datasets showing this trend. The U-shaped and inverse U-shaped plots of the two variables coincided, indicating an inverse relationship between the two variables. Cross-variable relationships were examined through graphs with lysimeter weight as the dependent variable on the y-axis. Ten out of 16 of the lysimeter weight vs. outside temperature plots had R² values > 0.9. Antecedent conditions were also recorded for rainstorms, categorized by the amount of precipitation accumulating during the storm. Plotted against the change in the volumetric water weight difference within the lysimeter, a logarithmic regression was found with large R² values. The datasets were compared using the Mann-Whitney U-test, at a significance level of 5%, to see if they were statistically different; all compared datasets showed U-test statistic values supporting the conclusion that the datasets were not statistically different. Keywords: green roof, green infrastructure, Javits Center, evapotranspiration, net radiation, lysimeter
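For readers who want to reproduce the kind of comparison described above, a minimal sketch using SciPy's Mann-Whitney U test at the 5% significance level is given below; the two arrays are placeholder data, not the Javits Center measurements:

import numpy as np
from scipy.stats import mannwhitneyu

# Placeholder lysimeter-weight samples from two hypothetical datasets.
dataset_a = np.array([1.02, 0.98, 1.10, 1.05, 0.97, 1.08])
dataset_b = np.array([1.01, 1.04, 0.99, 1.07, 1.03, 1.00])

stat, p_value = mannwhitneyu(dataset_a, dataset_b, alternative="two-sided")
alpha = 0.05
print(f"U = {stat:.1f}, p = {p_value:.3f}")
print("statistically different" if p_value < alpha else "not statistically different")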
Procedia PDF Downloads 113
145 Large Eddy Simulation with Energy-Conserving Schemes: Understanding Wind Farm Aerodynamics
Authors: Dhruv Mehta, Alexander van Zuijlen, Hester Bijl
Abstract:
Large Eddy Simulation (LES) numerically resolves the large energy-containing eddies of a turbulent flow, while modelling the small dissipative eddies. On a wind farm, these large scales carry the energy wind turbines extract and are also responsible for transporting the turbines’ wakes, which may interact with downstream turbines and certainly with the atmospheric boundary layer (ABL). In this situation, it is important to conserve the energy that these wakes carry and which could be altered artificially through numerical dissipation brought about by the schemes used for the spatial discretisation and temporal integration. Numerical dissipation has been reported to cause the premature recovery of turbine wakes, leading to an over-prediction in the power produced by wind farms. An energy-conserving scheme is free from numerical dissipation and ensures that the energy of the wakes is increased or decreased only by the action of molecular viscosity or the action of wind turbines (body forces). The aim is to create an LES package with energy-conserving schemes to simulate wind turbine wakes correctly to gain insight into power production, wake meandering, etc. Such knowledge will be useful in designing more efficient wind farms with minimal wake interaction, which if unchecked could lead to major losses in energy production per unit area of the wind farm. For their research, the authors intend to use the Energy-Conserving Navier-Stokes code developed by the Energy Research Centre of the Netherlands. Keywords: energy-conserving schemes, modelling turbulence, Large Eddy Simulation, atmospheric boundary layer
Procedia PDF Downloads 464
144 Methane versus Carbon Dioxide Mitigation Prospects
Authors: Alexander J. Severinsky, Allen L. Sessoms
Abstract:
Atmospheric carbon dioxide (CO₂) has dominated the discussion about the causes of climate change. This is a reflection of the time horizon that has become the norm adopted by the IPCC as the planning horizon. Recently, it has become clear that a 100-year time horizon is much too long, and yet almost all mitigation efforts, including those in the near-term horizon of 30 years, are geared toward it. In this paper, we show that, for a 30-year time horizon, methane (CH₄) is the greenhouse gas whose radiative forcing exceeds that of CO₂. In our analysis, we used the radiative forcing of greenhouse gases in the atmosphere since it directly affects the temperature rise on Earth. In 2019, the radiative forcing of methane was ~2.5 W/m² and that of carbon dioxide ~2.1 W/m². Under a business-as-usual (BAU) scenario until 2050, such forcing would be ~2.8 W/m² and ~3.1 W/m², respectively. There is a substantial spread in the data for anthropogenic and natural methane emissions as well as CH₄ leakages from production to consumption. We estimated the minimum and maximum effects of the reduction of these leakages. Such action may reduce the annual radiative forcing of all CH₄ emissions by between ~15% and ~30%. This translates into a reduction of the RF by 2050 from ~2.8 W/m² to ~2.5 W/m² in the case of the minimum effect and to ~2.15 W/m² in the case of the maximum. Under the BAU, we found that the RF of CO₂ would increase from ~2.1 W/m² nowadays to ~3.1 W/m² by 2050. We assumed a linear reduction of 50% of anthropogenic emissions over the next 30 years. That would reduce radiative forcing from ~3.1 W/m² to ~2.9 W/m². In the case of ‘net zero,’ the other 50% reduction of anthropogenic emissions would come either from sources of emissions or directly from the atmosphere. The total reduction would be from ~3.1 to ~2.7, or ~0.4 W/m². To achieve the same radiative forcing as in the scenario of maximum reduction of methane leakages, ~2.15 W/m², an additional reduction of the radiative forcing of CO₂ of approximately 2.7 − 2.15 = 0.55 W/m² would be required. This is a much larger value than what is expected from ‘net zero.’ One needs to remove from the atmosphere ~660 GT of CO₂ to match the maximum reduction of current methane leakages and ~270 GT to achieve ‘net zero.’ This amounts to over 900 GT in total. Keywords: methane leakages, methane radiative forcing, methane mitigation, methane net zero
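The radiative-forcing arithmetic summarized in the abstract can be laid out explicitly; the short Python sketch below simply restates the W/m² figures quoted above (the conversion to gigatonnes removed is the authors' own and is not recomputed here):

# Radiative forcing (W/m^2) values quoted in the abstract.
rf_ch4_now, rf_ch4_bau_2050 = 2.5, 2.8
rf_co2_now, rf_co2_bau_2050 = 2.1, 3.1

# CH4 leakage mitigation by 2050 (minimum / maximum effect quoted above).
rf_ch4_min_mitigation = 2.5
rf_ch4_max_mitigation = 2.15

# CO2 scenarios by 2050 quoted above.
rf_co2_half_anthropogenic_cut = 2.9
rf_co2_net_zero = 2.7

# Extra CO2 forcing reduction needed beyond 'net zero' to match the best methane case.
extra_co2_cut = rf_co2_net_zero - rf_ch4_max_mitigation
print(f"additional CO2 forcing reduction needed: {extra_co2_cut:.2f} W/m^2")  # ~0.55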
Procedia PDF Downloads 143
143 Knowledge Transfer among Cross-Functional Teams as a Continual Improvement Process
Authors: Sergio Mauricio Pérez López, Luis Rodrigo Valencia Pérez, Juan Manuel Peña Aguilar, Adelina Morita Alexander
Abstract:
The culture of continuous improvement in organizations is very important as it represents a source of competitive advantage. This article discusses the transfer of knowledge between companies which formed cross-functional teams and used a dynamic model for knowledge creation as a framework. In addition, the article discusses the structure of cognitive assets in companies and the concept of "stickiness" (which is defined as an obstacle to the transfer of knowledge). The purpose of this analysis is to show that an improvement in the attitude of individual members of an organization creates opportunities, and that an exchange of information and knowledge leads to generating continuous improvements in the company as a whole. This article also discusses the importance of creating the proper conditions for sharing tacit knowledge. By narrowing gaps between people, mutual trust can be created and thus contribute to an increase in sharing. The concept of adapting knowledge to new environments will be highlighted, as it is essential for companies to translate and modify information so that such information can fit the context of receiving organizations. Adaptation will ensure that the transfer process is carried out smoothly by preventing "stickiness". When developing the transfer process on cross-functional teams (as opposed to working groups), the team acquires the flexibility and responsiveness necessary to meet objectives. These types of cross-functional teams also generate synergy due to the array of different work backgrounds of their individuals. When synergy is established, a culture of continuous improvement is created.Keywords: knowledge transfer, continuous improvement, teamwork, cognitive assets
Procedia PDF Downloads 323
142 Content Analysis of ‘Junk Food’ Content in Children’s TV Programmes: A Comparison of UK Broadcast TV and Video-On-Demand Services
Authors: Shreesh Sinha, Alexander B. Barker, Megan Parkin, Emma Wilson, Rachael L. Murray
Abstract:
Background and Objectives: Exposure to HFSS imagery is associated with the consumption of foods high in fat, sugar or salt (HFSS), and subsequently obesity, among young people. We report and compare the results of two content analyses, one of two popular terrestrial children's television channels in the UK and the other of a selection of children's programmes available on video-on-demand (VOD) streaming sites. Methods: Content analysis of three days' worth of programmes (including advertisements) on two popular children's television channels broadcast on UK television (CBeebies and Milkshake) as well as a sample of 40 highest-rated children's programmes available on the VOD platforms, Netflix and Amazon Prime, using 1-minute interval coding. Results: HFSS content was seen in 181 broadcasts (36%) and in 417 intervals (13%) on terrestrial television; 'Milkshake' had a significantly higher proportion of programmes/adverts which contained HFSS content than 'CBeebies'. On VOD platforms, HFSS content was seen in 82 episodes (72% of the total number of episodes), across 459 intervals (19% of the total number of intervals), with no significant difference in the proportion of programmes containing HFSS content between Netflix and Amazon Prime. Conclusions: This study demonstrates that HFSS content is common in both popular UK children's television channels and children's programmes on VOD services. Since previous research has shown that HFSS content in the media has an effect on HFSS consumption, children's television programmes broadcast either on TV or VOD services are likely to have an effect on HFSS consumption in children, and legislative opportunities to prevent this exposure are being missed. Keywords: public health, junk food, children's TV, HFSS
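A comparison of HFSS-interval proportions between two channels, of the kind reported above, can be sketched with a chi-squared test on a 2×2 contingency table; the counts below are purely hypothetical placeholders, not the study's data:

from scipy.stats import chi2_contingency

# Hypothetical interval counts: [with HFSS, without HFSS] per channel.
milkshake = [260, 1340]
cbeebies = [157, 1563]

chi2, p_value, dof, expected = chi2_contingency([milkshake, cbeebies])
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("proportions of HFSS intervals differ significantly between channels")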
Procedia PDF Downloads 102
141 Compliance of Systematic Reviews in Plastic Surgery with the PRISMA Statement: A Systematic Review
Authors: Seon-Young Lee, Harkiran Sagoo, Katherine Whitehurst, Georgina Wellstead, Alexander Fowler, Riaz Agha, Dennis Orgill
Abstract:
Introduction: Systematic reviews attempt to answer research questions by synthesising the data within primary papers. They are an increasingly important tool within evidence-based medicine, guiding clinical practice, future research and healthcare policy. We sought to determine the reporting quality of recent systematic reviews in plastic surgery. Methods: This systematic review was conducted in line with the Cochrane handbook, reported in line with the PRISMA statement and registered at the ResearchRegistry (UIN: reviewregistry18). MEDLINE and EMBASE databases were searched in 2013 and 2014 for systematic reviews published in five major plastic surgery journals. Screening, identification and data extraction were performed independently by two teams. Results: From an initial set of 163 articles, 79 met the inclusion criteria. The median PRISMA score was 16 out of 27 items (59.3%; range 6-26, 95% CI 14-17). Compliance with individual PRISMA items showed high variability. It was poorest for items related to the use of a review protocol (item 5; 5%) and presentation of data on risk of bias of each study (item 19; 18%), while being the highest for description of rationale (item 3; 99%) and sources of funding and other support (item 27; 95%), and for structured summary in the abstract (item 2; 95%). Conclusion: The reporting quality of systematic reviews in plastic surgery requires improvement. ‘Hard-wiring’ of compliance through journal submission systems, as well as improved education, awareness and a cohesive strategy among all stakeholders are called for. Keywords: PRISMA, reporting quality, plastic surgery, systematic review, meta-analysis
Procedia PDF Downloads 291
140 Estimation of Endogenous Brain Noise from Brain Response to Flickering Visual Stimulation Magnetoencephalography Visual Perception Speed
Authors: Alexander N. Pisarchik, Parth Chholak
Abstract:
Intrinsic brain noise was estimated via magnetoencephalograms (MEG) recorded during perception of flickering visual stimuli with frequencies of 6.67 and 8.57 Hz. First, we measured the mean phase difference between the flicker signal and the steady-state event-related field (SSERF) in the occipital area where the brain response at the flicker frequencies and their harmonics appeared in the power spectrum. Then, we calculated the probability distribution of the phase fluctuations in the regions of frequency locking and computed its kurtosis. Since kurtosis is a measure of the distribution’s sharpness, we suppose that inverse kurtosis is related to intrinsic brain noise. In our experiments, the kurtosis value varied among subjects from K = 3 to K = 5 for 6.67 Hz and from 2.6 to 4 for 8.57 Hz. The majority of subjects demonstrated leptokurtic distributions (K > 3), i.e., the distribution tails approached zero more slowly than in a Gaussian. In addition, we found a strong correlation between kurtosis and brain complexity measured as the correlation dimension, so that the MEGs of subjects with higher kurtosis exhibited lower complexity. The obtained results are discussed in the framework of nonlinear dynamics and complex network theories. Specifically, in a network of coupled oscillators, phase synchronization is mainly determined by two antagonistic factors, noise and the coupling strength. While noise worsens phase synchronization, the coupling improves it. If we assume that each neuron and each synapse contribute to brain noise, a larger neuronal network should have stronger noise, and therefore phase synchronization should be worse, which results in smaller kurtosis. The described method for brain noise estimation can be useful for diagnostics of some brain pathologies associated with abnormal brain noise. Keywords: brain, flickering, magnetoencephalography, MEG, visual perception, perception time
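A minimal sketch of the kurtosis computation described above is shown below; it extracts an instantaneous phase difference with a Hilbert transform and evaluates Pearson kurtosis (so a Gaussian gives K ≈ 3). The signals are synthetic stand-ins, not MEG recordings:

import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis

fs, f_stim = 1000.0, 6.67          # sampling rate (Hz), flicker frequency (Hz)
t = np.arange(0, 30, 1 / fs)

flicker = np.sin(2 * np.pi * f_stim * t)
# Synthetic "brain response": locked to the flicker plus phase noise.
response = np.sin(2 * np.pi * f_stim * t + 0.3 * np.random.randn(t.size))

phase_diff = np.angle(hilbert(response)) - np.angle(hilbert(flicker))
phase_diff = np.angle(np.exp(1j * phase_diff))       # wrap to (-pi, pi]

K = kurtosis(phase_diff, fisher=False)               # Pearson kurtosis, Gaussian ~ 3
print(f"kurtosis of phase fluctuations: K = {K:.2f}")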
Procedia PDF Downloads 147
139 Sports Business Services Model: A Research Model Study in Regional Sport Authority of Thailand
Authors: Siriraks Khawchaimaha, Sangwian Boonto
Abstract:
The Sport Authority of Thailand (SAT) is a state enterprise that promotes and supports all kinds of sports, both professional sports and athletes, for competitions, and is administered under government policy by government officers. Therefore, all financial support, whether cash inflows or cash outflows, is strictly committed to the government budget and limited to projects planned at least 12 to 16 months ahead of reality, resulting in ineffectiveness in sport events, administration and competitions. In order to remain in the sports challenges around the world, SAT needs to have its own sports business services model for each stadium, region and set of athletes’ competencies. Based on the HMK model of Khawchaimaha, S. (2007), this research study is formalized for each of the 10 regional stadiums to detail the root characteristics of fans, athletes, coaches, equipment and facilities, and stadiums. The research design is, firstly, the evaluation of external factors: the hardware of competition or practice stadiums, playgrounds, facilities, and equipment. Secondly, to understand the software: the organization structure, staff and management, administrative model, rules and practices. In addition, budget allocation and budget administration with operating and expenditure plans are examined. As a result, the third step identifies issues and limitations that require an action plan for further development and support, or the cessation of unskilled kinds of sports. The final step, based on the HMK model and the business model canvas by Alexander O. and Yves P. (2010), is a template generating the Sports Business Services Model for each of SAT’s 10 regional stadiums. Keywords: HMK model, not for profit organization, sport business model, sport services model
Procedia PDF Downloads 305
138 Trading off Accuracy for Speed in Powerdrill
Authors: Filip Buruiana, Alexander Hall, Reimar Hofmann, Thomas Hofmann, Silviu Ganceanu, Alexandru Tudorica
Abstract:
In-memory column-stores make interactive analysis feasible for many big data scenarios. PowerDrill is a system used internally at Google for exploration in logs data. Even though it is a highly parallelized column-store and uses in memory caching, interactive response times cannot be achieved for all datasets (note that it is common to analyze data with 50 billion records in PowerDrill). In this paper, we investigate two orthogonal approaches to optimize performance at the expense of an acceptable loss of accuracy. Both approaches can be implemented as outer wrappers around existing database engines and so they should be easily applicable to other systems. For the first optimization we show that memory is the limiting factor in executing queries at speed and therefore explore possibilities to improve memory efficiency. We adapt some of the theory behind data sketches to reduce the size of particularly expensive fields in our largest tables by a factor of 4.5 when compared to a standard compression algorithm. This saves 37% of the overall memory in PowerDrill and introduces a 0.4% relative error in the 90th percentile for results of queries with the expensive fields. We additionally evaluate the effects of using sampling on accuracy and propose a simple heuristic for annotating individual result-values as accurate (or not). Based on measurements of user behavior in our real production system, we show that these estimates are essential for interpreting intermediate results before final results are available. For a large set of queries this effectively brings down the 95th latency percentile from 30 to 4 seconds.Keywords: big data, in-memory column-store, high-performance SQL queries, approximate SQL queries
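As a rough illustration of the sampling idea (this is a generic sketch and not PowerDrill's actual estimator or accuracy heuristic), one can estimate an aggregate from a uniform row sample and flag the result as 'accurate' only when the estimated relative standard error falls below a threshold:

import random
import statistics

def estimate_sum(values, sample_rate=0.01, max_rel_se=0.05):
    """Estimate sum(values) from a uniform row sample and flag accuracy."""
    sample = [v for v in values if random.random() < sample_rate]
    n, N = len(sample), len(values)
    est = N * statistics.mean(sample)
    se = N * statistics.stdev(sample) / n ** 0.5   # SE of the scaled sample mean
    return est, (se / est) <= max_rel_se if est else False

values = [random.randint(1, 100) for _ in range(1_000_000)]
estimate, accurate = estimate_sum(values)
print(f"estimated sum ~ {estimate:,.0f}, flagged accurate: {accurate}")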
Procedia PDF Downloads 259
137 A Content Analysis of ‘Junk Food’ Content in Children’s TV Programs: A Comparison of UK Broadcast TV and Video-On-Demand Services
Authors: Alexander B. Barker, Megan Parkin, Shreesh Sinha, Emma Wilson, Rachael L. Murray
Abstract:
Objectives: Exposure to HFSS imagery is associated with consumption of foods high in fat, sugar, or salt (HFSS), and subsequently obesity, among young people. We report and compare the results of two content analyses, one of two popular terrestrial children’s television channels in the UK and the other of a selection of children’s programs available on video-on-demand (VOD) streaming sites. Design: Content analysis of three days’ worth of programs (including advertisements) on two popular children’s television channels broadcast on UK television (CBeebies and Milkshake) as well as a sample of 40 highest-rated children’s programs available on the VOD platforms, Netflix and Amazon Prime, using 1-minute interval coding. Setting: United Kingdom. Participants: None. Results: HFSS content was seen in 181 broadcasts (36%) and in 417 intervals (13%) on terrestrial television; ‘Milkshake’ had a significantly higher proportion of programs/adverts which contained HFSS content than ‘CBeebies’. On VOD platforms, HFSS content was seen in 82 episodes (72% of the total number of episodes), across 459 intervals (19% of the total number of intervals), with no significant difference in the proportion of programs containing HFSS content between Netflix and Amazon Prime. Conclusions: This study demonstrates that HFSS content is common in both popular UK children’s television channels and children's programs on VOD services. Since previous research has shown that HFSS content in the media has an effect on HFSS consumption, children’s television programs broadcast either on TV or VOD services are likely to have an effect on HFSS consumption in children, and legislative opportunities to prevent this exposure are being missed. Keywords: public health, epidemiology, obesity, content analysis
Procedia PDF Downloads 186
136 Ontology-Driven Knowledge Discovery and Validation from Admission Databases: A Structural Causal Model Approach for Polytechnic Education in Nigeria
Authors: Bernard Igoche Igoche, Olumuyiwa Matthew, Peter Bednar, Alexander Gegov
Abstract:
This study presents an ontology-driven approach for knowledge discovery and validation from admission databases in Nigerian polytechnic institutions. The research aims to address the challenges of extracting meaningful insights from vast amounts of admission data and utilizing them for decision-making and process improvement. The proposed methodology combines the knowledge discovery in databases (KDD) process with a structural causal model (SCM) ontological framework. The admission database of Benue State Polytechnic Ugbokolo (Benpoly) is used as a case study. The KDD process is employed to mine and distill knowledge from the database, while the SCM ontology is designed to identify and validate the important features of the admission process. The SCM validation is performed using the conditional independence test (CIT) criteria, and an algorithm is developed to implement the validation process. The identified features are then used for machine learning (ML) modeling and prediction of admission status. The results demonstrate the adequacy of the SCM ontological framework in representing the admission process and the high predictive accuracies achieved by the ML models, with k-nearest neighbors (KNN) and support vector machine (SVM) achieving 92% accuracy. The study concludes that the proposed ontology-driven approach contributes to the advancement of educational data mining and provides a foundation for future research in this domain.Keywords: admission databases, educational data mining, machine learning, ontology-driven knowledge discovery, polytechnic education, structural causal model
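The final classification step described above can be illustrated with scikit-learn; the sketch below trains k-nearest neighbors and support vector machine classifiers on synthetic features standing in for the SCM-validated admission attributes (the study's data and its 92% figure are not reproduced here):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for SCM-validated admission features and admission status.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                    ("SVM", SVC(kernel="rbf", C=1.0))]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name} accuracy: {acc:.2f}")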
Procedia PDF Downloads 61
135 Modified Clusterwise Regression for Pavement Management
Authors: Mukesh Khadka, Alexander Paz, Hanns de la Fuente-Mella
Abstract:
Typically, pavement performance models are developed in two steps: (i) pavement segments with similar characteristics are grouped together to form a cluster, and (ii) the corresponding performance models are developed using statistical techniques. A challenge is to select the characteristics that define clusters and the segments associated with them. If inappropriate characteristics are used, clusters may include homogeneous segments with different performance behavior or heterogeneous segments with similar performance behavior. Prediction accuracy of performance models can be improved by grouping the pavement segments into more uniform clusters by including both characteristics and a performance measure. This grouping is not always possible due to limited information. It is impractical to include all the potential significant factors because some of them are potentially unobserved or difficult to measure. Historical performance of pavement segments could be used as a proxy to incorporate the effect of the missing potential significant factors in the clustering process. The current state-of-the-art proposes Clusterwise Linear Regression (CLR) to determine the pavement clusters and the associated performance models simultaneously. CLR incorporates the effect of significant factors as well as a performance measure. In this study, a mathematical program was formulated for CLR models including multiple explanatory variables. Pavement data collected recently over the entire state of Nevada were used. International Roughness Index (IRI) was used as a pavement performance measure because it serves as a unified standard that is widely accepted for evaluating pavement performance, especially in terms of riding quality. Results illustrate the advantage of using CLR. Previous studies have used CLR along with experimental data. This study uses actual field data collected across a variety of environmental, traffic, design, and construction and maintenance conditions. Keywords: clusterwise regression, pavement management system, performance model, optimization
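A greatly simplified alternating heuristic conveys the basic CLR idea (it is not the mathematical program formulated in the study): segments are assigned to the cluster whose regression fits them best, and cluster-specific regressions are then refit until assignments stop changing. The data below are random placeholders:

import numpy as np

def clusterwise_regression(X, y, k=2, iters=20, seed=0):
    """Alternate between per-cluster least squares and segment reassignment."""
    rng = np.random.default_rng(seed)
    Xb = np.column_stack([np.ones(len(X)), X])       # add intercept column
    labels = rng.integers(0, k, size=len(X))
    for _ in range(iters):
        betas = []
        for c in range(k):
            mask = labels == c
            if mask.sum() >= Xb.shape[1]:
                betas.append(np.linalg.lstsq(Xb[mask], y[mask], rcond=None)[0])
            else:
                betas.append(np.zeros(Xb.shape[1]))  # degenerate cluster fallback
        resid = np.stack([(y - Xb @ b) ** 2 for b in betas])   # (k, n) squared errors
        new_labels = resid.argmin(axis=0)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, betas

X = np.random.rand(200, 3)   # placeholder segment characteristics (e.g. traffic, age, climate)
y = np.random.rand(200)      # placeholder performance measure (e.g. IRI)
labels, betas = clusterwise_regression(X, y, k=3)
print(np.bincount(labels))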
Procedia PDF Downloads 249
134 More Precise: Patient-Reported Outcomes after Stroke
Authors: Amber Elyse Corrigan, Alexander Smith, Anna Pennington, Ben Carter, Jonathan Hewitt
Abstract:
Background and Purpose: Morbidity secondary to stroke is highly heterogeneous, but it is important to both patients and clinicians in post-stroke management and adjustment to life after stroke. Post-stroke morbidity has been poorly measured, both clinically and from the patient perspective. Patient-reported outcome measures (PROs) in morbidity assessment help improve this knowledge gap. The primary aim of this study was to consider the association between PRO outcomes and stroke predictors. Methods: A multicenter prospective cohort study assessed 549 stroke patients at 19 hospital sites across England and Wales during 2019. Following a stroke event, demographic, clinical, and PRO measures were collected. Prevalence of morbidity within PRO measures was calculated with associated 95% confidence intervals. Predictors of domain outcome were calculated using a multilevel generalized linear model. Associated P-values and 95% confidence intervals are reported. Results: Data were collected from 549 participants, 317 men (57.7%) and 232 women (42.3%), with ages ranging from 25 to 97 (mean 72.7). PRO morbidity was high post-stroke; 93.2% of the cohort report post-stroke PRO morbidity. Previous stroke, diabetes, and gender are associated with worse patient-reported outcomes across both the physical and cognitive domains. Conclusions: This large-scale multicenter cohort study illustrates the high proportion of morbidity in PRO measures. Further, we demonstrate key predictors of adverse outcomes (diabetes, previous stroke, and gender), in congruence with clinical predictors. The PRO has been demonstrated to be an informative and useful measure in stroke when considering patient-reported outcomes and has wider implications for the consideration of PROs in clinical management. Future longitudinal follow-up with PROs is needed to consider associations with long-term morbidity. Keywords: morbidity, patient-reported outcome, PRO, stroke
Procedia PDF Downloads 129
133 Intelligent Fault Diagnosis for the Connection Elements of Modular Offshore Platforms
Authors: Jixiang Lei, Alexander Fuchs, Franz Pernkopf, Katrin Ellermann
Abstract:
Within the Space@Sea project, funded by the Horizon 2020 program, an island consisting of multiple platforms was designed. The platforms are connected by ropes and fenders. The connection is critical with respect to the safety of the whole system. Therefore, fault detection systems are investigated, which could detect early warning signs for a possible failure in the connection elements. Previously, a model-based method using the Extended Kalman Filter was developed to detect the reduction of rope stiffness. This method detected several types of faults reliably, but some types of faults were much more difficult to detect. Furthermore, the model-based method is sensitive to environmental noise. When the wave height is low, a long time is needed to detect a fault, and the accuracy is not always satisfactory. In this sense, it is necessary to develop a more accurate and robust technique that can detect all rope faults under a wide range of operational conditions. Inspired by this work on the Space@Sea design, we introduce a fault diagnosis method based on deep neural networks. Our method can not only detect rope degradation by using the acceleration data from each platform but also estimate the contributions of the specific acceleration sensors using methods from explainable AI. In order to adapt to different operational conditions, the domain adaptation technique DANN is applied. The proposed model can accurately estimate rope degradation under a wide range of environmental conditions and help users understand the relationship between the output and the contributions of each acceleration sensor. Keywords: fault diagnosis, deep learning, domain adaptation, explainable AI
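A minimal PyTorch sketch of the domain-adversarial idea (DANN) mentioned above is given below; it shows only a gradient reversal layer and a toy two-head network, with layer sizes chosen arbitrarily and no claim to match the model used in the project:

import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, reversed (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DANN(nn.Module):
    def __init__(self, n_features=32, lam=1.0):
        super().__init__()
        self.lam = lam
        self.features = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.fault_head = nn.Linear(64, 2)    # rope degraded / healthy (assumed labels)
        self.domain_head = nn.Linear(64, 2)   # source / target operating condition

    def forward(self, x):
        h = self.features(x)
        return self.fault_head(h), self.domain_head(GradReverse.apply(h, self.lam))

model = DANN()
x = torch.randn(8, 32)                          # batch of acceleration features (placeholder)
fault_logits, domain_logits = model(x)
print(fault_logits.shape, domain_logits.shape)  # torch.Size([8, 2]) torch.Size([8, 2])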
Procedia PDF Downloads 179
132 Social Media Idea Ontology: A Concept for Semantic Search of Product Ideas in Customer Knowledge through User-Centered Metrics and Natural Language Processing
Authors: Martin Häusl, Maximilian Auch, Johannes Forster, Peter Mandl, Alexander Schill
Abstract:
In order to survive on the market, companies must constantly develop improved and new products. These products are designed to serve the needs of their customers in the best possible way. The creation of new products is also called innovation and is primarily driven by a company’s internal research and development department. However, a new approach has been taking place for some years now, involving external knowledge in the innovation process. This approach is called open innovation and identifies customer knowledge as the most important source in the innovation process. This paper presents a concept of using social media posts as an external source to support the open innovation approach in its initial phase, the Ideation phase. For this purpose, the social media posts are semantically structured with the help of an ontology and the authors are evaluated using graph-theoretical metrics such as density. For the structuring and evaluation of relevant social media posts, we also use the findings of Natural Language Processing, e.g. Named Entity Recognition, specific dictionaries, Triple Tagger and Part-of-Speech-Tagger. The selection and evaluation of the tools used are discussed in this paper. Using our ontology and metrics to structure social media posts enables users to semantically search these posts for new product ideas and thus gain an improved insight into external sources such as customer needs. Keywords: idea ontology, innovation management, semantic search, open information extraction
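The NLP steps named above (Named Entity Recognition and part-of-speech tagging) can be illustrated with a short spaCy sketch; the post text and the small English model are assumptions for illustration, and the triple tagging and ontology population used in the paper are not reproduced:

import spacy

nlp = spacy.load("en_core_web_sm")          # assumes the small English model is installed
post = "The battery of the X200 phone dies after two hours of video calls."
doc = nlp(post)

for ent in doc.ents:                        # named entities found in the post
    print(ent.text, ent.label_)
for token in doc:                           # part-of-speech tag per token
    print(token.text, token.pos_)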
Procedia PDF Downloads 187
131 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties and will sometimes be conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) for representing and managing uncertainty and conflict to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Then mass functions are calculated and related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors needed for combination, so that computational efficiency could be improved. A cumulative sum test is then applied on the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes. Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data
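A single-sensor sketch of the quickest-change-detection machinery is given below; it computes the Kullback-Leibler divergence between two Gaussians and runs a CUSUM test on the log-likelihood ratio. The evidence-theory fusion of multiple sensors and the pignistic-probability step are not reproduced, and the distribution parameters are placeholders:

import numpy as np

def kl_gaussian(mu0, s0, mu1, s1):
    """KL divergence KL(N(mu0, s0^2) || N(mu1, s1^2))."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

def cusum(x, mu0, s0, mu1, s1, threshold=5.0):
    """Declare a change when the cumulative log-likelihood ratio exceeds the threshold."""
    s, stat = 0.0, []
    for xi in x:
        llr = np.log(s0 / s1) + (xi - mu0)**2 / (2 * s0**2) - (xi - mu1)**2 / (2 * s1**2)
        s = max(0.0, s + llr)
        stat.append(s)
        if s > threshold:
            return len(stat) - 1, stat        # index at which the change is declared
    return None, stat

# Placeholder pre-/post-change models and data with a change at t = 300.
mu0, s0, mu1, s1 = 0.0, 1.0, 1.5, 1.0
x = np.concatenate([np.random.normal(mu0, s0, 300), np.random.normal(mu1, s1, 200)])
print("KL(pre || post) =", round(kl_gaussian(mu0, s0, mu1, s1), 3))
print("change declared at sample:", cusum(x, mu0, s0, mu1, s1)[0])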
Procedia PDF Downloads 334
130 Photocaged Carbohydrates: Versatile Tools for Biotechnological Applications
Authors: Claus Bier, Dennis Binder, Alexander Gruenberger, Dagmar Drobietz, Dietrich Kohlheyer, Anita Loeschcke, Karl Erich Jaeger, Thomas Drepper, Joerg Pietruszka
Abstract:
Light-absorbing chromophoric systems are important optogenetic tools for biotechnical and biophysical investigations. Processes such as fluorescence or photolysis can be triggered by light absorption of chromophores. These play a central role in life science. Photocaged compounds belong to such chromophoric systems. The photo-labile protecting groups enable them to release biologically active substances with high temporal and spatial resolution. The properties of photocaged compounds are specified by the characteristics of the caging group as well as the characteristics of the linked effector molecule. In our research, we work with different types of photo-labile protecting groups and various effector molecules, giving us possible access to a large library of caged compounds. As a function of the caged effector molecule, a nearly limitless number of biological systems can be directed. Our main interest focuses on photocaging carbohydrates (e.g. arabinose) and their derivatives as effector molecules. Based on these resulting photocaged compounds, a precisely controlled photoinduced gene expression will give us access to studies of numerous biotechnological and synthetic biological applications. It could be shown that the regulation of gene expression via light is possible with photocaged carbohydrates, achieving higher-order control over these processes. With the one-step cleavable photocaged carbohydrate, a homogeneous expression was achieved in comparison to free carbohydrates. Keywords: bacterial gene expression, biotechnology, caged compounds, carbohydrates, optogenetics, photo-removable protecting group
Procedia PDF Downloads 225
129 Music of a Film City: Interwar Europe in Los Angeles, 1930s
Authors: Alexander Rosenblatt
Abstract:
The musical culture of the city of Los Angeles, as it is seen today, developed not without the influence of outstanding musicians who came from Europe during the period between the world wars. The combination of European modernist ideas with American musical culture, which differed in many ways from European musical culture, led to unique results. During the 1920s and even more so in the 1930s, members of the Austrian-German artistic intelligentsia, particularly those of Jewish origin who felt insecure in their homeland, began to look for a safer place. The United States has become such a place for many, and many of them chose the second largest metropolis—Los Angeles. The most notable figure in this group was the modernist composer Arnold Schoenberg. Other famous musicians were conductors Otto Klemperer and Bruno Walter. The study focused on how these people acclimated to a city whose culture and business revolved around film production; what place the conductors Klemperer and Walter occupied in the city, state, and country; how Schoenberg, whose musical style was little understood by the American public, was able to realize himself; what path he took when he was accepted to two universities as a professor of counterpoint and composition; and whether he revised his own views on the development of Western music. Another aspect was the study of how the composer’s memory was preserved in the universities where he taught. The study is based primarily on materials found in four libraries of two universities located in Los Angeles, UCLA and USC, during my tenure as a visiting scholar at USC Thornton School of Music (August 2023), to be completed during my upcoming visit there in August-September 2024, as well as on interviews with people active in efforts to keep Schoenberg’s memory alive on the USC Campus.Keywords: los angeles, filmmaking, immigrant musicians, arnold schoenberg, otto klemperer, bruno walter
Procedia PDF Downloads 26
128 Biological Studies of N-O Donor 4-Acylpyrazolone Heterocycle and Its Pd/Pt Complexes of Therapeutic Importance
Authors: Omoruyi Gold Idemudia, Alexander P. Sadimenko
Abstract:
The synthesis of N-heterocycles with novel properties, having broad-spectrum biological activities that may become alternative medicinal drugs, has been attracting a lot of research attention due to the emergence of medicinal drugs' limitations, such as disease resistance and toxicity effects, among others. Acylpyrazolones have been employed as pharmaceuticals as well as analytical reagents, and their application as coordination complexes with transition metal ions has been well established. By way of a condensation reaction with amines, acylpyrazolone ketones form a more chelating and superior group of compounds known as azomethines. 4-propyl-3-methyl-1-phenyl-2-pyrazolin-5-one was reacted with phenylhydrazine to get a new phenylhydrazone, which was further reacted with aqueous solutions of palladium and platinum salts in an effort towards the discovery of transition-metal-based synthetic drugs. The compounds were characterized by means of analytical and spectroscopic methods, thermogravimetric analysis (TGA), as well as X-ray crystallography. 4-propyl-3-methyl-1-phenyl-2-pyrazolin-5-one phenylhydrazone crystallizes in a triclinic crystal system with a P-1 (No. 2) space group based on X-ray crystallography. The bidentate ON ligand formed a square planar geometry on coordinating with metal ions based on FTIR, electronic and NMR spectra as well as magnetic moments. Reported compounds showed antibacterial activities against the selected bacterial isolates using the disc diffusion technique at 20 mg/ml in triplicate. The metal complexes exhibited a better antibacterial activity, with the platinum complex having an MIC value of 0.63 mg/ml. Similarly, the ligand and complexes also showed antioxidant scavenging properties against the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical at 0.5 mg/ml relative to ascorbic acid (standard drug). Keywords: acylpyrazolone, antibacterial studies, metal complexes, phenylhydrazone, spectroscopy
Procedia PDF Downloads 250
127 A Review on Benzo(a)pyrene Emission Factors from Biomass Combustion
Authors: Franziska Klauser, Manuel Schwabl, Alexander Weissinger, Christoph Schmidl, Walter Haslinger, Anne Kasper-Giebl
Abstract:
Benzo(a)pyrene (BaP) is the most widely investigated representative of Polycyclic Aromatic Hydrocarbons (PAH) as well as one of the most toxic compounds in this group. Since 2013, a limit value for the BaP concentration in ambient air has been applied in the European Union, set to a yearly average value of 1 ng m⁻³. Several reports show that in some regions, even where industry and traffic are of minor impact, this threshold is regularly exceeded. This is taken as proof that biomass combustion for heating purposes contributes significantly to BaP pollution. Several investigations have already been carried out on the BaP emission behavior of biomass combustion furnaces, mostly focusing on a certain aspect such as the influence of wood type, operation type or technology type. However, an overarching view of BaP emission patterns from biomass combustion, aggregating values determined also in recent studies, has not been presented so far. The combination of determined values allows a better understanding of the BaP emission behavior from biomass combustion. In this work, the review conclusions are drawn from the combination of outcomes from different publications. In two examples it was shown that technical progress leads to 10- to 100-fold lower BaP emissions from modern furnaces compared to old technologies of equivalent type. It was also indicated that the operation with pellets or wood chips exhibits clearly lower BaP emission factors compared to operation with log wood. However, the BaP emission level from automatic furnaces is strongly impacted by the kind of operation. This work delivers an overview of BaP emission factors from different biomass combustion appliances, from different operation modes and from the combustion of different fuel and wood types. The main impact factors are depicted, and suggestions for low BaP emission biomass combustion are derived. As one result, possible fields of investigation concerning BaP emissions from biomass combustion that seem most important to clarify are suggested. Keywords: benzo(a)pyrene, biomass, combustion, emission, pollution
Procedia PDF Downloads 353
126 Association of Vascular Endothelial Growth Factor Gene +405 C>G and -460 T>C Polymorphism with Type 2 Diabetic Foot Ulcer Patient in Cipto Mangunkusumo National Hospital Jakarta
Authors: Dedy Pratama, Akhmadu Muradi, Hilman Ibrahim, Raden Suhartono, Alexander Jayadi Utama, Patrianef Darwis, S. Dwi Anita, Luluk Yunaini, Kemas Dahlan
Abstract:
Introduction: The vascular endothelial growth factor (VEGF) gene shows an association with various angiogenesis conditions, including diabetic foot ulcer (DFU) disease. In this study, we examined VEGF gene polymorphisms associated with DFU. Methods: A case-control study of VEGF gene +405 C>G and -460 T>C polymorphisms in type 2 diabetes mellitus (DM) patients with diabetic foot ulcer (DFU) was conducted in Cipto Mangunkusumo National Hospital (RSCM) Jakarta from June to December 2016. Results: There were 203 patients, 102 patients with DFU and 101 patients without DFU. 49.8% of the total sample was male and 50.2% female, with a mean age of 56.06 years. The wild-type genotype CC of VEGF +405 C>G was found in 6.9% of respondents, the mutant heterozygote CG in 69.5% and the mutant homozygote GG in 19.7%. Cumulatively, there were 6.9% wild-type and 85.2% mutant genotypes, and 3.9% of the total blood samples could not be detected by PCR-RFLP. For VEGF +405 C>G, the C allele frequency was 43% and the G allele frequency was 57%. The genotype distribution of VEGF -460 T>C was wild-type TT 42.9%, mutant heterozygote TC 37.9% and mutant homozygote CC 13.3%. Cumulatively, there were 42.9% wild-type and 51% mutant genotypes. For VEGF -460 T>C, the allele distribution was 62% T and 38% C. Conclusion: In this study, we found the allele distribution of VEGF +405 C>G to be C 43% and G 57%, and of VEGF -460 T>C to be T 62% and C 38%. We propose that the G allele in VEGF +405 C>G can act as a protective allele and, on the other hand, the T allele in VEGF -460 T>C could act as a risk factor for DFU in diabetic patients. Keywords: diabetic foot ulcer, diabetes mellitus, polymorphism, VEGF
Procedia PDF Downloads 294
125 Virtue, Truth, Freedom, And The History Of Philosophy
Authors: Ashley DelCorno
Abstract:
GEM Anscombe’s 1958 essay Modern Moral Philosophy and the tradition of virtue ethics that followed has given rise to the restoration (or, more plainly, the resurrection) of Aristotle as something of an authority figure. Alisdair MacIntyre and Martha Nussbaum are proponents, for example, not just of Aristotle’s relevancy but also of his apparent implicit authority. That said, it’s not clear that the schema imagined by virtue ethicists accurately describes moral life or that it does not inadvertently work to impoverish genuine decision-making. If the label ‘virtue’ is categorically denied to some groups (while arbitrarily afforded to others), it can only turn on itself, thus rendering ridiculous its own premise. Likewise, as an inescapable feature of virtue ethics, Aristotelean binaries like ‘virtue/vice’ and ‘voluntary/involuntary’ offer up false dichotomies that may seriously compromise an agent’s ability to conceptualize choices that are truly free and rooted in meaningful criteria. Here, this topic is analyzed through a feminist lens predicated on the known paradoxes of patriarchy. The work of feminist theorists Jacqui Alexander, Katharine Angel, Simone de Beauvoir, bell hooks, Audre Lorde, Imani Perry, and Amia Srinivasan serves as important guideposts, and the argument here is built from a key tenet of black feminist thought regarding scarcity and possibility. Above all, it’s clear that though the philosophical tradition of virtue ethics presents itself as recovering the place of agency in ethics, its premises possess crippling limitations toward the achievement of this goal. These include, most notably, virtue ethics’ binding analysis of history, as well as its axiomatic attachment to obligatory clauses, problematic reading-in of Aristotle and arbitrary commitment to predetermined and competitively patriarchal ideas of what counts as a virtue.Keywords: feminist history, the limits of utopic imagination, curatorial creation, truth, virtue, freedom
Procedia PDF Downloads 80
124 Digital Environment as a Factor of the City's Competitiveness in Attracting Tourists: The Case of Yekaterinburg
Authors: Alexander S. Burnasov, Anatoly V. Stepanov, Maria Y. Ilyushkina
Abstract:
In the conditions of transition to the digital economy, the digital environment of the city becomes one of the key factors of its tourism attractiveness. Modern digital environment makes travelling more accessible, improves the quality of travel services and the attractiveness of many tourist destinations. The digitalization of the industry allows to use resources more efficiently, to simplify business processes, to minimize risks, and to improve travel safety. The city promotion as a tourist destination in the foreign market becomes decisive in the digital environment. Information technologies are extremely important for the functioning of not only any tourist enterprise but also the city as a whole. In addition to solving traditional problems, it is also possible to implement some innovations from the tourism industry, such as the availability of city services in international systems of booking tickets and booking rooms in hotels, the possibility of early booking of theater and museum tickets, the possibility of non-cash payment by cards of international payment systems, Internet access in the urban environment for travelers. The availability of the city's digital services makes it possible to reduce ordering costs, contributes to the optimal selection of tourist products that meet the requirements of the tourist, provides increased transparency of transactions. The users can compare prices, features, services, and reviews of the travel service. The ability to share impressions with friends thousands of miles away directly affects the image of the city. It is possible to promote the image of the city in the digital environment not only through world-scale events (such as World Cup 2018, international summits, etc.) but also through the creation and management of services in the digital environment aimed at supporting tourism services, which will help to improve the positioning of the city in the global tourism market.Keywords: competitiveness, digital environment, travelling, Yekaterinburg
Procedia PDF Downloads 136
123 Electric Field-Induced Deformation of Particle-Laden Drops and Structuring of Surface Particles
Authors: Alexander Mikkelsen, Khobaib Khobaib, Zbigniew Rozynek
Abstract:
Drops covered by particles have found important uses in various fields, ranging from stabilization of emulsions to production of new advanced materials. Particles at drop interfaces can be interlocked to form solid capsules with properties tailored for a myriad of applications. Despite the huge potential of particle-laden drops and capsules, the knowledge of their deformation and stability are limited. In this regard, we contribute with experimental studies on the deformation and manipulation of silicone oil drops covered with micrometer-sized particles subjected to electric fields. A mixture of silicone oil and particles were immersed in castor oil using a mechanical pipette, forming millimeter sized drops. The particles moved and adsorbed at the drop interfaces by sedimentation, and were structured at the interface by electric field-induced electrohydrodynamic flows. When applying a direct current electric field, free charges accumulated at the drop interfaces, yielding electric stress that deformed the drops. In our experiments, we investigated how particle properties affected drop deformation, break-up, and particle structuring. We found that by increasing the size of weakly-conductive clay particles, the drop shape can go from compressed to stretched out in the direction of the electric field. Increasing the particle size and electrical properties were also found to weaken electrohydrodynamic flows, induce break-up of drops at weaker electric field strengths and structure particles in chains. These particle parameters determine the dipolar force between the interfacial particles, which can yield particle chaining. We conclude that the balance between particle chaining and electrohydrodynamic flows governs the observed drop mechanics.Keywords: drop deformation, electric field induced stress, electrohydrodynamic flows, particle structuring at drop interfaces
Procedia PDF Downloads 203
122 Assessment of the Electrical, Mechanical, and Thermal Nociceptive Thresholds for Stimulation and Pain Measurements at the Bovine Hind Limb
Authors: Samaneh Yavari, Christiane Pferrer, Elisabeth Engelke, Alexander Starke, Juergen Rehage
Abstract:
Background: Thermal, electrical, and mechanical nociceptive thresholds are commonly used to evaluate local anesthesia in many species, for instance, cow, horse, cat, dog, rabbit, and so on. Due to the lack of investigations evaluating and/or validating these nociceptive thresholds, our plan was to compare two foot local anesthesia methods: Intravenous Regional Anesthesia (IVRA) and our modified four-point Nerve Block Anesthesia (NBA). Materials and Methods: Eight healthy nonpregnant nondairy Holstein Frisian cows in a cross-over study design were selected for this study. All cows were divided into two groups to receive the two local anesthesia techniques of IVRA and our modified four-point NBA. The three stimuli (thermal, electrical, and mechanical force) and pinpricks were applied to evaluate the quality of the local anesthesia methods before and after local anesthesia application. Results: The statistical evaluation demonstrated that our four-point NBA qualifies for selection as a standard foot local anesthesia. However, the recorded results of our study revealed no significant difference between the two local anesthesia techniques of IVRA and modified four-point NBA with respect to the quality and duration of anesthesia assessed by electrical, mechanical and thermal nociceptive stimuli. Conclusion and discussion: All three nociceptive threshold stimuli (electrical, mechanical and heat) can be applied to measure and evaluate the efficacy of foot local anesthesia in dairy cows. However, our study revealed no superiority among the three nociceptive methods for evaluating the duration and quality of bovine foot local anesthesia. Veterinarians can use any of the heat, mechanical, and electrical methods to investigate the duration and quality of their selected anesthesia method. Keywords: mechanical, thermal, electrical threshold, IVRA, NBA, hind limb, dairy cow
Procedia PDF Downloads 243
121 Liquid-Liquid Plug Flow Characteristics in Microchannel with T-Junction
Authors: Anna Yagodnitsyna, Alexander Kovalev, Artur Bilsky
Abstract:
The efficiency of certain technological processes in two-phase microfluidics, such as emulsion production, nanomaterial synthesis, nitration, extraction processes, etc., depends on the two-phase flow regimes in microchannels. For practical application in chemistry and biochemistry, it is very important to predict the expected flow pattern for a large variety of fluids and channel geometries. In the case of immiscible liquids, the plug flow is a typical and optimal regime for chemical reactions and needs to be predicted by empirical data or correlations. In this work, flow patterns of immiscible liquid-liquid flow in a rectangular microchannel with T-junction are investigated. Three liquid-liquid flow systems are considered, viz. kerosene – water, paraffin oil – water and castor oil – paraffin oil. Different flow patterns such as parallel flow, slug flow, plug flow, dispersed (droplet) flow, and rivulet flow are observed for different velocity ratios. A new flow pattern of parallel flow with a steady wavy interface (serpentine flow) has been found. It is shown that flow pattern maps based on Weber numbers for different liquid-liquid systems do not match well. The Weber number multiplied by the Ohnesorge number is proposed as a parameter to generalize flow maps. Flow maps based on this parameter superpose well for all liquid-liquid systems of this work and other experiments. Plug length and velocity are measured for the plug flow regime. When the dispersed liquid wets the channel walls, the plug length cannot be predicted by known empirical correlations. By means of the particle tracking velocimetry technique, instantaneous velocity fields in the plug flow regime were measured. Flow circulation inside the plug was calculated using the velocity data, which can be useful for mass flux prediction in chemical reactions. Keywords: flow patterns, hydrodynamics, liquid-liquid flow, microchannel
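The generalizing parameter mentioned above can be written as the product of the Weber and Ohnesorge numbers; the short sketch below uses the common definitions We = ρu²D/σ and Oh = μ/√(ρσD) with illustrative fluid properties, and the choice of characteristic phase and length scale follows the usual convention rather than necessarily the exact one adopted in the paper:

import math

def weber(rho, u, D, sigma):
    """Weber number: inertia vs. interfacial tension."""
    return rho * u**2 * D / sigma

def ohnesorge(mu, rho, sigma, D):
    """Ohnesorge number: viscous forces vs. inertia and interfacial tension."""
    return mu / math.sqrt(rho * sigma * D)

# Illustrative values only (roughly water in a 200-micrometre channel).
rho, mu, sigma = 1000.0, 1.0e-3, 0.02   # kg/m^3, Pa*s, N/m
D, u = 200e-6, 0.05                     # channel size (m), superficial velocity (m/s)

We, Oh = weber(rho, u, D, sigma), ohnesorge(mu, rho, sigma, D)
print(f"We = {We:.3e}, Oh = {Oh:.3e}, We*Oh = {We * Oh:.3e}")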
Procedia PDF Downloads 394