Search results for: heading time
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18205

13165 Designing Stochastic Non-Invasively Applied DC Pulses to Suppress Tremors in Multiple Sclerosis by Computational Modeling

Authors: Aamna Lawrence, Ashutosh Mishra

Abstract:

Tremors occur in 60% of the patients who have Multiple Sclerosis (MS), the most common demyelinating disease that affects the central and peripheral nervous system, and are the primary cause of disability in young adults. While pharmacological agents provide minimal benefits, surgical interventions like Deep Brain Stimulation and Thalamotomy are riddled with dangerous complications which make non-invasive electrical stimulation an appealing treatment of choice for dealing with tremors. Hence, we hypothesized that if the non-invasive electrical stimulation parameters (mainly frequency) can be computed by mathematically modeling the nerve fibre to take into consideration the minutest details of the axon morphologies, tremors due to demyelination can be optimally alleviated. In this computational study, we have modeled the random demyelination pattern in a nerve fibre that typically manifests in MS using the High-Density Hodgkin-Huxley model with suitable modifications to account for the myelin. The internode of the nerve fibre in our model could have up to ten demyelinated regions each having random length and myelin thickness. The arrival time of action potentials traveling the demyelinated and the normally myelinated nerve fibre between two fixed points in space was noted, and its relationship with the nerve fibre radius ranging from 5µm to 12µm was analyzed. It was interesting to note that there were no overlaps between the arrival time for action potentials traversing the demyelinated and normally myelinated nerve fibres even when a single internode of the nerve fibre was demyelinated. The study gave us an opportunity to design DC pulses whose frequency of application would be a function of the random demyelination pattern to block only the delayed tremor-causing action potentials. The DC pulses could be delivered to the peripheral nervous system non-invasively by an electrode bracelet that would suppress any shakiness beyond it thus paving the way for wearable neuro-rehabilitative technologies.
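For readers unfamiliar with the underlying membrane model, the sketch below integrates a single-compartment Hodgkin-Huxley patch in Python using the classic squid-axon constants; it is only an illustrative baseline and not the authors' spatially extended, myelin-modified fibre model with randomized demyelinated internodes.

```python
import numpy as np

# Classic squid-axon Hodgkin-Huxley constants (not the authors' fibre-specific values)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3       # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387              # mV

def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

def simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of a single membrane patch (time in ms)."""
    steps = int(t_max / dt)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    trace = np.empty(steps)
    for i in range(steps):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace[i] = V
    return trace

# Upward zero-crossings give the action-potential onset times (ms)
spike_times = np.where(np.diff((simulate() > 0).astype(int)) == 1)[0] * 0.01
print("Spike onset times (ms):", spike_times)
```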

Keywords: demyelination, Hodgkin-Huxley model, non-invasive electrical stimulation, tremor

Procedia PDF Downloads 128
13164 Structural and Functional Correlates of Reaction Time Variability in a Large Sample of Healthy Adolescents and Adolescents with ADHD Symptoms

Authors: Laura O’Halloran, Zhipeng Cao, Clare M. Kelly, Hugh Garavan, Robert Whelan

Abstract:

Reaction time (RT) variability on cognitive tasks provides an index of the efficiency of executive control processes (e.g. attention and inhibitory control) and is considered to be a hallmark of clinical disorders, such as attention-deficit hyperactivity disorder (ADHD). Increased RT variability is associated with structural and functional brain differences in children and adults with various clinical disorders, as well as poorer task performance accuracy. Furthermore, the strength of functional connectivity across various brain networks, such as the negative relationship between the task-negative default mode network and task-positive attentional networks, has been found to reflect differences in RT variability. Although RT variability may provide an index of attentional efficiency, as well as being a useful indicator of neurological impairment, the brain substrates associated with RT variability remain relatively poorly defined, particularly in healthy samples. Method: Firstly, we used the intra-individual coefficient of variation (ICV) as an index of RT variability from “Go” responses on the Stop Signal Task. We then examined the functional and structural neural correlates of ICV in a large sample of 14-year-old healthy adolescents (n=1719). Of these, a subset had elevated symptoms of ADHD (n=80) and was compared to a matched non-symptomatic control group (n=80). Brain activity during successful and unsuccessful inhibitions and gray matter volume were each compared with ICV. A mediation analysis was conducted to examine whether specific brain regions mediated the relationship between ADHD symptoms and ICV. Lastly, we looked at functional connectivity across various brain networks and quantified both positive and negative correlations during “Go” responses on the Stop Signal Task. Results: The brain data revealed that higher ICV was associated with increased structural and functional measures in the precentral gyrus in the whole sample and in adolescents with ADHD symptoms. Lower ICV was associated with lower activation in the anterior cingulate cortex (ACC) and medial frontal gyrus in the whole sample and in the control group. Furthermore, our results indicated that activation in the precentral gyrus (Brodmann Area 4) mediated the relationship between ADHD symptoms and behavioural ICV. Conclusion: This is the first study to investigate the functional and structural correlates of ICV collectively in a large adolescent sample. Our findings demonstrate a concurrent increase in brain structure and function within task-active prefrontal networks as a function of increased RT variability. Furthermore, structural and functional activation patterns in the ACC and medial frontal gyrus play a role in optimizing top-down control in order to maintain task performance. Our results also evidenced clear differences in brain morphometry between adolescents with symptoms of ADHD but without clinical diagnosis and typically developing controls. Our findings shed light on specific functional and structural brain regions that are implicated in ICV and yield insights into effective cognitive control in healthy individuals and in clinical groups.
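ICV is conventionally computed as the standard deviation of correct Go-trial reaction times divided by their mean. A minimal sketch (the exact trial-filtering rules used by the authors are not stated in the abstract):

```python
import numpy as np

def icv(rt_ms, correct):
    """Intra-individual coefficient of variation of reaction time:
    SD of correct Go RTs divided by their mean."""
    rt = np.asarray(rt_ms, dtype=float)[np.asarray(correct, dtype=bool)]
    return rt.std(ddof=1) / rt.mean()

# Example: one participant's Go-trial RTs (ms) and accuracy flags
print(icv([412, 388, 455, 501, 397, 430], [1, 1, 1, 0, 1, 1]))
```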

Keywords: ADHD, fMRI, reaction-time variability, default mode, functional connectivity

Procedia PDF Downloads 255
13163 Development of Ketorolac Tromethamine Encapsulated Stealth Liposomes: Pharmacokinetics and Bio Distribution

Authors: Yasmin Begum Mohammed

Abstract:

Ketorolac tromethamine (KTM) is a non-steroidal anti-inflammatory drug with potent analgesic and anti-inflammatory activity due to its prostaglandin-related inhibitory effect. It is a non-selective cyclo-oxygenase inhibitor. The drug is currently used orally and intramuscularly in multiple divided doses, clinically for the management of arthritis, cancer pain, and post-surgical pain, and in the treatment of migraine. KTM has a short biological half-life of 4 to 6 hours, which necessitates frequent dosing to retain the action. The frequent occurrence of gastrointestinal bleeding, perforation, peptic ulceration, and renal failure has led to the development of other drug delivery strategies for the appropriate delivery of KTM. The ideal solution would be to target the drug only to the cells or tissues affected by the disease. Drug targeting could be achieved effectively by liposomes, which are biocompatible and biodegradable. The aim of the study was to develop a parenteral liposome formulation of KTM with improved efficacy and reduced side effects by targeting the inflammation due to arthritis. PEG-anchored (stealth) and non-PEG-anchored liposomes were prepared by the thin film hydration technique followed by an extrusion cycle and characterized in vitro and in vivo. Stealth liposomes (SLs) exhibited an increase in percent encapsulation efficiency (94%) and 52% drug retention during release studies over 24 h, with good stability for a period of 1 month at -20°C and 4°C. SLs showed a maximum of about 55% edema inhibition with a significant analgesic effect. SLs produced marked differences over non-SL formulations, with an increase in the area under the plasma concentration-time curve, t₁/₂, and mean residence time, and reduced clearance. 0.3% of the drug was detected in the arthritis-induced paw, with significantly reduced drug localization in the liver, spleen, and kidney for SLs when compared to other conventional liposomes. Thus, SLs help to increase the therapeutic efficacy of KTM by increasing its targeting potential at the inflammatory region.

Keywords: biodistribution, ketorolac tromethamine, stealth liposomes, thin film hydration technique

Procedia PDF Downloads 295
13162 Translation Directionality: An Eye Tracking Study

Authors: Elahe Kamari

Abstract:

Research on translation processes has been conducted for more than 20 years, investigating various issues and using different research methodologies. Most recently, researchers have started to use eye tracking to study translation processes. They believe that the observable, measurable data that can be gained from eye tracking are indicators of unobservable cognitive processes happening in the translators’ minds during translation tasks. The aim of this study was to investigate directionality in translation processes through eye tracking. The following hypotheses were tested: 1) processing the target text requires more cognitive effort than processing the source text, in both directions of translation; 2) L2 translation tasks on the whole require more cognitive effort than L1 tasks; 3) cognitive resources allocated to the processing of the source text are higher in L1 translation than in L2 translation; 4) cognitive resources allocated to the processing of the target text are higher in L2 translation than in L1 translation; and 5) in both directions, non-professional translators invest more cognitive effort in translation tasks than do professional translators. The performance of a group of 30 male professional translators was compared with that of a group of 30 male non-professional translators. All the participants translated two comparable texts, one into their L1 (Persian) and the other into their L2 (English). The eye tracker measured gaze time, average fixation duration, total task length and pupil dilation. These variables are assumed to measure the cognitive effort allocated to the translation task. The data derived from eye tracking confirmed only the first hypothesis. This hypothesis was confirmed by all the relevant indicators: gaze time, average fixation duration and pupil dilation. The second hypothesis, that L2 translation tasks require allocation of more cognitive resources than L1 translation tasks, was not confirmed by all four indicators. The third hypothesis, that source text processing requires more cognitive resources in L1 translation than in L2 translation, and the fourth hypothesis, that target text processing requires more cognitive effort in L2 translation than in L1 translation, were not confirmed. It seems that source text processing in L2 translation can be just as demanding as in L1 translation. The final hypothesis, that non-professional translators allocate more cognitive resources for the same translation tasks than do professionals, was partially confirmed. One of the indicators, average fixation duration, indicated higher cognitive effort-related values for professionals.

Keywords: translation processes, eye tracking, cognitive resources, directionality

Procedia PDF Downloads 463
13161 A Parallel Implementation of k-Means in MATLAB

Authors: Dimitris Varsamis, Christos Talagkozis, Alkiviadis Tsimpiris, Paris Mastorocostas

Abstract:

The aim of this work is a parallel implementation of k-means in MATLAB in order to reduce execution time. Specifically, a new MATLAB function for the serial k-means algorithm is developed, which meets all the requirements for conversion to a MATLAB function with parallel computations. Additionally, two different variants for the definition of initial values are presented. Subsequently, the parallel approach is presented. Finally, performance tests of the computation times with respect to the number of features and classes are illustrated.
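The paper's implementation is in MATLAB; purely as an illustration of the parallelization idea (splitting the assignment step across workers and reducing partial sums in the update step), a minimal Python sketch might look like this:

```python
import numpy as np
from multiprocessing import Pool

def assign_chunk(args):
    """Assignment step for one data chunk: nearest centroid plus partial sums/counts."""
    X, C = args
    labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
    sums, counts = np.zeros_like(C), np.zeros(len(C))
    for k in range(len(C)):
        sums[k] = X[labels == k].sum(axis=0)
        counts[k] = (labels == k).sum()
    return sums, counts

def parallel_kmeans(X, k, iters=20, workers=4, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]        # random initial centroids
    chunks = np.array_split(X, workers)
    with Pool(workers) as pool:
        for _ in range(iters):
            parts = pool.map(assign_chunk, [(ch, C) for ch in chunks])
            sums = sum(p[0] for p in parts)
            counts = sum(p[1] for p in parts)
            C = sums / np.maximum(counts, 1)[:, None]   # update step (empty clusters left as-is)
    return C

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(10_000, 5))
    print(parallel_kmeans(X, k=3))
```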

Keywords: k-means algorithm, clustering, parallel computations, MATLAB

Procedia PDF Downloads 385
13160 Energy Trading for Cooperative Microgrids with Renewable Energy Resources

Authors: Ziaullah, Shah Wahab Ali

Abstract:

Micro-grids equipped with heterogeneous energy resources embody the idea of small-scale distributed energy management (DEM). DEM helps in minimizing transmission and operation costs and in managing power and peak load demands. Micro-grids are collections of small, independently controllable power-generating units and renewable energy resources. Micro-grids also enable active customer participation by giving customers access to real-time information and control. The capability of fast restoration after faults, the integration of renewable energy resources, and Information and Communication Technologies (ICT) make the micro-grid an ideal system for distributed power. Micro-grids can have a bank of energy storage devices. The energy management system of a micro-grid can perform real-time forecasting of renewable resources, energy storage elements and controllable loads to produce proper short-term scheduling that minimizes total operating costs. We present a review of existing micro-grid optimization objectives, constraints, solution approaches and tools used for energy management. Cost-benefit analysis of micro-grids reveals that cooperation among different micro-grids can play a vital role in reducing energy import costs and improving system stability. Cooperative micro-grid energy trading is an approach to distributed electrical energy resources that gives local energy demands more control over the optimization of power resources and their use. Cooperation among different micro-grids also brings interconnectivity and power trading issues. The literature shows that cooperative micro-grid energy trading remains an open area of research. In this paper, we propose and formulate an efficient energy management/trading module for interconnected micro-grids. It is believed that this research will open new directions for energy trading in cooperative and interconnected micro-grids.
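The abstract does not give the formulation of the proposed module. As a purely illustrative sketch of the kind of cost-minimizing dispatch and trading problem involved, the snippet below solves a one-hour snapshot for two interconnected micro-grids with scipy; all demands, renewable outputs, prices, and tie-line limits are made-up numbers.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical one-hour snapshot for two interconnected micro-grids
demand = np.array([120.0, 80.0])       # kW
renewables = np.array([60.0, 110.0])   # kW available locally at zero marginal cost
p_grid, p_trade, tie_cap = 0.30, 0.12, 50.0   # $/kWh import, $/kWh inter-grid, kW tie limit

# Decision variables: x = [import_1, import_2, transfer_1to2, transfer_2to1], all >= 0
c = [p_grid, p_grid, p_trade, p_trade]
A_ub = [[-1, 0, 1, -1],    # MG1 balance: renewables + import + inflow - outflow >= demand
        [0, -1, -1, 1]]    # MG2 balance
b_ub = renewables - demand
bounds = [(0, None), (0, None), (0, tie_cap), (0, tie_cap)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("imports (kW):", res.x[:2], " transfers (kW):", res.x[2:], " cost ($/h):", res.fun)
```

In this toy case the surplus renewable energy of one micro-grid is traded to the other before any grid import is purchased, which is the cooperation benefit the abstract argues for.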

Keywords: distributed energy management, information and communication technologies, microgrid, energy management

Procedia PDF Downloads 375
13159 Alternative Energy and Carbon Source for Biosurfactant Production

Authors: Akram Abi, Mohammad Hossein Sarrafzadeh

Abstract:

Because of their several advantages over chemical surfactants, biosurfactants have attracted growing interest over the past decades. Advantages such as lower toxicity, higher biodegradability, higher selectivity and applicability at extreme temperatures and pH enable them to be used in a variety of applications, such as enhanced oil recovery and environmental and pharmaceutical applications. Bacillus subtilis produces a cyclic lipopeptide, called surfactin, which is one of the most powerful biosurfactants, with the ability to decrease the surface tension of water from 72 mN/m to 27 mN/m. In addition to its biosurfactant character, surfactin exhibits interesting biological activities such as inhibition of fibrin clot formation, lysis of erythrocytes and several bacterial spheroplasts, and antiviral, anti-tumoral and antibacterial properties. Surfactin is an antibiotic substance and has recently been shown to possess anti-HIV activity. However, the application of biosurfactants is limited by their high production cost. The cost can be reduced by optimizing biosurfactant production using cheap feedstock. Utilization of inexpensive substrates and unconventional carbon sources like urban or agro-industrial wastes is a promising strategy to decrease the production cost of biosurfactants. With suitable engineering optimization and microbiological modifications, these wastes can be used as substrates for large-scale production of biosurfactants. In an effort to fulfill this purpose, in this work we utilized olive oil as a second carbon source and yeast extract as a second nitrogen source to investigate their effect on improving both biomass and biosurfactant production in Bacillus subtilis cultures. Since the turbidity of the culture was affected by the presence of the oil, optical density was compromised and could no longer be used as an index of growth and biomass concentration. Therefore, cell dry weight measurements, with the necessary precautions to remove oil drops and prevent interference with the biomass weight, were carried out to monitor biomass concentration during the growth of the bacterium. The surface tension and critical micelle dilutions (CMD-1, CMD-2) were considered as indirect measures of biosurfactant production. Distinctive and promising results were obtained in the cultures containing olive oil compared to cultures without it: a more than twofold increase in biomass production (from 2 g/l to 5 g/l) and a considerable reduction in surface tension, down to 40 mN/m, at surprisingly early hours of culture time (only 5 h after inoculation). This early onset of biosurfactant production is especially interesting when compared to conventional cultures, in which this reduction in surface tension is not obtained until 30 hours of culture time. Reducing the production time is a very prominent result to be considered for large-scale process development. Furthermore, these results can be used to develop strategies for the utilization of agro-industrial wastes (such as olive oil mill residue, molasses, etc.) as cheap and easily accessible feedstocks to decrease the high costs of biosurfactant production.

Keywords: agro-industrial waste, bacillus subtilis, biosurfactant, fermentation, second carbon and nitrogen source, surfactin

Procedia PDF Downloads 301
13158 Short-Term Association of In-vehicle Ultrafine Particles and Black Carbon Concentrations with Respiratory Health in Parisian Taxi Drivers

Authors: Melissa Hachem, Maxime Loizeau, Nadine Saleh, Isabelle Momas, Lynda Bensefa-Colas

Abstract:

Professional drivers are exposed inside their vehicles to high levels of air pollutants due to the considerable time they spend close to motor vehicle emissions. Little is known about the adverse respiratory health effects of ultrafine particles (UFP) or black carbon (BC) compared to the regulated pollutants. We aimed to study the short-term associations between UFP and BC concentrations inside vehicles and (1) the onset of mucosal irritation and (2) the acute changes in lung function of Parisian taxi drivers during a working day. An epidemiological study was carried out on 50 taxi drivers in Paris. UFP and BC were measured inside their vehicles with DiSCmini® and microAeth®, respectively. On the same day, the frequency and the severity of nose, eye, and throat irritations were self-reported by each participant, and a spirometry test was performed before and after the work shift. Multivariate analysis was used to evaluate the associations between in-taxi UFP and BC concentrations and mucosal irritation and lung function, after adjustment for potential confounders. In-taxi UFP concentrations ranged from 17.9 to 37.9 × 10³ particles/cm³ and BC concentrations from 2.2 to 3.9 μg/m³, during a mean of 9 ± 2 working hours. Significant dose-response relationships were observed between in-taxi UFP concentrations and both nasal irritation and lung function. The increase in in-taxi UFP (per interquartile range of 20 × 10³ particles/cm³) was associated with an increase in nasal irritation (adjusted OR = 6.27 [95% CI: 1.02 to 38.62]) and with a reduction in forced expiratory flow at 25–75% by −7.44% [95% CI: −12.63 to −2.24], forced expiratory volume in one second by −4.46% [95% CI: −6.99 to −1.93] and forced vital capacity by −3.31% [95% CI: −5.82 to −0.80]. Such associations were not found with BC. Incident throat and eye irritations were not related to in-vehicle particle exposure; however, they were associated with outdoor air quality (estimated by the Atmo index) and in-vehicle humidity, respectively. This study is the first to show a significant association, within a short period of time, between in-vehicle UFP exposure and acute respiratory effects in professional drivers.

Keywords: black carbon, lung function, mucosal irritation, taxi drivers, ultrafine particles

Procedia PDF Downloads 178
13157 A Method for Measurement and Evaluation of Drape of Textiles

Authors: L. Fridrichova, R. Knížek, V. Bajzík

Abstract:

Drape is one of the important visual characteristics of the fabric. This paper is introducing an innovative method of measurement and evaluation of the drape shape of the fabric. The measuring principle is based on the possibility of multiple vertical strain of the fabric. This method more accurately simulates the real behavior of the fabric in the process of draping. The method is fully automated, so the sample can be measured by using any number of cycles in any time horizon. Using the present method of measurement, we are able to describe the viscoelastic behavior of the fabric.

Keywords: drape, drape shape, automated drapemeter, fabric

Procedia PDF Downloads 656
13156 Identifying and Quantifying Factors Affecting Traffic Crash Severity under Heterogeneous Traffic Flow

Authors: Praveen Vayalamkuzhi, Veeraragavan Amirthalingam

Abstract:

Studies on highway safety are becoming the need of the hour, as over 400 lives are lost every day in India due to road crashes. In order to evaluate the factors that lead to different levels of crash severity, it is necessary to investigate the level of safety of highways and their relation to crashes. In the present study, an attempt is made to identify the factors that contribute to road crashes and to quantify their effect on the severity of road crashes. The study was carried out on a four-lane divided rural highway in India. The variables considered in the analysis include components of the horizontal alignment of the highway, viz., straight or curved section; time of day; driveway density; presence of a median; median opening; gradient; operating speed; and annual average daily traffic. These variables were considered after a preliminary analysis. The major complexities in the study are the heterogeneous traffic and the speed variation between different classes of vehicles along the highway. To quantify the impact of each of these factors, statistical analyses were carried out using a logit model and also negative binomial regression. The output from the statistical models showed that the variables, viz., the horizontal components of the highway alignment, driveway density, time of day, and operating speed, as well as annual average daily traffic, have a significant relation with the severity of crashes, viz., fatal as well as injury crashes. Further, the annual average daily traffic has a significant effect on the severity compared to other variables. The contribution of the highway's horizontal components to crash severity is also significant. Logit models can predict crashes better than the negative binomial regression models. The results of the study will help transport planners to look into these aspects at the planning stage itself in the case of highways operated under heterogeneous traffic flow conditions.
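As an illustration of the kind of model used (not the authors' fitted model), a binary logit for crash severity could be set up with statsmodels as sketched below; the file name and column names are hypothetical stand-ins for the variables listed above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical crash records; column names mirror the variables listed in the abstract
df = pd.read_csv("crash_records.csv")   # assumed file, one row per crash
y = df["severe"]                        # 1 = fatal/injury crash, 0 = otherwise
X = sm.add_constant(df[["curve_section", "driveway_density", "night_time",
                        "median_opening", "gradient", "operating_speed", "aadt"]])

model = sm.Logit(y, X).fit()
print(model.summary())
print("Odds ratios:\n", np.exp(model.params))   # effect of each factor on severity odds
```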

Keywords: geometric design, heterogeneous traffic, road crash, statistical analysis, level of safety

Procedia PDF Downloads 302
13155 The Successful in Construction Project via Effectiveness of Project Team

Authors: Zarabizan Zakaria, Hayati Zainal

Abstract:

The construction industry is one of the most important sectors that contribute to the nation's economy and catalyzes the growth of other industries. However, some construction projects have not been completed within their stipulated time and duration, scope and budget due to several factors. This problem arises from weaknesses in human factors, especially the ineffective leadership practiced by project managers and contractors in managing project teams. Therefore, a construction project should give due weight to the element of the project team. The project team is formed for the implementation of the project, which includes the project brief, project scope, customer requirements and provided designs. Many organizations in the construction sector use teams to meet today's global competition and customer expectations; however, evaluation of team effectiveness is required. To ensure the construction team is successful and effective, the construction department must encourage, measure, set up, and evaluate or review the effectiveness of the project team that was formed. In order to produce a better outcome for a high-end project, an effective and efficient project team is required, which also helps increase overall productivity. The purpose of this study is to determine the role of team effectiveness in the construction project team based on overall construction project performance. It examines several different factors related to team effectiveness. It also examines the relationship between team effectiveness factors and project performance aspects. A Team Effect Review and a Project Performance Review were developed to be used for data collection. The data collected were analyzed using several statistical tests. Results obtained from the data analysis were validated using semi-structured interviews. Besides that, a comprehensive survey was developed to assess how construction project teams maintain their effectiveness throughout the project phases. In determining whether a project is successful, project team leadership was found to be the most important factor. In addition, the definition of team effectiveness in the construction project team is developed based on the perspective of project clients and project team members. The results of this study are expected to provide an idea of the factors that need to be focused on to improve team effectiveness with respect to project performance aspects. At the same time, a definition of team effectiveness from the team members' and owners' views has been developed in order to provide a better understanding of the term 'team effectiveness' in construction projects.

Keywords: project team, leadership, construction project, project successful

Procedia PDF Downloads 177
13154 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning

Authors: Ali Kazemi

Abstract:

The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated the vulnerabilities to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data we used in this study included the monetary value of each transaction, a crucial feature since fraudulent transactions may follow different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing transactions' temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred (such as retail, groceries, or online services), since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, online payment systems), since different payment methods carry varying levels of fraud risk. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.
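A minimal sketch of the isolation-forest half of such a pipeline with scikit-learn is shown below; the file name, feature columns, and 1% contamination rate are assumptions, and the autoencoder would flag transactions analogously via a reconstruction-error threshold.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

# Hypothetical transaction table with the feature types described above
df = pd.read_csv("transactions.csv")
features = pd.get_dummies(df[["amount", "hour_of_day", "merchant_category", "payment_type"]],
                          columns=["merchant_category", "payment_type"])
X = StandardScaler().fit_transform(features)      # normalized amounts, encoded categories

# Unsupervised detector: transactions that are easy to isolate are flagged as anomalies
iso = IsolationForest(n_estimators=200, contamination=0.01, random_state=0).fit(X)
df["anomaly_score"] = -iso.score_samples(X)        # higher = more anomalous
df["flagged"] = iso.predict(X) == -1               # -1 marks suspected fraud
print(df.sort_values("anomaly_score", ascending=False).head(10))
```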

Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis

Procedia PDF Downloads 57
13153 Urban Sustainable Development with Flood Crisis Management Approach

Authors: Ali Liaghat, Navid Tavanpour, Nima Tavanpour

Abstract:

An increase in population and the prevalence of urban living have led planners and decision makers to put effort into the sustainable development of cities at national and local levels. One of the important issues in urban development is compliance with safety requirements in cities. In the face of natural disasters and unexpected events such as floods, earthquakes, hurricanes and fires, safety should be regarded as an axiom of urban development; otherwise, any form of construction and development is unsafe, will greatly harm economic growth and development, will pose an obstacle to achieving sustainable development, and will cause losses of lives and property. Therefore, in line with urban development, it is necessary to identify particular environmental and local issues as determinants and to give them priority above everything else; only then can development be regarded as a sound factor in sustainable urban development. The physical structure of each city reflects how it has developed, how its development was shaped, and what incidents, changes and natural disasters it has undergone over time. Since any development plan should take into account the previous situation of a city, disregarding it can, unfortunately, escalate into uncontrolled urban development, construction that cannot withstand earthquakes, encroachment on river areas, destruction of agricultural land or vegetation, and periodic floods over time. These have been viewed as serious threats to developing cities and have typically caused destruction of riverbeds and other urban facilities as well as damage to lives and property. In addition, uncontrolled development has made cities unattractive in terms of urban façade, and from time to time such unplanned measures have caused the country to face countless losses; they not only waste the expenses already incurred but also impose the additional costs of reconstruction, i.e. they amount to unsustainable development. Thus, in this paper, in addition to a discussion of the necessity of a profound attitude toward this subject and of long-term planning, programs for organizing the river and its surrounding area, creating open and green urban spaces, retrofitting and preventing floods are presented for the sustainable safety and development of cities, along with a critique of successful countries.

Keywords: flood, sustainable development, urbanisation, urban management

Procedia PDF Downloads 268
13152 Micro-Droplet Formation in a Microchannel under the Effect of an Electric Field: Experiment

Authors: Sercan Altundemir, Pinar Eribol, A. Kerem Uguz

Abstract:

Microfluidic systems allow many large-scale laboratory applications to be miniaturized on a single device in order to reduce cost and improve fluid control. Moreover, such systems enable the generation and control of droplets, which play a significant role in improved analysis for many chemical and biological applications. For example, they can be employed as models for cells in microfluidic systems. In this work, the interfacial instability of two immiscible Newtonian liquids flowing in a microchannel is investigated. When two immiscible liquids are in the laminar regime, a flat interface is formed between them. If a direct-current electric field is applied, the interface may deform, i.e. become unstable, and it may rupture and form micro-droplets. First, the effects of the thickness ratio, total flow rate and viscosity ratio of the silicone oil/ethylene glycol liquid couple on the critical voltage at which the interface starts to destabilize are investigated. Then the droplet sizes are measured under the effect of these parameters at various voltages. Moreover, the effect of the total flow rate on the time elapsed for the interface to rupture and form droplets by hitting the channel wall is analyzed. It is observed that an increase in the viscosity or thickness ratio of the silicone oil to the ethylene glycol has a stabilizing effect, i.e. a higher voltage is needed, while the total flow rate has no effect on it. However, an increase in the total flow rate shortens the time elapsed for the interface to hit the wall. Moreover, the droplet size decreases, down to 0.1 μL, with an increase in the applied voltage, the viscosity ratio or the total flow rate, or with a decrease in the thickness ratio. In addition to these observations, two empirical models are established for determining the critical electric number (i.e. the dimensionless voltage) and the droplet size, together with a third model, a combination of the two, for determining the droplet size at the critical voltage.

Keywords: droplet formation, electrohydrodynamics, microfluidics, two-phase flow

Procedia PDF Downloads 176
13151 Simulation Based Analysis of Gear Dynamic Behavior in Presence of Multiple Cracks

Authors: Ahmed Saeed, Sadok Sassi, Mohammad Roshun

Abstract:

Gears are important components with a vital role in many rotating machines. One of the common causes of gear failure is tooth fatigue cracking; however, its early detection is still a challenging task. The objective of this study is to develop a numerical model that simulates the effect of tooth cracks on the resulting gear vibrations and consequently permits early fault detection. In contrast to other published papers, this work incorporates the possibility of multiple simultaneous cracks with different depths. As cracks significantly alter the stiffness of the tooth, finite element software is used to determine the stiffness variation with respect to the angular position for different combinations of crack orientation and depth. A simplified six-degree-of-freedom nonlinear lumped-parameter model of a one-stage spur gear system is proposed to study the vibration with and without cracks. The model developed for calculating the stiffness with the crack was used to update the physical parameters of the second-order equations of motion describing the vibration of the gearbox. The vibration simulation results of the gearbox were obtained using Simulink/MATLAB. The effect of a single crack at different severity levels was studied thoroughly. The change in the mesh stiffness and the vibration response were found to be consistent with previously published works. In addition, various statistical time-domain parameters were considered; they showed different degrees of sensitivity to the crack depth. Multiple cracks were also introduced at different locations, and the vibration response along with the statistical parameters was obtained again for a general case of degradation (increase in crack depth, crack number and crack locations). It was found that although some parameters increase in value as the deterioration level increases, they show almost no change or even decrease when the number of cracks increases. Therefore, the use of any statistical parameter could be misleading if not considered in an appropriate way.
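The abstract does not name the statistical indicators used; as an illustration, common time-domain parameters for gear-crack monitoring (RMS, peak-to-peak, crest factor, kurtosis) can be computed from a simulated vibration signal as sketched below, with a purely synthetic signal standing in for the gearbox response.

```python
import numpy as np
from scipy.stats import kurtosis

def time_domain_parameters(x):
    """Common condition-monitoring statistics for a vibration signal x."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "rms": rms,
        "peak_to_peak": x.max() - x.min(),
        "crest_factor": np.max(np.abs(x)) / rms,
        "kurtosis": kurtosis(x, fisher=False),   # impulsiveness typically rises with tooth cracks
    }

# Example: a healthy-like sinusoidal mesh signal vs. one with crack-induced impulses
t = np.linspace(0, 1, 10_000)
healthy = np.sin(2 * np.pi * 50 * t)
cracked = healthy.copy()
cracked[::2000] += 3.0        # five illustrative impulses per revolution
print(time_domain_parameters(healthy))
print(time_domain_parameters(cracked))
```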

Keywords: Spur gear, cracked tooth, numerical simulation, time-domain parameters

Procedia PDF Downloads 266
13150 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus using U-Net Trained with Finite-Differential Time-Domain Simulation

Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang

Abstract:

Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring tissue’s displacement in response to mechanical waves. The estimated metrics on tissue elasticity or stiffness have been shown to be valuable for monitoring physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive due to either the inverse-problem nature or noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of the groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches feature characteristics of training data but inevitably increases computation time and results in outcomes with patched patterns. In this study, simulated wave images generated by Finite Differential Time Domain (FDTD) simulation are used for network training, and U-Net is used to extract features from each training image without cutting it into patches. The use of simulated data for model training has the flexibility of customizing training datasets to match specific applications. The proposed method aimed to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (with a range of 1 kPa to 15 kPa) containing randomly positioned objects were simulated, and their corresponding wave images were generated. The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method against DI and LFE was compared by the relative errors (root mean square error or RMSE divided by averaged shear modulus) between the true shear modulus map and the estimated ones. The results showed that the estimated shear modulus by the proposed method achieved a relative error of 4.91%±0.66%, substantially lower than 78.20%±1.11% by LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method’s performance on phantom and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method showed much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency.
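The evaluation metric described above (RMSE divided by the averaged true shear modulus) can be written compactly; the sketch below uses synthetic maps only for illustration.

```python
import numpy as np

def relative_error(true_map, est_map):
    """RMSE between shear-modulus maps divided by the mean true shear modulus."""
    true_map, est_map = np.asarray(true_map, float), np.asarray(est_map, float)
    rmse = np.sqrt(np.mean((est_map - true_map) ** 2))
    return rmse / true_map.mean()

# Example on a synthetic 2D map (kPa): a 5% relative error prints as 5.00%
true = np.full((64, 64), 5.0)
est = true + np.random.default_rng(0).normal(0, 0.25, true.shape)
print(f"relative error: {relative_error(true, est):.2%}")
```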

Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation

Procedia PDF Downloads 68
13149 Placencia Belize: An Alternative to the Development of “Your Private Paradise”

Authors: Ryan Tao

Abstract:

This paper analyzes the local context and effects of tourism on Placencia in Belize to identify key environmental and social impacts. Placencia was a small, sleepy coastal fishing village at risk of losing its local identity to tourism. In the last decade, tourism has driven an economic shift from fishing to tourism. The consequence of this shift has eroded local environmental resources and diluted local cultural heritage. A key example is Harvest Caye, an island converted from a natural manatee breeding ground to a stereotypical sandy beach and palm tree resort complex. The incoming cruise ship-geared development of Harvest Caye reflects the urban tourist vision of Placencia’s local landscape, which indicates a “neo-colonial” rule. Consequently, this vision causes environmental destruction, replacing local memories of abundant manatee-filled waters. The paper will explore environmental and cultural damage from uncontrolled development by focusing on how Placencia has been affected by unmanaged tourism. It will then propose solutions to create a medium between tourism and the local community. New developments in other Belizean cities, such as Belmopan and Belize City, are planned at the time of approval to be sensitive to their setting. While Placencia is fully built out, there are opportunities to plan in advance for the future while preserving local integrity. As a consequence of time, shepherding tourist development, defining tourist areas, and planning these areas with an eye towards natural disasters (such as hurricanes) can act as a tool to craft a future vision that helps preserve the local identity of Placencia. This research will consist of personal observations, case studies, and synthesis of other source materials. These sources provide guidance for creating a framework to understand the local environment and culture and plan around it to ultimately protect Placencia from becoming “Your Private Paradise” for the rich.

Keywords: Placencia, coastal development, coastal protection, tourism, zoning, coastal zoning, Caribbean, Belize, small island developing states

Procedia PDF Downloads 14
13148 Poverty and Illiteracy as a Key Factor for Crime and Unrest in Nigeria

Authors: Lawrence Emah

Abstract:

Nigeria, Africa’s most populous nation, is undoubtedly going through one of the most difficult phases in her 55 years of existence as an independent nation. At no other time in her history has she been under so much pressure from social unrest and an unacceptable rate of crime and criminality as today. From the North, where there is an insurgency to contend with, to the South, where kidnapping and armed robbery hold sway. These issues did not just sprout from nowhere; they have a root somewhere. This is why this paper seeks to bring to the fore poverty and illiteracy as leading causes of these twin social ills, crime and social restiveness, as well as to suggest practical solutions to the problems.

Keywords: crime, illiteracy, poverty, unrest

Procedia PDF Downloads 233
13147 Pet Care Monitoring with Arduino

Authors: Sathapath Kilaso

Abstract:

Nowadays, people who live in cities tend to keep pets to relieve loneliness more than ever. This can be observed in the growth of the local pet industry. However, the urban lifestyle, restricted by time and work, might not allow owners to take care of their pets properly. This article describes how to develop a prototype for pet care monitoring with an Arduino microcontroller. The prototype can be used to monitor the pet and its surrounding environment, such as temperature (both the pet's temperature and the ambient temperature), humidity, food quantity and air quality, and can also help reduce the pet's stress. The prototype can report the results back to the owner via online channels such as a website.

Keywords: pet care, Arduino Microcontroller, monitoring, prototype

Procedia PDF Downloads 358
13146 Searching for the ‘Why’ of Gendered News: Journalism Practices and Societal Contexts

Authors: R. Simões, M. Silveirinha

Abstract:

Driven by the need to understand the results of previous research, which clearly shows deep imbalances in media discourses about women and men in spite of the growing number of female journalists, our paper aims to progress from the 'what' to the 'why' of these unbalanced representations. Furthermore, it does so at a time when journalism is undergoing a dramatic change in terms of professional practices and in how media organizations are organized and run, affecting women in particular. While some feminist research points to the fact that female and male journalists evaluate the role of the news and production methods in similar ways, feminist theorizing also suggests that thought and knowledge are highly influenced by social identity, which is also inherently affected by the experiences of gender. This is particularly important at a time of deep societal and professional changes. While there are persuasive discussions of gender identities at work in newsrooms in various countries, studies on the issue will benefit from cases that focus on the particularities of local contexts. In our paper, we present one such case: the case of Portugal, a country hit hard by austerity measures that have affected all cultural industries, including journalism organizations already feeling the broader impacts of the larger societal changes in the media landscape. Can we gender these changes? How are they felt and understood by female and male journalists? And how are these discourses framed by androcentric, feminist and post-feminist sensibilities? Foregrounding questions of gender, our paper seeks to explore some of the interactions of societal and professional forces, identifying their gendered character and outlining how they shape journalism work in general and the production of unbalanced gender representations in particular. We do so grounded in feminist studies of journalism as well as feminist organizational and work studies, looking at a corpus of 20 in-depth interviews with female and male Portuguese journalists. The research findings illustrate how gender in journalism practices interacts with broader experiences of the cultural and economic contexts and show the ambivalences of these interactions in news organizations.

Keywords: gender, journalism, newsroom culture, Portuguese journalists

Procedia PDF Downloads 399
13145 Exploring the Prebiotic Potential of Glucosamine

Authors: Shilpi Malik, Ramneek Kaur, Archita Gupta, Deepshikha Yadav, Ashwani Mathur, Manisha Singh

Abstract:

Glucosamine (GS) is the most abundant naturally occurring amino monosaccharide and is normally produced in the human body via cellular glucose metabolism. It is regarded as the building block of the cartilage matrix and is also an essential component of the cartilage matrix repair mechanism. Besides that, it can also be explored for its prebiotic potential, as many bacterial species are known to utilize the amino sugar, incorporating it into peptidoglycans and lipopolysaccharides in the bacterial cell wall. Glucosamine can therefore be considered for its fermentation by bacterial species present in the gut. The current study is focused on exploring the potential of glucosamine as a prebiotic. Studies were done to optimize the concentration of GS that reaches the GI tract and is fermented by the complex gut microbiota; food-grade GS was added to various simulated fluids of the gastro-intestinal tract (GIT), such as simulated saliva, gastric fluid (fasted and fed state) and colonic fluid, to detect its degradation. Since it showed an increase in microbial growth (CFU) with time, GS was further encapsulated to increase its residence time in the gut, and the encapsulated form exhibited improved resistance to the simulated gut conditions. Moreover, the prepared microspheres were optimized and characterized for their encapsulation efficiency and toxicity. To further substantiate the prebiotic activity of glucosamine, studies were also performed to determine the effect of glucosamine on known probiotic bacterial species, i.e. Lactobacillus delbrueckii (MTCC 911) and Bifidobacterium bifidum (MTCC 5398). For the culture conditions, glucosamine was added to MRS media in anaerobic tubes at 0.20%, 0.40%, 0.60%, 0.80%, and 1.0%, respectively. MRS media without GS was included in this experiment as the control. All samples were autoclaved at 118 °C for 15 min. Active culture was added at 5% (v/v) to each anaerobic tube after cooling to room temperature and incubated at 37 °C; biomass, pH and viable counts were then determined after 18 h of incubation. The experiment was completed in triplicate, and the results were presented as mean ± SE (standard error). The experimental results are conclusive and suggest that glucosamine holds prebiotic properties.

Keywords: gastro intestinal tract, microspheres, peptidoglycans, simulated fluid

Procedia PDF Downloads 333
13144 Improved Soil and Snow Treatment with the Rapid Update Cycle Land-Surface Model for Regional and Global Weather Predictions

Authors: Tatiana G. Smirnova, Stan G. Benjamin

Abstract:

The Rapid Update Cycle (RUC) land-surface model (LSM) has been a land-surface component in several generations of operational weather prediction models at the National Centers for Environmental Prediction (NCEP) of the National Oceanic and Atmospheric Administration (NOAA). It was designed for short-range weather predictions with an emphasis on severe weather and originally was intentionally simple to avoid uncertainties from poorly known parameters. Nevertheless, the RUC LSM, when coupled with the hourly-assimilating atmospheric model, can produce a realistic evolution of time-varying soil moisture and temperature, as well as the evolution of snow cover on the ground surface. This result is possible only if the soil/vegetation/snow component of the coupled weather prediction model has sufficient skill to avoid long-term drift. RUC LSM was first implemented in the operational NCEP Rapid Update Cycle (RUC) weather model in 1998 and later in the Weather Research and Forecasting (WRF) model-based Rapid Refresh (RAP) and High-Resolution Rapid Refresh (HRRR). Being available to the international WRF community, it was implemented in operational weather models in Austria, New Zealand, and Switzerland. Based on feedback from the US weather service offices and the international WRF community, and also based on our own validation, RUC LSM has matured over the years. Also, a sea-ice module was added to RUC LSM for surface predictions over Arctic sea ice. Other modifications include refinements to the snow model and a more accurate specification of albedo, roughness length, and other surface properties. At present, RUC LSM is being tested in the regional application of the Unified Forecast System (UFS). The next-generation UFS-based regional Rapid Refresh FV3 Standalone (RRFS) model will replace the operational RAP and HRRR at NCEP. Over time, RUC LSM has participated in several international model intercomparison projects to verify its skill using observed atmospheric forcing. ESM-SnowMIP was the latest of these experiments, focused on the verification of snow models for open and forested regions. The simulations were performed for ten sites located in different climatic zones of the world, forced with observed atmospheric conditions. While most of the 26 participating models have more sophisticated snow parameterizations than RUC, the RUC LSM achieved a high ranking in simulations of both snow water equivalent and surface temperature. However, the ESM-SnowMIP experiment also revealed some issues in the RUC snow model, which will be addressed in this paper. One of them is the treatment of grid cells partially covered with snow. The RUC snow module computes the energy and moisture budgets of snow-covered and snow-free areas separately and aggregates the solutions at the end of each time step. Such treatment elevates the importance of the snow cover fraction computed in the model. Improvements to the original simplistic threshold-based approach have been implemented and tested both offline and in the coupled weather model. The changes to the snow cover fraction and other modifications to the RUC soil and snow parameterizations will be described in detail in this paper.
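The grid-cell aggregation described above can be illustrated with a simple fractional weighting; the sketch below is a generic example and not RUC LSM's actual flux formulation or its refined snow-fraction parameterization.

```python
def aggregate_cell(flux_snow, flux_snow_free, snow_fraction):
    """Grid-cell mean of a surface flux (or temperature) from the two sub-areas,
    weighted by the fraction of the cell covered with snow."""
    return snow_fraction * flux_snow + (1.0 - snow_fraction) * flux_snow_free

# Example: sensible heat flux (W m^-2) over a cell that is 40% snow-covered
print(aggregate_cell(flux_snow=-15.0, flux_snow_free=45.0, snow_fraction=0.4))
```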

Keywords: land-surface models, weather prediction, hydrology, boundary-layer processes

Procedia PDF Downloads 88
13143 Association between Physical Inactivity and Sedentary Behaviours with Risk of Hypertension among Sedentary Occupation Workers: A Cross-Sectional Study

Authors: Hanan Badr, Fahad Manee, Rao Shashidhar, Omar Bayoumy

Abstract:

Introduction: Hypertension is the major risk factor for cardiovascular diseases and stroke and a leading cause worldwide of disability-adjusted life years lost and mortality. Adopting an unhealthy lifestyle is thought to be associated with developing hypertension regardless of predisposing genetic factors. This study aimed to examine the association of recreational physical activity (RPA) and sedentary behaviors with the risk of hypertension among ministry employees, whose work involves no occupational physical activity (PA), and to scrutinize participants’ time spent in RPA and sedentary behaviors on working days and weekend days. Methods: A cross-sectional study was conducted among 2562 randomly selected employees working at ten randomly selected ministries in Kuwait. To obtain a representative sample, the proportional allocation technique was used to define the number of participants in each ministry. A self-administered questionnaire was used to collect data about participants’ socio-demographic characteristics, health status, and their 24-hour time use during a regular working day and a weekend day. The time use covered a list of 20 different activities practiced by a person daily. The New Zealand Physical Activity Questionnaire-Short Form (NZPAQ-SF) was used to assess the level of RPA. The scale generates three categories according to the number of hours spent in RPA per week: relatively inactive, relatively active, and highly active. Gender-matched trained nurses performed anthropometric measurements (weight and height) and measured blood pressure (two readings) using an automatic blood pressure monitor (95% accuracy compared to a calibrated mercury sphygmomanometer). Results: Participants’ mean age was 35.3±8.4 years, with almost equal gender distribution. About 13% of the participants were smokers, and 75% were overweight. Almost 10% reported doctor-diagnosed hypertension. Among those who did not, the mean systolic blood pressure was 119.9±14.2 mmHg and the mean diastolic blood pressure was 80.9±7.3 mmHg. Moreover, 73.9% of participants were relatively physically inactive and 18% were highly active. Mean systolic and diastolic blood pressure showed a significant inverse association with the level of RPA (mean blood pressure measures were 123.3/82.8 among the relatively inactive, 119.7/80.4 among the relatively active, and 116.6/79.6 among the highly active). Furthermore, RPA occupied 1.6% and 1.8% of working and weekend days, respectively, while sedentary behaviors (watching TV, using electronics for social media or entertainment, etc.) occupied 11.2% and 13.1%, respectively. Sedentary behaviors were significantly associated with high levels of systolic and diastolic blood pressure. Binary logistic regression revealed that physical inactivity (OR=3.13, 95% CI: 2.25-4.35) and sedentary behaviors (OR=2.25, 95% CI: 1.45-3.17) were independent risk factors for high systolic and diastolic blood pressure after adjustment for other covariates. Conclusions: Physical inactivity and a sedentary lifestyle were associated with a high risk of hypertension. Further research examining the independent role of RPA in improving blood pressure levels, as well as the cultural and occupational barriers to practicing RPA, is recommended. Policies promoting PA in the workplace should be enacted, which might help decrease the risk of hypertension among sedentary-occupation workers.

Keywords: physical activity, sedentary behaviors, hypertension, workplace

Procedia PDF Downloads 178
13142 Two-Protein Modified Gold Nanoparticles for Serological Diagnosis of Borreliosis

Authors: Mohammed Alasel, Michael Keusgen

Abstract:

Gold is a noble metal; at the nano-scale (e.g. spherical nanoparticles), its conduction electrons are triggered to oscillate collectively with a resonant frequency when certain wavelengths of electromagnetic radiation interact with its surface; this phenomenon is known as surface plasmon resonance (SPR). SPR is responsible for giving gold nanoparticles their intense red color, which depends mainly on their size, shape and the distance between nanoparticles. A decreased distance between gold nanoparticles results in their aggregation, causing a change in color from red to blue. This aggregation enables gold nanoparticles to serve as a sensitive biosensing indicator. In the proposed work, gold nanoparticles were modified with two proteins: i) a Borrelia antigen, the variable lipoprotein surface-exposed protein (VlsE), and ii) protein A. The VlsE antigen induces a strong antibody response in Lyme disease and can be detected from the early to the late phase of the disease in humans infected with Borrelia. In addition, it shows low cross-reaction with the other, non-pathogenic Borrelia strains. The high specificity of the VlsE antigen to anti-Borrelia antibodies, combined simultaneously with the high specificity of protein A to the Fc region of all human IgG antibodies, was utilized to develop a rapid test for serological point-of-care diagnosis of borreliosis in human serum. Only in the presence of anti-Borrelia antibodies in the serum probe is an aggregation of gold nanoparticles observed, which is visible as a concentration-dependent colour shift from red (low IgG) to blue (high IgG). Experiments showed that it is clearly possible to distinguish between positive and negative serum samples using a simple suspension of the two-protein modified gold nanoparticles in a very short time (30 minutes). The proposed work showed the potential of using such modified gold nanoparticles generally for serological diagnosis. Improved specificity and reduced assay time can be achieved by applying increased salt concentrations combined with decreased pH values (pH 5).

Keywords: gold nanoparticles, gold aggregation, serological diagnosis, protein A, lyme borreliosis

Procedia PDF Downloads 398
13141 Integration of Agile Philosophy and Scrum Framework to Missile System Design Processes

Authors: Misra Ayse Adsiz, Selim Selvi

Abstract:

In today's world, technology competes with time. To keep up with the world's leading companies and adapt quickly to change, processes must be accelerated to match the pace of technological change. Missile system design processes handled with classical methods fall behind in this race, because customer requirements are unclear and demands change repeatedly during the design process. Therefore, a methodology suited to the dynamics of missile system design was investigated, and processes capable of keeping pace with the times were examined. When commonly used design processes are analyzed, none of them is dynamic enough for today's conditions, so a hybrid design process was established. After a detailed review of the existing processes, it was decided to focus on the Scrum framework and agile philosophy. Scrum is a process framework focused on developing software and handling change management with rapid methods, and agile philosophy is intended to respond quickly to change. This study aims to integrate the Scrum framework and agile philosophy, which are among the most appropriate approaches for rapid production and adaptation to change, into the missile system design process. With this approach, the design team involved in the system design process stays in communication with the customer and follows an iterative approach to change management. These methods, currently used in the software industry, have been integrated with the product design process. A team was created for the system design process, and the Scrum roles were assigned with the customer included: a Scrum team consists of the product owner, the development team, and the Scrum master. Scrum events, which are short, purposeful, and time-boxed, are organized to serve coordination rather than long meetings. Instead of the classic system design methods used in product development studies, a missile design was carried out with this blended method. This design approach makes it easier to anticipate changing customer demands, produce quick solutions to those demands, and cope with uncertainties in the product development process. With feedback from the customer included in the process, the work moves toward marketing, design, and financial optimization.
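
As an illustration of the time-boxed Scrum structure described in the abstract, the sketch below models a sprint backlog and the team roles in plain Python; all class names, field names, role assignments, and backlog items are invented for illustration and do not come from the paper.

    # Illustrative model of a time-boxed sprint and its backlog.
    from dataclasses import dataclass, field
    from datetime import date, timedelta

    @dataclass
    class BacklogItem:
        description: str
        priority: int          # lower number = higher priority
        done: bool = False

    @dataclass
    class Sprint:
        start: date
        length_days: int = 14  # time-boxed, e.g. a two-week sprint
        backlog: list = field(default_factory=list)

        @property
        def end(self) -> date:
            return self.start + timedelta(days=self.length_days)

        def open_items(self) -> int:
            return sum(1 for item in self.backlog if not item.done)

    # Scrum roles, with the customer included as described in the abstract
    team = {
        "product owner": "customer representative",
        "scrum master": "process facilitator",
        "development team": ["guidance engineer", "propulsion engineer", "systems engineer"],
    }

    sprint = Sprint(start=date(2018, 1, 8), backlog=[
        BacklogItem("Refine seeker requirements with the customer", priority=1),
        BacklogItem("Update aerodynamic model", priority=2),
    ])
    sprint.backlog[0].done = True
    print(team["product owner"], sprint.end, sprint.open_items())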

Keywords: agile, design, missile, scrum

Procedia PDF Downloads 168
13140 Machine Learning in Agriculture: A Brief Review

Authors: Aishi Kundu, Elhan Raza

Abstract:

"Necessity is the mother of invention" - the rapid increase in the global human population has directed the agricultural domain toward machine learning. Food is a basic human need that is met through farming, and farming is one of the major revenue generators for the Indian economy; agriculture is thus both a source of employment and a pillar of the economy in developing countries such as India. This paper provides a brief review of the progress made in implementing machine learning in the agricultural sector. Accurate and timely predictions are necessary to boost production and to support the systematic distribution of agricultural commodities, making them available in the market faster and more effectively. The paper includes a thorough analysis of various machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.). Crop production is affected by climate change; machine learning can analyse the changing patterns and suggest approaches that minimize loss and maximize yield. Machine learning algorithms and models (regression, support vector machines, Bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture to analyze and predict specific outcomes that can be vital in increasing the productivity of the agricultural food industry, and the review illustrates how agricultural work benefits when machine learning is applied to sensor data. Machine learning is an evolving technology that helps farmers improve gains and minimize losses. This paper also discusses how irrigation and farm management systems can evolve efficiently in real time, and how artificial intelligence (AI)-enabled programs are emerging that support farmers through extensive examination of data.

Keywords: machine learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting
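
As a minimal illustration of the kind of regression model mentioned in the abstract above, the sketch below trains a random forest on synthetic weather and soil features to predict crop yield; the feature names, the synthetic data, and the choice of model are assumptions made for illustration only.

    # Minimal sketch: crop yield regression on synthetic weather/soil features.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 500
    rainfall = rng.uniform(300, 1500, n)   # seasonal rainfall, mm
    temperature = rng.uniform(15, 35, n)   # mean seasonal temperature, °C
    soil_ph = rng.uniform(5.0, 8.0, n)
    X = np.column_stack([rainfall, temperature, soil_ph])
    # Synthetic yield (t/ha): made-up response surface plus noise
    y = (2.0 + 0.003 * rainfall - 0.05 * (temperature - 25) ** 2
         - 0.4 * (soil_ph - 6.5) ** 2 + rng.normal(0, 0.3, n))

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("MAE (t/ha):", mean_absolute_error(y_test, model.predict(X_test)))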

Procedia PDF Downloads 105
13139 Teleconnection between El Nino-Southern Oscillation and Seasonal Flow of the Surma River and Possibilities of Long Range Flood Forecasting

Authors: Monika Saha, A. T. M. Hasan Zobeyer, Nasreen Jahan

Abstract:

El Nino-Southern Oscillation (ENSO) is an interaction between the atmosphere and the ocean in the tropical Pacific that causes alternating warm and cold conditions in the tropical central and eastern Pacific Ocean. ENSO events have become stronger in recent times due to the impact of climate change, so it is important to account for the influence of ENSO in climate studies. Bangladesh, situated on a low-lying deltaic floodplain, experiences severe consequences of flooding every year. To reduce the damage of severe flooding events, non-structural measures such as flood forecasting can help in taking adequate precautions. Forecasting seasonal floods with a lead time of several months is a key component of flood damage control and water management. The objective of this research is to identify the strength of the teleconnection between ENSO and the flow of the Surma River and to examine the possibility of long-lead flood forecasting for the wet season. The Surma is one of the major rivers of Bangladesh and is part of the Surma-Meghna river system. In this research, sea surface temperature (SST) is used as the ENSO index, and the lead time is at least a few months, which is longer than the basin response time. The teleconnection was assessed by correlating the July-August-September (JAS) flow of the Surma with the SST of the Nino 4 region for the corresponding months. The cumulative frequency distribution of the standardized JAS flow of the Surma was also determined as part of assessing the possible teleconnection. Discharge data for the Surma River from 1975 to 2015 were used in this analysis, and a markedly increased correlation coefficient between flow and ENSO was observed from 1985 onward. From the cumulative frequency distribution of the standardized JAS flow, it was found that in any year the JAS flow has approximately a 50% probability of exceeding the long-term average JAS flow. During El Nino years (the warm episode of ENSO), this probability of exceedance drops to 23%, while in La Nina years (the cold episode of ENSO) it rises to 78%. Discriminant analysis, known as 'categoric prediction', was performed to identify the possibilities of long-lead flood forecasting; it categorizes the flow data (high, average, and low) based on the classification of predicted SST (warm, normal, and cold). From the discriminant analysis, it was found that for the Surma River, the probability of a high flood in the cold period is 75% and the probability of a low flood in the warm period is 33%. A synoptic parameter, the forecasting index (FI), was also calculated to judge forecast skill and to compare different forecasts. This study will help the concerned authorities and stakeholders to make long-term water resources decisions and formulate river basin management policies that will reduce possible damage to life, agriculture, and property.
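
The abstract gives the results of the correlation, exceedance, and discriminant analyses but not the computations themselves. The sketch below reproduces the flavor of the first two steps on synthetic data: correlating JAS flow with Nino 4 SST, standardizing the flow, and computing the probability of exceeding the long-term average conditioned on ENSO phase. The column names, the ±0.5 °C phase thresholds, and all numbers are illustrative assumptions, not the study's data.

    # Illustrative sketch of the teleconnection analysis on synthetic data.
    import numpy as np
    import pandas as pd
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    years = np.arange(1975, 2016)
    sst_anom = rng.normal(0.0, 0.8, len(years))  # synthetic Nino 4 SST anomaly, °C
    jas_flow = 2000 - 400 * sst_anom + rng.normal(0, 300, len(years))  # synthetic JAS flow, m³/s

    df = pd.DataFrame({"year": years, "sst_anom": sst_anom, "jas_flow": jas_flow})
    r, p = pearsonr(df["sst_anom"], df["jas_flow"])
    print(f"correlation r = {r:.2f}, p = {p:.3f}")

    # Standardized flow and probability of exceeding the long-term JAS mean, per ENSO phase
    df["z_flow"] = (df["jas_flow"] - df["jas_flow"].mean()) / df["jas_flow"].std()
    df["phase"] = np.select([df["sst_anom"] > 0.5, df["sst_anom"] < -0.5],
                            ["El Nino", "La Nina"], default="Neutral")
    exceedance = df.assign(exceeds=df["z_flow"] > 0).groupby("phase")["exceeds"].mean()
    print(exceedance)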

Keywords: El Nino-Southern Oscillation, sea surface temperature, surma river, teleconnection, cumulative frequency distribution, discriminant analysis, forecasting index

Procedia PDF Downloads 154
13138 Study of a Complete Free Route Implementation in the European Airspace

Authors: Cesar A. Nava-Gaxiola, C. Barrado

Abstract:

Harmonized with SESAR (Single European Sky ATM Research) initiatives, a new concept related to airspace structures has been introduced in Europe: Free Route Airspace. The key idea of free route is an airspace in which users may freely plan a route between a defined entry and exit waypoint, with the possibility of routing via intermediate points; free route flights remain subject to air traffic control (ATC) for the established separations. Free route airspace no longer presents fixed airways to airspace users; as a consequence, it brings a new paradigm for managing safe separation of aircraft inside these airspace blocks. Several European nations have already introduced the concept, some completely and some partially, but so far this offers only limited benefits to airspace users. This research evaluates a future scenario of free route implementation across Europe, considering a single airspace block configuration in which the entire upper airspace operates under free route. The paper centers on the benefits for airspace users and on possible increases in air traffic controller task load under full application. Fast-time simulations are carried out to determine how much flight time and distance aircraft can save with an overall free route establishment. On the other side, the paper examines the evolution of conflicts arising from possible separation losses between aircraft in this new environment. Free route conflicts can emerge at any point of the airspace and require great effort to resolve, in comparison with fixed airways, where controllers normally found conflicts at known waypoints and solved them using the fixed network as a reference. The airspace configuration modelled in this study takes into account the current navigation waypoint structure and moves to a future scenario in which new waypoints are added and new traffic flow patterns appear. In this sense, the research explores the advantages and as-yet-unknown difficulties that a large-scale application of the free route concept may bring to European airspace.
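
The paper's fast-time simulations are not reproduced here; as a simplified illustration of the distance comparison they rely on, the sketch below uses the haversine formula to compare a direct (free route) segment with a routing via fixed airway waypoints. The coordinates and the fixed route are made-up assumptions, not data from the study.

    # Illustrative sketch: distance saved by a direct route vs a fixed-airway routing.
    from math import asin, cos, radians, sin, sqrt

    def haversine_nm(lat1, lon1, lat2, lon2):
        """Great-circle distance in nautical miles (Earth radius ~ 3440.065 NM)."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 3440.065 * asin(sqrt(a))

    def route_length_nm(points):
        return sum(haversine_nm(*points[i], *points[i + 1]) for i in range(len(points) - 1))

    entry = (48.0, 2.0)    # made-up entry waypoint (lat, lon)
    exit_ = (52.0, 13.0)   # made-up exit waypoint
    fixed_route = [entry, (49.5, 6.0), (50.2, 8.5), (51.0, 11.0), exit_]  # via fixed airways

    direct = haversine_nm(*entry, *exit_)
    via_airways = route_length_nm(fixed_route)
    print(f"direct: {direct:.0f} NM, via airways: {via_airways:.0f} NM, "
          f"saved: {via_airways - direct:.0f} NM")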

Keywords: ATC conflicts, efficiency, free route airspace, SESAR

Procedia PDF Downloads 190
13137 Study on Safety Management of Deep Foundation Pit Construction Site Based on Building Information Modeling

Authors: Xuewei Li, Jingfeng Yuan, Jianliang Zhou

Abstract:

The 21st century has been called the century of human exploitation of underground space. Because deep foundation pit engineering involves large work quantities, tight schedules, low safety margins, and high uncertainty, accidents occur frequently, causing huge economic losses and casualties. With the successful application of information technology in the construction industry, building information modeling has become a research hotspot in the field of architectural engineering. Therefore, the application of building information modeling (BIM) and other information and communication technologies (ICTs) to construction safety management is of great significance for improving the level of safety management. This research summarized the mechanisms of deep foundation pit accidents through fault tree analysis in order to identify the control factors of deep foundation pit safety management and the deficiencies of traditional on-site safety management. Based on the accident causation mechanism and the specific process of deep foundation pit construction, the hazard information of the construction site was identified and a hazard list, including early-warning information, was obtained. The system framework was then constructed by analyzing the early-warning information requirements and early-warning function requirements of the deep foundation pit safety management system. Finally, a BIM-based safety management system for deep foundation pit construction sites was developed by combining a database with Web-BIM technology, realizing three functions: real-time positioning of construction site personnel, automatic warning when personnel enter a dangerous area, and real-time monitoring of deep foundation pit structural deformation with automatic warning. This study can initially improve the current state of safety management at deep foundation pit construction sites. Additionally, active control before accidents occur and dynamic control throughout the construction process can be realized, so as to prevent and control safety accidents in deep foundation pit engineering.
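
The paper does not publish the implementation of its warning functions; the sketch below shows, in simplified form, the kind of rules such a system might apply: flagging a worker position inside a hazardous area, and flagging a pit-wall deformation reading above a limit. The zone geometry, the 30 mm limit, and the data format are illustrative assumptions, not the authors' system.

    # Simplified sketch of two warning rules of the kind described in the abstract.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class HazardZone:
        name: str
        x_min: float
        x_max: float
        y_min: float
        y_max: float

        def contains(self, x: float, y: float) -> bool:
            return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

    def position_alerts(positions, zones):
        """Yield a warning for every tracked worker located inside a hazardous zone."""
        for worker, (x, y) in positions.items():
            for zone in zones:
                if zone.contains(x, y):
                    yield f"WARNING: {worker} entered {zone.name}"

    def deformation_alert(reading_mm: float, limit_mm: float = 30.0) -> Optional[str]:
        """Warn when a monitored pit-wall deformation exceeds the (illustrative) limit."""
        if reading_mm > limit_mm:
            return f"WARNING: deformation {reading_mm} mm exceeds limit {limit_mm} mm"
        return None

    zones = [HazardZone("crane swing area", 10, 25, 0, 15)]
    positions = {"worker_07": (12.3, 4.1), "worker_12": (40.0, 8.0)}
    print(list(position_alerts(positions, zones)))
    print(deformation_alert(34.5))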

Keywords: Web-BIM, safety management, deep foundation pit, construction

Procedia PDF Downloads 154
13136 Urinary Mucosal Cryoglobulin: A Review

Authors: Ibrahim M. S. Shnawa, Naeem R. R. Algebory

Abstract:

The procedure for the assessment of urinary mucosal cryoglobulin (UMCG) is reviewed, tested, and evaluated. The major features of UMCG are rather similar to those of serum cryoglobulin, and these evident similarities support the existence of UMCG. Seven characterizing criteria are usable for the identification of UMCG. Upon matching them to the Irish criteria for serum cryoglobulin, some modifications are proposed to the 16 standards that have been formulated as the Irish criteria. The existence of UMCG in a human chronic bacterial infectious disease is reported for the first time.

Keywords: urinary, mucosal, cryoglobulin, standard immunofixation

Procedia PDF Downloads 460