Search results for: Electromyographic (EMG) signals
214 Evaluating Gene-Gene Interaction among Nicotine Dependence Genes on the Risk of Oral Clefts
Authors: Mengying Wang, Dongjing Liu, Holger Schwender, Ping Wang, Hongping Zhu, Tao Wu, Terri H Beaty
Abstract:
Background: Maternal smoking is a recognized risk factor for nonsyndromic cleft lip with or without cleft palate (NSCL/P). It has been reported that the effect of maternal smoking on oral clefts is mediated through genes that influence nicotine dependence. Polymorphisms of the cholinergic receptor nicotinic alpha (CHRNA) and beta (CHRNB) subunit genes have previously shown strong associations with nicotine dependence. Here, we investigated whether these genes are associated with clefting risk by testing for potential gene-gene (G×G) and gene-environment (G×E) interaction. Methods: We selected 120 markers in 14 genes associated with nicotine dependence to conduct transmission disequilibrium tests among 806 Chinese NSCL/P case-parent trios ascertained in an international consortium which conducted a genome-wide association study (GWAS) of oral clefts. We applied Cordell's method using the TRIO package in R to explore G×G as well as G×E interaction involving environmental tobacco smoke (ETS), based on a conditional logistic regression model. Results: While no SNP showed significant association with NSCL/P after Bonferroni correction, we found signals for G×G interaction between 10 pairs of SNPs in CHRNA3, CHRNA5, and CHRNB4 (p < 10⁻⁸), among which the most significant interaction was found between rs3743077 (CHRNA3) and rs11636753 (CHRNB4; p < 8.2×10⁻¹²). Linkage disequilibrium (LD) analysis revealed only a low level of LD between these markers. However, there were no significant results for G×ETS interaction. Conclusion: This study failed to detect association between nicotine dependence genes and NSCL/P, but illustrates the importance of taking potential G×G interaction into account in genetic association analysis of NSCL/P. It also suggests that nicotine dependence genes should be considered important candidate genes for NSCL/P in future studies.
Keywords: Gene-Gene Interaction, Maternal Smoking, Nicotine Dependence, Non-Syndromic Cleft Lip with or without Cleft Palate
213 Drug Delivery to Solid Tumor: Effect of Dynamic Capillary Network Induced by Tumor
Authors: Mostafa Sefidgar, Kaamran Raahemifar, Hossein Bazmara, Madjid Soltani
Abstract:
Computational methods provide the means to investigate the processes involved in drug delivery, such as convection and diffusion of drug in the extracellular matrix and drug extravasation from the microvasculature. Understanding these processes clarifies the mechanisms of drug delivery from the injection site to absorption by a solid tumor. In this study, an advanced numerical method is used to solve the fluid flow and solute transport equations simultaneously, to show how the capillary network structure induced by a tumor affects drug delivery. The effect of the heterogeneous tumor-induced capillary network on interstitial fluid flow and drug delivery is investigated by this multiscale method. A sprouting angiogenesis model is used to generate the tumor-induced capillary network. The fluid flow governing equations are implemented to calculate blood flow through the tumor-induced capillary network and fluid flow in normal and tumor tissues. Starling's law is used to close this system of equations and couple the intravascular and extravascular flows. Finally, a convection-diffusion-reaction equation is used to simulate drug delivery. For a more realistic assumption, a dynamic approach is used that changes the capillary network structure based on signals sent by hemodynamic and metabolic stimuli. The study indicates that drug delivery to solid tumors depends on the tumor-induced capillary network structure. The dynamic approach generates an irregular capillary network around the tumor and predicts a higher interstitial pressure in the tumor region. This elevated interstitial pressure, combined with the irregular capillary network, leads to a heterogeneous distribution of drug in the tumor region, similar to in vivo observations. The investigation indicates that drug transport properties play a significant role against the physiological barriers to drug delivery to a solid tumor.
Keywords: solid tumor, physiological barriers to drug delivery, angiogenesis, microvascular network, solute transport
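The coupling step described above rests on Starling's law for transvascular fluid exchange. The sketch below illustrates that closing relation; the parameter values and the comparison scenario are illustrative assumptions, not the study's settings.

```python
# Minimal sketch of Starling's law for transvascular fluid exchange,
# the relation used to couple intravascular and extravascular flows.
# All parameter values below are illustrative assumptions.

def starling_flux(p_v, p_i, pi_v, pi_i, L_p=2.8e-7, sigma=0.9):
    """Volumetric fluid flux per unit vessel surface area (cm/s).

    p_v, p_i   : vascular and interstitial hydrostatic pressures (mmHg)
    pi_v, pi_i : vascular and interstitial osmotic pressures (mmHg)
    L_p        : hydraulic conductivity of the vessel wall (cm/(mmHg*s))
    sigma      : osmotic reflection coefficient (dimensionless)
    """
    return L_p * ((p_v - p_i) - sigma * (pi_v - pi_i))

# Example: elevated interstitial pressure in the tumor suppresses extravasation.
normal = starling_flux(p_v=20.0, p_i=0.0, pi_v=20.0, pi_i=10.0)
tumor = starling_flux(p_v=20.0, p_i=15.0, pi_v=20.0, pi_i=15.0)
print(f"normal tissue flux: {normal:.3e} cm/s, tumor flux: {tumor:.3e} cm/s")
```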
212 Chassis Level Control Using Proportional-Integral-Derivative Control, Fuzzy Logic and Deep Learning
Authors: Atakan Aral Ormancı, Tuğçe Arslantaş, Murat Özcü
Abstract:
This study presents the design and implementation of an experimental chassis-level system for various control applications. Specifically, the height level of the chassis is controlled using proportional-integral-derivative (PID), fuzzy logic, and deep learning control methods. Real-time data obtained from height and pressure sensors installed in a 6x2 truck chassis, in combination with pulse-width modulation signal values, are utilized during the tests. A prototype pneumatic system of a 6x2 truck is added to the setup, which enables the Smart Pneumatic Actuators to function as if they were in a real-world setting. To obtain real-time signal data from the height sensors, an Arduino Nano is utilized, while a Raspberry Pi processes the data using Matlab/Simulink and provides the correct output signals to control the Smart Pneumatic Actuator in the truck chassis. The objective of this research is to optimize the time it takes for the chassis to level down and up under various loads. To achieve this, PID control, fuzzy logic control, and deep learning techniques are applied to the system. The results show that the deep learning method is superior in optimizing the leveling time for this non-linear system. Fuzzy logic control with a triangular membership function as the rule base achieves better outcomes than PID control. Traditional PID control improves the time it takes to level the chassis down and up compared to an uncontrolled system. The findings highlight the superiority of deep learning techniques in optimizing the leveling time for a non-linear system, and the potential of fuzzy logic control. The proposed approach and the experimental results provide a valuable contribution to the field of control, automation, and systems engineering.
Keywords: automotive, chassis level control, control systems, pneumatic system control
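For reference, the first of the three compared controllers reduces to a discrete PID update driving the actuation signal. The sketch below is a minimal illustration; the gains, the toy first-order plant, and the time step are placeholder assumptions rather than the study's tuned values.

```python
# Minimal sketch of a discrete PID height controller for a chassis-leveling
# loop. Gains and the toy first-order plant below are placeholder assumptions.

def run_pid(setpoint, height0, kp=2.0, ki=0.5, kd=0.1, dt=0.01, steps=1000):
    height, integral = height0, 0.0
    prev_error = setpoint - height0
    for _ in range(steps):
        error = setpoint - height
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative  # PWM-like command
        # Toy first-order pneumatic response to the actuation signal.
        height += 0.05 * u * dt
        prev_error = error
    return height

print(run_pid(setpoint=120.0, height0=80.0))  # mm; converges toward 120
```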
211 Millimeter-Wave Silicon Power Amplifiers for 5G Wireless Communications
Authors: Kyoungwoon Kim, Cuong Huynh, Cam Nguyen
Abstract:
Exploding demands for more data, faster data transmission speeds, less interference, more users, more wireless devices, and better, more reliable service (far exceeding what the current mobile communication networks in the RF spectrum below 6 GHz can provide) have led the wireless communication industry to focus on higher, previously unallocated spectrum. High frequencies near (around 28 GHz) or within the millimeter-wave regime are the logical solution to meet these demands. This high-frequency RF spectrum is increasingly important for wireless communications due to its large available bandwidths, which facilitate applications requiring large-data, high-speed transmission reaching up to multiple gigabits per second. It also resolves the traffic congestion problems of signals from many wireless devices operating in the current RF spectrum (below 6 GHz), hence handling more traffic. Consequently, the wireless communication industries are moving towards 5G (fifth generation) for next-generation communications such as mobile phones, autonomous vehicles, virtual reality, and the Internet of Things (IoT). The U.S. Federal Communications Commission (FCC) approved three frequency bands for 5G, around 28, 37 and 39 GHz, on 14 July 2016. We present silicon-based RFIC power amplifiers (PAs) for possible implementation in 5G wireless communications around 28, 37 and 39 GHz. The 16.5-28 GHz PA exhibits a measured gain of more than 34.5 dB and a very flat output power of 19.4±1.2 dBm across 16.5-28 GHz. The 25.5/37-GHz PA exhibits gains of 21.4 and 17 dB, and maximum output powers of 16 and 13 dBm at 25.5 and 37 GHz, respectively, in the single-band mode. In the dual-band mode, the maximum output power is 13 and 9.5 dBm at 25.5 and 37 GHz, respectively. The 10-19/23-29/33-40 GHz PA has maximum output powers of 15, 13.3, and 13.8 dBm at 15, 25, and 35 GHz, respectively, in the single-band mode. When this PA is operated in dual-band mode, it has maximum output powers of 11.4/8.2 dBm at 15/25 GHz, 13.3/3 dBm at 15/35 GHz, and 8.7/6.7 dBm at 25/35 GHz. In the tri-band mode, it exhibits 8.8/5.4/3.8 dBm maximum output power at 15/25/35 GHz. Acknowledgement: This paper was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
Keywords: Microwaves, Millimeter waves, Power Amplifier, Wireless communications
210 The Subcellular Localisation of EhRRP6 and Its Involvement in Pre-Ribosomal RNA Processing in Growth-Stressed Entamoeba histolytica
Authors: S. S. Singh, A. Bhattacharya, S. Bhattacharya
Abstract:
The eukaryotic exosome complex plays a pivotal role in RNA biogenesis, maturation, surveillance and differential expression of various RNAs in response to varying environmental signals. The exosome is composed of nine evolutionarily conserved core subunits and the associated exonucleases Rrp6 and Rrp44. Rrp6p is crucial for the processing of rRNAs and other non-coding RNAs, regulation of polyA tail length, and termination of transcription. Rrp6p, a 3'-5' exonuclease, is required for degradation of the 5'-external transcribed spacer (ETS) released from the rRNA precursors during the early steps of pre-rRNA processing. In the parasitic protist Entamoeba histolytica, unprocessed pre-rRNA and the 5' ETS subfragment accumulate in response to growth stress. To understand the processes leading to this accumulation, we looked for Rrp6 and the exosome subunits in E. histolytica by in silico approaches. Of the nine core exosomal subunits, seven had a high percentage of sequence similarity with their yeast and human counterparts. The EhRrp6 homolog contained the exoribonuclease and HRDC domains like yeast, but its N-terminus lacked the PMC2NT domain. EhRrp6 complemented the temperature-sensitive phenotype of yeast rrp6Δ cells, suggesting conservation of biological activity. We showed 3'-5' exoribonuclease activity of EhRrp6p with appropriate in vitro-synthesized RNA substrates. Like the yeast enzyme, EhRrp6p degraded unstructured RNA, but degraded stem-loops only slowly. Furthermore, immunolocalization revealed that EhRrp6 was nuclear-localized in normal cells but was diminished from the nucleus during serum starvation, which could explain the accumulation of the 5' ETS during stress. Our study shows functional conservation of EhRrp6p in E. histolytica, an early-branching eukaryote, and will help in understanding the evolution of exosomal components and their regulatory function.
Keywords: entamoeba histolytica, exosome complex, rRNA processing, Rrp6
209 Dynamic Control Theory: A Behavioral Modeling Approach to Demand Forecasting amongst Office Workers Engaged in a Competition on Energy Shifting
Authors: Akaash Tawade, Manan Khattar, Lucas Spangher, Costas J. Spanos
Abstract:
Many grids are increasing the share of renewable energy in their generation mix, which is making generation less controllable. Buildings, which consume nearly 33% of all energy, are a key target for demand response: i.e., mechanisms for demand to meet supply. Understanding the behavior of office workers is a start towards developing demand response for one sector of building technology. The literature notes that dynamic computational modeling can be predictive of individual action, especially given that occupant behavior is traditionally abstracted away from demand forecasting. Recent work founded on Social Cognitive Theory (SCT) has provided a promising conceptual basis for modeling behavior, personal states, and environment using control-theoretic principles. Here, an adapted linear dynamical system of latent states and exogenous inputs is proposed to simulate energy demand amongst office workers engaged in a social energy-shifting game. The energy-shifting competition is implemented in an office in Singapore that is connected to a minigrid of buildings with a consistent 'price signal.' This signal is translated into a 'points signal' by a reinforcement learning (RL) algorithm to influence participant energy use. The dynamic model functions at the intersection of the points signals, baseline energy consumption trends, and SCT behavioral inputs to simulate future outcomes. This study endeavors to analyze how the dynamic model trains an RL agent and, subsequently, how accurately load deferability can be simulated. The results offer a generalizable behavioral model for energy competitions that provides a framework for further research on transfer learning for RL and, more broadly, transactive control.
Keywords: energy demand forecasting, social cognitive behavioral modeling, social game, transfer learning
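A minimal sketch of the kind of adapted linear dynamical system described above, with latent behavioral states evolving under exogenous inputs such as the points signal; the matrices, dimensions, and input shapes are illustrative assumptions, not the fitted model.

```python
import numpy as np

# Sketch of a linear dynamical system x[t+1] = A x[t] + B u[t],
# y[t] = C x[t], with latent behavioral states x and exogenous inputs u
# (e.g., points signal, baseline trend). A, B, C below are illustrative
# assumptions, not the study's fitted parameters.

A = np.array([[0.9, 0.05], [0.0, 0.8]])   # latent-state dynamics
B = np.array([[0.1, 0.02], [0.05, 0.1]])  # response to exogenous inputs
C = np.array([[1.0, 0.5]])                # map latent states to demand

x = np.zeros(2)
demand = []
for t in range(96):                        # e.g., 96 15-minute intervals
    u = np.array([np.sin(2 * np.pi * t / 96), 1.0])  # points signal, baseline
    x = A @ x + B @ u
    demand.append(float(C @ x))

print(demand[:4])  # simulated demand trajectory (first few steps)
```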
208 Mechanical Response Investigation of Wafer Probing Test with Vertical Cobra Probe via the Experiment and Transient Dynamic Simulation
Authors: De-Shin Liu, Po-Chun Wen, Zhen-Wei Zhuang, Hsueh-Chih Liu, Pei-Chen Huang
Abstract:
Wafer probing tests play an important role in semiconductor manufacturing, in accordance with the yield and reliability requirements of the wafer after the back-end-of-line process. Accordingly, stable physical and electrical contact between the probe and the tested wafer during wafer probing is regarded as an essential issue in identifying the known good die. The probe card can be integrated with multiple probe needles, which are classified as vertical, cantilever and micro-electro-mechanical-systems (MEMS) types. Among these, the vertical probe has several advantages, including maintainability, high probe density and feasibility for high-speed wafer testing. In the present study, the mechanical response of a wafer probing test with a vertical cobra probe on a 720 μm thick silicon (Si) substrate with a 1.4 μm thick aluminum (Al) pad is investigated by experiment and a transient dynamic simulation approach. Because the deformation mechanism of the vertical cobra probe is determined by both bending and buckling mechanisms, the stable correlation between contact force and overdrive (OD) length must be carefully verified. Moreover, an adequate OD length, with its corresponding contact force, is needed to pierce the native oxide layer of the Al pad while preventing probing-induced damage to the interconnect system. Accordingly, the scratch depth of the Al pad under various OD lengths is estimated by atomic force microscopy (AFM) and simulation. In the wafer probing test configuration, the contact between the probe needle and the tested object introduces large deformation and twisting of the mesh, causing numerical divergence. For this reason, the arbitrary Lagrangian-Eulerian method is utilized in the present simulation work to overcome this issue. The analysis revealed a slight difference between simulation and measurement at an OD of 40 μm, while the simulated scratch depths of the Al pad were almost identical to the measured ones at higher OD lengths up to 70 μm. This can be attributed to unstable contact of the probe at low OD lengths, where the scratch depth is below 30% of the Al pad thickness; the contact becomes stable once the scratch depth exceeds 30% of the pad thickness. The splash of the Al pad is observed by AFM, with the splashed Al debris accumulating on a specific side; this phenomenon is successfully reproduced in the transient dynamic simulation. Thus, the preferred testing OD lengths are found to be 45 μm to 70 μm, with corresponding scratch depths of 31.4% and 47.1% of the Al pad thickness, respectively. The investigation approach demonstrated in this study contributes to analyzing the mechanical response of the wafer probing test configuration under large-strain conditions and to assessing the geometric designs and material selections of probe needles, to meet the requirements of high-resolution, high-speed wafer-level probing tests for thinned-wafer applications.
Keywords: wafer probing test, vertical probe, probe mark, mechanical response, FEA simulation
207 Comparison of Direction of Arrival Estimation Method for Drone Based on Phased Microphone Array
Authors: Jiwon Lee, Yeong-Ju Go, Jong-Soo Choi
Abstract:
Drones were first developed for military use and were used in World War I. Recently, however, drones have been used in a variety of fields. Several companies actively utilize drone technology to strengthen their services, and in agriculture, drones are used for crop monitoring and sowing. Others use drones for hobby activities such as photography. However, as the range of drone use expands rapidly, problems caused by drones, such as improper flying, privacy violations and terrorism, are also increasing. As the need for monitoring and tracking of drones increases, research is progressing accordingly. A drone detection system estimates the position of a drone using the physical phenomena that occur when drones fly. The drone detection systems being developed utilize many approaches, such as radar, infrared cameras, and acoustic detection. Among the various drone detection systems, the acoustic detection system is advantageous in that its microphone array is smaller, less expensive, and easier to operate than other systems. In this paper, the acoustic signal is acquired using a minimal number of microphones while the drone is flying, and the direction of the drone is estimated. When estimating the Direction of Arrival (DOA), there is a method of calculating the DOA based on the Time Difference of Arrival (TDOA) and a method of calculating the DOA based on beamforming. The TDOA technique requires fewer microphones than the beamforming technique, but is weak in noisy environments and can only estimate the DOA of a single source. The beamforming technique requires more microphones than the TDOA technique; however, it is robust against noisy environments, and it is possible to simultaneously estimate the DOA of several drones. When estimating the DOA using acoustic signals emitted from a drone, it is impossible to measure the position of the drone; only the direction can be estimated. To overcome this problem, in this work we show how to estimate the position of drones by arranging multiple microphone arrays. The microphone arrays used in the experiments were tetrahedral arrays of four microphones. We simulated the performance of each DOA algorithm and demonstrated the simulation results through experiments.
Keywords: acoustic sensing, direction of arrival, drone detection, microphone array
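In the TDOA branch of this comparison, the delay between two microphones is typically estimated from a generalized cross-correlation. The sketch below shows a standard GCC-PHAT estimator on synthetic signals; the sampling rate and test signal are assumptions, and this is not necessarily the exact estimator used in the paper.

```python
import numpy as np

# Minimal GCC-PHAT sketch for estimating the time difference of arrival
# (TDOA) between two microphone channels, a standard ingredient of
# TDOA-based DOA methods. The signals here are synthetic.

def gcc_phat(sig, ref, fs):
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15             # PHAT weighting
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                   # delay in seconds

fs = 48000
t = np.arange(4096) / fs
ref = np.sin(2 * np.pi * 1200 * t) * np.exp(-40 * t)
sig = np.roll(ref, 24)                  # simulate a 0.5 ms propagation delay
print(gcc_phat(sig, ref, fs))           # ~ 24 / 48000 = 5e-4 s
```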
206 Downscaling GRACE Gravity Models Using Spectral Combination Techniques for Terrestrial Water Storage and Groundwater Storage Estimation
Authors: Farzam Fatolazadeh, Kalifa Goita, Mehdi Eshagh, Shusen Wang
Abstract:
The Gravity Recovery and Climate Experiment (GRACE) is a satellite mission with twin satellites for the precise determination of spatial and temporal variations in the Earth's gravity field. The products of this mission are monthly global gravity models containing the spherical harmonic coefficients and their errors. These GRACE models can be used for estimating terrestrial water storage (TWS) variations across the globe at large scales, thereby offering an opportunity for surface water and groundwater storage (GWS) assessments. Yet, the ability of GRACE to monitor changes at smaller scales is too limited for local water management authorities. This is largely due to the low spatial and temporal resolutions of its models (~200,000 km² and one month, respectively). High-resolution GRACE data products would substantially enrich the information needed by local-scale decision-makers while providing data for regions that lack adequate in situ monitoring networks, including northern parts of Canada. Such products could eventually be obtained through downscaling. In this study, we extended spectral combination theory to simultaneously downscale GRACE spatially, from its coarse 3° resolution to 0.25°, and temporally, from monthly to daily resolution. This method combines the monthly gravity field solutions of GRACE and daily hydrological model products, in the form of both low- and high-frequency signals, to produce high spatiotemporal resolution TWS anomaly (TWSA) and GWS anomaly (GWSA) products. The main contribution and originality of this study are to comprehensively and simultaneously consider GRACE and hydrological variables and their uncertainties to form the estimator in the spectral domain. Therefore, it is predicted that downscaled products of acceptable accuracy can be reached.
Keywords: GRACE satellite, groundwater storage, spectral combination, terrestrial water storage
205 The Use of Correlation Difference for the Prediction of Leakage in Pipeline Networks
Authors: Mabel Usunobun Olanipekun, Henry Ogbemudia Omoregbee
Abstract:
Anomalies such as leakages and bursts in water, hydraulic or petrochemical pipeline networks have significant implications for the economy and the environment. In order to ensure that pipeline systems are reliable, they must be efficiently controlled. Wireless Sensor Networks (WSNs) have become powerful tools in critical infrastructure monitoring systems for water, oil and gas pipelines. Some loss of water, oil and gas is inevitable and is strongly linked to financial costs and environmental problems, and its avoidance often leads to savings of economic resources. Substantial repair costs and the loss of precious natural resources are part of the financial impact of leaking pipes. Pipeline systems experts have implemented various methodologies in recent decades to identify and locate leakages in water, oil and gas supply networks. These methodologies include, among others, the use of acoustic sensors, pressure measurements, abrupt-change statistical analysis, etc. The problem of leak quantification is to estimate, given some observations about a network, the size and location of one or more leaks in a water pipeline network. In detecting background leakage, however, there is greater uncertainty in using these methodologies, since their output is not so reliable. In this work, we present a scalable concept and simulation in which a pressure-driven model (PDM) was used to determine water pipeline leakage in a network. The pressure data were collected with acoustic sensors located at node points spaced a predetermined distance apart. Using the correlation difference, we were able to locate a leakage locally introduced at a predetermined point between two consecutive nodes, which caused a substantial pressure difference in the pipeline network. After de-noising the signals from the sensors at the nodes, we successfully obtained the exact point where we introduced the local leakage using the correlation difference model we developed.
Keywords: leakage detection, acoustic signals, pipeline network, correlation, wireless sensor networks (WSNs)
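A minimal sketch of the localization idea: the delay between the signals at two nodes bracketing the leak is read off the peak of their cross-correlation, and the leak position follows from the wave speed. The wave speed, node spacing, and synthetic signals below are illustrative assumptions, not the study's setup.

```python
import numpy as np

# Sketch of locating a leak between two sensor nodes from the
# cross-correlation of their (de-noised) signals. Wave speed, spacing,
# and the synthetic signals are illustrative assumptions.

fs = 2000.0          # sampling rate (Hz)
c = 1200.0           # acoustic wave speed in the pipe (m/s)
L = 100.0            # distance between the two nodes (m)
x_true = 30.0        # leak position measured from node 1 (m)

# Arrival-time difference: tau = t1 - t2 = (2*x - L) / c
lag_true = int(round((2 * x_true - L) / c * fs))

rng = np.random.default_rng(1)
leak = rng.normal(size=4000)                       # broadband leak noise
s1 = np.roll(leak, max(lag_true, 0)) + 0.1 * rng.normal(size=4000)
s2 = np.roll(leak, max(-lag_true, 0)) + 0.1 * rng.normal(size=4000)

cc = np.correlate(s1 - s1.mean(), s2 - s2.mean(), mode="full")
tau = (np.argmax(cc) - (len(s1) - 1)) / fs         # estimated delay (s)
x_hat = (L + c * tau) / 2                          # invert the delay formula
print(f"estimated leak position: {x_hat:.1f} m")   # ~30 m
```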
204 Speech Emotion Recognition: A DNN and LSTM Comparison in Single and Multiple Feature Application
Authors: Thiago Spilborghs Bueno Meyer, Plinio Thomaz Aquino Junior
Abstract:
Speech, which privileges the functional and interactive nature of text, makes it possible to ascertain the spatiotemporal circumstances, the conditions of production and reception of the discourse, and explicit purposes such as informing, explaining, convincing, etc. These conditions allow human-robot interaction to come closer to interaction between humans, making it natural and sensitive to information. However, it is not enough to understand what is said; it is necessary to recognize emotions for the desired interaction. The validity of using neural networks for feature selection and emotion recognition was verified. For this purpose, we propose the use of neural networks and a comparison of models, such as recurrent neural networks and deep neural networks, to classify emotions from speech signals and verify the quality of recognition. The aim is to enable the deployment of robots in a domestic environment, such as the HERA robot from the RoboFEI@Home team, which focuses on autonomous service robots for the domestic environment. Tests were performed using only the Mel-frequency cepstral coefficients (MFCCs), as well as tests with several additional features: delta-MFCCs, spectral contrast, and the Mel spectrogram. For the training, validation and testing of the neural networks, the eNTERFACE'05 database was used, which has 42 speakers from 14 different nationalities speaking English. The data in the chosen database are videos that, for use in the neural networks, were converted into audio. As a result, a classification accuracy of 51.969% was found when using the deep neural network, while the recurrent neural network achieved an accuracy of 44.09%. The results are more accurate when only the Mel-frequency cepstral coefficients are used for classification with the deep neural network; in only one case is greater accuracy observed with the recurrent neural network, which occurs when using multiple features with a batch size of 73 and 100 training epochs.
Keywords: emotion recognition, speech, deep learning, human-robot interaction, neural networks
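A minimal sketch of the multi-feature extraction described above, using the librosa library to compute MFCCs, delta-MFCCs, spectral contrast, and the Mel spectrogram; the synthetic test tone and parameter choices are assumptions, not the study's exact configuration.

```python
import numpy as np
import librosa

# Sketch of the multi-feature extraction described above: MFCCs,
# delta-MFCCs, spectral contrast, and the Mel spectrogram, stacked per
# frame. A synthetic chirp stands in for an eNTERFACE'05 utterance;
# all parameter choices below are assumptions.

sr = 16000
y = librosa.chirp(fmin=200, fmax=4000, sr=sr, duration=2.0)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
delta = librosa.feature.delta(mfcc)
contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40)
mel_db = librosa.power_to_db(mel)

features = np.vstack([mfcc, delta, contrast, mel_db])  # (dims, frames)
print(features.shape)  # one column of stacked features per frame
```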
203 Vegetable Oil-Based Anticorrosive Coatings for Metals Protection
Authors: Brindusa Balanuca, Raluca Stan, Cristina Ott, Matei Raicopol
Abstract:
The current study aims to develop anticorrosive coatings using vegetable oil (VO)-based polymers. Due to their chemical versatility, reduced costs and, more importantly, higher hydrophobicity, VOs are great candidates in the field of anti-corrosive materials. Lignin (Ln) derivatives were also used in this research in order to achieve high-performance hydrophobic anti-corrosion layers. Methods: Through a rational functionalization pathway, the selected VO (linseed oil) is converted to a more reactive monomer, methacrylated linseed oil (denoted MLO). The synthesized MLO covers the metal surface in a thin layer and, through different polymerization techniques (using visible radiation or temperature, respectively) and well-established reaction conditions, is converted to a hydrophobic coating capable of protecting the metal against corrosive factors. In order to increase the anti-corrosion protection, lignin (Ln) was selected to be used together with the MLO macromonomer. Thus, superhydrophobic protective coatings will be formulated. Results: The selected synthetic strategy for converting the VO into more reactive compounds (MLO) has led to a functionalization degree greater than 80%. The obtained monomers were characterized through NMR and FT-IR by monitoring the characteristic signals after each synthesis step. Using ¹H-NMR data, the functionalization degrees were established. Both VO-based and VO-Ln anti-corrosion formulations were polymerized photochemically and thermally under specific reaction conditions (initiators, temperature range, reaction time) and were tested as anticorrosive coatings. A complete and advanced characterization of the synthesized materials will be presented in terms of thermal, mechanical and morphological properties. The anticorrosive properties were also evaluated and will be presented. Conclusions: Through the design strategy briefly presented, new composite materials for metal corrosion protection were successfully developed using natural derivatives: vegetable oils and lignin, respectively.
Keywords: anticorrosion protection, hydrophobe layers, lignin, methacrylates, vegetable oil
202 Use of Giant Magneto Resistance Sensors to Detect Micron to Submicron Biologic Objects
Authors: Manon Giraud, Francois-Damien Delapierre, Guenaelle Jasmin-Lebras, Cecile Feraudet-Tarisse, Stephanie Simon, Claude Fermon
Abstract:
Early diagnosis, or the detection of harmful substances at low levels, is a growing field of high interest. The ideal test should be cheap, easy to use, quick, reliable, specific, and have a very low detection limit. By combining the high specificity of antibody-functionalized magnetic beads used to immunocapture biologic objects with the high sensitivity of GMR-based sensors, it is possible to detect these biologic objects one by one, such as a cancerous cell, a bacterium or a disease biomarker. The simplicity of the detection process makes its use possible even for untrained staff. Giant magnetoresistance (GMR) is a recently discovered effect consisting of a modification of the electrical resistance of certain conductive layers when exposed to a magnetic field. This effect allows the detection of very small variations of magnetic field (typically a few tens of nanotesla). Magnetic nanobeads coated with antibodies targeting the analytes are mixed with a biological sample (blood, saliva) and incubated for 45 min. Then the mixture is injected into a very simple microfluidic chip and circulates above a GMR sensor that detects changes in the surrounding magnetic field. Magnetic particles alone do not create a field sufficient to be detected. Therefore, only the biologic objects surrounded by several antibody-functionalized magnetic beads (those that have been captured by the complementary antigens) are detected when they move above the sensor. Proof of concept has been carried out on NS1 mouse cancerous cells diluted in PBS, which were bound to 200 nm magnetic particles. Signals were detected in cell-containing samples, while none were recorded for negative controls. A binary response was hence assessed for this first biological model. The precise quantification of the analytes and their detection in highly diluted solutions are the steps now in progress.
Keywords: early diagnosis, giant magnetoresistance, lab-on-a-chip, submicron particle
201 Comparative Study of sLASER and PRESS Techniques in Magnetic Resonance Spectroscopy of Normal Brain
Authors: Shin Ku Kim, Yun Ah Oh, Eun Hee Seo, Chang Min Dae, Yun Jung Bae
Abstract:
Objectives: The commonly used PRESS technique in magnetic resonance spectroscopy (MRS) has the limitation of incomplete water suppression. The recently developed sLASER technique is known for its improved effectiveness in suppressing the water signal. However, no prior study has compared the two sequences in the normal human brain. In this study, we aimed to compare the performance of both techniques in brain MRS. Materials and methods: From January 2023 to July 2023, thirty healthy participants (mean age 38 years; 17 male, 13 female) without underlying neurological diseases were enrolled in this study. All participants underwent single-voxel MRS using both the PRESS and sLASER techniques on 3T MRI. Two regions of interest were allocated, in the left medial thalamus and left parietal white matter (WM), by a single reader. SpectroView Analysis (SW5, Philips, Netherlands) provided automatic measurements, including the signal-to-noise ratio (SNR) and peak height of water, the N-acetylaspartate (NAA)-water/choline (Cho)-water/creatine (Cr)-water ratios, and the NAA-Cr/Cho-Cr ratios. The measurements from the PRESS and sLASER techniques were compared using paired t-tests and Bland-Altman methods, and the variability was assessed using coefficients of variation (CV). Results: The SNR and peak height of water were significantly lower with sLASER than with PRESS (left medial thalamus: sLASER SNR/peak height 2092±475/328±85 vs. PRESS 2811±549/440±105; left parietal WM: 5422±1016/872±196 vs. 7152±1305/1150±278; all P < 0.001). Accordingly, the NAA-water/Cho-water/Cr-water ratios and NAA-Cr/Cho-Cr ratios were significantly higher with sLASER than with PRESS (all P < 0.001). The variability of the NAA-water/Cho-water/Cr-water ratios and the Cho-Cr ratio in the left medial thalamus was lower with sLASER than with PRESS (CV, sLASER vs. PRESS: 19.9 vs. 58.1, 19.8 vs. 54.7, 20.5 vs. 43.9, and 11.5 vs. 16.2). Conclusion: The sLASER technique demonstrated enhanced background water suppression, resulting in increased metabolite signals and reduced variability in brain metabolite measurements with MRS. Therefore, sLASER could offer a more precise and stable method for identifying brain metabolites.
Keywords: Magnetic resonance spectroscopy, Brain, sLASER, PRESS
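The statistical comparison above comes down to paired t-tests and coefficients of variation on matched sLASER/PRESS measurements; the sketch below illustrates both computations on made-up numbers.

```python
import numpy as np
from scipy import stats

# Sketch of the statistical comparison described above: a paired t-test
# on matched sLASER/PRESS measurements plus the coefficient of variation
# (CV = SD/mean * 100). The values below are made up for illustration.

slaser = np.array([2100., 1950., 2230., 2010., 2150., 1990.])
press = np.array([2800., 2650., 2900., 2700., 2850., 2750.])

t_stat, p_value = stats.ttest_rel(slaser, press)
cv = lambda x: x.std(ddof=1) / x.mean() * 100

print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"CV sLASER = {cv(slaser):.1f}%, CV PRESS = {cv(press):.1f}%")
```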
200 Tracking the Effect of Ibutilide on Amplitude and Frequency of Fibrillatory Intracardiac Electrograms Using Regression Analysis
Authors: H. Hajimolahoseini, J. Hashemi, D. Redfearn
Abstract:
Background: Catheter ablation is an effective therapy for symptomatic atrial fibrillation (AF). The intracardiac electrogram (IEGM) collected during this procedure contains precious information that has not been explored to its full capacity. Novel processing techniques allow these recordings to be examined from different perspectives, which can lead to improved therapeutic approaches. In our previous study, we showed that variation in amplitude, measured through Shannon entropy, could be used as an AF recurrence risk stratification factor in patients who received Ibutilide before the electrograms were recorded. The aim of this study is to further investigate the effect of Ibutilide on the characteristics of signals recorded from the left atrium (LA) of patients with persistent AF before and after administration of the drug. Methods: The IEGMs collected from different intra-atrial sites of 12 patients were studied and compared before and after Ibutilide administration. First, the before- and after-Ibutilide IEGMs that were recorded within a Euclidean distance of 3 mm in the LA were selected as pairs for comparison. For every selected pair of IEGMs, the probability distribution function (PDF) of the amplitude in the time domain and of the magnitude in the frequency domain was estimated using regression analysis. The PDF represents the relative likelihood of a variable falling within a specific range of values. Results: Our observations showed that in the time domain, the PDF of the amplitudes fitted a Gaussian distribution, while in the frequency domain, it fitted a Rayleigh distribution. Our observations also revealed that after Ibutilide administration, the IEGMs had significantly narrower, shorter-tailed PDFs in both the time and frequency domains. Conclusion: This study shows that the PDFs of the IEGMs before and after administration of Ibutilide exhibit significantly different properties, both in the time and frequency domains. Hence, by fitting the PDF of IEGMs in the time domain to a Gaussian distribution, or in the frequency domain to a Rayleigh distribution, the effect of Ibutilide can easily be tracked using the statistics of the PDF (e.g., the standard deviation), while this is difficult from the IEGM waveform itself.
Keywords: atrial fibrillation, catheter ablation, probability distribution function, time-frequency characteristics
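A minimal sketch of the distribution-fitting step described above: a Gaussian is fitted to the time-domain amplitudes and a Rayleigh distribution to the frequency-domain magnitudes, so the drug effect can be tracked through the fitted statistics. The synthetic IEGM segment below is an illustrative assumption.

```python
import numpy as np
from scipy import stats

# Sketch of the PDF estimation described above: fit a Gaussian to the
# time-domain amplitudes of an IEGM segment and a Rayleigh distribution
# to its frequency-domain magnitudes, then track the fitted parameters
# (e.g., standard deviation). The synthetic segment is an assumption.

rng = np.random.default_rng(42)
iegm = rng.normal(0.0, 0.4, size=2048)           # stand-in for a segment

mu, sigma = stats.norm.fit(iegm)                  # time-domain Gaussian fit
mags = np.abs(np.fft.rfft(iegm))
loc, scale = stats.rayleigh.fit(mags, floc=0.0)   # frequency-domain fit

print(f"Gaussian: mu = {mu:.3f}, sigma = {sigma:.3f}")
print(f"Rayleigh scale: {scale:.3f}")
# Narrower post-Ibutilide PDFs would appear as smaller sigma and scale.
```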
199 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring
Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover
Abstract:
Measurement of radioactive isotopes of atmospheric xenon is used to detect, locate and identify confined nuclear tests as part of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed system, the SPALAX process, to continuously measure the concentration of these fission products. During atmospheric transport, the radioactive xenon undergoes significant dilution between the source point and the measurement station. Given the distances between the fixed stations located all over the globe, the typical volume activities measured are near 1 mBq m⁻³. To avoid the constraints induced by atmospheric dilution, the development of a mobile detection system is in progress; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique in order to drastically reduce the environmental background (which masks such low activities). The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². In order to minimize leakage current, each wafer has been segmented into four independent silicon pixels. The cell is sandwiched between two low-background NaI(Tl) detectors (70×70×40 mm³ crystals). The expected minimum detectable concentration (MDC) for each radioxenon isotope is on the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) are used to process all the signals. Time synchronization is ensured by a dedicated PTP network, using the IEEE 1588 Precision Time Protocol. We present this system from its simulation to the laboratory tests.
Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels
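At its core, the beta/gamma coincidence selection amounts to matching event timestamps from the silicon and NaI(Tl) channels within a short window. The sketch below illustrates that matching on synthetic timestamps; the event rates and the 1 µs window are assumptions, not the prototype's settings.

```python
import numpy as np

# Sketch of beta/gamma coincidence selection: keep beta events (silicon
# pixels) that have a gamma event (NaI(Tl)) within a short time window.
# The timestamps and the 1 microsecond window are illustrative assumptions.

rng = np.random.default_rng(7)
beta_t = np.sort(rng.uniform(0, 1.0, 500))       # beta timestamps (s)
gamma_t = np.sort(rng.uniform(0, 1.0, 2000))     # gamma timestamps (s)
window = 1e-6                                    # coincidence window (s)

idx = np.searchsorted(gamma_t, beta_t)           # nearest-neighbor search
left = np.abs(gamma_t[np.clip(idx - 1, 0, len(gamma_t) - 1)] - beta_t)
right = np.abs(gamma_t[np.clip(idx, 0, len(gamma_t) - 1)] - beta_t)
coincident = np.minimum(left, right) < window

print(f"{coincident.sum()} coincident beta/gamma pairs out of {len(beta_t)}")
```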
198 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver
Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto
Abstract:
The Earth's ionosphere is located at altitudes of about 70 km to several hundred km above the ground and is composed of ions and electrons, i.e., plasma. This plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. Sounding observations from the top and bottom of the ionosphere were long the standard way to investigate such ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are primarily built for land surveying, has been conducted in several countries. In these stations, however, multi-frequency receivers are installed to estimate the effect of the plasma delay from its frequency dependence, and the cost of multi-frequency receivers is much higher than that of single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variations such as the vertical TEC distribution. In this measurement, a single-frequency ublox GPS receiver was used to probe the ionospheric TEC. The observations were made at Mandalay Technological University in Myanmar. In this method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of the functions are determined by least-squares fitting on pseudorange data obtained at a known location, under the assumption of a thin-layer ionosphere. The validity of the method was evaluated against measurements obtained by the Japanese GNSS observation network GEONET. The performance of the single-frequency GPS receiver measurements was compared with the results of dual-frequency measurements.
Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC
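A minimal sketch of the estimation step described above: vertical TEC is represented by a low-order polynomial in latitude and longitude, and its coefficients are solved by least squares from slant observations through a thin-layer mapping. The geometry, the simple 1/sin(elevation) mapping, and all values below are simplified assumptions.

```python
import numpy as np

# Sketch of the TEC estimation described above: represent vertical TEC
# as a polynomial in latitude/longitude and solve for its coefficients
# by least squares from slant observations, under a thin-layer
# ionosphere assumption. All observation values are synthetic.

rng = np.random.default_rng(3)
lat = rng.uniform(20.0, 23.0, 40)    # pierce-point latitudes (deg)
lon = rng.uniform(95.0, 97.0, 40)    # pierce-point longitudes (deg)
elev = rng.uniform(0.5, 1.4, 40)     # satellite elevation angles (rad)

true_coef = np.array([30.0, 1.2, -0.8])                    # TECU; toy truth
vtec = true_coef[0] + true_coef[1] * (lat - 21.5) + true_coef[2] * (lon - 96.0)
mapping = 1.0 / np.sin(elev)                               # slant factor
stec = vtec * mapping + rng.normal(0.0, 0.5, 40)           # observed slant TEC

# Each slant observation constrains the vertical-TEC polynomial.
A = np.column_stack([mapping, mapping * (lat - 21.5), mapping * (lon - 96.0)])
coef, *_ = np.linalg.lstsq(A, stec, rcond=None)
print(coef)  # recovers approximately [30.0, 1.2, -0.8]
```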
197 Analysis of Cell Cycle Status in Radiation Non-Targeted Hepatoma Cells Using Flow Cytometry: Evidence of Dose Dependent Response
Authors: Sharmi Mukherjee, Anindita Chakraborty
Abstract:
Cellular irradiation incites complex responses, including arrest of cell cycle progression. This article examines the effects of radiation on the cell cycle status of radiation non-targeted cells. Human hepatoma HepG2 cells were exposed to increasing doses of γ radiation (1, 2, 4, 6 Gy), and their cell culture media was transferred to non-targeted HepG2 cells cultured in other Petri plates. These radiation non-targeted cells cultured in the ICCM (irradiated cell conditioned media) were the bystander cells on which cell cycle analysis was performed using flow cytometry. An apparent decrease in the distribution of bystander cells in G0/G1 phase was observed with increasing radiation dose up to 4 Gy, representing a linear relationship. This was accompanied by a gradual increase in the cellular distribution in G2/M phase. Interestingly, the numbers of cells in G2/M phase after 1 and 2 Gy irradiation were not significantly different from each other. However, the percentages of G2-phase cells at the 4 and 6 Gy doses were significantly higher than at the 2 Gy dose, indicating the IC50 dose to be between 2 and 4 Gy. Cell cycle arrest is an indirect indicator of genotoxic damage in cells. In this study, bystander stress signals transmitted through the cell culture media of irradiated cells disseminated the radiation-induced DNA damage to the non-targeted cells, which resulted in arrest of cell cycle progression at the G2/M checkpoint. This implies that the actual biological effects of radiation form a penumbra, encompassing a larger area than the actual beam. This article highlights the existence of genotoxic damage as a bystander effect of γ rays in human hepatoma cells by cell cycle analysis and opens up avenues for the appraisal of bystander stress communication between tumor cells. Understanding the underlying signaling mechanisms could allow them to be manipulated to maximize the damaging effects of radiation with a minimum dose, and thus has therapeutic applications.
Keywords: bystander effect, cell cycle, genotoxic damage, hepatoma
196 Realizing Teleportation Using Black-White Hole Capsule Constructed by Space-Time Microstrip Circuit Control
Authors: Mapatsakon Sarapat, Mongkol Ketwongsa, Somchat Sonasang, Preecha Yupapin
Abstract:
We have designed and performed preliminary tests on a space-time control circuit using a two-level system circuit with a 4-5 cm diameter microstrip for realistic teleportation. The work begins by calculating the parameters that allow a circuit to use alternating current (AC) at a specified frequency as the input signal. The method causes electrons to move along the circuit perimeter starting at the speed of light, which was found to be satisfied on the basis of wave-particle duality. It is able to establish a superluminal speed (faster than light) for the electron cloud in the middle of the circuit, creating a timeline and a propulsive force as well. The timeline is formed by the cancellation of time stretching and shrinking in the relativistic regime, in which absolute time has vanished. In fact, both black holes and white holes are created from the time signals at the beginning, where the electrons travel close to the speed of light. They entangle together like a capsule until they reach the point where they collapse and cancel each other out, which is controlled by the frequency of the circuit. Therefore, we can apply this method to large-scale circuits such as potassium, from which the same method can be applied to form a system to teleport living things. In fact, the black hole is a hibernation-system environment that allows living things to survive and travel to the teleportation destination, which can be controlled in position and time relative to the speed of light. When the capsule reaches its destination, the frequency is increased so that the black holes and white holes cancel each other out into a balanced environment. Therefore, life can safely teleport to the destination. There must be the same system at both the origin and the destination, which could form a network. Moreover, the method can also be applied to space travel. The designed system will be tested on a small scale using a microstrip circuit that we can create in the laboratory on a limited budget and that can be used in both wired and wireless systems.
Keywords: quantum teleportation, black-white hole, time, timeline, relativistic electronics
195 A Cooperative Signaling Scheme for Global Navigation Satellite Systems
Authors: Keunhong Chae, Seokho Yoon
Abstract:
Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for location services, thus calling for a more efficient signaling scheme among the satellites used in the overall GNSS network. Spatial diversity can be one such efficient signaling scheme, in that it improves the network throughput; however, it requires multiple antennas, which could cause a significant increase in the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, where virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmitting satellite of interest and by modeling the neighboring satellites as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate in an asynchronous way, and thus the overall performance of the GNSS network could degrade severely. To tackle the problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement due to the signal decoding required at the relay nodes. Although the implementation at the relay nodes could be made somewhat simpler by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient if the operations of the relay nodes could be implemented at the source node, which has more resources than the relay nodes. So, in this paper, we propose a novel cooperative signaling scheme, where the data signals are combined in a unique way at the source node, thus obviating the need for complex operations such as signal decoding, time reversal and conjugation at the relay nodes. The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while reducing the complexity at the relay nodes significantly. Acknowledgment: This work was supported by the National GNSS Research Center program of the Defense Acquisition Program Administration and the Agency for Defense Development.
Keywords: global navigation satellite network, cooperative signaling, data combining, nodes
194 Examining Statistical Monitoring Approach against Traditional Monitoring Techniques in Detecting Data Anomalies during Conduct of Clinical Trials
Authors: Sheikh Omar Sillah
Abstract:
Introduction: Monitoring is an important means of ensuring the smooth implementation and quality of clinical trials. For many years, traditional site monitoring approaches have been critical in detecting data errors, but they are not optimal for identifying fabricated and implanted data or non-random data distributions that may significantly invalidate study results. The objective of this paper was to provide recommendations, based on best statistical monitoring practices, for detecting data-integrity issues suggestive of fabrication and implantation early in the conduct of a study, to allow implementation of meaningful corrective and preventive actions. Methodology: Electronic bibliographic databases (Medline, Embase, PubMed, Scopus, and Web of Science) were used for the literature search, and both qualitative and quantitative studies were sought. Search results were uploaded into the EPPI-Reviewer software, and only publications written in English from 2012 onward were included in the review. Gray literature not considered to present reproducible methods was excluded. Results: A total of 18 peer-reviewed publications were included in the review. The publications demonstrated that traditional site monitoring techniques are not efficient in detecting data anomalies. By specifying project-specific parameters, such as laboratory reference range values, visit schedules, etc., together with appropriate interactive data monitoring, statistical monitoring can offer study teams early signals of data anomalies. The review further revealed that statistical monitoring is useful for identifying unusual data patterns that might reveal issues affecting data integrity or potentially affecting study participants' safety. However, subjective measures may not be good candidates for statistical monitoring. Conclusion: The statistical monitoring approach requires a combination of education, training, and experience sufficient to implement its principles in detecting data anomalies for the statistical aspects of a clinical trial.
Keywords: statistical monitoring, data anomalies, clinical trials, traditional monitoring
193 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning
Authors: T. Bryan , V. Kepuska, I. Kostnaic
Abstract:
A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using "basis vectors" that are learned from the audio data itself. The basis vectors are shown to give higher data compression and better signal-to-noise enhancement than the Gabor and gammatone "seed atoms" that were used to generate them. The basis vectors are the input weights of a sparse autoencoder (SAE) that is trained using "envelope samples" of windowed segments of the audio data. The envelope samples are extracted from the audio data by performing atomic decomposition with Gabor or gammatone seed atoms via matching pursuit, a process that identifies segments of audio data that are locally coherent with the seed atoms. The envelope samples are formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR versus data compression curves are generated for speech signals as well as early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84%, for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors. This display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the basis vectors with the best denoising performance.
Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit
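A minimal sketch of the matching-pursuit step used to find locally coherent segments: the atom best correlated with the residual is selected and its contribution subtracted, iteratively. The small Gabor-like dictionary and the test signal are illustrative assumptions, not the paper's dictionary.

```python
import numpy as np

# Sketch of matching pursuit over a small Gabor-like dictionary, the step
# used above to find audio segments locally coherent with seed atoms.
# The dictionary construction and the synthetic signal are assumptions.

def gabor_atom(n, freq, width, center):
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / np.linalg.norm(g)            # unit-norm atom

n = 512
dictionary = np.array([gabor_atom(n, f, w, c)
                       for f in (0.01, 0.03, 0.08)
                       for w in (16, 48)
                       for c in range(32, n, 64)])

rng = np.random.default_rng(5)
signal = 3.0 * gabor_atom(n, 0.03, 48, 224) + 0.1 * rng.normal(size=n)

residual = signal.copy()
for _ in range(3):                           # a few pursuit iterations
    corr = dictionary @ residual
    k = np.argmax(np.abs(corr))              # best-matching atom
    residual -= corr[k] * dictionary[k]      # subtract its contribution
    print(f"atom {k}: coefficient {corr[k]:.2f}")
```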
192 Rapid Formation of Ortho-Boronoimines and Derivatives for Reversible and Dynamic Bioconjugation Under Physiological Conditions
Authors: Nicholas C. Rose, Christopher D. Spicer
Abstract:
The regeneration of damaged or diseased tissues would provide an invaluable therapeutic tool in biological research and medicine. Cells must be provided with a number of different biochemical signals in order to form mature tissue, through complex signaling networks that are difficult to recreate in synthetic materials. The ability to attach and detach bioactive proteins from a material in an iterative and dynamic manner would therefore present a powerful way to mimic natural biochemical signaling cascades for tissue growth. We propose to attach these bioactive proteins reversibly using ortho-boronoimine (oBI) linkages and related derivatives, formed by the reaction of an ortho-boronobenzaldehyde with a nucleophilic amine derivative. To enable the use of oBIs for biomaterial modification, we have studied the binding and cleavage processes in precise detail in the context of small-molecule models. A panel of oBI complexes has been synthesized and screened using a novel Förster resonance energy transfer (FRET) assay, based on a cyanine dye FRET pair (Cy3 and Cy5), to identify the most reactive boron-aldehyde/amine nucleophile pairs. Upon conjugation of the dyes, FRET occurs under Cy3 excitation, and the resultant ratio of Cy3:Cy5 emission directly correlates with conversion. Reaction kinetics and equilibria can be accurately quantified for reactive pairs, with the dissociation constants (KD) of oBI derivatives in water found to span nine orders of magnitude (10⁻²-10⁻¹¹ M). These studies have provided us with a better understanding of oBI linkages, which we hope to exploit to attach bioconjugates reversibly to materials. The long-term aim of the project is to develop a modular biomaterial platform that can be used to help combat chronic diseases such as osteoarthritis, heart disease, and chronic wounds by providing cells with potent biological stimuli for tissue engineering.
Keywords: dynamic, bioconjugation, boronoimine, rapid, physiological
191 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks
Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev
Abstract:
One possible approach to maintaining the security of communication systems relies on physical layer security mechanisms. However, in wireless time-division-duplex systems, where the uplink and downlink channels are reciprocal, the channel estimation procedure is exposed to attacks known as pilot contamination, whose aim is to have an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each belonging to a constellation shifted from the original N-PSK symbols by a certain angle. In this paper, the legitimate pilots' offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than on their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the pilot contamination attack (PCA) detection probability and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack, but the large number of possible combinations for the shifted constellations makes such an attack difficult to mount successfully. For this reason, the number of optimal shift value pairs is also studied, for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from signals in other cells should also be taken into account. Therefore, the impact of inter-cell interference on the performance of the method is investigated by means of a large number of simulations. The results show that the detection probability of the Shifted 2-N-PSK decreases as the signal-to-interference-plus-noise ratio decreases.
Keywords: channel estimation, inter-cell interference, pilot contamination attacks, wireless communications
190 Analysis of Biomarkers of Intractable Epileptogenic Brain Networks with Independent Component Analysis and Deep Learning Algorithms: A Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients
Authors: Bliss Singhal
Abstract:
Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide, including 1.2 million Americans. There exist millions of pediatric patients with intractable epilepsy, a condition in which seizures fail to come under control. The occurrence of seizures can result in physical injury, disorientation, unconsciousness, and additional symptoms that can impede children's ability to participate in everyday tasks. Predicting seizures can help parents and healthcare providers take precautions, prevent risky situations, and mentally prepare children to minimize the anxiety and nervousness associated with the uncertainty of a seizure. This research proposes a comprehensive framework for predicting seizures in pediatric patients by evaluating machine learning algorithms on unimodal neuroimaging data consisting of electroencephalogram signals. Bandpass filtering and independent component analysis proved effective in reducing noise and artifacts in the dataset. The performance of various machine learning algorithms is evaluated on key metrics: accuracy, precision, specificity, sensitivity, F1 score, and Matthews correlation coefficient (MCC). The results show that the deep learning algorithms are more successful in predicting seizures than logistic regression and k-nearest neighbors. The recurrent neural network (RNN) gave the highest precision and F1 score, long short-term memory (LSTM) outperformed the RNN in accuracy, and the convolutional neural network (CNN) achieved the highest specificity. This research has significant implications for healthcare providers in proactively managing seizure occurrence in pediatric patients, potentially transforming clinical practices and improving pediatric care. Keywords: intractable epilepsy, seizure, deep learning, prediction, electroencephalogram channels
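A minimal sketch of the preprocessing stage named above, bandpass filtering followed by independent component analysis, is given below. The channel count, band edges, and sampling rate are assumptions for illustration, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import FastICA

# Sketch of the EEG preprocessing described in the abstract: zero-phase
# bandpass filtering per channel, then ICA unmixing. In a full pipeline,
# artifactual components (eye blinks, muscle) would be identified and
# dropped before reconstruction.

FS = 256          # sampling rate in Hz (assumed)
N_CHANNELS = 23   # scalp EEG montage size (assumed)

def bandpass(eeg: np.ndarray, lo=0.5, hi=40.0, fs=FS) -> np.ndarray:
    """Zero-phase Butterworth bandpass; eeg has shape (n_samples, n_channels)."""
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=0)

def unmix_with_ica(eeg: np.ndarray, n_components=N_CHANNELS) -> np.ndarray:
    """Decompose the filtered recording into independent components."""
    ica = FastICA(n_components=n_components, random_state=0)
    return ica.fit_transform(eeg)  # shape (n_samples, n_components)

rng = np.random.default_rng(0)
raw = rng.standard_normal((10 * FS, N_CHANNELS))  # 10 s of toy data
sources = unmix_with_ica(bandpass(raw))
print(sources.shape)  # (2560, 23)
```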
Procedia PDF Downloads 84189 Modeling and Performance Evaluation of an Urban Corridor under Mixed Traffic Flow Condition
Authors: Kavitha Madhu, Karthik K. Srinivasan, R. Sivanandan
Abstract:
Indian traffic can be considered mixed and heterogeneous due to the presence of various types of vehicles that operate with weak lane discipline. Consequently, vehicles can position themselves anywhere in the traffic stream depending on the availability of gaps. The choice of lateral position is an important component in representing and characterizing mixed traffic. Field data provide evidence that the trajectories of vehicles on Indian urban roads have significantly varying longitudinal and lateral components. Further, the notion of headway, which is widely used in homogeneous traffic simulation, is not well defined in conditions lacking lane discipline. Field data make clear that vehicle following is not as strict as in homogeneous, lane-disciplined conditions, and that neighbouring vehicles ahead of a given vehicle, as well as those adjacent to it, can influence the subject vehicle's choice of position, speed, and acceleration. Given these empirical features, the suitability of using headway distributions to characterize mixed traffic in Indian cities is questionable, and they need to be modified appropriately. To address these issues, this paper analyzes the time gap distribution between consecutive vehicles (in a time sense) crossing a section of roadway. More specifically, to characterize the complex interactions noted above, the influence of traffic composition, manoeuvre type, and lateral placement characteristics on the time gap distribution is quantified. The developed model is used to evaluate performance measures such as link speed, midblock delay, and intersection delay, which in turn help to characterize vehicular fuel consumption and emissions on the urban roads of India. Identifying and analyzing the exact interactions between the various classes of vehicles in the traffic stream is essential for increasing the accuracy and realism of microscopic traffic flow modelling. In this regard, this study aims to develop and analyze time gap distribution models, quantified by lead-lag pair, manoeuvre type, and lateral position characteristics, in heterogeneous non-lane-based traffic. Once the modelling scheme is developed, it can be used to estimate the vehicle kilometres travelled for the entire traffic system, which helps to determine vehicular fuel consumption and emissions. The approach involves: data collection; statistical modelling and parameter estimation; simulation using the calibrated time-gap distribution and its validation; empirical analysis of the simulation results and associated traffic flow parameters; and application to the analysis of illustrative traffic policies. In particular, videographic methods are used for data extraction from urban midblock sections in Chennai, where the data comprise vehicle type, vehicle position (both longitudinal and lateral), speed, and time gap. Statistical tests are carried out to compare the simulated data with the actual data, and the model performance is evaluated. The effect of integrating the above-mentioned factors into vehicle generation is studied by comparing performance measures such as density, speed, flow, capacity, and area occupancy under various traffic conditions and policies. The implications of the quantified distributions and the simulation model for estimating passenger car units (PCU), capacity, and level of service are also discussed. Keywords: lateral movement, mixed traffic condition, simulation modeling, vehicle following models
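As a sketch of the statistical-modelling-then-simulation step, the snippet below fits a time-gap distribution per lead-lag vehicle-class pair and samples gaps to generate arrivals. The lognormal family, the class pairs, and the toy data are assumptions for illustration; the paper's calibrated distributions may differ.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: fit class-pair-specific time-gap distributions and
# use them to generate vehicle arrivals for non-lane-based traffic.

# observed time gaps in seconds, grouped by lead-lag class pair (toy data)
gaps_by_pair = {
    ("car", "car"): np.array([1.2, 0.9, 2.1, 1.7, 0.8, 1.4, 2.6, 1.1]),
    ("car", "two-wheeler"): np.array([0.6, 0.4, 0.9, 0.7, 0.5, 1.0, 0.8]),
}

def fit_lognormal(gaps: np.ndarray):
    """Fit a lognormal with location pinned at zero (gaps are positive)."""
    return stats.lognorm.fit(gaps, floc=0.0)  # (shape, loc, scale)

def simulate_arrivals(params, n: int, rng) -> np.ndarray:
    """Cumulative arrival times built from sampled time gaps."""
    shape, loc, scale = params
    gaps = stats.lognorm.rvs(shape, loc=loc, scale=scale, size=n,
                             random_state=rng)
    return np.cumsum(gaps)

rng = np.random.default_rng(1)
for pair, gaps in gaps_by_pair.items():
    params = fit_lognormal(gaps)
    print(pair, np.round(simulate_arrivals(params, n=5, rng=rng), 2))
```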
Procedia PDF Downloads 342188 Quantifying the Impact of Intermittent Signal Priority given to BRT on Ridership and Climate-A Case Study of Ahmadabad
Authors: Smita Chaudhary
Abstract:
Traffic in India is largely uncontrolled and characterized by chaotic conditions in which vehicles do not follow lane discipline. Bus Rapid Transit (BRT) has emerged as a viable option to enhance transportation capacity and provide increased levels of mobility and accessibility. At present, many intersections in Ahmadabad face congestion and delay at the signal due to the transit (BRT) lanes. Most of these intersections, in spite of being signalized, are operated manually because of the conflict between BRT buses and heterogeneous traffic. Though the BRTS in Ahmadabad has an exclusive lane of its own, this comes with certain limitations that the city is facing right now. At many intersections, these conflicts, interference, and congestion cause both heterogeneous traffic and transit buses to suffer delays of a remarkable 3-4 minutes at each intersection, which has become an issue of great concern. There is no provision for BRT bus priority, so the existing signals play little role in managing the traffic, which ultimately calls for manual operation. Daily ridership of the BRTS has fallen sharply because commuters no longer find this transit mode time-saving in their routine; this fall in ridership ultimately leads to an increased number of private vehicles, and the idling of vehicles at intersections causes air and noise pollution. In order to bring these commuters back, the transit facilities need to be improved. A classified volume count survey and a travel time delay survey were conducted, and a revised signal design was prepared for the whole study stretch, comprising three intersections and one roundabout; one intersection was then simulated in order to examine the effect of giving priority to BRT on side-street queue length and on the travel time of heterogeneous traffic. This paper aims at recommending changes to the signal cycle, the introduction of intermittent priority for transit buses, and the simulation of an intersection in the study stretch with the proposed signal cycle using VISSIM, in order to make this transit amenity feasible and attractive for commuters in Ahmadabad. Keywords: BRT, priority, ridership, signal, VISSIM
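Intermittent transit signal priority is usually realized as green extension or red truncation on bus detection. The sketch below shows that decision logic in minimal form; all timing values and the function interface are illustrative assumptions, not calibrated to Ahmadabad or to the VISSIM model used in the paper.

```python
# Minimal sketch of intermittent-priority logic: extend the green phase
# when a detected BRT bus would just miss it, or truncate red (early
# green) when the bus arrives shortly before red ends. Values assumed.

MAX_GREEN_EXT = 10   # max extra green a bus may request, in s (assumed)
MAX_RED_TRUNC = 5    # max shortening of the red phase, in s (assumed)

def priority_decision(phase: str, time_left: float, bus_eta: float) -> str:
    """Controller action for a detected BRT bus.

    phase     -- current signal state on the BRT approach, 'green' or 'red'
    time_left -- seconds remaining in the current phase
    bus_eta   -- estimated bus arrival time at the stop line, in seconds
    """
    if phase == "green":
        if time_left < bus_eta <= time_left + MAX_GREEN_EXT:
            return f"extend green by {bus_eta - time_left:.0f} s"
        return "no action (bus clears on normal green)"
    # phase == 'red'
    if bus_eta < time_left <= bus_eta + MAX_RED_TRUNC:
        return f"truncate red by {time_left - bus_eta:.0f} s (early green)"
    return "no action (serve bus in the next cycle)"

print(priority_decision("green", time_left=6.0, bus_eta=12.0))
print(priority_decision("red", time_left=8.0, bus_eta=5.0))
```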
Procedia PDF Downloads 441187 A General Framework for Measuring the Internal Fraud Risk of an Enterprise Resource Planning System
Authors: Imran Dayan, Ashiqul Khan
Abstract:
Internal corporate fraud, which is fraud carried out by the internal stakeholders of a company, affects the well-being of the organisation just like its external counterpart. Even if such an act is carried out for the short-term benefit of a corporation, it is ultimately harmful to the entity in the long run. Internal fraud often relies on aberrations from usual business processes. Business processes are the lifeblood of a company in the modern managerial context; such processes are developed and fine-tuned over time as a corporation grows through its life stages. Modern corporations have embraced technological innovation in their business processes, and the Enterprise Resource Planning (ERP) systems at the heart of those processes are a testimony to that. Since ERP systems record a huge amount of data in their event logs, the logs are a treasure trove for anyone trying to detect fraudulent activities hidden within day-to-day business operations and processes. This research utilises the ERP systems in place within corporations to assess the likelihood of prospective internal fraud, developing a framework that measures fraud risk through process mining techniques and hence finds risky designs and loose ends within these business processes. The framework helps not only in identifying existing cases of fraud in the records of the event log, but also in signalling the overall riskiness of certain business processes, drawing attention to the need to redesign such processes to reduce the chance of future internal fraud while improving internal control within the organisation. The research adds value by applying the concepts of process mining to the analysis of data from a modern record of business processes, the ERP event log, and develops a framework that should be useful to internal stakeholders for strengthening internal control, as well as providing external auditors with a tool of use in cases of suspicion. The research proves its usefulness through several case studies conducted on big corporations with complex business processes and an ERP in place. Keywords: enterprise resource planning, fraud risk framework, internal corporate fraud, process mining
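The core process-mining idea, checking observed traces against the transitions a process model allows, can be sketched as follows. The purchase-to-pay model and the toy event log are invented for illustration; a real implementation would mine the model from the ERP's own event log (e.g., with a library such as pm4py) rather than hard-code it.

```python
# Sketch of deviation checking on an ERP event log: each case's trace is
# compared against the allowed direct-follows transitions of a process
# model, and forbidden transitions are flagged as fraud-risk signals.

ALLOWED = {  # allowed transitions in a toy purchase-to-pay process
    ("create_po", "approve_po"),
    ("approve_po", "receive_goods"),
    ("receive_goods", "post_invoice"),
    ("post_invoice", "pay_invoice"),
}

def deviations(trace):
    """Return the transitions in a trace that the model forbids."""
    return [p for p in zip(trace, trace[1:]) if p not in ALLOWED]

event_log = {  # hypothetical ERP event log, one activity list per case
    "case-1001": ["create_po", "approve_po", "receive_goods",
                  "post_invoice", "pay_invoice"],
    # approval skipped and payment posted before the invoice: risky
    "case-1002": ["create_po", "receive_goods", "pay_invoice",
                  "post_invoice"],
}

for case, trace in event_log.items():
    bad = deviations(trace)
    print(f"{case}: {'RISKY' if bad else 'ok'} {bad}")
```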
Procedia PDF Downloads 334186 Jointly Optimal Statistical Process Control and Maintenance Policy for Deteriorating Processes
Authors: Lucas Paganin, Viliam Makis
Abstract:
With the advent of globalization, market competition has become a major issue for most companies. One of the main strategies for overcoming this situation is to improve product quality at a lower cost in order to meet customers' expectations. To achieve the desired product quality, it is important both to control the process so that it meets specifications and to implement the optimal maintenance policy for the machines and production lines. The overall objective is thus to reduce process variation as well as production and maintenance costs. In this paper, an integrated model involving Statistical Process Control (SPC) and maintenance is developed to achieve this goal. The main focus of the paper is therefore to develop the jointly optimal maintenance and statistical process control policy minimizing the total long-run expected average cost per unit time. In our model, the production process can go out of control due either to the deterioration of equipment or to other assignable causes. The equipment is also subject to failures in any of the operating states due to deterioration and aging. The process mean is controlled by an Xbar control chart using equidistant sampling epochs. We assume that the machine inspection epochs are the times when the control chart signals an out-of-control condition, considering both true and false alarms. At these times, the production process is stopped, and an investigation is conducted not only to determine whether the alarm is true or false, but also to identify the causes of a true alarm: a change in the machine setting, other assignable causes, or both. If the system is out of control, the proper actions are taken to bring it back to the in-control state. At these epochs, a maintenance action can also be taken, which can be no action or preventive replacement of the unit. When the equipment is in the failure state, a corrective maintenance action is performed, which can be minimal repair or replacement of the machine, and the process is brought back to the in-control state. A semi-Markov decision process (SMDP) framework is used to formulate and solve the joint control problem. A numerical example is developed to demonstrate the effectiveness of the control policy. Keywords: maintenance, semi-Markov decision process, statistical process control, Xbar control chart
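The Xbar chart sampling scheme described above can be sketched in a few lines: subgroups are taken at equidistant epochs and an inspection is triggered whenever the subgroup mean crosses the 3-sigma control limits. The target mean, standard deviation, and subgroup size below are assumed values for illustration.

```python
import numpy as np

# Sketch of the Xbar control chart that drives the inspection epochs:
# control limits at mu0 +/- 3*sigma/sqrt(n) for subgroup means.

MU0, SIGMA, N_SUB = 10.0, 0.5, 5          # in-control mean, std, subgroup size
UCL = MU0 + 3 * SIGMA / np.sqrt(N_SUB)    # upper control limit
LCL = MU0 - 3 * SIGMA / np.sqrt(N_SUB)    # lower control limit

def chart_signals(subgroup: np.ndarray) -> bool:
    """True when the subgroup mean falls outside the control limits,
    triggering the inspection/maintenance epoch described in the paper."""
    xbar = subgroup.mean()
    return xbar > UCL or xbar < LCL

rng = np.random.default_rng(2)
in_control = rng.normal(MU0, SIGMA, N_SUB)        # process on target
shifted = rng.normal(MU0 + 1.0, SIGMA, N_SUB)     # process mean has drifted
print("in-control subgroup signals:", chart_signals(in_control))  # usually False
print("shifted subgroup signals:   ", chart_signals(shifted))     # usually True
```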
Procedia PDF Downloads 91185 The Role of Middle Managers SBU's in Context of Change: Sense-Making Approach
Authors: Hala Alioua, Alberic Tellier
Abstract:
This paper is designed to spotlight research on corporate strategic planning by emphasizing the role of the middle managers of SBUs and related issues, such as the context of a change of vision. Previous research on strategic vision has focused principally on SMEs, with relatively limited consideration given to the role of the middle managers of SBUs in a context of change. This research project is based on a single case study, built through our immersion in the field for 2.5 years, using a qualitative method and an abductive approach. The entity we analyze is a subsidiary of a multinational company headquartered in Germany and specialized in manufacturing automotive equipment. "Delta Company" is a French manufacturing plant that has undergone numerous changes over the past three years. The two major strategic changes that have had a significant impact on the Delta plant are the strengthening of its core business through a "lead plant" strategy in 2011 and the implementation of a new strategic vision in 2014. These consecutive changes affect the purpose and mission of the middle managers. The plant managers ask the following questions: How do middle managers make sense of the corporate strategic planning imposed by the parent company? How do they appropriate the new vision and translate it into actions on the ground? We chose individual interviews with open-ended questions as the source of data collection. We first carried out an exploratory phase by interviewing 8 members of the management committee and 19 heads of services. The first findings and results show that there is a divergence of opinion and interpretation of the corporate strategic planning among organization members, and that it is difficult for them to make sense of and interpret the signals from the environment. The lead plant strategy brings in new projects that secure the workload of Delta Company. Nevertheless, it creates tension and stress among the middle managers because it provokes a lack of resources, to the detriment of their main job of running a manufacturing plant. The middle managers do not have a clear vision, and they wonder whether the new strategic vision means more autonomy and less support from the group. Keywords: change, middle managers, vision, sensemaking
Procedia PDF Downloads 401