Search results for: Extended Kalman Filter (EKF)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1977

477 Structure-Guided Optimization of Sulphonamide as Gamma–Secretase Inhibitors for the Treatment of Alzheimer’s Disease

Authors: Vaishali Patil, Neeraj Masand

Abstract:

In older people, Alzheimer’s disease (AD) is proving to be a lethal disease. According to the amyloid hypothesis, aggregation of the amyloid β–protein (Aβ), particularly its 42-residue variant (Aβ42), plays a direct role in the pathogenesis of AD. Aβ is generated through sequential cleavage of amyloid precursor protein (APP) by β–secretase (BACE) and γ–secretase (GS). Thus, in the treatment of AD, γ-secretase modulators (GSMs) are potentially disease-modifying, as they selectively lower pathogenic Aβ42 levels by shifting the enzyme cleavage sites without inhibiting γ–secretase activity. This possibly avoids the known adverse effects observed with complete inhibition of the enzyme complex. Virtual screening, via a drug-like ADMET filter, QSAR, and molecular docking analyses, has been utilized to identify novel γ–secretase modulators with a sulphonamide nucleus. Based on the QSAR analyses and docking scores, some novel analogs have been synthesized. The results obtained by the in silico studies have been validated by in vivo analysis. In the first step, behavioral assessment was carried out using the scopolamine-induced amnesia methodology. The same series was then evaluated for neuroprotective potential against the oxidative stress induced by scopolamine. Biochemical estimation was performed to evaluate changes in biochemical markers of Alzheimer’s disease such as lipid peroxidation (LPO), reduced glutathione (GSH), and catalase. The scopolamine-induced amnesia model showed increased acetylcholinesterase (AChE) levels, and the inhibitory effect of the test compounds on brain AChE levels was evaluated. In all the studies, donepezil (dose: 50 µg/kg) was used as the reference drug. Reduced AChE activity was shown by compounds 3f, 3c, and 3e. In the later stage, the most potent compounds were evaluated for their Aβ42 inhibitory profile.
It can be hypothesized that this series of alkyl-aryl sulphonamides exhibits anti-AD activity through inhibition of the acetylcholinesterase (AChE) enzyme as well as inhibition of plaque formation on prolonged dosage, along with neuroprotection from oxidative stress.

Keywords: gamma-secretase inhibitors, Alzheimer's disease, sulphonamides, QSAR

Procedia PDF Downloads 248
476 Aesthetic Embodiment of the Visual and/or Non-Visual: the Becoming of a Spatial Installation Exhibition Influenced by Shamanic Healing

Authors: Ningfei Xiao, Simon Twose, Hannah Hopewell

Abstract:

In urban settings worldwide, artists and researchers have drawn from shamanic healing, providing insightful responses to the environment. This project is a transdisciplinary creative research project in which architecture and art practice draw from shamanic healing, offering the potential to expand knowledge of public space and inspire more aesthetic explorations of public spatial visions. The research began with encounters with the Ewengki/Evenki shaman tribe in settlement areas of northern China in 2019 and extended through partnerships with Maori artists in Poneke, Aotearoa New Zealand, in 2023. Based on learnings from and collaborations with female indigenous tradition practitioners and the healing that the researcher received from the land, a spatial installation exhibition was developed in this project. Indigenous practices are intricately woven with contemporary technology, merging visuals, soundscapes, and other non-visual aesthetics influenced by the researcher's personal experiences of embodied shamanic healing, using brainwave generative technology. This synthesis seeks to ritualize and reimagine future public spaces, encompassing streetscapes and greenscapes from China to Aotearoa, and fostering connections between the urbanized human body, mind, spirit, and land. In doing so, the project presents a feminist posthuman inquiry into how individuals perceive materiality within the context of a future city. Grounded in creative research and embodied methodologies, this paper focuses on the conceptual and autoethnographic aspects of visual and non-visual aesthetics and their creative representation. Through the exploration of aesthetics beyond the visual realm within urban and spatial contexts, this project showcases the spatial installation exhibition as an example of shamanic influence and a related response to public space through embodied artistry and transdisciplinary creative inquiry.

Keywords: aesthetic, embodiment, visual and/or non-visual, spatial installation, shamanic healing, public space

Procedia PDF Downloads 53
475 Training a Neural Network to Segment, Detect and Recognize Numbers

Authors: Abhisek Dash

Abstract:

This study used three neural networks, one for number segmentation, one for number detection, and one for number recognition, all coupled to one another. All networks were convolutional and were trained on the MNIST dataset. It was assumed that the images had a lighter background and a darker foreground. The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a 7x7 window over that pixel as the focus, the eight-neighborhood of the focus was checked for further dark pixels. The segmentation network was then trained to move in those directions which had dark pixels; hence the sixteen outputs, arranged as “go east”, “don’t go east”, “go south east”, “don’t go south east”, “go south”, “don’t go south”, and so on with respect to the focus window. The focus window was resized into a 28x28 image, and the network was trained to consider those neighborhoods which had dark pixels. The neighborhoods containing dark pixels were pushed into a queue in a particular order. The neighborhoods were then popped one at a time and stitched to the existing partial image of the number, and the network was trained on which neighborhoods to consider when the new partial image was presented. This process was repeated until the image was fully covered by 7x7 neighborhoods and there were no more uncovered dark pixels. During testing, the network scans for the first dark pixel; from there on, it predicts which neighborhoods to consider and segments the image. The resulting group of neighborhoods is then passed to the detection network. The detection network took 28x28 images as input and had two outputs denoting whether a number was detected or not. Since the ground-truth bounds of a number were known during training, the detection network was trained to output “number not found” until the bounds were met, and “number found” thereafter.
The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognition of the digits 0 to 9. This network was activated only when the detection network voted in favor of a number being detected. The above methodology could segment connected and overlapping numbers. Additionally, the recognition unit was only invoked when a number was detected, which minimized false positives. It also eliminated the need for rules of thumb, as segmentation is learned. The strategy can be extended to other characters as well.
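The queue-based window traversal the abstract describes can be sketched without the learned policy. The following is a minimal illustrative sketch, with the trained network's "go / don't go" decisions replaced by a simple dark-pixel heuristic; the function and parameter names are invented for illustration and are not from the paper:

```python
import numpy as np
from collections import deque

def segment_digit(img, threshold=128, win=7):
    """Breadth-first traversal of win x win focus windows, mimicking the
    queue-based neighborhood scan described above. A window is accepted
    (and its eight neighboring windows enqueued) if it contains dark pixels."""
    dark = img < threshold                 # lighter background, darker foreground
    h, w = img.shape
    ys, xs = np.nonzero(dark)
    if ys.size == 0:                       # no digit present
        return np.zeros_like(dark)
    start = (int(ys[0]), int(xs[0]))       # first dark pixel in scan order
    seen = {start}
    queue = deque([start])
    mask = np.zeros_like(dark)             # dark pixels covered by accepted windows
    half = win // 2
    while queue:
        y, x = queue.popleft()
        y0, x0 = max(0, y - half), max(0, x - half)
        mask[y0:y0 + win, x0:x0 + win] |= dark[y0:y0 + win, x0:x0 + win]
        # check the eight neighboring windows for further dark pixels
        for dy in (-win, 0, win):
            for dx in (-win, 0, win):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen:
                    ny0, nx0 = max(0, ny - half), max(0, nx - half)
                    if dark[ny0:ny0 + win, nx0:nx0 + win].any():
                        seen.add((ny, nx))
                        queue.append((ny, nx))
    return mask
```

In the paper's method, the accept/reject decision for each neighboring window would come from the segmentation network rather than the `.any()` test used here.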

Keywords: convolutional neural networks, OCR, text detection, text segmentation

Procedia PDF Downloads 153
474 New Roles of Telomerase and Telomere-Associated Proteins in the Regulation of Telomere Length

Authors: Qin Yang, Fan Zhang, Juan Du, Chongkui Sun, Krishna Kota, Yun-Ling Zheng

Abstract:

Telomeres are specialized structures at chromosome ends consisting of tandem repetitive DNA sequences [(TTAGGG)n in humans] and associated proteins, which are necessary for telomere function. Telomere lengths are tightly regulated within a narrow range in normal human somatic cells, which underlies cellular senescence and aging. Previous studies have extensively focused on how short telomeres are extended and have demonstrated that telomerase plays a central role in telomere maintenance by elongating short telomeres. However, the molecular mechanisms regulating excessively long telomeres are unknown. Here, we found that the telomerase enzymatic component hTERT plays a dual role in the regulation of telomere length. Our analysis of single telomere alterations at each chromosomal end led to the discovery that hTERT shortens excessively long telomeres and elongates short telomeres simultaneously, thus maintaining the optimal telomere length at each chromosomal end for efficient protection. The hTERT-mediated telomere shortening removes large segments of telomeric DNA rapidly without inducing telomere dysfunction foci or affecting cell proliferation, and is thus mechanistically distinct from rapid telomere deletion. We found that expression of hTERT generates telomeric circular DNA, suggesting that telomere homologous recombination may be involved in this shortening process. Moreover, hTERT-mediated telomere shortening requires its enzymatic activity, whereas the telomerase RNA component hTR is not involved. Furthermore, the shelterin protein TPP1 interacts with hTERT and recruits it to telomeres to mediate telomere shortening. In addition, the telomere-associated proteins DKC1 and TCAB1 also play roles in this process. This novel hTERT-mediated telomere shortening mechanism exists not only in cancer cells but also in primary human cells.
Thus, hTERT-mediated telomere shortening is expected to shift the paradigm of current molecular models of telomere length maintenance, with wide-reaching consequences in the cancer and aging fields.

Keywords: aging, hTERT, telomerase, telomeres, human cells

Procedia PDF Downloads 422
473 Field Prognostic Factors on Discharge Prediction of Traumatic Brain Injuries

Authors: Mohammad Javad Behzadnia, Amir Bahador Boroumand

Abstract:

Introduction: Limited-facility situations require allocating the available resources to the greatest number of casualties. Accordingly, Traumatic Brain Injury (TBI) is one condition that may require transporting the patient as soon as possible. In a mass casualty event, such decisions are hard to make when facilities are restricted. The Extended Glasgow Outcome Score (GOSE) has been introduced to assess the global outcome after brain injuries. We therefore aimed to evaluate the prognostic factors associated with GOSE. Materials and Methods: A multicenter cross-sectional study was conducted on 144 patients with TBI admitted to trauma emergency centers. All patients with isolated TBI who were mentally and physically healthy before the trauma entered the study. The patients’ information was evaluated, including demographic characteristics, duration of hospital stay, mechanical ventilation on admission, laboratory measurements, and on-admission vital signs. We recorded the patients’ TBI-related symptoms and brain computed tomography (CT) scan findings. Results: GOSE assessments showed an increasing trend across the on-discharge (7.47 ± 1.30), within-one-month (7.51 ± 1.30), and within-three-months (7.58 ± 1.21) evaluations (P < 0.001). On discharge, GOSE was positively correlated with the Glasgow Coma Scale (GCS) (r = 0.729, P < 0.001) and motor GCS (r = 0.812, P < 0.001), and inversely correlated with age (r = −0.261, P = 0.002), hospitalization period (r = −0.678, P < 0.001), pulse rate (r = −0.256, P = 0.002), and white blood cell (WBC) count. Among the imaging signs and trauma-related symptoms examined in univariate analysis, intracranial hemorrhage (ICH), intraventricular hemorrhage (IVH) (P = 0.006), subarachnoid hemorrhage (SAH) (P = 0.06; marginally at P < 0.1), subdural hemorrhage (SDH) (P = 0.032), and epidural hemorrhage (EDH) (P = 0.037) were significantly associated with GOSE at discharge in multivariable analysis.
Conclusion: Our study identified some predictive factors that could help decide which casualty should be transported to a trauma center earlier. According to the current findings, GCS, pulse rate, WBC, and, among imaging signs and trauma-related symptoms, ICH, IVH, SAH, SDH, and EDH are significant independent predictors of GOSE at discharge in TBI patients.
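The r values reported above are correlation coefficients of the Pearson form. As a reminder of how such a coefficient is computed from paired observations, here is a minimal sketch (illustrative only, not the authors' analysis code):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # co-deviation sum
    sx = sqrt(sum((a - mx) ** 2 for a in x))               # spread of x
    sy = sqrt(sum((b - my) ** 2 for b in y))               # spread of y
    return cov / (sx * sy)
```

A positive r (as for GOSE versus GCS) means the two quantities rise together; a negative r (as for GOSE versus age or pulse rate) means one falls as the other rises.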

Keywords: field, Glasgow outcome score, prediction, traumatic brain injury

Procedia PDF Downloads 72
472 Effect of Electropolymerization Method in the Charge Transfer Properties and Photoactivity of Polyaniline Photoelectrodes

Authors: Alberto Enrique Molina Lozano, María Teresa Cortés Montañez

Abstract:

Polyaniline (PANI) photoelectrodes were electrochemically synthesized through electrodeposition employing three techniques: chronoamperometry (CA), cyclic voltammetry (CV), and potential pulse (PP) methods. The substrate used for electrodeposition was a fluorine-doped tin oxide (FTO) glass with dimensions of 2.5 cm x 1.3 cm. Subsequently, structural and optical characterization was conducted utilizing Fourier-transform infrared (FTIR) spectroscopy and UV-visible (UV-vis) spectroscopy, respectively. The FTIR analysis revealed variations in the molar ratio of benzenoid to quinonoid rings within the PANI polymer matrix, indicative of differing oxidation states arising from the distinct electropolymerization methodologies employed. In the optical characterization, differences in the energy band gap (Eg) values and in the positions of the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) were observed, attributable to variations in doping levels and structural irregularities introduced during the electropolymerization procedures. To assess the charge transfer properties of the PANI photoelectrodes, electrochemical impedance spectroscopy (EIS) experiments were carried out in a 0.1 M sodium sulfate (Na₂SO₄) electrolyte. The results showed a substantial decrease in charge transfer resistance for the PANI coatings compared to uncoated substrates, with PANI obtained through CV presenting the lowest charge transfer resistance, in contrast to PANI obtained via CA and PP. Subsequently, the photoactive response of the PANI photoelectrodes was measured through linear sweep voltammetry (LSV) and chronoamperometry. The photoelectrochemical measurements revealed a discernible photoactivity in all PANI-coated electrodes. However, PANI electropolymerized through CV displayed the highest photocurrent.
Interestingly, PANI derived from CA exhibited the most stable photocurrent over an extended time interval.

Keywords: PANI, photocurrent, photoresponse, charge separation, recombination

Procedia PDF Downloads 60
471 Human Lens Metabolome: A Combined LC-MS and NMR Study

Authors: Vadim V. Yanshole, Lyudmila V. Yanshole, Alexey S. Kiryutin, Timofey D. Verkhovod, Yuri P. Tsentalovich

Abstract:

Cataract, or clouding of the eye lens, is the leading cause of vision impairment in the world. The lens tissue has a very specific structure: it has no vascular system, and the lens proteins – crystallins – do not turn over throughout the lifespan. The protection of lens proteins is provided by metabolites which diffuse into the lens from the aqueous humor or are synthesized in the lens epithelial layer. Therefore, studying changes in the metabolite composition of a cataractous lens as compared to a normal lens may elucidate possible mechanisms of cataract formation. Quantitative metabolomic profiles of normal and cataractous human lenses were obtained with the combined use of high-frequency nuclear magnetic resonance (NMR) and ion-pairing high-performance liquid chromatography with high-resolution mass-spectrometric detection (LC-MS). The quantitative content of more than fifty metabolites has been determined in this work for normal aged and cataractous human lenses. The most abundant metabolites in the normal lens are myo-inositol, lactate, creatine, glutathione, glutamate, and glucose. For the majority of metabolites, the levels in the lens cortex and nucleus are similar, with a few exceptions including antioxidants and UV filters: the concentrations of glutathione, ascorbate, and NAD decrease in the lens nucleus as compared to the cortex, while the levels of the secondary UV filters formed from primary UV filters in redox processes increase. This confirms that the lens core is metabolically inert and that metabolic activity in the lens nucleus is mostly restricted to protection from the oxidative stress caused by UV irradiation, spontaneous UV filter decomposition, or other factors. It was found that the metabolomic compositions of normal and age-matched cataractous human lenses differ significantly.
The content of the most important metabolites – antioxidants, UV filters, and osmolytes – in the cataractous nucleus is at least tenfold lower than in the normal nucleus. One may suppose that the majority of these metabolites are synthesized in the lens epithelial layer, and that age-related cataractogenesis might originate from dysfunction of the lens epithelial cells. Comprehensive quantitative metabolic profiles of the human eye lens have been acquired for the first time. The obtained data can be used for the analysis of changes in the lens chemical composition occurring with age and with cataract development.

Keywords: cataract, lens, NMR, LC-MS, metabolome

Procedia PDF Downloads 313
470 Effect of Locally Produced Sweetened Pediatric Antibiotics on Streptococcus mutans Isolated from the Oral Cavity of Pediatric Patients in Syria - in Vitro Study

Authors: Omar Nasani, Chaza Kouchaji, Muznah Alkhani, Maisaa Abd-alkareem

Abstract:

Objective: To evaluate the influence of sweetening agents used in pediatric medications on the growth of Streptococcus mutans colonies and their effect on cariogenic activity in the oral cavity. No previous studies have yet been registered in Syrian children. Methods: Specimens were isolated from the oral cavity of pediatric patients, and an in vitro study was then performed on locally manufactured liquid pediatric antibiotic drugs containing natural or synthetic sweeteners. The selected antibiotics were Ampicillin (sucrose), Amoxicillin (sucrose), Amoxicillin + Flucloxacillin (sorbitol), and Amoxicillin + Clavulanic acid (sorbitol or sucrose). These antibiotics have a known inhibitory effect on gram-positive aerobic/anaerobic bacteria, especially Streptococcus mutans strains in children’s oral biofilm. Five colonies were studied with each antibiotic. Saturated antibiotics were spread on 6 mm diameter filter discs. Incubated culture media were compared with each other and with the control antibiotic discs. Results were evaluated by measuring the diameter of the inhibition zones. The control antibiotic discs were sourced from Abtek Biologicals Ltd. Results: The diameter of the inhibition zones around discs of antibiotics sweetened with sorbitol was larger than that of those sweetened with sucrose. The effect was most evident when comparing Amoxicillin + Clavulanic acid (sucrose, 25 mm, versus sorbitol, 27 mm). The highest inhibitory effect was observed with Amoxicillin + Flucloxacillin sweetened with sorbitol (38 mm), whereas the lowest inhibitory effects were observed with Amoxicillin and Ampicillin sweetened with sucrose (22 mm and 21 mm). Conclusion: The results of this study indicate that although all selected antibiotics produced an inhibitory effect on S. mutans, sucrose weakened the inhibitory action of the antibiotic to varying degrees, while antibiotic formulations containing sorbitol matched the effects of the control antibiotic.
This study calls attention to the effects of sweeteners included in pediatric drugs on oral hygiene and tooth decay.

Keywords: pediatric, dentistry, antibiotics, streptococcus mutans, biofilm, sucrose, sugar free

Procedia PDF Downloads 67
469 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data

Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca

Abstract:

In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are developed for the data quality filtering of opportunistic species occurrence data used as input for species distribution models. Using an extensive database of 5.7 million citizen science records of 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size.
Findings show that model performance is affected by i) the quality of the filtered data, ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and iii) a species’ ‘quality profile’, resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
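The evaluation loop the abstract describes (filter records, refit, compare AUC) can be caricatured without Maxent. Below is a minimal sketch using a rank-based AUC estimate and a hypothetical post-entry validation field; the record schema and threshold are invented for illustration and are not the study's actual attributes:

```python
def auc(pos_scores, neg_scores):
    """Mann-Whitney estimate of the area under the ROC curve: the
    probability that a presence record outranks a background point."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

def quality_filter(records, min_validation):
    """Keep presence records whose post-entry validation status
    meets the chosen quality bar (higher = more trusted)."""
    return [r for r in records if r["validation"] >= min_validation]
```

Comparing `auc` on model scores before and after `quality_filter` mirrors the study's with/without-filtering design, with the caveat noted above: filtering changes both data quality and sample size at once.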

Keywords: citizen science, data quality filtering, species distribution models, trait profiles

Procedia PDF Downloads 195
468 Alternative Housing Systems: Influence on Blood Profile of Egg-Type Chickens in Humid Tropics

Authors: Olufemi M. Alabi, Foluke A. Aderemi, Adebayo A. Adewumi, Banwo O. Alabi

Abstract:

The general well-being of animals is of paramount interest in some developed countries and of global importance, hence the shift to alternative housing systems for egg-type chickens as a replacement for the conventional battery cage system. However, there is a paucity of information on the effect of this shift on the physiological status of the hens by which to judge their health via the blood profile. Therefore, an investigation was carried out on two strains of hen kept in three different housing systems in the humid tropics to evaluate changes in their blood parameters. 108 17-week-old super black (SBL) hens and 108 17-week-old super brown (SBR) hens were randomly allotted to three different intensive systems, Partitioned Conventional Cage (PCC), Extended Conventional Cage (ECC), and Deep Litter System (DLS), in a randomized complete block design with 36 hens per housing system, each with three replicates. The experiment lasted 37 weeks, during which blood samples were collected at the 18th week of age and bi-weekly thereafter for analyses. Parameters measured were packed cell volume (PCV), hemoglobin concentration (Hb), red blood cell count (RBC), white blood cell count (WBC), and serum metabolites such as total protein (TP), albumin (Alb), globulin (Glb), glucose, cholesterol, urea, bilirubin, and serum cortisol, while blood indices such as mean corpuscular hemoglobin (MCH), mean cell volume (MCV), and mean corpuscular hemoglobin concentration (MCHC) were calculated. The hematological values of the hens were not significantly (p>0.05) affected by housing system or strain, and neither were the serum metabolites, except for serum cortisol, which was significantly (p<0.05) affected by the housing system only. Hens housed in PCC had the highest values (20.05 ng/ml for SBL and 20.55 ng/ml for SBR), followed by hens in ECC (18.15 ng/ml for SBL and 18.38 ng/ml for SBR), while hens in DLS had the lowest values (16.50 ng/ml for SBL and 16.00 ng/ml for SBR), thereby confirming an indication of stress in conventionally caged birds.
Alternative housing systems can thus be adopted for egg-type chickens in the humid tropics from a welfare point of view, with the results of this work confirming stress among caged hens.
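The blood indices mentioned above are derived from the measured values by the standard haematology formulas. A small sketch (the function name is illustrative; units are as conventionally reported):

```python
def red_cell_indices(pcv_percent, hb_g_dl, rbc_millions_per_ul):
    """Derived red-cell indices from PCV (%), Hb (g/dL) and RBC (10^6/uL)."""
    mcv = pcv_percent / rbc_millions_per_ul * 10    # mean cell volume, fL
    mch = hb_g_dl / rbc_millions_per_ul * 10        # mean corpuscular Hb, pg
    mchc = hb_g_dl / pcv_percent * 100              # MCHC, g/dL
    return mcv, mch, mchc
```

For example, a PCV of 45%, Hb of 15 g/dL, and RBC of 5 x 10^6/uL give an MCV of 90 fL, an MCH of 30 pg, and an MCHC of about 33.3 g/dL.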

Keywords: blood, housing, humid-tropics, layers

Procedia PDF Downloads 462
467 Accelerator Mass Spectrometry Analysis of Isotopes of Plutonium in PM₂.₅

Authors: C. G. Mendez-Garcia, E. T. Romero-Guzman, H. Hernandez-Mendoza, C. Solis, E. Chavez-Lomeli, E. Chamizo, R. Garcia-Tenorio

Abstract:

Plutonium is present in various concentrations in the environment and in biological samples, related to nuclear weapons testing, nuclear waste recycling, and accidental discharges from nuclear plants. This radioisotope is considered among the most radiotoxic substances, particularly when it enters the human body through inhalation of insoluble powders or aerosols. This is the main reason for determining the concentration of this radioisotope in the atmosphere. Besides that, the isotopic ratio ²⁴⁰Pu/²³⁹Pu provides information about the origin of the source. PM₂.₅ sampling was carried out in the Metropolitan Zone of the Valley of Mexico (MZVM) from February 18th to March 17th, 2015, on quartz filters. There have been significant recent developments in establishing new methods for sample preparation and accurate measurement to detect the ultra-trace levels at which plutonium is found in the environment. Accelerator mass spectrometry (AMS) is a technique that allows measurement at detection levels of around femtograms (10⁻¹⁵ g). The AMS determinations include the chemical isolation of Pu. The Pu separation involved an acidic digestion and a radiochemical purification using an anion exchange resin. Finally, the source is prepared when the Pu is pressed into the corresponding cathodes. To the authors' knowledge, these aerosols showed variations of the ²³⁵U/²³⁸U ratio from the natural value, suggesting that an anthropogenic source could be altering it. The determination of the concentrations of the isotopes of Pu can be a useful tool to clarify this presence in the atmosphere. The first results showed a mean activity concentration of ²³⁹Pu of 280 nBq m⁻³, and the ²⁴⁰Pu/²³⁹Pu ratio was 0.025, corresponding to a weapons-production source; these results corroborate that there is an anthropogenic influence increasing the concentration of radioactive material in PM₂.₅.
To the authors' knowledge, activity concentrations of ²³⁹⁺²⁴⁰Pu of around a few tens of nBq m⁻³ and ²⁴⁰Pu/²³⁹Pu ratios of 0.17 have been reported in Total Suspended Particles (TSP). The preliminary results in the MZVM show higher activity concentrations of Pu isotopes (40 to 700 nBq m⁻³) and a lower ²⁴⁰Pu/²³⁹Pu ratio than previously reported. These results are in the order of the activity concentrations of Pu in high-purity weapons-grade material.

Keywords: aerosols, fallout, mass spectrometry, radiochemistry, tracer, ²⁴⁰Pu/²³⁹Pu ratio

Procedia PDF Downloads 161
466 Impact of Mhealth Tools on Psycho-Social Predictors of Behaviour Regarding Contraceptive Use

Authors: Preeti Tiwari, Jay Wood, Duncan Babbage

Abstract:

Family planning plays a role in saving lives across the globe by preventing unwanted pregnancies. The purpose of this multidisciplinary research was to determine the impact that mHealth tools have on psychosocial determinants of behaviour for family planning. The present study examines a topic that is very relevant at a time when human-technology interaction is at its peak. It is probably one of the first studies to have investigated the impact of mobile phone technology on the underlying mechanisms of behaviour change for family planning using primary data. To examine the association between exposure to mHealth tools and predictors of behaviour, data were collected from mHealth intervention areas in India. A post-intervention quasi-experimental study with a 2x2 factorial design was conducted among 831 men and women from the state of Bihar. The quantitative data analysis evaluated the extent of influence that predictors of behaviour (beliefs, social norms, perceived behavioural control, and outcome behaviour) have on a woman’s decisions about family planning. The results indicated an association between exposure to mHealth tools and improved communication about family planning among various family members after receiving health information from a health worker (H1). A relationship between exposure to mHealth tools and increased support that women received from their husbands, extended family (mothers-in-law specifically), and peers (H2) was also found. A further result showed that knowledge about family planning was greater among users of family planning (H4). mHealth tools empower women to communicate with family members. This has important implications for developing mobile phone-based tools, as they can serve as a crucial communication channel and an effective means of increasing communication among family members about contraceptives.
Thus, it can be implied that where women feel nervous talking about contraception, the successful application of mHealth tools can strengthen the interactivity of health communication and could increase the likelihood of contraceptive use. However, while mHealth may improve the health communication that informs health decisions, it may be insufficient on its own to cause behaviour change.

Keywords: contraceptive, e-health, psycho-social, women

Procedia PDF Downloads 116
465 Collaborative Approaches in Achieving Sustainable Private-Public Transportation Services in Inner-City Areas: A Case of Durban Minibus Taxis

Authors: Lonna Mabandla, Godfrey Musvoto

Abstract:

Transportation is a catalytic feature in cities. Transport and land use activity are interdependent, with a feedback loop between how land is developed and how transportation systems are designed and used. This recursive relationship between land use and transportation is reflected in how public transportation routes internal to the inner city enhance accessibility, creating spaces that are conducive to business activity, while business activity in turn informs public transportation routes. It is for this reason that the focus of this research is on public transportation within inner-city areas, where this dynamic is evident. Durban is the chosen case study, where the dominant form of public transportation within the central business district (CBD) is the minibus taxi. The paradox here is that minibus taxis still form part of the informal economy even though they are the leading form of public transportation in South Africa. There have been many attempts to formalise this industry to follow more regulatory practices, but minibus taxis are privately owned, complicating any proposed intervention. The argument of this study is that the application of collaborative planning through a sustainable partnership between the public and private sectors will improve the social and environmental sustainability of public transportation. One of the major challenges within such collaborative endeavours is power dynamics. As a result, a key focus of the study is on power relations. Ideally, power relations should be observed over an extended period, specifically while the different stakeholders engage with each other, to yield valid data. However, such a lengthy data collection process was not possible during the data collection phase of this research.
Instead, interviews were conducted focusing on existing procedural planning practices between the inner-city minibus taxi association (the South and North Beach Taxi Association), the eThekwini Transport Authority (ETA), and the eThekwini Town Planning Department. Conclusions and recommendations were then generated based on these data.

Keywords: collaborative planning, sustainability, public transport, minibus taxis

Procedia PDF Downloads 57
464 Monetary Evaluation of Dispatching Decisions in Consideration of Choice of Transport

Authors: Marcel Schneider, Nils Nießen

Abstract:

Microscopic simulation programs enable the description of both railway operation and the preceding timetabling process. Occupation conflicts are often solved based on defined train priorities on both process levels. These conflict resolutions produce knock-on delays for the trains involved. The sum of knock-on delays is commonly used to evaluate the quality of railway operations: it is either compared to an acceptable level of service, or the delays are evaluated economically by linear monetary functions. It is impossible to properly evaluate dispatching decisions without a well-founded objective function. This paper presents a new approach for the evaluation of dispatching decisions. It uses models of choice of transport and considers the behaviour of the end customers. These models evaluate the knock-on delays in more detail than linear monetary functions and take competing modes of transport into account. The new approach couples a microscopic model of railway operation with a macroscopic model of choice of transport. It will first be implemented for the railway operations process, but it can also be used for timetabling. The evaluation considers the possibility that end customers change over to other transport modes. The new approach initially covers rail and road transport, but it can also be extended to air transport. The split between modes chosen by the end customers is described by the modal split. The reactions of the end customers affect the revenues of the railway undertakings. Different travel purposes have different time reserves and tolerances towards delays. Longer journey times cause additional costs besides revenue changes. These costs depend either on time or on track occupation and arise from the circulation of workers and vehicles. Only the variable values are summarised in the contribution margin, which is the basis for the monetary evaluation of the delays.
The contribution margin is calculated for different resolution decisions of the same conflict. The conflict resolution is improved until the monetary loss becomes minimised. The iterative process therefore determines an optimum conflict resolution by observing the change of the contribution margin. Furthermore, a monetary value of each dispatching decision can also be determined.
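The comparison of contribution margins across candidate conflict resolutions might be sketched as follows; all names, cost figures, and the simple modal-shift revenue model are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: pick the conflict resolution with the smallest monetary
# loss. The revenue/cost model below is a stand-in, not the authors' model.

def contribution_margin(resolution):
    """Base margin minus revenue lost to modal shift and variable delay costs.
    Knock-on delays push some end customers to competing transport modes."""
    revenue_loss = sum(train["knock_on_delay_min"] * train["modal_shift_rate"]
                       * train["revenue_per_passenger"] * train["passengers"]
                       for train in resolution["trains"])
    variable_cost = sum(train["knock_on_delay_min"] * train["cost_per_min"]
                        for train in resolution["trains"])
    return resolution["base_margin"] - revenue_loss - variable_cost

def best_resolution(candidates):
    # Iterate over alternative resolutions of the same occupation conflict
    # and keep the one whose contribution margin is highest (loss minimised).
    return max(candidates, key=contribution_margin)
```

Iterating this comparison over successive refinements of the same conflict reproduces, in miniature, the optimisation loop the abstract describes.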

Keywords: choice of transport, knock-on delays, monetary evaluation, railway operations

Procedia PDF Downloads 324
463 Partial Least Square Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
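The core of PLS, building latent components from many correlated predictors, can be sketched with a minimal NIPALS-style PLS1 routine; this is a generic illustration of the component construction, not the authors' penalized (lasso/Cauchy) estimator.

```python
import numpy as np

def pls_nipals(X, y, n_components=2):
    """Minimal NIPALS-style PLS1 sketch: extracts latent components
    (scores) from high-dimensional, correlated predictors."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    W, q = [], []
    Xr, yr = X.copy(), y.copy()
    for _ in range(n_components):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)        # predictor weights
        t = Xr @ w                    # latent component (score)
        p = Xr.T @ t / (t @ t)        # X loadings
        c = yr @ t / (t @ t)          # regression of y on the score
        Xr = Xr - np.outer(t, p)      # deflate X
        yr = yr - c * t               # deflate y
        W.append(w)
        q.append(c)
    return np.array(W).T, np.array(q)

# Synthetic "p >> n" data: 10 observations, 50 strongly correlated predictors
rng = np.random.default_rng(0)
base = rng.normal(size=(10, 1))
X = base @ np.ones((1, 50)) + 0.01 * rng.normal(size=(10, 50))
y = base[:, 0]
W, q = pls_nipals(X, y, n_components=1)
```

A single component suffices here because all predictors share one latent factor; real chemometric or CNA data would use several components chosen by cross-validation.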

Keywords: partial least square regression, genetics data, negative shrinkage factors, high-dimensional data, highly correlated data

Procedia PDF Downloads 47
462 Perception of Greek Vowels by Arabic-Greek Bilinguals: An Experimental Study

Authors: Georgios P. Georgiou

Abstract:

Infants are able to discriminate a number of sound contrasts in most languages. However, this ability is not available in adults, who might face difficulties in accurately discriminating second language sound contrasts, as they filter second language speech through the phonological categories of their native language. For example, Spanish speakers often struggle to perceive the difference between the English /ε/ and /æ/ because neither vowel exists in their native language; so they assimilate these vowels to the closest phonological category of their first language. The present study aims to uncover the perceptual patterns of adult Arabic speakers with regard to the vowels of their second language (Greek). To date, no study has investigated the perception of Greek vowels by Arabic speakers; thus, the present study contributes to the enrichment of the literature with cross-linguistic research in new languages. For the purposes of the present study, 15 native speakers of Egyptian Arabic who permanently live in Cyprus and have adequate knowledge of Greek as a second language completed vowel assimilation and vowel contrast discrimination (AXB) tests in their second language. The perceptual stimuli included nonsense words that contained vowels in both stressed and unstressed positions. The second language listeners' patterns were analyzed through the Perceptual Assimilation Model, which makes testable hypotheses about the assimilation of second language sounds to the speakers' native phonological categories and about discrimination accuracy over second language sound contrasts. The results indicated that second language listeners assimilated pairs of Greek vowels to a single phonological category of their native language, resulting in a Category Goodness difference assimilation type for the Greek stressed /i/-/e/ and the Greek stressed-unstressed /o/-/u/ vowel contrasts.
On the contrary, the members of the Greek unstressed /i/-/e/ vowel contrast were assimilated to two different categories, resulting in a Two Category assimilation type. Furthermore, listeners could discriminate the Greek stressed /i/-/e/ and the Greek stressed-unstressed /o/-/u/ contrasts only to a moderate degree, while the Greek unstressed /i/-/e/ contrast was discriminated to an excellent degree. Two main implications emerge from the results. First, there is a strong influence of the listeners' native language on the perception of second language vowels. In Egyptian Arabic, contiguous vowel categories such as [i]-[e] and [u]-[o] are not phonemically distinct but are subject to allophonic variation; by contrast, the vowel contrasts /i/-/e/ and /o/-/u/ are phonemic in Greek. Second, the role of stress is significant for second language perception, since stressed vs. unstressed vowel contrasts were perceived in a different manner by the listeners.

Keywords: Arabic, bilingual, Greek, vowel perception

Procedia PDF Downloads 135
461 Pathologies in the Left Atrium Reproduced Using a Low-Order Synergistic Numerical Model of the Cardiovascular System

Authors: Nicholas Pearce, Eun-jin Kim

Abstract:

Pathologies of the cardiovascular (CV) system remain a serious and deadly health problem for human society. Computational modelling provides a relatively accessible tool for diagnosis, treatment, and research into CV disorders. However, numerical models of the CV system have largely focused on the function of the ventricles, frequently overlooking the behaviour of the atria. Furthermore, in studying the pressure-volume relationship of the heart, a key diagnostic for cardiovascular pathologies, previous works often invoke the popular yet questionable time-varying elastance (TVE) method, which imposes the pressure-volume relationship instead of calculating it consistently. Despite the convenience of the TVE method, there have been various indications of its limitations and of the need to check its validity in different scenarios. A model of the combined left ventricle (LV) and left atrium (LA) is presented, which consistently considers various feedback mechanisms in the heart without having to use the TVE method. Specifically, a synergistic model of the left ventricle is extended and modified to include the function of the LA. The synergy of the original model is preserved by modelling the electro-mechanical and chemical functions of the micro-scale myofiber for the LA and integrating it with the micro-scale and macro-organ-scale heart dynamics of the left ventricle and CV circulation. The atrioventricular node function is included and forms the conduction pathway for electrical signals between the atria and ventricle. The model reproduces the essential features of LA behaviour, such as the two-phase pressure-volume relationship and the classic figure-of-eight pressure-volume loops. Using this model, disorders in the internal cardiac electrical signalling are investigated by recreating the mechano-electric feedback (MEF), which is impossible where the time-varying elastance method is used.
The effects of AV node block and slow conduction are then investigated in the presence of an atrial arrhythmia. It is found that electrical disorders and arrhythmia in the LA degrade the CV system by reducing the cardiac output, power, and heart rate.

Keywords: cardiovascular system, left atrium, numerical model, MEF

Procedia PDF Downloads 111
460 Trend Analysis of Rainfall: A Climate Change Paradigm

Authors: Shyamli Singh, Ishupinder Kaur, Vinod K. Sharma

Abstract:

Climate change refers to a change in climate over an extended period of time. Earth's climate has changed throughout its history, but anthropogenic activities are accelerating this rate of change, which is now a global issue. The increase in greenhouse gas emissions is causing global warming and climate change related issues at an alarming rate. Increasing temperature results in climate variability across the globe. Changes in rainfall patterns, intensity, and extreme events are some of the impacts of climate change. Rainfall variability refers to the degree to which rainfall patterns vary over a region (spatial) or through a time period (temporal). Temporal rainfall variability can be directly or indirectly linked to climate change. Such variability in rainfall increases the vulnerability of communities towards climate change. With increasing urbanization and unplanned developmental activities, air quality is also deteriorating. This paper mainly focuses on rainfall variability due to increasing levels of greenhouse gases. Rainfall data of 65 years (1951-2015) for the Safdarjung station of Delhi were collected from the Indian Meteorological Department and analyzed using the Mann-Kendall test for time-series data. The Mann-Kendall test is a statistical tool that helps detect trends in a given data set; the slope of the trend can be measured through Sen's slope estimator. Data were analyzed monthly, seasonally, and yearly across the period of 65 years. The monthly rainfall data for the said period do not follow any increasing or decreasing trend. The monsoon season shows no increasing trend, but there was an increasing trend in the pre-monsoon season. Hence, the actual rainfall differs from the normal trend of the rainfall. From this analysis, it can be projected that pre-monsoon rainfall will increase relative to the actual monsoon season. Pre-monsoon rainfall causes a cooling effect and results in a drier monsoon season.
This will increase the vulnerability of communities towards climate change and also affect related developmental activities.
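The Mann-Kendall statistic and Sen's slope estimator used in the analysis are straightforward to compute; a minimal sketch (ignoring the tie correction that a full implementation would include) follows.

```python
import math
from itertools import combinations

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction) with Sen's slope.
    Returns the S statistic, the normal-approximation Z score,
    and the Sen's slope estimate (median of all pairwise slopes)."""
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j
    s = sum((xj > xi) - (xj < xi)
            for (i, xi), (j, xj) in combinations(enumerate(x), 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    z = 0.0 if s == 0 else (s - math.copysign(1, s)) / math.sqrt(var_s)
    slopes = sorted((xj - xi) / (j - i)
                    for (i, xi), (j, xj) in combinations(enumerate(x), 2))
    sen = slopes[len(slopes) // 2]   # median pairwise slope
    return s, z, sen
```

A |Z| above 1.96 indicates a significant monotonic trend at the 5% level; applied month-wise, season-wise, and year-wise, this reproduces the structure of the analysis described.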

Keywords: greenhouse gases, Mann-Kendall test, rainfall variability, Sen's slope

Procedia PDF Downloads 200
459 PLO-AIM: Potential-Based Lane Organization in Autonomous Intersection Management

Authors: Berk Ecer, Ebru Akcapinar Sezer

Abstract:

Traditional management models of intersections, such as no-light or signalized intersections, are not the most effective way of passing intersections if the vehicles are intelligent. To this end, Dresner and Stone proposed a new intersection control model called Autonomous Intersection Management (AIM). In the AIM simulation, they examined the problem from a multi-agent perspective, demonstrating that intelligent intersection control can be made more efficient than existing control mechanisms. In this study, autonomous intersection management has been investigated further. We extended their work with a potential-based lane organization layer. In order to distribute vehicles evenly across lanes, this layer triggers vehicles to analyze nearby lanes and change lane if another lane offers an advantage. We can observe this behavior in real life, where drivers change lane based on intuition; the basic intuition for selecting the correct lane is to pick a less crowded one in order to reduce delay. We model that behavior without any change in the AIM workflow. Experiment results show that intersection performance is directly connected with the distribution of vehicles across the lanes of the roads entering the intersection. We see the advantage of handling lane management with a potential-based approach in performance metrics such as average intersection delay and average travel time. Therefore, lane management and intersection management are problems that need to be handled together. This study shows that the lane through which vehicles enter the intersection is an effective parameter for intersection management; our study draws attention to this parameter and suggests a solution for it. We observed that regulating the AIM inputs, namely the vehicles in each lane, was effective in improving intersection management.
The PLO-AIM model outperforms AIM in evaluation metrics such as average intersection delay and average travel time for reasonable traffic rates, i.e., between 600 and 1300 vehicles/hour per lane. The proposed model reduced the average travel time by 0.2% - 17.3% and the average intersection delay by 1.6% - 17.1% for 4-lane and 6-lane scenarios.
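A potential-based lane choice of the kind described might look like the following sketch; the potential function, the neighbour weighting, and the switch-only-if-strictly-better rule are illustrative assumptions, not the PLO-AIM implementation.

```python
def lane_potential(occupancies, lane, alpha=0.25):
    """Potential of a lane: its own load plus a small pull from neighbouring
    lanes, so vehicles spread out instead of oscillating. The neighbour
    weight alpha is an illustrative choice."""
    left = occupancies[lane - 1] if lane > 0 else occupancies[lane]
    right = occupancies[lane + 1] if lane < len(occupancies) - 1 else occupancies[lane]
    return occupancies[lane] + alpha * (left + right)

def choose_lane(occupancies, current):
    """A vehicle inspects its own and adjacent lanes and switches only if a
    neighbouring lane has strictly lower potential."""
    best, best_p = current, lane_potential(occupancies, current)
    for lane in (current - 1, current + 1):
        if 0 <= lane < len(occupancies):
            p = lane_potential(occupancies, lane)
            if p < best_p:
                best, best_p = lane, p
    return best
```

Applying this rule to every approaching vehicle evens out the per-lane queue lengths before the AIM reservation logic runs, which is the layering the abstract describes.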

Keywords: AIM project, autonomous intersection management, lane organization, potential-based approach

Procedia PDF Downloads 133
458 Constraint-Based Computational Modelling of Bioenergetic Pathway Switching in Synaptic Mitochondria from Parkinson's Disease Patients

Authors: Diana C. El Assal, Fatima Monteiro, Caroline May, Peter Barbuti, Silvia Bolognin, Averina Nicolae, Hulda Haraldsdottir, Lemmer R. P. El Assal, Swagatika Sahoo, Longfei Mao, Jens Schwamborn, Rejko Kruger, Ines Thiele, Kathrin Marcus, Ronan M. T. Fleming

Abstract:

Degeneration of substantia nigra pars compacta dopaminergic neurons is one of the hallmarks of Parkinson's disease. These neurons have a highly complex axonal arborisation and a high energy demand, so any reduction in ATP synthesis could lead to an imbalance between supply and demand, thereby impeding normal neuronal bioenergetic requirements. Synaptic mitochondria exhibit increased vulnerability to dysfunction in Parkinson's disease. After biogenesis in and transport from the cell body, synaptic mitochondria become highly dependent upon oxidative phosphorylation. We applied a systems biochemistry approach to identify the metabolic pathways used by neuronal mitochondria for energy generation. The mitochondrial component of an existing manual reconstruction of human metabolism was extended with manual curation of the biochemical literature and specialised using omics data from Parkinson's disease patients and controls, to generate reconstructions of synaptic and somal mitochondrial metabolism. These reconstructions were converted into stoichiometrically and flux-consistent constraint-based computational models. These models predict that Parkinson's disease is accompanied by an increase in the rate of glycolysis and a decrease in the rate of oxidative phosphorylation within synaptic mitochondria. This is consistent with independent experimental reports of a compensatory switching of bioenergetic pathways in the putamen of post-mortem Parkinson's disease patients. Ongoing work, in the context of the SysMedPD project, is aimed at the computational prediction of mitochondrial drug targets to slow the progression of neurodegeneration in the subset of Parkinson's disease patients with overt mitochondrial dysfunction.
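Constraint-based models of this kind are typically interrogated by flux balance analysis: maximise an objective flux subject to steady-state mass balance and flux bounds. A toy example on a made-up three-reaction network, using scipy's linear programming routine, sketches the principle; the genome-scale reconstructions in the study apply it to thousands of reactions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis (FBA) sketch. The network is invented:
# metabolites A, B; reactions R1 (uptake -> A), R2 (A -> B), R3 (B -> biomass).
S = np.array([[1, -1,  0],     # steady-state mass balance for A
              [0,  1, -1]])    # steady-state mass balance for B
bounds = [(0, 10),             # R1: uptake capped at 10 flux units
          (0, None),           # R2: unbounded forward flux
          (0, None)]           # R3: biomass export, unbounded
c = [0, 0, -1]                 # maximise biomass flux v3 (linprog minimises)
res = linprog(c, A_eq=S, b_eq=[0, 0], bounds=bounds)
```

At the optimum the whole uptake capacity is routed to biomass (v = [10, 10, 10]); in the study, the analogous optima computed for synaptic versus somal models reveal the predicted glycolysis/oxidative phosphorylation switch.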

Keywords: bioenergetics, mitochondria, Parkinson's disease, systems biochemistry

Procedia PDF Downloads 290
457 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm

Authors: Zachary Huffman, Joana Rocha

Abstract:

Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented by the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those from Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol'yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy; in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, and less accurate under other flow conditions. Despite this, recent research into alternative methods for deriving such models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R, and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning.
The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
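Forward stepwise selection, the core idea behind the algorithm used, can be sketched in a few lines; this greedy numpy version is a generic illustration of the feature-selection process, not R's step() function or the authors' exact algorithm.

```python
import numpy as np

def forward_stepwise(X, y, max_terms=3):
    """Greedy forward selection: at each step, add the predictor that most
    reduces the residual sum of squares. Redundant or uncorrelated inputs
    never get selected, which is the filtering behaviour described."""
    n, p = X.shape
    selected = []
    residual = y - y.mean()
    for _ in range(max_terms):
        best, best_rss = None, np.sum(residual ** 2)
        for j in range(p):
            if j in selected:
                continue
            cols = np.column_stack([np.ones(n)] + [X[:, k] for k in selected + [j]])
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = np.sum((y - cols @ beta) ** 2)
            if rss < best_rss - 1e-12:
                best, best_rss = j, rss
        if best is None:
            break                      # no candidate improves the fit
        selected.append(best)
        cols = np.column_stack([np.ones(n)] + [X[:, k] for k in selected])
        beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
        residual = y - cols @ beta
    return selected
```

A production version would stop on an information criterion (AIC/BIC) rather than a fixed term count, and validate on held-out data to control the overfitting risk the abstract notes.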

Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations

Procedia PDF Downloads 133
456 Effect of Different Methods to Control the Parasitic Weed Phelipanche ramosa (L.) Pomel in Tomato Crop

Authors: Disciglio G., Lops F., Carlucci A., Gatta G., Tarantino A., Frabboni L., Tarantino E.

Abstract:

Phelipanche ramosa is considered the most damaging obligate flowering parasitic weed on a wide range of cultivated plant species. The semiarid regions of the world are considered the main centre of this parasitic weed, where heavy infestations are due to its ability to produce high numbers of seeds (up to 200,000) that remain viable for extended periods (more than 19 years). In this paper, 13 parasitic weed control treatments, comprising physical, chemical, biological, and agronomic methods, including the use of resistant plants, have been carried out. In 2014, a trial was performed on processing tomato (cv Docet), grown in pots filled with soil taken from a plot heavily infested by Phelipanche ramosa, at the Department of Agriculture, Food and Environment, University of Foggia (southern Italy). Tomato seedlings were transplanted on August 8, 2014 on a clay soil (USDA) fertilized with 100 kg ha-1 of N, 60 kg ha-1 of P2O5, and 20 kg ha-1 of S. Afterwards, top dressing was performed with 70 kg ha-1 of N. A randomized block design with 3 replicates was adopted. During the growing cycle of the tomato, at 70, 75, 81, and 88 days after transplantation, the number of parasitic shoots emerged in each pot was recorded. Leaf chlorophyll meter (SPAD) values of the tomato plants were also measured. All data were subjected to analysis of variance (ANOVA) using the JMP software (SAS Institute Inc., Cary, NC, USA), and Tukey's test was used for comparison of means. The results show lower values of the SPAD colour index in parasitized tomato plants compared to healthy ones. In addition, no single treatment provided complete control of Phelipanche ramosa. However, the virulence of the attacks was mitigated by some treatments: the radicon product, compost activated with Fusarium, mineral nitrogen fertilizer, sulfur, enzone, and the resistant tomato genotype.
It is assumed that these effects can be improved by combining some of these treatments with each other, especially for a gradual and continuing reduction of the parasite's "seed bank" in the soil.
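The one-way ANOVA step can be reproduced generically with scipy; the SPAD readings below are invented for illustration only, not the trial's data (the study additionally follows the ANOVA with Tukey's test for pairwise comparisons).

```python
from scipy.stats import f_oneway

# Hypothetical SPAD chlorophyll readings for three treatments (invented
# numbers, 3 replicates each, mirroring the randomized block layout):
control = [38.1, 37.5, 39.0]
radicon = [41.2, 42.0, 40.7]
sulfur = [40.1, 39.8, 40.9]

# One-way ANOVA: does at least one treatment mean differ?
f_stat, p_value = f_oneway(control, radicon, sulfur)
```

A small p-value here only says that some treatment differs; Tukey's honestly-significant-difference test, as used in the study, then identifies which pairs of means differ.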

Keywords: control methods, Phelipanche ramose, tomato crop

Procedia PDF Downloads 611
455 Abridging Pharmaceutical Analysis and Drug Discovery via LC-MS-TOF, NMR, in-silico Toxicity-Bioactivity Profiling for Therapeutic Purposing Zileuton Impurities: Need of Hour

Authors: Saurabh B. Ganorkar, Atul A. Shirkhedkar

Abstract:

Although protecting against toxic impurities seems to be the primary requirement, impurities which prove non-toxic can also be explored for any therapeutic potential, to assist advanced drug discovery. The essential role of pharmaceutical analysis can thus be extended effectively to achieve this. The present study achieved these objectives with the characterization of the major degradation products, as impurities, of Zileuton, which has been used to treat asthma for years. Forced degradation studies were performed to identify the potential degradation products using ultra-fine liquid chromatography. Liquid chromatography-mass spectrometry (time of flight) and proton nuclear magnetic resonance studies were utilized effectively to characterize the drug along with five major oxidative and hydrolytic degradation products (DPs). The mass fragments were identified for Zileuton, and the degradation pathway was investigated. The characterized DPs were subjected to in silico studies such as XP molecular docking to compare the gain or loss in binding affinity with the 5-lipoxygenase enzyme. One of the impurities was found to have a binding affinity greater than the drug itself, indicating its potential to be more bioactive as a better antiasthmatic. Close structural resemblance has the ability to potentiate or reduce bioactivity and/or toxicity. The chance of being biologically active at other sites cannot be denied, and this was assessed to some extent by predictions of the probability of activity with the Prediction of Activity Spectra for Substances (PASS). The impurities were found to be bioactive as antineoplastics, antiallergics, and inhibitors of complement factor D. Toxicological properties such as Ames mutagenicity, carcinogenicity, developmental toxicity, and skin irritancy were evaluated using Toxicity Prediction by Komputer Assisted Technology (TOPKAT).
Two of the impurities were found to be non-toxic compared to the original drug, Zileuton. Just as drugs are purposed and repurposed effectively, so can impurities be, as they may have greater binding affinity, lower toxicity, and a better ability to be bioactive at other biological targets.

Keywords: UFLC, LC-MS-TOF, NMR, Zileuton, impurities, toxicity, bio-activity

Procedia PDF Downloads 190
454 Deconstruction of the Term 'Shaman' in the Metaphorical Pair 'Artist as a Shaman'

Authors: Ilona Ivova Anachkova

Abstract:

The analogy between the artist and the shaman as both being practitioners that more easily recognize and explore spiritual matters, and thus contribute to the society in a unique way has been implied in both Modernity and Postmodernity. The Romantic conception of the shaman as a great artist who helps common men see and understand messages of a higher consciousness has been employed throughout Modernity and is active even now. This paper deconstructs the term ‘shaman’ in the metaphorical analogy ‘artist – shaman’ that was developed more fully in Modernity in different artistic and scientific discourses. The shaman is a figure that to a certain extent adequately reflects the late modern and postmodern holistic views on the world. Such views aim at distancing from traditional religious and overly rationalistic discourses. However, the term ‘shaman’ can be well substituted by other concepts such as the priest, for example. The concept ‘shaman’ is based on modern ethnographic and historical investigations. Its later philosophical, psychological and artistic appropriations designate the role of the artist as a spiritual and cultural leader. However, the artist and the shaman are not fully interchangeable terms. The figure of the shaman in ‘primitive’ societies has performed many social functions that are now delegated to different institutions and positions. The shaman incorporates the functions of a judge, a healer. He is a link to divine entities. He is the creative, aspiring human being that has heightened sensitivity to the world in both its spiritual and material aspects. Building the metaphorical analogy between the shaman and the artist comes in many ways. Both are seen as healers of the society, having propensity towards connection to spiritual entities, or being more inclined to creativity than others. The ‘shaman’ however is a fashionable word for a spiritual person used perhaps because of the anti-traditionalist religious modern and postmodern views. 
The figure of the priest is associated with a too rational, theoretical, and detached attitude towards spiritual matters, while the practices of the shaman and the artist are considered engaged with spirituality on a deeper existential level. The term 'shaman', however, does not have priority over other words/figures that can explore and deploy spiritual aspects of reality. Having substituted the term 'shaman' in the pair 'artist as a shaman' with 'the priest' or literally 'anybody', we witness the destruction of spiritual hierarchies and come to the view that everybody is responsible for their own spiritual and creative evolution.

Keywords: artist as a shaman, creativity, extended theory of art, functions of art, priest as an artist

Procedia PDF Downloads 228
453 Cybernetic Model-Based Optimization of a Fed-Batch Process for High Cell Density Cultivation of E. coli in Shake Flasks

Authors: Snehal D. Ganjave, Hardik Dodia, Avinash V. Sunder, Swati Madhu, Pramod P. Wangikar

Abstract:

Batch cultivation of recombinant bacteria in shake flasks results in low cell density due to nutrient depletion. Previous protocols for high cell density cultivation in shake flasks have relied mainly on controlled release mechanisms and extended cultivation protocols. In the present work, we report an optimized fed-batch process for high cell density cultivation of recombinant E. coli BL21(DE3) for protein production. A cybernetic model-based, multi-objective optimization strategy was implemented to obtain the operating variables that maximize biomass while minimizing the substrate feed rate. A syringe pump was used to feed a mixture of glycerol and yeast extract into the shake flask. Preliminary experiments were conducted with online monitoring of dissolved oxygen (DO) and offline measurements of biomass and glycerol to estimate the model parameters. Multi-objective optimization was performed to obtain the Pareto front surface. The selected optimized recipe was tested for a range of proteins that show different extents of soluble expression in E. coli. These included eYFP and LkADH, which are largely expressed in soluble fractions; CbFDH and GcanADH, which are partially soluble; and human PDGF, which forms inclusion bodies. The biomass concentrations achieved in 24 h were in the range of 19.9-21.5 g/L, while the model-predicted value was 19.44 g/L. The process was successfully reproduced in a standard laboratory shake flask without online monitoring of DO and pH. The optimized fed-batch process showed significant improvement in both the biomass and protein production of the tested recombinant proteins compared to batch cultivation. The proposed process will have significant implications for the routine cultivation of E. coli in various applications.
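The multi-objective step amounts to extracting a Pareto front over (biomass, feed rate) pairs; a minimal sketch with invented candidate operating points illustrates the idea (the study's candidates come from simulating the cybernetic model, not from a fixed list).

```python
def pareto_front(points):
    """Non-dominated set for (maximize biomass, minimize feed rate).
    A point is dominated if another point achieves at least as much
    biomass with no more feed. Illustrative helper only."""
    front = []
    for biomass, feed in points:
        dominated = any(b >= biomass and f <= feed and (b, f) != (biomass, feed)
                        for b, f in points)
        if not dominated:
            front.append((biomass, feed))
    return front
```

The final recipe is then picked from the front according to process preferences, e.g. the highest biomass whose feed rate remains practical for a syringe pump.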

Keywords: cybernetic model, E. coli, high cell density cultivation, multi-objective optimization

Procedia PDF Downloads 250
452 Prediction of Alzheimer's Disease Based on Blood Biomarkers and Machine Learning Algorithms

Authors: Man-Yun Liu, Emily Chia-Yu Su

Abstract:

Alzheimer's disease (AD) is the public health crisis of the 21st century. AD is a degenerative brain disease and the most common cause of dementia, a costly disease on the healthcare system. Unfortunately, the cause of AD is poorly understood, furthermore; the treatments of AD so far can only alleviate symptoms rather cure or stop the progress of the disease. Currently, there are several ways to diagnose AD; medical imaging can be used to distinguish between AD, other dementias, and early onset AD, and cerebrospinal fluid (CSF). Compared with other diagnostic tools, blood (plasma) test has advantages as an approach to population-based disease screening because it is simpler, less invasive also cost effective. In our study, we used blood biomarkers dataset of The Alzheimer’s disease Neuroimaging Initiative (ADNI) which was funded by National Institutes of Health (NIH) to do data analysis and develop a prediction model. We used independent analysis of datasets to identify plasma protein biomarkers predicting early onset AD. Firstly, to compare the basic demographic statistics between the cohorts, we used SAS Enterprise Guide to do data preprocessing and statistical analysis. Secondly, we used logistic regression, neural network, decision tree to validate biomarkers by SAS Enterprise Miner. This study generated data from ADNI, contained 146 blood biomarkers from 566 participants. Participants include cognitive normal (healthy), mild cognitive impairment (MCI), and patient suffered Alzheimer’s disease (AD). Participants’ samples were separated into two groups, healthy and MCI, healthy and AD, respectively. We used the two groups to compare important biomarkers of AD and MCI. In preprocessing, we used a t-test to filter 41/47 features between the two groups (healthy and AD, healthy and MCI) before using machine learning algorithms. Then we have built model with 4 machine learning methods, the best AUC of two groups separately are 0.991/0.709. 
We stress that a simple, less invasive, common blood (plasma) test may also enable early diagnosis of AD. In our opinion, these results provide evidence that blood-based biomarkers might serve as an alternative diagnostic tool before further examination with CSF and medical imaging. A comprehensive study of the differences in blood-based biomarkers between AD patients and healthy subjects is warranted. Early detection of AD progression will give physicians the opportunity for early intervention and treatment.
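A minimal sketch of the described two-step pipeline (t-test feature filtering followed by a classifier evaluated by AUC) might look as follows. The data here are synthetic; the real ADNI biomarker names, values, and the SAS tooling used in the study are not reproduced.

```python
# Sketch: t-test filtering of biomarker features, then logistic
# regression evaluated by AUC, on synthetic stand-in data.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_group, n_features = 100, 146           # 146 biomarkers, as in the study
healthy = rng.normal(0.0, 1.0, (n_per_group, n_features))
ad = rng.normal(0.0, 1.0, (n_per_group, n_features))
ad[:, :20] += 0.8                            # pretend 20 biomarkers differ

# Step 1: keep only features whose group means differ (t-test, p < 0.05)
_, pvals = ttest_ind(healthy, ad, axis=0)
selected = pvals < 0.05

X = np.vstack([healthy, ad])[:, selected]
y = np.array([0] * n_per_group + [1] * n_per_group)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# Step 2: fit a classifier on the filtered features and report AUC
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"selected features: {selected.sum()}, test AUC: {auc:.3f}")
```

Filtering before model fitting reduces the feature-to-sample ratio, which matters when, as here, the biomarker count (146) is large relative to the cohort size.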

Keywords: Alzheimer's disease, blood-based biomarkers, diagnostics, early detection, machine learning

Procedia PDF Downloads 318
451 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method

Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang

Abstract:

Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location in which it operates. Each product has its own sell-in and sell-out time series, which are forecasted on weekly and monthly scales for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution that uses machine learning models for forecasting. Similar products are combined so that there is one model per product category. In this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is flexible enough to include any new product or eliminate any existing product in a category as requirements change. We show how the machine learning development environment on Amazon Web Services (AWS) can be used to explore a set of forecasting models and create business intelligence dashboards that integrate with the existing demand planning tools at Nestlé. We explored recent deep neural network (DNN) architectures, which show promising results for a variety of time series forecasting problems. Specifically, we used a DeepAR autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and incorporate domain-specific knowledge, we designed an ensemble approach combining DeepAR with an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%.
The machine learning (ML) pipeline implemented in the cloud is currently being extended for other product categories and is getting adopted by other geomarkets.
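The ensemble step described above can be sketched as follows. DeepAR itself (available via GluonTS or Amazon SageMaker) and XGBoost are not reproduced here; two simple stand-in forecasters illustrate how per-model predictions can be blended with a weight chosen on a validation window. All series and weights are illustrative.

```python
# Sketch: weighted ensemble of two forecasters, with the blend weight
# selected by minimising MAE on a held-out validation window.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(120)
# Synthetic monthly sell-out series with yearly seasonality
sell_out = 100 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, t.size)

train, valid = sell_out[:96], sell_out[96:108]

# Stand-in "model A": seasonal naive (repeat the last season)
pred_a = train[-12:]
# Stand-in "model B": overall mean of the training window
pred_b = np.full(12, train.mean())

# Pick the ensemble weight that minimises validation MAE
weights = np.linspace(0.0, 1.0, 101)
maes = [np.mean(np.abs(w * pred_a + (1 - w) * pred_b - valid))
        for w in weights]
best_w = weights[int(np.argmin(maes))]

ensemble = best_w * pred_a + (1 - best_w) * pred_b
print(f"best weight for model A: {best_w:.2f}, "
      f"validation MAE: {min(maes):.2f}")
```

Because the pure single-model forecasts correspond to the endpoints w = 0 and w = 1, the tuned ensemble can never do worse on the validation window than either member alone.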

Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series

Procedia PDF Downloads 264
450 Prediction of Springback in U-bending of W-Temper AA6082 Aluminum Alloy

Authors: Jemal Ebrahim Dessie, Lukács Zsolt

Abstract:

High-strength aluminum alloys have drawn a lot of attention because of the expanding demand for lightweight vehicle design in the automotive sector. Due to their poor formability at room temperature, warm and hot forming have been advised. However, warm and hot forming methods require more steps in the production process and an advanced tooling system. In contrast, forming sheets at room temperature in the W-temper condition is advantageous because ordinary tools can be used. However, springback and thinning of the supersaturated sheets are critical challenges that must be resolved for this technique to be used. In this study, AA6082-T6 aluminum alloy was solution heat treated at different oven temperatures and durations, using a specially designed and developed furnace, in order to optimize the W-temper heat treatment temperature. A U-shaped bending test was carried out at different delay times between the W-temper heat treatment and the forming operation. Finite element analysis (FEA) of the U-bending was conducted in AutoForm to validate the experimental results. A uniaxial tensile loading-unloading test was performed to determine the kinematic hardening behavior of the material, which was then calibrated in the finite element code using systematic process improvement (SPI). The simulation considered the effects of friction coefficient and blank holder force. Springback parameters were evaluated on the geometry adopted from the NUMISHEET '93 benchmark problem. The change of shape was greater for longer delays between the W-temper heat treatment and the forming operation. Die radius was the most influential parameter on flange springback. On the sidewall, however, the change of shape shows an overall increasing tendency as the punch radius increases relative to the die radius.
The springback angles on the flange and sidewall appear to be influenced more by the coefficient of friction than by the blank holder force, and this effect grows as the blank holder force increases.
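As a small illustration of how a springback angle can be evaluated from a sprung-back profile in the spirit of the NUMISHEET '93 convention, consider the sketch below. The coordinates are made up for illustration; in practice they would come from the FE mesh or a scanned part.

```python
# Sketch: springback angle of a U-bend sidewall, measured as the
# deviation of the sprung-back wall from the ideal vertical wall.
import math

def angle_deg(p, q):
    """Inclination of the segment p->q relative to the vertical axis."""
    dx, dz = q[0] - p[0], q[1] - p[1]
    return math.degrees(math.atan2(abs(dx), abs(dz)))

# (x, z) points sampled on the sidewall before and after springback
sidewall_target = [(0.0, 0.0), (0.0, 35.0)]   # ideal: vertical wall
sidewall_sprung = [(0.0, 0.0), (4.2, 34.7)]   # wall opened after unload

springback_angle = angle_deg(*sidewall_sprung) - angle_deg(*sidewall_target)
print(f"sidewall springback: {springback_angle:.1f} deg")
```

The same two-point angle measurement can be applied to the flange region to compare flange and sidewall springback under different friction coefficients and blank holder forces.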

Keywords: aluminum alloy, FEA, springback, SPI, U-bending, W-temper

Procedia PDF Downloads 95
449 Flood Simulation and Forecasting for Sustainable Planning of Response in Municipalities

Authors: Mariana Damova, Stanko Stankov, Emil Stoyanov, Hristo Hristov, Hermand Pessek, Plamen Chernev

Abstract:

We will present one of the first use cases on the DestinE platform, a joint initiative of the European Commission, the European Space Agency, and EUMETSAT, providing access to global Earth observation, meteorological, and statistical data, and emphasize the good practice of intergovernmental agencies acting in concert. Further, we will discuss the importance of space-based disruptive solutions for improving the balance between the ever-increasing water-related disasters driven by climate change and minimizing their economic and societal impact. The use case focuses on forecasting floods and estimating the impact of flood events on the urban environment and the ecosystems in the affected areas, with the purpose of helping municipal decision-makers analyze and plan resource needs and of strengthening human-environment relationships by providing farmers with insightful information for improving their agricultural productivity. For the forecast, we will adopt an EO4AI method of our platform ISME-HYDRO, in which we employ a pipeline of neural networks applied to in-situ measurements and satellite data of meteorological factors influencing the hydrological and hydrodynamic status of rivers and dams, such as precipitation, soil moisture, vegetation index, and snow cover, to model flood events and their extent. The ISME-HYDRO platform is an e-infrastructure for water resources management based on linked data, extended with further intelligence that generates forecasts with the method described above, issues alerts, formulates queries, provides superior interactivity, and drives communication with the users. It provides synchronized visualization of table views, graph views, and interactive maps. It will be federated with the DestinE platform.
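A minimal sketch of the kind of learning problem described, mapping meteorological drivers (precipitation, soil moisture, vegetation index, snow cover) to a hydrological target with a small neural network, is shown below. The data are synthetic and the actual ISME-HYDRO neural-network pipeline is not reproduced.

```python
# Sketch: a small neural network regressing river level on
# meteorological features, on synthetic stand-in data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
precip = rng.gamma(2.0, 5.0, n)          # mm/day
soil_moist = rng.uniform(0.1, 0.5, n)    # volumetric fraction
ndvi = rng.uniform(0.2, 0.8, n)          # vegetation index
snow = rng.uniform(0.0, 1.0, n)          # snow-cover fraction

# Synthetic "river level": rises with rainfall and wet soil
level = 1.0 + 0.05 * precip + 3.0 * soil_moist + rng.normal(0, 0.2, n)

X = np.column_stack([precip, soil_moist, ndvi, snow])
X_tr, X_te, y_tr, y_te = train_test_split(X, level, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)
print(f"held-out R^2: {r2:.2f}")
```

In an operational setting, the same features would be assembled from in-situ gauges and satellite products, and the predicted levels compared against alert thresholds to trigger warnings.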

Keywords: flood simulation, AI, Earth observation, e-Infrastructure, flood forecasting, flood areas localization, response planning, resource estimation

Procedia PDF Downloads 15
448 Inverterless Grid Compatible Micro Turbine Generator

Authors: S. Ozeri, D. Shmilovitz

Abstract:

Micro-Turbine Generators (MTGs) are small power plants that consist of a high-speed gas turbine driving an electrical generator. MTGs may be fueled by either natural gas or kerosene and may also use sustainable and recycled green fuels such as biomass, landfill, or digester gas. Typical MTG ratings range from 20 kW up to 200 kW. The primary use of MTGs is as backup for sensitive load sites such as hospitals, and they are also considered a feasible power source for Distributed Generation (DG), providing on-site generation in proximity to remote loads. MTGs have the compressor, the turbine, and the electrical generator mounted on a single shaft. For this reason, the electrical energy is generated at high frequency and is incompatible with the power grid. Therefore, MTGs must additionally contain a power conditioning unit to generate an AC voltage at the grid frequency. Presently, this power conditioning unit consists of a rectifier followed by a DC/AC inverter, both rated at the MTG's full power. The losses of the power conditioning unit account for some 3-5%. Moreover, the full-power processing stage is a bulky and costly piece of equipment that also lowers the overall system reliability. In this study, we propose a new type of power conditioning stage in which only a small fraction of the power is processed. A low-power converter is used only to program the rotor current (i.e., the excitation current, which is substantially lower). Thus, the MTG's output voltage is shaped to the desired amplitude and frequency by proper programming of the excitation current. The control is realized by causing the rotor current to track the electrical frequency (which is related to the shaft frequency) with a difference that is exactly equal to the line frequency. Since the phasor of the rotation speed and the phasor of the rotor magnetic field are multiplied, the spectrum of the MTG generator voltage contains the sum and the difference components.
The desired difference component is at the line frequency (50/60 Hz), whereas the unwanted sum component is at about twice the electrical frequency of the stator. The unwanted high frequency component can be filtered out by a low-pass filter leaving only the low-frequency output. This approach allows elimination of the large power conditioning unit incorporated in conventional MTGs. Instead, a much smaller and cheaper fractional power stage can be used. The proposed technology is also applicable to other high rotation generator sets such as aircraft power units.
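The frequency-mixing principle described above can be verified numerically: multiplying a sinusoid at the stator electrical frequency by a rotor excitation programmed at that frequency minus the line frequency yields, by the product-to-sum identity, components at the line frequency (difference) and near twice the stator frequency (sum). The frequencies below are illustrative, not taken from a real MTG.

```python
# Sketch: frequency mixing of shaft field and programmed rotor current,
# showing the sum and difference components in the output spectrum.
import numpy as np

fs = 100_000.0                        # sample rate, Hz
t = np.arange(0, 0.2, 1 / fs)
f_stator = 1000.0                     # electrical frequency of the stator
f_line = 50.0                         # desired grid frequency

# Rotor current programmed at f_stator - f_line, so the product contains
# components at f_line (difference) and 2*f_stator - f_line (sum):
rotor = np.sin(2 * np.pi * (f_stator - f_line) * t)
field = np.sin(2 * np.pi * f_stator * t)
output = rotor * field

spectrum = np.abs(np.fft.rfft(output))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
print(f"dominant components near: {peaks} Hz")
```

A low-pass filter applied to `output` would suppress the high-frequency sum component (here 1950 Hz) and leave only the 50 Hz difference term, mirroring the filtering step described in the abstract.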

Keywords: gas turbine, inverter, power multiplier, distributed generation

Procedia PDF Downloads 235