Search results for: extended FIR filter
490 Walking the Tightrope: Balancing Project Governance, Complexity, and Servant Leadership for Megaproject Success
Authors: Muhammad Shoaib Iqbal, Shih Ping Ho
Abstract:
Megaprojects are large-scale, complex ventures with significant financial investments, numerous stakeholders, and extended timelines, requiring meticulous management for successful completion. This study explores the interplay between project governance, project complexity, and servant leadership and their combined effects on project success, specifically within the context of Pakistani megaprojects. The primary objectives are to examine the direct impact of project governance on project success, understand the negative influence of project complexity, assess the positive role of servant leadership, explore the moderating effect of servant leadership on the relationship between governance and success, and investigate how servant leadership mitigates the adverse effects of complexity. Using a quantitative approach, survey data were collected from project managers and team members involved in Pakistani megaprojects. Using a comprehensive empirical model, 257 valid responses were analyzed. The hypothesized relationships and interaction effects were tested with multiple regression analysis using PLS-SEM. Findings reveal that project governance significantly enhances project success, emphasizing the need for robust governance structures. Conversely, project complexity negatively impacts success, highlighting the challenges of managing complex projects. Servant leadership significantly boosts project success by prioritizing team support and empowerment. Although the interaction between governance and servant leadership is not significant, suggesting no significant change in project success, servant leadership significantly mitigates the negative effects of project complexity, enhancing team resilience and adaptability. These results underscore the necessity for a balanced approach integrating strong governance with flexible, supportive leadership. The study offers valuable insights for practitioners, recommending adaptive governance frameworks and promoting servant leadership to improve the management and success rates of megaprojects. This research contributes to the broader understanding of effective project management practices in complex environments.
Keywords: project governance, project complexity, servant leadership, project success, megaprojects, Pakistan
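The hypothesized structure — servant leadership moderating both the governance-to-success and complexity-to-success paths — can be illustrated with ordinary moderated regression. The sketch below uses simulated survey-style data and statsmodels OLS with interaction terms; it is a minimal stand-in for the PLS-SEM analysis actually used, and every variable name and coefficient is hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated survey composites (e.g., Likert-scale averages), one row per respondent.
rng = np.random.default_rng(0)
n = 257  # matches the study's number of valid responses
df = pd.DataFrame({
    "governance": rng.normal(size=n),
    "complexity": rng.normal(size=n),
    "servant_leadership": rng.normal(size=n),
})
# Hypothetical data-generating process: complexity hurts success, but its
# effect is softened when servant leadership is high (positive interaction).
df["success"] = (0.5 * df["governance"] - 0.4 * df["complexity"]
                 + 0.3 * df["servant_leadership"]
                 + 0.2 * df["complexity"] * df["servant_leadership"]
                 + rng.normal(scale=0.5, size=n))

# The '*' operator adds main effects plus the interaction (moderation) terms.
model = smf.ols("success ~ governance * servant_leadership"
                " + complexity * servant_leadership", data=df).fit()
print(model.summary())
```

A significant positive coefficient on the complexity-by-servant-leadership term would correspond to the mitigation effect the abstract reports.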
Procedia PDF Downloads 344
489 Fault Tolerant and Testable Designs of Reversible Sequential Building Blocks
Authors: Vishal Pareek, Shubham Gupta, Sushil Chandra Jain
Abstract:
With the increasing demand for high-speed computation, power consumption, heat dissipation, and chip size are posing challenges for logic design with conventional technologies. Recovery from bit loss and bit errors is another issue that requires reversibility and fault tolerance in the computation. Reversible computing is emerging as an alternative to conventional technologies to overcome the above problems and is helpful in diverse areas such as low-power design, nanotechnology, and quantum computing. The bit-loss issue can be solved through a unique input-output mapping, which requires reversibility, while the bit-error issue requires the capability of fault tolerance in the design. In order to incorporate reversibility, a number of combinational reversible-logic-based circuits have been developed. However, very few sequential reversible circuits have been reported in the literature. To make circuits fault tolerant, a number of fault models and test approaches have been proposed for reversible logic. In this paper, we have attempted to incorporate fault tolerance in sequential reversible building blocks such as the D flip-flop, T flip-flop, JK flip-flop, R-S flip-flop, master-slave D flip-flop, and double edge triggered D flip-flop by making them parity preserving. The importance of this proposed work lies in the fact that it provides designs of reversible sequential circuits completely testable for any stuck-at fault and single-bit fault. In our opinion, our designs of reversible building blocks are superior to existing designs in terms of quantum cost, hardware complexity, constant inputs, garbage outputs, and number of gates, and a design of an online testable D flip-flop is proposed for the first time. We hope our work can be extended for building complex reversible sequential circuits.
Keywords: parity preserving gate, quantum computing, fault tolerance, flip-flop, sequential reversible logic
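The two properties the designs rely on can be checked mechanically from a gate's truth table: reversibility means the input-output mapping is a bijection, and parity preservation means every output vector has the same parity as its input vector, which is what makes a single-bit fault observable. A minimal sketch using the classic Fredkin (controlled-swap) gate as the example; it illustrates the checks, not the paper's specific flip-flop constructions.

```python
from itertools import product

def fredkin(c, a, b):
    # Controlled swap: exchanges a and b when the control line c is 1.
    return (c, b, a) if c else (c, a, b)

def parity(bits):
    return sum(bits) % 2

inputs = list(product((0, 1), repeat=3))
outputs = [fredkin(*v) for v in inputs]

# Reversible: the truth table must be a bijection on {0,1}^3.
assert len(set(outputs)) == len(inputs), "mapping is not reversible"

# Parity preserving: input parity equals output parity for every vector,
# so any single stuck-at or bit-flip fault changes the output parity.
assert all(parity(i) == parity(o) for i, o in zip(inputs, outputs))
print("Fredkin gate is reversible and parity preserving")
```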
Procedia PDF Downloads 545
488 Aesthetic Embodiment of the Visual and/or Non-Visual: the Becoming of a Spatial Installation Exhibition Influenced by Shamanic Healing
Authors: Ningfei Xiao, Simon Twose, Hannah Hopewell
Abstract:
In urban settings worldwide, artists and researchers have drawn from shamanic healing, providing insightful responses to the environment. This project is a transdisciplinary creative research project where architecture and art practice draw from shamanic healing and provide the potential to expand knowledge of public space and inspire more aesthetic explorations of public spatial visions. The research started from the encounters with the Ewengki/Evenki shaman tribe in settlement areas of northern China in 2019 and extended through the partnerships with Maori artists in Poneke Aotearoa, New Zealand, in 2023. Based on the learnings and collaborations with female indigenous tradition practitioners and the healing that the researcher received from the land, a spatial installation exhibition was developed in this project. Indigenous practices are intricately woven with contemporary technology, merging visuals, soundscapes, and other non-visual aesthetics influenced by the researcher's personal experiences of embodied shamanic healing with brainwave generative technology. This synthesis seeks to ritualize and reimagine future public spaces, encompassing streetscapes and greenscapes from China to Aotearoa, and fostering connections between urbanized human body, mind, spirit, and land. In doing so, the project presents a feminist posthuman inquiry into how individuals perceive materiality within the context of a future city. Grounded in creative research and embodied methodologies, this paper focuses on the conceptual and autoethnographic aspects of visual-non-visual aesthetics and their creative representation. Through the exploration of aesthetics beyond the visual realm within urban and spatial contexts, this project showcases the spatial installation exhibition as an example of shamanic influence and related response to public space through embodied artistry and transdisciplinary creative inquiry.
Keywords: aesthetic, embodiment, visual and/or non-visual, spatial installation, shamanic healing, public space
Procedia PDF Downloads 59
487 Training a Neural Network to Segment, Detect and Recognize Numbers
Authors: Abhisek Dash
Abstract:
This study used three neural networks, one for number segmentation, one for number detection, and one for number recognition, all of which are coupled to one another. All networks were trained on the MNIST dataset and were convolutional. It was assumed that the images had a lighter background and a darker foreground. The segmentation network took 28x28 images as input and had sixteen outputs. Segmentation training starts when a dark pixel is encountered. Taking a 7x7 window over that pixel as the focus, the eight neighborhoods of the focus were checked for further dark pixels. The segmentation network was then trained to move in those directions which had dark pixels; its sixteen outputs were arranged in pairs as "go east"/"don't go east", "go south east"/"don't go south east", "go south"/"don't go south", and so on with respect to the focus window. The focus window was resized into a 28x28 image and the network was trained to consider those neighborhoods which had dark pixels. The neighborhoods which had dark pixels were pushed into a queue in a particular order. The neighborhoods were then popped one at a time, stitched to the existing partial image of the number, and the network was trained on which neighborhoods to consider when the new partial image was presented. The above process was repeated until the image was fully covered by the 7x7 neighborhoods and there were no more uncovered black pixels. During testing, the network scans and looks for the first dark pixel. From there on, the network predicts which neighborhoods to consider and segments the image. After this step, the group of neighborhoods is passed into the detection network. The detection network took 28x28 images as input and had two outputs denoting whether a number was detected or not. Since the ground truth of the bounds of a number was known during training, the detection network was trained to output "number not found" until the bounds were met, and "number found" thereafter. The recognition network was a standard CNN that also took 28x28 images and had 10 outputs for recognition of numbers from 0 to 9. This network was activated only when the detection network voted in favor of a number being detected. The above methodology could segment connected and overlapping numbers. Additionally, the recognition unit was only invoked when a number was detected, which minimized false positives. It also eliminated the need for rules of thumb, as segmentation is learned. The strategy can also be extended to other characters as well.
Keywords: convolutional neural networks, OCR, text detection, text segmentation
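The abstract fixes only the input size and the output counts of the three coupled networks (16, 2, and 10 outputs); everything else below — layer depths, filter counts, activations — is an assumed architecture, sketched with Keras to show how the pipeline's pieces could be declared.

```python
from tensorflow.keras import layers, models

def backbone():
    # A small assumed convolutional stack for 28x28 grayscale inputs.
    return [
        layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
    ]

# Segmentation network: 16 outputs, a "go"/"don't go" pair per direction.
segmenter = models.Sequential(backbone() + [layers.Dense(16, activation="sigmoid")])

# Detection network: 2 outputs, "number detected" vs. "not detected".
detector = models.Sequential(backbone() + [layers.Dense(2, activation="softmax")])

# Recognition network: standard CNN with 10 outputs for digits 0-9,
# invoked only when the detector votes "detected".
recognizer = models.Sequential(backbone() + [layers.Dense(10, activation="softmax")])

for net, loss in ((segmenter, "binary_crossentropy"),
                  (detector, "categorical_crossentropy"),
                  (recognizer, "categorical_crossentropy")):
    net.compile(optimizer="adam", loss=loss)
```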
Procedia PDF Downloads 161
486 Human Lens Metabolome: A Combined LC-MS and NMR Study
Authors: Vadim V. Yanshole, Lyudmila V. Yanshole, Alexey S. Kiryutin, Timofey D. Verkhovod, Yuri P. Tsentalovich
Abstract:
Cataract, or clouding of the eye lens, is the leading cause of vision impairment in the world. The lens tissue has a very specific structure: it does not have a vascular system, and the lens proteins – crystallins – do not turn over throughout the lifespan. The protection of lens proteins is provided by the metabolites which diffuse inside the lens from the aqueous humor or are synthesized in the lens epithelial layer. Therefore, the study of changes in the metabolite composition of a cataractous lens as compared to a normal lens may elucidate the possible mechanisms of cataract formation. Quantitative metabolomic profiles of normal and cataractous human lenses were obtained with the combined use of high-frequency nuclear magnetic resonance (NMR) and ion-pairing high-performance liquid chromatography with high-resolution mass-spectrometric detection (LC-MS) methods. The quantitative content of more than fifty metabolites has been determined in this work for normal aged and cataractous human lenses. The most abundant metabolites in the normal lens are myo-inositol, lactate, creatine, glutathione, glutamate, and glucose. For the majority of metabolites, their levels in the lens cortex and nucleus are similar, with a few exceptions including antioxidants and UV filters: the concentrations of glutathione, ascorbate, and NAD in the lens nucleus decrease as compared to the cortex, while the levels of the secondary UV filters formed from primary UV filters in redox processes increase. That confirms that the lens core is metabolically inert, and that the metabolic activity in the lens nucleus is mostly restricted to protection from the oxidative stress caused by UV irradiation, UV filter spontaneous decomposition, or other factors. It was found that the metabolomic compositions of normal and age-matched cataractous human lenses differ significantly. The content of the most important metabolites – antioxidants, UV filters, and osmolytes – in the cataractous nucleus is at least tenfold lower than in the normal nucleus. One may suppose that the majority of these metabolites are synthesized in the lens epithelial layer, and that age-related cataractogenesis might originate from the dysfunction of the lens epithelial cells. Comprehensive quantitative metabolic profiles of the human eye lens have been acquired for the first time. The obtained data can be used for the analysis of changes in the lens chemical composition occurring with age and with cataract development.
Keywords: cataract, lens, NMR, LC-MS, metabolome
Procedia PDF Downloads 322
485 New Roles of Telomerase and Telomere-Associated Proteins in the Regulation of Telomere Length
Authors: Qin Yang, Fan Zhang, Juan Du, Chongkui Sun, Krishna Kota, Yun-Ling Zheng
Abstract:
Telomeres are specialized structures at chromosome ends consisting of tandem repetitive DNA sequences [(TTAGGG)n in humans] and associated proteins, which are necessary for telomere function. Telomere lengths are tightly regulated within a narrow range in normal human somatic cells, the basis of cellular senescence and aging. Previous studies have extensively focused on how short telomeres are extended and have demonstrated that telomerase plays a central role in telomere maintenance through elongating short telomeres. However, the molecular mechanisms regulating excessively long telomeres are unknown. Here, we found that the telomerase enzymatic component hTERT plays a dual role in the regulation of telomere length. Our analysis of single telomere alterations at each chromosomal end led to the discovery that hTERT shortens excessively long telomeres and elongates short telomeres simultaneously, thus maintaining the optimal telomere length at each chromosomal end for efficient protection. The hTERT-mediated telomere shortening removes large segments of telomere DNA rapidly without inducing telomere dysfunction foci or affecting cell proliferation; thus it is mechanistically distinct from rapid telomere deletion. We found that expression of hTERT generates telomeric circular DNA, suggesting that telomere homologous recombination may be involved in this telomere shortening process. Moreover, the hTERT-mediated telomere shortening requires its enzymatic activity, but the telomerase RNA component hTR is not involved in it. Furthermore, the shelterin protein TPP1 interacts with hTERT and recruits it onto telomeres to mediate telomere shortening. In addition, the telomere-associated proteins DKC1 and TCAB1 also play roles in this process. This novel hTERT-mediated telomere shortening mechanism exists not only in cancer cells but also in primary human cells. Thus, the hTERT-mediated telomere shortening is expected to shift the paradigm of current molecular models of telomere length maintenance, with wide-reaching consequences in the cancer and aging fields.
Keywords: aging, hTERT, telomerase, telomeres, human cells
Procedia PDF Downloads 427
484 Field Prognostic Factors on Discharge Prediction of Traumatic Brain Injuries
Authors: Mohammad Javad Behzadnia, Amir Bahador Boroumand
Abstract:
Introduction: Situations with limited facilities require allocating the available resources to the greatest number of casualties. Accordingly, traumatic brain injury (TBI) is one condition that may require transporting the patient as soon as possible. In a mass casualty event, such decisions are hard to make when facilities are restricted. The Extended Glasgow Outcome Score (GOSE) has been introduced to assess the global outcome after brain injuries. Therefore, we aimed to evaluate the prognostic factors associated with GOSE. Materials and Methods: A multicenter cross-sectional study was conducted on 144 patients with TBI admitted to trauma emergency centers. All patients with isolated TBI who were mentally and physically healthy before the trauma entered the study. The patients' information was evaluated, including demographic characteristics, duration of hospital stay, mechanical ventilation on admission, laboratory measurements, and on-admission vital signs. We recorded the patients' TBI-related symptoms and brain computed tomography (CT) scan findings. Results: GOSE assessments showed an increasing trend across the on-discharge (7.47 ± 1.30), within-one-month (7.51 ± 1.30), and within-three-months (7.58 ± 1.21) evaluations (P < 0.001). On discharge, GOSE was positively correlated with the Glasgow Coma Scale (GCS) (r = 0.729, P < 0.001) and motor GCS (r = 0.812, P < 0.001), and inversely with age (r = −0.261, P = 0.002), hospitalization period (r = −0.678, P < 0.001), pulse rate (r = −0.256, P = 0.002), and white blood cell count (WBC). Among imaging signs and trauma-related symptoms, intracranial hemorrhage (ICH), intraventricular hemorrhage (IVH) (P = 0.006), subarachnoid hemorrhage (SAH) (P = 0.06; marginally at P < 0.1), subdural hemorrhage (SDH) (P = 0.032), and epidural hemorrhage (EDH) (P = 0.037) were significantly associated with GOSE at discharge in univariate and multivariable analyses. Conclusion: Our study identified some predictive factors that could help to decide which casualty should be transported to a trauma center earlier. According to the current study findings, GCS, pulse rate, WBC, and, among imaging signs and trauma-related symptoms, ICH, IVH, SAH, SDH, and EDH are significant independent predictors of GOSE at discharge in TBI patients.
Keywords: field, Glasgow outcome score, prediction, traumatic brain injury
Procedia PDF Downloads 75
483 Effect of Electropolymerization Method in the Charge Transfer Properties and Photoactivity of Polyaniline Photoelectrodes
Authors: Alberto Enrique Molina Lozano, María Teresa Cortés Montañez
Abstract:
Polyaniline (PANI) photoelectrodes were electrochemically synthesized through electrodeposition employing three techniques: chronoamperometry (CA), cyclic voltammetry (CV), and potential pulse (PP) methods. The substrate used for electrodeposition was a fluorine-doped tin oxide (FTO) glass with dimensions of 2.5 cm x 1.3 cm. Subsequently, structural and optical characterization was conducted utilizing Fourier-transform infrared (FTIR) spectroscopy and UV-visible (UV-vis) spectroscopy, respectively. The FTIR analysis revealed variations in the molar ratio of benzenoid to quinonoid rings within the PANI polymer matrix, indicative of differing oxidation states arising from the distinct electropolymerization methodologies employed. In the optical characterization, differences in the energy band gap (Eg) values and positions of the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) were observed, attributable to variations in doping levels and structural irregularities introduced during the electropolymerization procedures. To assess the charge transfer properties of the PANI photoelectrodes, electrochemical impedance spectroscopy (EIS) experiments were carried out in a 0.1 M sodium sulfate (Na₂SO₄) electrolyte. The results displayed a substantial decrease in charge transfer resistance with the PANI coatings compared to uncoated substrates, with PANI obtained through cyclic voltammetry (CV) presenting the lowest charge transfer resistance, in contrast to PANI obtained via chronoamperometry (CA) and potential pulses (PP). Subsequently, the photoactive response of the PANI photoelectrodes was measured through linear sweep voltammetry (LSV) and chronoamperometry. The photoelectrochemical measurements revealed a discernible photoactivity in all PANI-coated electrodes. However, PANI electropolymerized through CV displayed the highest photocurrent. Interestingly, PANI derived from chronoamperometry (CA) exhibited the highest degree of stable photocurrent over an extended temporal interval.
Keywords: PANI, photocurrent, photoresponse, charge separation, recombination
Procedia PDF Downloads 65
482 Effect of Locally Produced Sweetened Pediatric Antibiotics on Streptococcus mutans Isolated from the Oral Cavity of Pediatric Patients in Syria - in Vitro Study
Authors: Omar Nasani, Chaza Kouchaji, Muznah Alkhani, Maisaa Abd-alkareem
Abstract:
Objective: To evaluate the influence of sweetening agents used in pediatric medications on the growth of Streptococcus mutans colonies and their effect on cariogenic activity in the oral cavity. No previous studies in Syrian children have been registered yet. Methods: Specimens were isolated from the oral cavity of pediatric patients, then an in vitro study was performed on locally manufactured liquid pediatric antibiotic drugs containing natural or synthetic sweeteners. The selected antibiotics were Ampicillin (sucrose), Amoxicillin (sucrose), Amoxicillin + Flucloxacillin (sorbitol), and Amoxicillin + Clavulanic acid (sorbitol or sucrose). These antibiotics have a known inhibitory effect on Gram-positive aerobic/anaerobic bacteria, especially Streptococcus mutans strains in children's oral biofilm. Five colonies were studied with each antibiotic. Saturated antibiotic solutions were spread on 6 mm diameter filter discs. Incubated culture media were compared with each other and with the control antibiotic discs. Results were evaluated by measuring the diameter of the inhibition zones. The control group of antibiotic discs was sourced from Abtek Biologicals Ltd. Results: The diameter of the inhibition zones around discs of antibiotics sweetened with sorbitol was larger than around those sweetened with sucrose. The effect was most pronounced when comparing Amoxicillin + Clavulanic acid (sucrose, 25 mm, versus sorbitol, 27 mm). The highest inhibitory effect was observed with Amoxicillin + Flucloxacillin sweetened with sorbitol (38 mm), whereas the lowest inhibitory effect was observed with Amoxicillin and Ampicillin sweetened with sucrose (22 mm and 21 mm). Conclusion: The results of this study indicate that although all selected antibiotics produced an inhibitory effect on S. mutans, sucrose weakened the inhibitory action of the antibiotic to varying degrees, while antibiotic formulations containing sorbitol simulated the effects of the control antibiotic. This study calls attention to the effects of sweeteners included in pediatric drugs on oral hygiene and tooth decay.
Keywords: pediatric, dentistry, antibiotics, streptococcus mutans, biofilm, sucrose, sugar free
Procedia PDF Downloads 72
481 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data
Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca
Abstract:
In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are developed for data quality filtering of opportunistic species occurrence data that are used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. Findings show that model performance is affected by i) the quality of the filtered data, ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and iii) a species 'quality profile', resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
Keywords: citizen science, data quality filtering, species distribution models, trait profiles
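The evaluation loop — fit a model on unfiltered vs. quality-filtered records and compare AUC, sensitivity, and specificity at matched sample sizes — can be sketched as below. Logistic regression stands in for Maxent (which is usually run through dedicated packages), and the data, quality flag, and 0.5 threshold are all synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic records: environmental covariates, presence label, and a quality
# attribute (e.g., passed post-entry validation) usable as a filter.
n = 5000
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)
validated = rng.random(n) < 0.6

def evaluate(X, y):
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
    p = LogisticRegression(max_iter=1000).fit(Xtr, ytr).predict_proba(Xte)[:, 1]
    tn, fp, fn, tp = confusion_matrix(yte, p > 0.5).ravel()
    return roc_auc_score(yte, p), tp / (tp + fn), tn / (tn + fp)

# Subsampling the unfiltered set to the filtered size separates the effect of
# data quality from the effect of losing records, as in the study's design.
m = validated.sum()
subsample = rng.choice(n, size=m, replace=False)
for name, idx in (("unfiltered, matched n", subsample),
                  ("filtered", np.flatnonzero(validated))):
    auc, sens, spec = evaluate(X[idx], y[idx])
    print(f"{name}: AUC={auc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
```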
Procedia PDF Downloads 202
480 Accelerator Mass Spectrometry Analysis of Isotopes of Plutonium in PM₂.₅
Authors: C. G. Mendez-Garcia, E. T. Romero-Guzman, H. Hernandez-Mendoza, C. Solis, E. Chavez-Lomeli, E. Chamizo, R. Garcia-Tenorio
Abstract:
Plutonium is present in different concentrations in the environment and in biological samples related to nuclear weapons testing, nuclear waste recycling, and accidental discharges from nuclear plants. This radioisotope is considered among the most radiotoxic substances, particularly when it enters the human body through inhalation of insoluble powders or aerosols. This is the main reason for determining the concentration of this radioisotope in the atmosphere. Besides that, the ²⁴⁰Pu/²³⁹Pu isotopic ratio provides information about the origin of the source. PM₂.₅ sampling was carried out in the Metropolitan Zone of the Valley of Mexico (MZVM) from February 18th to March 17th, 2015, on quartz filters. There have been significant developments recently due to the establishment of new methods for sample preparation and accurate measurement to detect the ultra-trace levels at which plutonium is found in the environment. Accelerator mass spectrometry (AMS) is a technique that allows detection levels around femtograms (10⁻¹⁵ g). The AMS determinations include the chemical isolation of Pu. The Pu separation involved an acidic digestion and a radiochemical purification using an anion exchange resin. Finally, the source is prepared when Pu is pressed into the corresponding cathodes. According to the authors' knowledge, these aerosols showed variations of the ²³⁵U/²³⁸U ratio from the natural value, suggesting that an anthropogenic source could be altering it. The determination of the concentrations of the isotopes of Pu can be a useful tool to clarify this presence in the atmosphere. The first results showed a mean activity concentration of ²³⁹Pu of 280 nBq m⁻³; the ²⁴⁰Pu/²³⁹Pu ratio was 0.025, corresponding to a weapons production source. These results corroborate that there is an anthropogenic influence increasing the concentration of radioactive material in PM₂.₅. To the authors' knowledge, activity concentrations of ²³⁹⁺²⁴⁰Pu of around a few tens of nBq m⁻³ and ²⁴⁰Pu/²³⁹Pu ratios of 0.17 have been reported in Total Suspended Particles (TSP). The preliminary results in the MZVM show higher activity concentrations of Pu isotopes (40 and 700 nBq m⁻³) and a lower ²⁴⁰Pu/²³⁹Pu ratio than previously reported. These results are of the order of the activity concentrations of Pu in high-purity weapons-grade material.
Keywords: aerosols, fallout, mass spectrometry, radiochemistry, tracer, ²⁴⁰Pu/²³⁹Pu ratio
Procedia PDF Downloads 167
479 Alternative Housing Systems: Influence on Blood Profile of Egg-Type Chickens in Humid Tropics
Authors: Olufemi M. Alabi, Foluke A. Aderemi, Adebayo A. Adewumi, Banwo O. Alabi
Abstract:
The general well-being of animals is of paramount interest in some developed countries and of global importance, hence the shift to alternative housing systems for egg-type chickens as replacements for the conventional battery cage system. However, there is a paucity of information on the effect of this shift on the physiological status of the hens, as judged by their blood profile. Therefore, an investigation was carried out on two strains of hen kept in three different housing systems in the humid tropics to evaluate changes in their blood parameters. 108 17-week-old super black (SBL) hens and 108 17-week-old super brown (SBR) hens were randomly allotted to three different intensive systems – Partitioned Conventional Cage (PCC), Extended Conventional Cage (ECC), and Deep Litter System (DLS) – in a randomized complete block design with 36 hens per housing system, each with three replicates. The experiment lasted 37 weeks, during which blood samples were collected at the 18th week of age and bi-weekly thereafter for analyses. The parameters measured were packed cell volume (PCV), hemoglobin concentration (Hb), red blood cell counts (RBC), white blood cell counts (WBC), and serum metabolites such as total protein (TP), albumin (Alb), globulin (Glb), glucose, cholesterol, urea, bilirubin, and serum cortisol, while blood indices such as mean corpuscular hemoglobin (MCH), mean cell volume (MCV), and mean corpuscular hemoglobin concentration (MCHC) were calculated. The hematological values of the hens were not significantly (p>0.05) affected by housing system or strain, and neither were the serum metabolites, except for serum cortisol, which was significantly (p<0.05) affected by the housing system only. Hens housed in PCC had higher values (20.05 ng/ml for SBL and 20.55 ng/ml for SBR), followed by hens in ECC (18.15 ng/ml for SBL and 18.38 ng/ml for SBR), while hens in DLS had the lowest values (16.50 ng/ml for SBL and 16.00 ng/ml for SBR), thereby confirming indications of stress in conventionally caged birds. Alternative housing systems can thus be adopted for egg-type chickens in the humid tropics from a welfare point of view, with the results of this work confirming stress among caged hens.
Keywords: blood, housing, humid-tropics, layers
Procedia PDF Downloads 468
478 Partial Least Square Regression for High-Dimensional and High-Correlated Data
Authors: Mohammed Abdullah Alshahrani
Abstract:
The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data
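A minimal sketch of the standard PLS workflow on simulated data with the shape the abstract describes (far more correlated predictors than observations), using scikit-learn. It shows the latent-component compression; the sparse lasso-plus-Cauchy weight estimation proposed in the abstract is a custom method and is not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 100 observations, 500 correlated predictors (e.g., NIR absorbances),
# all driven by a few latent factors.
n, p, k = 100, 500, 3
latent = rng.normal(size=(n, k))
X = latent @ rng.normal(size=(k, p)) + 0.1 * rng.normal(size=(n, p))
y = latent @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.normal(size=n)

# PLS builds k components that maximize covariance with the response,
# which is what lets it cope where OLS (p >> n) breaks down.
pls = PLSRegression(n_components=k)
print("CV R^2:", cross_val_score(pls, X, y, cv=5, scoring="r2").mean())

pls.fit(X, y)
print("predictor weights:", pls.x_weights_.shape)  # (500, 3), one column per component
```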
Procedia PDF Downloads 49
477 Impact of mHealth Tools on Psycho-Social Predictors of Behaviour Regarding Contraceptive Use
Authors: Preeti Tiwari, Jay Wood, Duncan Babbage
Abstract:
Family planning plays a role in saving lives across the globe by preventing unwanted pregnancies. The purpose of this multidisciplinary research was to determine the impact mHealth tools have on psychosocial determinants of behaviour for family planning. The present study examines a topic that is very relevant at a time when human-technology interaction is at its peak. It is probably one of the first studies to investigate the impact of mobile phone technology on the underlying mechanisms of behaviour change for family planning using primary data. To examine the association between exposure to mHealth tools and predictors of behaviour, data were collected from mHealth intervention areas in India. A post-intervention quasi-experimental study with a 2x2 factorial design was conducted among 831 men and women from the state of Bihar. The quantitative data analysis evaluated the extent of influence that predictors of behaviour (beliefs, social norms, perceived behavioural control, and outcome behaviour) have on a woman's decisions about family planning. The results indicated an association between exposure to mHealth tools and improved communication about family planning among various family members after receiving health information from a health worker (H1). A relationship between exposure to mHealth tools and increased support women received from their husbands, extended family (mothers-in-law specifically), and peers (H2) was also found. A further result showed that knowledge about family planning was greater among users of family planning (H4). mHealth tools empower women to communicate with family members. This has important implications for developing mobile phone-based tools, as they can serve as a crucial communication channel and an effective method of increasing communication among family members about contraceptives. Thus, it can be implied that where women feel nervous talking about contraception, the successful application of mHealth tools can strengthen the interactivity of health communication and could increase the likelihood of using contraception. However, while it may improve the health communication that informs health decisions, it may be insufficient on its own to cause behaviour change.
Keywords: contraceptive, e-health, psycho-social, women
Procedia PDF Downloads 122
476 Perception of Greek Vowels by Arabic-Greek Bilinguals: An Experimental Study
Authors: Georgios P. Georgiou
Abstract:
Infants are able to discriminate a number of sound contrasts in most languages. However, this ability is not available to adults, who might face difficulties in accurately discriminating second language sound contrasts, as they filter second language speech through the phonological categories of their native language. For example, Spanish speakers often struggle to perceive the difference between the English /ε/ and /æ/ because neither vowel exists in their native language; so they assimilate these vowels to the closest phonological category of their first language. The present study aims to uncover the perceptual patterns of adult Arabic speakers in regard to the vowels of their second language (Greek). As yet, there is no study that investigates the perception of Greek vowels by Arabic speakers and, thus, the present study would contribute to the enrichment of the literature with cross-linguistic research in new languages. For the purpose of the present study, 15 native speakers of Egyptian Arabic who permanently live in Cyprus and have adequate knowledge of Greek as a second language completed vowel assimilation and vowel contrast discrimination (AXB) tests in their second language. The perceptual stimuli included nonsense words that contained vowels in both stressed and unstressed positions. The second language listeners' patterns were analyzed through the Perceptual Assimilation Model, which makes testable hypotheses about the assimilation of second language sounds to the speakers' native phonological categories and the discrimination accuracy over second language sound contrasts. The results indicated that the second language listeners assimilated pairs of Greek vowels to a single phonological category of their native language, resulting in a Category Goodness difference assimilation type for the Greek stressed /i/-/e/ and the Greek stressed-unstressed /o/-/u/ vowel contrasts. On the contrary, the members of the Greek unstressed /i/-/e/ vowel contrast were assimilated to two different categories, resulting in a Two Category assimilation type. Furthermore, the listeners could discriminate the Greek stressed /i/-/e/ and the Greek stressed-unstressed /o/-/u/ contrasts only to a moderate degree, while the Greek unstressed /i/-/e/ contrast could be discriminated to an excellent degree. Two main implications emerge from the results. First, there is a strong influence of the listeners' native language on the perception of the second language vowels. In Egyptian Arabic, contiguous vowel categories such as [i]-[e] and [u]-[o] are not phonemically distinct but are subject to allophonic variation; by contrast, the vowel contrasts /i/-/e/ and /o/-/u/ are phonemic in Greek. Second, the role of stress is significant for second language perception, since stressed vs. unstressed vowel contrasts were perceived in a different manner by the Greek listeners.
Keywords: Arabic, bilingual, Greek, vowel perception
Procedia PDF Downloads 138
475 Collaborative Approaches in Achieving Sustainable Private-Public Transportation Services in Inner-City Areas: A Case of Durban Minibus Taxis
Authors: Lonna Mabandla, Godfrey Musvoto
Abstract:
Transportation is a catalytic feature in cities. Transport and land use activity are interdependent, with a feedback loop between how land is developed and how transportation systems are designed and used. This recursive relationship between land use and transportation is reflected in how public transportation routes internal to the inner city enhance accessibility, therefore creating spaces that are conducive to business activity, while the business activity in turn informs public transportation routes. It is for this reason that the focus of this research is on public transportation within inner-city areas, where this dynamic is evident. Durban is the chosen case study, where the dominant form of public transportation within the central business district (CBD) is the minibus taxi. The paradox here is that minibus taxis still form part of the informal economy even though they are the leading form of public transportation in South Africa. There have been many attempts to formalise this industry to follow more regulatory practices, but minibus taxis are privately owned, complicating any proposed intervention. The argument of this study is that the application of collaborative planning through a sustainable partnership between the public and private sectors will improve the social and environmental sustainability of public transportation. One of the major challenges that exist within such collaborative endeavours is power dynamics. As a result, a key focus of the study is on power relations. Ideally, power relations should be observed over an extended period, specifically when the different stakeholders engage with each other, to yield valid data. However, such a lengthy observation process was not possible during the data collection phase of this research. Instead, interviews were conducted focusing on existing procedural planning practices between the inner-city minibus taxi association (South and North Beach Taxi Association), the eThekwini Transport Authority (ETA), and the eThekwini Town Planning Department. Conclusions and recommendations were then generated based on these data.
Keywords: collaborative planning, sustainability, public transport, minibus taxis
Procedia PDF Downloads 59
474 Monetary Evaluation of Dispatching Decisions in Consideration of Choice of Transport
Authors: Marcel Schneider, Nils Nießen
Abstract:
Microscopic simulation programs enable the description of the two processes of railway operation and the preceding timetabling. Occupation conflicts are often solved based on defined train priorities at both process levels. These conflict resolutions produce knock-on delays for the involved trains. The sum of knock-on delays is commonly used to evaluate the quality of railway operations. It is either compared to an acceptable level of service, or the delays are evaluated economically by linear monetary functions. It is impossible to properly evaluate dispatching decisions without a well-founded objective function. This paper presents a new approach for the evaluation of dispatching decisions. It uses models of choice of transport and considers the behaviour of the end-customers. These models evaluate the knock-on delays in more detail than linear monetary functions and consider other competing modes of transport. The new approach pursues the coupling of a microscopic model of railway operation with a macroscopic model of choice of transport. It will first be implemented for the railway operations process, but it can also be used for timetabling. The evaluation considers the possibility that end-customers change over to other transport modes. The new approach first looks at rail-bound and road transport, but it can also be extended to air transport. The split of the end-customers is described by the modal split. The reactions of the end-customers have an effect on the revenues of the railway undertakings. Different travel purposes have different time reserves and tolerances towards delays. Longer journey times cause not only revenue changes but also additional costs. The costs depend either on time or on track usage and arise from the circulation of workers and vehicles. Only the variable values are summarised in the contribution margin, which is the basis for the monetary evaluation of the delays. The contribution margin is calculated for different resolution decisions of the same conflict. The conflict resolution is improved until the monetary loss is minimised. The iterative process therefore determines an optimum conflict resolution by observing the change of the contribution margin. Furthermore, a monetary value of each dispatching decision can also be determined.
Keywords: choice of transport, knock-on delays, monetary evaluation, railway operations
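A toy sketch of the evaluation loop described above: each candidate resolution of an occupation conflict is priced by passing its knock-on delays through a modal-split response (end-customers drifting to competing modes) and variable operating costs, and the resolution with the highest contribution margin is kept. Every function and number here is a hypothetical placeholder, not the paper's calibrated choice-of-transport model.

```python
import math

def modal_share(delay_min: float) -> float:
    # Share of end-customers staying with rail as the knock-on delay grows;
    # a logistic decay stands in for a calibrated choice-of-transport model.
    return 1.0 / (1.0 + math.exp(0.15 * (delay_min - 20.0)))

def contribution_margin(delays: dict) -> float:
    base_revenue_per_train = 1000.0   # hypothetical revenue at zero delay
    variable_cost_per_min = 8.0       # hypothetical crew/vehicle circulation cost
    revenue = sum(base_revenue_per_train * modal_share(d) for d in delays.values())
    costs = sum(variable_cost_per_min * d for d in delays.values())
    return revenue - costs

# Candidate resolutions of one conflict: knock-on delays (minutes) each
# dispatching decision assigns to the involved trains.
candidates = {
    "train A waits": {"A": 12.0, "B": 0.0},
    "train B waits": {"A": 0.0, "B": 7.0},
    "divert train B": {"A": 0.0, "B": 4.0},
}

best = max(candidates, key=lambda k: contribution_margin(candidates[k]))
print("preferred resolution:", best)
```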
Procedia PDF Downloads 328
473 Pathologies in the Left Atrium Reproduced Using a Low-Order Synergistic Numerical Model of the Cardiovascular System
Authors: Nicholas Pearce, Eun-jin Kim
Abstract:
Pathologies of the cardiovascular (CV) system remain a serious and deadly health problem for human society. Computational modelling provides a relatively accessible tool for diagnosis, treatment, and research into CV disorders. However, numerical models of the CV system have largely focused on the function of the ventricles, frequently overlooking the behaviour of the atria. Furthermore, in the study of the pressure-volume relationship of the heart, which is a key diagnostic of cardiac vascular pathologies, previous works often invoke the popular yet questionable time-varying elastance (TVE) method, which imposes the pressure-volume relationship instead of calculating it consistently. Despite the convenience of the TVE method, there have been various indications of its limitations and of the need to check its validity in different scenarios. A model of the combined left ventricle (LV) and left atrium (LA) is presented, which consistently considers various feedback mechanisms in the heart without having to use the TVE method. Specifically, a synergistic model of the left ventricle is extended and modified to include the function of the LA. The synergy of the original model is preserved by modelling the electro-mechanical and chemical functions of the micro-scale myofiber for the LA and integrating it with the micro-scale and macro-organ-scale heart dynamics of the left ventricle and CV circulation. The atrioventricular node function is included and forms the conduction pathway for electrical signals between the atria and ventricle. The model reproduces the essential features of LA behaviour, such as the two-phase pressure-volume relationship and the classic figure-of-eight pressure-volume loops. Using this model, disorders in the internal cardiac electrical signalling are investigated by recreating the mechano-electric feedback (MEF), which is impossible where the time-varying elastance method is used. The effects of AV node block and slow conduction are then investigated in the presence of an atrial arrhythmia. It is found that electrical disorders and arrhythmia in the LA degrade the CV system by reducing cardiac output, power, and heart rate.
Keywords: cardiovascular system, left atrium, numerical model, MEF
Procedia PDF Downloads 115
472 Trend Analysis of Rainfall: A Climate Change Paradigm
Authors: Shyamli Singh, Ishupinder Kaur, Vinod K. Sharma
Abstract:
Climate change refers to the change in climate over an extended period of time. The climate has been changing throughout the Earth's history, but anthropogenic activities accelerate this rate of change, and it is now a global issue. The increase in greenhouse gas emissions is causing global warming and climate-change-related issues at an alarming rate. Increasing temperature results in climate variability across the globe. Changes in rainfall patterns, intensity, and extreme events are some of the impacts of climate change. Rainfall variability refers to the degree to which rainfall patterns vary over a region (spatial) or through a time period (temporal). Temporal rainfall variability can be directly or indirectly linked to climate change. Such variability in rainfall increases the vulnerability of communities towards climate change. With increasing urbanization and unplanned developmental activities, the air quality is deteriorating. This paper mainly focuses on rainfall variability due to increasing levels of greenhouse gases. Rainfall data of 65 years (1951-2015) for the Safdarjung station of Delhi were collected from the Indian Meteorological Department and analyzed using the Mann-Kendall test for time-series data. The Mann-Kendall test is a statistical tool that helps in the analysis of trends in given data sets. The slope of the trend can be measured through Sen's slope estimator. Data were analyzed monthly, seasonally, and yearly across the period of 65 years. The monthly rainfall data for the said period do not follow any increasing or decreasing trend. The monsoon season shows no increasing trend, but there was an increasing trend in the pre-monsoon season. Hence, the actual rainfall differs from the normal trend of the rainfall. Through this analysis, it can be projected that there will be an increase in pre-monsoon rainfall relative to the monsoon season. Pre-monsoon rainfall causes a cooling effect and results in a drier monsoon season. This will increase the vulnerability of communities towards climate change and also affect related developmental activities.
Keywords: greenhouse gases, Mann-Kendall test, rainfall variability, Sen's slope
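Both statistics are simple enough to compute directly. A minimal sketch on simulated annual rainfall totals (the study itself used 65 years of Safdarjung station records); the variance formula below omits the correction for ties.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test and Sen's slope for a 1-D series (no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: sum of signs of all pairwise forward differences.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))  # two-sided p-value
    # Sen's slope: median of all pairwise slopes.
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return z, p, np.median(slopes)

# Hypothetical annual rainfall totals (mm), 1951-2015.
rng = np.random.default_rng(1)
rain = 600 + 0.8 * np.arange(65) + rng.normal(scale=80, size=65)
z, p, slope = mann_kendall(rain)
print(f"Z={z:.2f}, p={p:.3f}, Sen's slope={slope:.2f} mm/year")
```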
Procedia PDF Downloads 207
471 PLO-AIM: Potential-Based Lane Organization in Autonomous Intersection Management
Authors: Berk Ecer, Ebru Akcapinar Sezer
Abstract:
Traditional intersection management models, such as no-light intersections or signalized intersections, are not the most effective way of passing vehicles through intersections if the vehicles are intelligent. To this end, Dresner and Stone proposed a new intersection control model called Autonomous Intersection Management (AIM). In the AIM simulation, they examined the problem from a multi-agent perspective, demonstrating that intelligent intersection control can be made more efficient than existing control mechanisms. In this study, autonomous intersection management has been investigated. We extended their work and added a potential-based lane organization layer. In order to distribute vehicles evenly across lanes, this layer triggers vehicles to analyze nearby lanes, and they change lane if another lane has an advantage. We can observe this behavior in real life, where drivers change lanes based on their intuition. The basic intuition for selecting the correct lane is to choose a less crowded lane in order to reduce delay. We model that behavior without any change in the AIM workflow. Experiment results show that intersection performance is directly connected with the vehicle distribution across the lanes of the roads entering the intersection. We see the advantage of handling lane management with a potential-based approach in performance metrics such as average intersection delay and average travel time. Therefore, lane management and intersection management are problems that need to be handled together. This study shows that the lane through which vehicles enter the intersection is an effective parameter for intersection management. Our study draws attention to this parameter and suggests a solution for it. We observed that regulating the AIM inputs, which are the vehicles in the lanes, was effective in contributing to AIM intersection management. The PLO-AIM model outperforms AIM in evaluation metrics such as average intersection delay and average travel time for reasonable traffic rates, i.e., between 600 and 1,300 vehicles/hour per lane. The proposed model reduced the average travel time by 0.2%-17.3% and the average intersection delay by 1.6%-17.1% for the 4-lane and 6-lane scenarios.
Keywords: AIM project, autonomous intersection management, lane organization, potential-based approach
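The lane organization layer can be reduced to a small decision rule per approaching vehicle. A minimal sketch, assuming each vehicle can observe queue lengths in its own and adjacent lanes; the potential function and the switching threshold are illustrative assumptions, not the paper's exact formulation.

```python
def lane_potential(queue_length: int, distance_to_intersection: float) -> float:
    # Higher potential = less attractive lane; the queue dominates the cost.
    return queue_length + 0.01 * distance_to_intersection

def choose_lane(current: int, queues: list, dist: float,
                switch_threshold: float = 1.0) -> int:
    # Consider only the current lane and its immediate neighbours.
    candidates = [l for l in (current - 1, current, current + 1)
                  if 0 <= l < len(queues)]
    best = min(candidates, key=lambda l: lane_potential(queues[l], dist))
    # Switch only if the advantage exceeds a threshold, to avoid oscillation.
    gain = (lane_potential(queues[current], dist)
            - lane_potential(queues[best], dist))
    return best if gain > switch_threshold else current

queues = [7, 3, 6, 6]  # vehicles currently waiting per lane
print(choose_lane(current=0, queues=queues, dist=120.0))  # -> 1 (less crowded)
```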
Procedia PDF Downloads 139
470 Constraint-Based Computational Modelling of Bioenergetic Pathway Switching in Synaptic Mitochondria from Parkinson's Disease Patients
Authors: Diana C. El Assal, Fatima Monteiro, Caroline May, Peter Barbuti, Silvia Bolognin, Averina Nicolae, Hulda Haraldsdottir, Lemmer R. P. El Assal, Swagatika Sahoo, Longfei Mao, Jens Schwamborn, Rejko Kruger, Ines Thiele, Kathrin Marcus, Ronan M. T. Fleming
Abstract:
Degeneration of substantia nigra pars compacta dopaminergic neurons is one of the hallmarks of Parkinson's disease. These neurons have a highly complex axonal arborisation and a high energy demand, so any reduction in ATP synthesis could lead to an imbalance between supply and demand, thereby impeding normal neuronal bioenergetic requirements. Synaptic mitochondria exhibit increased vulnerability to dysfunction in Parkinson's disease. After biogenesis in and transport from the cell body, synaptic mitochondria become highly dependent upon oxidative phosphorylation. We applied a systems biochemistry approach to identify the metabolic pathways used by neuronal mitochondria for energy generation. The mitochondrial component of an existing manual reconstruction of human metabolism was extended with manual curation of the biochemical literature and specialised using omics data from Parkinson's disease patients and controls, to generate reconstructions of synaptic and somal mitochondrial metabolism. These reconstructions were converted into stoichiometrically and flux-consistent constraint-based computational models. These models predict that Parkinson's disease is accompanied by an increase in the rate of glycolysis and a decrease in the rate of oxidative phosphorylation within synaptic mitochondria. This is consistent with independent experimental reports of a compensatory switching of bioenergetic pathways in the putamen of post-mortem Parkinson's disease patients. Ongoing work, in the context of the SysMedPD project, is aimed at the computational prediction of mitochondrial drug targets to slow the progression of neurodegeneration in the subset of Parkinson's disease patients with overt mitochondrial dysfunction.
Keywords: bioenergetics, mitochondria, Parkinson's disease, systems biochemistry
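At its core, a constraint-based (flux balance) model is a linear program: maximize an objective flux subject to steady-state mass balance S·v = 0 and flux bounds. A toy three-reaction sketch with scipy; the network is invented for illustration and is unrelated to the curated mitochondrial reconstructions described above.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 (uptake -> A), R2 (A -> B), R3 (B -> objective demand).
# Rows of S are metabolites A and B; columns are reactions R1-R3.
S = np.array([
    [1, -1,  0],   # metabolite A: produced by R1, consumed by R2
    [0,  1, -1],   # metabolite B: produced by R2, consumed by R3
])
c = np.array([0, 0, -1.0])             # linprog minimizes, so negate the objective
bounds = [(0, 10), (0, 5), (0, None)]  # uptake limit and enzyme capacity limits

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)        # -> [5, 5, 5], capped by R2's capacity
```

In practice, genome-scale reconstructions like the ones in this study are handled with dedicated toolboxes (e.g., the COBRA family) rather than raw linear programming, but the underlying optimization is the same.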
Procedia PDF Downloads 294
469 Development of a Turbulent Boundary Layer Wall-pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm
Authors: Zachary Huffman, Joana Rocha
Abstract:
Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) developed over aircraft are a significant source of aircraft cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The sound emitted can be represented by the pressure fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Therefore, early TBL empirical models (including those of Lowson, Robertson, Chase, and Howe) were primarily derived by simplifying and solving the RANS equations for the pressure fluctuations and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol'yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy, but, in general, they are most accurate under the specific Reynolds and Mach numbers they were developed for, while being less accurate under other flow conditions. Despite this, recent research into the possibility of using alternative methods for deriving the models has been rather limited. More recent studies have demonstrated that an artificial neural network model was more accurate than traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R and TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through the process of feature selection), and it is computationally faster than machine learning. The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations
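The study implements stepwise regression in R; the sketch below reproduces the core idea — greedy forward selection that keeps adding the predictor which most lowers the AIC — in Python with statsmodels. The predictor names are hypothetical stand-ins, not the model's actual scaled TBL inputs.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(X: pd.DataFrame, y: np.ndarray) -> list:
    """Greedy forward selection by AIC; stops when no candidate improves it."""
    selected = []
    remaining = list(X.columns)
    best_aic = sm.OLS(y, np.ones((len(y), 1))).fit().aic  # intercept-only model
    while remaining:
        aic, cand = min((sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().aic, c)
                        for c in remaining)
        if aic >= best_aic:
            break
        best_aic = aic
        remaining.remove(cand)
        selected.append(cand)
    return selected

# Stand-in candidate predictors for the PSD level (names are hypothetical).
rng = np.random.default_rng(3)
n = 300
X = pd.DataFrame({
    "log_strouhal": rng.normal(size=n),   # dimensionless frequency
    "mach": rng.normal(size=n),
    "re_theta": rng.normal(size=n),       # momentum-thickness Reynolds number
    "noise": rng.normal(size=n),          # irrelevant column the filter should drop
})
y = 2.0 * X["log_strouhal"].values - 0.7 * X["mach"].values + rng.normal(scale=0.3, size=n)
print(forward_stepwise(X, y))
```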
Procedia PDF Downloads 135
468 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images
Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir
Abstract:
The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily entail a hard landing or even a crash. In this paper, the estimation of the relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Using accurate measurement sensors as an alternative approach can be very expensive, for sensors like LIDAR, or of limited operational range, for sensors like ultrasonic sensors. Additionally, absolute positioning systems like GPS or an IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether we can measure the relative distance and velocity between the UAV and the ground in the landing phase using just low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable feature in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) have been proposed, and their performance in the estimation of the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process and the calculated optical flow as the measurement; the second approach uses the feature's projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the variation of the projected point as the process, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed has been used to compare the performance of the proposed algorithms. The case studies show that the quality of the images results in considerable noise, which reduces the performance of the first approach. On the other hand, using the projected feature position is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
Keywords: altitude estimation, drone, image processing, trajectory planning
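A minimal sketch of the second approach's core: an EKF whose process model is the vertical descent kinematics and whose measurement is the pixel coordinate of a tracked ground feature under a pinhole projection. The projection model u = f·d/z, the feature offset d, and all noise values are illustrative assumptions, not the paper's tuned filter.

```python
import numpy as np

f_px, d = 800.0, 1.0          # assumed focal length [px] and feature offset [m]
dt = 0.05                     # frame interval [s]

F = np.array([[1, dt], [0, 1]])   # constant-velocity descent kinematics [z, vz]
Q = np.diag([1e-4, 1e-3])         # process noise
R = np.array([[4.0]])             # pixel measurement noise (std 2 px)

x = np.array([10.0, 0.0])         # initial guess: 10 m up, hovering
P = np.diag([4.0, 1.0])

def h(x):                          # measurement model: pinhole projection
    return np.array([f_px * d / x[0]])

def H_jac(x):                      # Jacobian of h with respect to [z, vz]
    return np.array([[-f_px * d / x[0] ** 2, 0.0]])

def ekf_step(x, P, u_meas):
    # Predict with the linear kinematics, update with the nonlinear projection.
    x = F @ x
    P = F @ P @ F.T + Q
    H = H_jac(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (u_meas - h(x))).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulate a 0.5 m/s descent and feed noisy pixel measurements to the filter.
rng = np.random.default_rng(7)
true_z = 9.0
for _ in range(100):
    true_z -= 0.5 * dt
    u = f_px * d / true_z + rng.normal(scale=2.0)
    x, P = ekf_step(x, P, np.array([u]))
print(f"estimated z={x[0]:.2f} m (true {true_z:.2f}), vz={x[1]:.2f} m/s")
```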
Procedia PDF Downloads 113
467 Effect of Different Methods to Control the Parasitic Weed Phelipanche ramosa (L. Pomel) in Tomato Crop
Authors: Disciglio G., Lops F., Carlucci A., Gatta G., Tarantino A., Frabboni L., Tarantino E.
Abstract:
Phelipanche ramosa is considered the most damaging obligate flowering parasitic weed, attacking a wide range of cultivated plant species. The semiarid regions of the world are the main center of this parasitic weed, where heavy infestations are due to its ability to produce high numbers of seeds (up to 200,000) that remain viable for extended periods (more than 19 years). In this paper, 13 parasitic-weed control treatments, spanning physical, chemical, biological, and agronomic methods and including the use of resistant plants, were carried out. In 2014, a trial was performed on processing tomato (cv Docet) grown in pots filled with soil taken from a plot heavily infested by Phelipanche ramosa, at the Department of Agriculture, Food and Environment, University of Foggia (southern Italy). Tomato seedlings were transplanted on August 8, 2014 into a clay soil (USDA) fertilized with 100 kg ha-1 of N, 60 kg ha-1 of P2O5, and 20 kg ha-1 of S; afterwards, top dressing was performed with 70 kg ha-1 of N. A randomized block design with 3 replicates was adopted. During the tomato growing cycle, at 70, 75, 81, and 88 days after transplantation, the number of parasitic shoots emerged in each pot was recorded, and leaf chlorophyll (SPAD meter) values of the tomato plants were also measured. All data were subjected to analysis of variance (ANOVA) using the JMP software (SAS Institute Inc., Cary, NC, USA), and Tukey's test was used for the comparison of means. The results show lower SPAD values in parasitized tomato plants than in healthy ones. In addition, no single treatment provided complete control of Phelipanche ramosa. However, the virulence of the attacks was mitigated by some treatments: the Radicon product, compost activated with Fusarium, mineral nitrogen fertilizer, sulfur, enzone, and the resistant tomato genotype. It is assumed that these effects can be improved by combining some of these treatments with each other, especially for a gradual and continuing reduction of the parasite's "seed bank" in the soil.
Keywords: control methods, Phelipanche ramosa, tomato crop
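The study ran the ANOVA and Tukey's test in JMP; a minimal open-source equivalent in Python (statsmodels), with invented shoot-count data standing in for the trial measurements, might look like this:

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative data: emerged parasitic shoots per pot for three of the
# treatments, three replicates each (values invented for the example).
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "treatment": np.repeat(["control", "radicon", "sulfur"], 3),
    "shoots": np.concatenate([
        rng.poisson(12, 3), rng.poisson(6, 3), rng.poisson(7, 3),
    ]).astype(float),
})

# One-way ANOVA on shoot counts by treatment.
model = ols("shoots ~ C(treatment)", data=df).fit()
print(anova_lm(model))

# Tukey's HSD test for pairwise comparison of treatment means.
print(pairwise_tukeyhsd(df["shoots"], df["treatment"]))
```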
Procedia PDF Downloads 614
466 Prediction of Alzheimer's Disease Based on Blood Biomarkers and Machine Learning Algorithms
Authors: Man-Yun Liu, Emily Chia-Yu Su
Abstract:
Alzheimer's disease (AD) is the public health crisis of the 21st century. AD is a degenerative brain disease and the most common cause of dementia, a disease that is costly to the healthcare system. Unfortunately, the cause of AD is poorly understood; furthermore, the treatments of AD so far can only alleviate symptoms rather than cure or halt the progression of the disease. Currently, there are several ways to diagnose AD, including medical imaging, which can be used to distinguish between AD, other dementias, and early-onset AD, and cerebrospinal fluid (CSF) analysis. Compared with these diagnostic tools, a blood (plasma) test has advantages as an approach to population-based disease screening because it is simpler, less invasive, and more cost-effective. In our study, we used the blood-biomarker dataset of the Alzheimer's Disease Neuroimaging Initiative (ADNI), funded by the National Institutes of Health (NIH), to analyze the data and develop a prediction model. We used independent analyses of the datasets to identify plasma protein biomarkers predicting early-onset AD. First, to compare basic demographic statistics between the cohorts, we used SAS Enterprise Guide for data preprocessing and statistical analysis. Second, we used logistic regression, neural networks, and decision trees in SAS Enterprise Miner to validate the biomarkers. The dataset, generated from ADNI, contained 146 blood biomarkers from 566 participants, comprising cognitively normal (healthy) individuals, individuals with mild cognitive impairment (MCI), and patients with Alzheimer's disease (AD). Samples were separated into two comparison groups, healthy versus MCI and healthy versus AD, which we used to compare the important biomarkers of AD and MCI. In preprocessing, we used t-tests to filter the features, retaining 41 and 47 features for the two comparisons (healthy versus AD and healthy versus MCI, respectively), before applying the machine learning algorithms. We then built models with the machine learning methods; the best AUCs for the two comparisons were 0.991 and 0.709, respectively. We stress that a simple, less invasive, and widely available blood (plasma) test may also enable early diagnosis of AD. In our opinion, these results provide evidence that blood-based biomarkers might serve as an alternative diagnostic tool before further examination with CSF and medical imaging. A comprehensive study of the differences in blood-based biomarkers between AD patients and healthy subjects is warranted. Early detection of AD progression will allow physicians the opportunity for early intervention and treatment.
Keywords: Alzheimer's disease, blood-based biomarkers, diagnostics, early detection, machine learning
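The analyses were done in SAS Enterprise Guide/Miner; as a rough open-source sketch of the same two-step workflow (t-test feature filtering followed by a classifier evaluated by AUC), with synthetic data standing in for the ADNI biomarkers:

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the ADNI plasma data: 566 participants,
# 146 biomarkers, healthy (0) vs. AD (1) labels.
rng = np.random.default_rng(3)
X = rng.normal(size=(566, 146))
y = rng.integers(0, 2, 566)
X[y == 1, :40] += 0.8  # make some biomarkers informative

# Step 1: t-test filter -- keep biomarkers that differ between groups.
_, pvals = ttest_ind(X[y == 0], X[y == 1], axis=0)
keep = pvals < 0.01
print(f"features retained: {keep.sum()}")

# Step 2: fit a classifier on the retained features and report AUC.
X_tr, X_te, y_tr, y_te = train_test_split(
    X[:, keep], y, test_size=0.3, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"AUC: {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.3f}")
```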
Procedia PDF Downloads 322
465 Abridging Pharmaceutical Analysis and Drug Discovery via LC-MS-TOF, NMR, in-silico Toxicity-Bioactivity Profiling for Therapeutic Purposing Zileuton Impurities: Need of Hour
Authors: Saurabh B. Ganorkar, Atul A. Shirkhedkar
Abstract:
Although guarding against toxic impurities seems to be a primary requirement, impurities that prove non-toxic can be explored for whatever therapeutic potential they may hold, assisting advanced drug discovery. The essential role of pharmaceutical analysis can thus be extended effectively to this end. The present study achieved these objectives through the characterization of the major degradation products, as impurities, of Zileuton, which has been used to treat asthma for years. Forced degradation studies were performed to identify the potential degradation products using ultra-fast liquid chromatography (UFLC). Liquid chromatography-mass spectrometry (time-of-flight; LC-MS-TOF) and proton nuclear magnetic resonance (NMR) studies were used to characterize the drug along with five major oxidative and hydrolytic degradation products (DPs). The mass fragments of Zileuton were identified, and the degradation pathway was investigated. The characterized DPs were subjected to in-silico studies, such as XP molecular docking, to compare the gain or loss in binding affinity with the 5-lipoxygenase enzyme. One of the impurities was found to have a higher binding affinity than the drug itself, indicating its potential to be more bioactive as an antiasthmatic. Close structural resemblance can potentiate or reduce bioactivity and/or toxicity. The chance of biological activity at other sites cannot be ruled out, and this was assessed to some extent through probability-of-activity predictions with the Prediction of Activity Spectra for Substances (PASS) tool. The impurities were predicted to be bioactive as antineoplastics, antiallergics, and inhibitors of complement factor D. Toxicological endpoints such as Ames mutagenicity, carcinogenicity, developmental toxicity, and skin irritancy were evaluated using Toxicity Prediction by Komputer Assisted Technology (TOPKAT). Two of the impurities were found to be non-toxic compared to the original drug, Zileuton. Just as drugs are purposed and repurposed effectively, so can impurities be, since they may have higher binding affinity, lower toxicity, and greater bioactivity at other biological targets.
Keywords: UFLC, LC-MS-TOF, NMR, Zileuton, impurities, toxicity, bio-activity
Procedia PDF Downloads 195
464 Deconstruction of the Term 'Shaman' in the Metaphorical Pair 'Artist as a Shaman'
Authors: Ilona Ivova Anachkova
Abstract:
The analogy between the artist and the shaman, as practitioners who more easily recognize and explore spiritual matters and thus contribute to society in a unique way, has been implied in both Modernity and Postmodernity. The Romantic conception of the shaman as a great artist who helps common men see and understand messages of a higher consciousness has been employed throughout Modernity and is active even now. This paper deconstructs the term ‘shaman’ in the metaphorical analogy ‘artist – shaman’, which was developed more fully in Modernity across different artistic and scientific discourses. The shaman is a figure that, to a certain extent, adequately reflects late modern and postmodern holistic views of the world, views that aim to distance themselves from traditional religious and overly rationalistic discourses. However, the term ‘shaman’ can readily be substituted by other concepts, such as the priest. The concept ‘shaman’ is based on modern ethnographic and historical investigations, and its later philosophical, psychological, and artistic appropriations designate the role of the artist as a spiritual and cultural leader. However, the artist and the shaman are not fully interchangeable terms. The figure of the shaman in ‘primitive’ societies performed many social functions that are now delegated to different institutions and positions. The shaman incorporates the functions of a judge and a healer; he is a link to divine entities; he is the creative, aspiring human being with heightened sensitivity to the world in both its spiritual and material aspects. The metaphorical analogy between the shaman and the artist is built in many ways: both are seen as healers of society, as having a propensity for connection to spiritual entities, or as more inclined to creativity than others. The ‘shaman’, however, is a fashionable word for a spiritual person, used perhaps because of the anti-traditionalist religious views of Modernity and Postmodernity. The figure of the priest is associated with an overly rational, theoretical, and detached attitude towards spiritual matters, while the practices of the shaman and the artist are considered to engage spirituality on a deeper existential level. The term ‘shaman’, however, has no priority over other words and figures that can explore and deploy the spiritual aspects of reality. Having substituted the term ‘shaman’ in the pair ‘artist as a shaman’ with ‘the priest’ or literally ‘anybody’, we witness the destruction of spiritual hierarchies and arrive at the view that everybody is responsible for their own spiritual and creative evolution.
Keywords: artist as a shaman, creativity, extended theory of art, functions of art, priest as an artist
Procedia PDF Downloads 229
463 Cybernetic Model-Based Optimization of a Fed-Batch Process for High Cell Density Cultivation of E. coli in Shake Flasks
Authors: Snehal D. Ganjave, Hardik Dodia, Avinash V. Sunder, Swati Madhu, Pramod P. Wangikar
Abstract:
Batch cultivation of recombinant bacteria in shake flasks results in low cell density due to nutrient depletion. Previous protocols for high-cell-density cultivation in shake flasks have relied mainly on controlled-release mechanisms and extended cultivation protocols. In the present work, we report an optimized fed-batch process for high-cell-density cultivation of recombinant E. coli BL21(DE3) for protein production. A cybernetic model-based, multi-objective optimization strategy was implemented to obtain the operating variables that maximize biomass while minimizing the substrate feed rate. A syringe pump was used to feed a mixture of glycerol and yeast extract into the shake flask. Preliminary experiments were conducted with online monitoring of dissolved oxygen (DO) and offline measurements of biomass and glycerol to estimate the model parameters. Multi-objective optimization was performed to obtain the Pareto front, and the selected optimized recipe was tested on a range of proteins that show different extents of soluble expression in E. coli. These included eYFP and LkADH, which are expressed largely in the soluble fraction; CbFDH and GcanADH, which are partially soluble; and human PDGF, which forms inclusion bodies. The biomass concentrations achieved in 24 h were in the range of 19.9-21.5 g/L, while the model-predicted value was 19.44 g/L. The process was successfully reproduced in a standard laboratory shake flask without online monitoring of DO and pH. The optimized fed-batch process showed significant improvement in both biomass and protein production for the tested recombinant proteins compared to batch cultivation. The proposed process will have significant implications for the routine cultivation of E. coli in various applications.
Keywords: cybernetic model, E. coli, high cell density cultivation, multi-objective optimization
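The authors' cybernetic model is not given in the abstract; the Python sketch below instead uses a much simpler Monod fed-batch model, with purely illustrative parameters, to show how scanning the feed rate traces the biomass-versus-feed trade-off underlying a Pareto front:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simplified Monod fed-batch model (not the authors' cybernetic model):
# X biomass (g/L), S glycerol (g/L), V volume (L); constant feed rate F.
mu_max, Ks, Yxs, Sf = 0.4, 0.2, 0.5, 500.0  # illustrative parameters

def fedbatch(t, y, F):
    X, S, V = y
    S = max(S, 0.0)                 # guard against tiny negative overshoot
    mu = mu_max * S / (Ks + S)      # Monod growth rate
    dX = mu * X - (F / V) * X       # growth minus dilution
    dS = -mu * X / Yxs + (F / V) * (Sf - S)
    dV = F
    return [dX, dS, dV]

def simulate(F, t_end=24.0):
    sol = solve_ivp(fedbatch, (0, t_end), [0.1, 5.0, 0.05],
                    args=(F,), max_step=0.1)
    return sol.y[0, -1], F * t_end  # final biomass, total feed volume

# Scan constant feed rates; each rate trades off final biomass against
# total substrate fed, sketching the Pareto front between the objectives.
for F in np.linspace(5e-4, 4e-3, 8):
    X_end, fed = simulate(F)
    print(f"F={F:.4f} L/h -> biomass {X_end:6.2f} g/L, feed used {fed:.3f} L")
```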
Procedia PDF Downloads 258
462 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method
Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang
Abstract:
Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location in which it operates. Each product has its own sell-in and sell-out time series, which are forecasted on weekly and monthly scales for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution that uses machine learning models for forecasting. Similar products are combined so that there is one model per product category; in this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is flexible enough to include any new product in a category, or eliminate an existing one, as requirements change. We show how the machine learning development environment on Amazon Web Services (AWS) can be used to explore a set of forecasting models and create business intelligence dashboards that work with the existing demand planning tools at Nestlé. We explored recent deep neural network (DNN) architectures, which show promising results for a variety of time series forecasting problems. Specifically, we used a DeepAR autoregressive model, which can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and incorporate domain-specific knowledge, we designed an ensemble approach combining DeepAR with an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%. The machine learning (ML) pipeline implemented in the cloud is currently being extended to other product categories and is being adopted by other geomarkets.
Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series
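The abstract gives no implementation details; the sketch below shows one plausible form of such an ensemble in Python, blending a simulated DeepAR-style base forecast with an XGBoost correction that uses lagged sell-out features. All data and the 50/50 blend weight are invented for illustration and are not Nestlé's pipeline:

```python
import numpy as np
from xgboost import XGBRegressor

# Hypothetical weekly sell-in series; the DeepAR base forecast would come
# from a separately trained model, so here it is simulated.
rng = np.random.default_rng(4)
weeks = np.arange(160)
actual = 100 + 10 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 3, 160)
deepar_pred = actual + rng.normal(0, 4, 160)  # stand-in DeepAR output

# Features: the base forecast plus lagged sell-out information, so that
# sell-out data can influence the sell-in prediction.
sellout_lag1 = np.roll(actual, 1) * 0.9 + rng.normal(0, 2, 160)
X = np.column_stack([deepar_pred, sellout_lag1, weeks % 52])
train, test = slice(0, 120), slice(120, 160)

# XGBoost learns to correct the base forecast from the extra features;
# the ensemble is a simple 50/50 blend of the two predictions.
model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X[train], actual[train])
ensemble = 0.5 * deepar_pred[test] + 0.5 * model.predict(X[test])

def mape(y, yhat):
    return np.mean(np.abs((y - yhat) / y)) * 100

print(f"DeepAR-only MAPE: {mape(actual[test], deepar_pred[test]):.1f}%")
print(f"Ensemble MAPE:    {mape(actual[test], ensemble):.1f}%")
```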
Procedia PDF Downloads 273
461 Prediction of Springback in U-bending of W-Temper AA6082 Aluminum Alloy
Authors: Jemal Ebrahim Dessie, Lukács Zsolt
Abstract:
High-strength aluminum alloys have drawn a lot of attention because of the expanding demand for lightweight vehicle design in the automotive sector. Due to their poor formability at room temperature, warm and hot forming have been advised; however, these methods require additional steps in the production process and an advanced tooling system. In contrast, forming sheets at room temperature in the W-temper condition is advantageous because ordinary tools can be used. However, the springback of supersaturated sheets and their thinning are critical challenges that must be resolved when using this technique. In this study, AA6082-T6 aluminum alloy was solution heat treated at different oven temperatures and times, using a specially designed and developed furnace, in order to optimize the W-temper heat-treatment temperature. A U-shaped bending test was carried out with different time periods between the W-temper heat treatment and the forming operation. Finite element analysis (FEA) of the U-bending was conducted using AutoForm to validate the experimental results. A uniaxial tension-unloading test was performed to determine the kinematic hardening behavior of the material, which was then optimized in the finite element code using systematic process improvement (SPI). The simulation considered the effects of the friction coefficient and the blank-holder force. Springback parameters were evaluated using the geometry adopted from the NUMISHEET '93 benchmark problem. The change of shape was larger for longer time periods between the W-temper heat treatment and forming. The die radius was the most influential parameter for the flange springback; however, the sidewall shape change shows an overall increasing tendency as the punch radius increases relative to the die radius. The springback angles on the flange and sidewall appear to be influenced more strongly by the coefficient of friction than by the blank-holder force, and the effect increases with increasing blank-holder force.
Keywords: aluminum alloy, FEA, springback, SPI, U-bending, W-temper
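The NUMISHEET '93 U-bend benchmark characterizes springback by two angles and the sidewall curl radius; as an illustration only, the following Python sketch computes such measures from digitized profile points (the coordinates and the exact angle conventions are assumed for the example, not taken from the study):

```python
import numpy as np

# Springback measures from digitized profile points after unloading:
# theta1 (flange angle), theta2 (wall angle), sidewall curl radius rho.

def segment_angle(p0, p1):
    """Angle of the line p0 -> p1 with respect to the x-axis, in degrees."""
    d = np.asarray(p1) - np.asarray(p0)
    return np.degrees(np.arctan2(d[1], d[0]))

def circle_radius(pts):
    """Least-squares (Kasa) circle fit through the sidewall points."""
    pts = np.asarray(pts, float)
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.sqrt(c + cx ** 2 + cy ** 2)

# Invented measurement points (mm) on the flange and the sidewall.
flange = [(60.0, 35.0), (90.0, 38.5)]
wall = [(30.0, 2.0), (31.5, 12.0), (33.5, 22.0)]

theta1 = segment_angle(*flange)                    # flange opening angle
theta2 = 90.0 - segment_angle(wall[0], wall[-1])   # wall deviation from 90 deg
rho = circle_radius(wall)                          # sidewall curl radius
print(f"theta1={theta1:.1f} deg, theta2={theta2:.1f} deg, rho={rho:.0f} mm")
```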
Procedia PDF Downloads 100