Search results for: response function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9581

971 Virtual Metering and Prediction of Heating, Ventilation, and Air Conditioning Systems Energy Consumption by Using Artificial Intelligence

Authors: Pooria Norouzi, Nicholas Tsang, Adam van der Goes, Joseph Yu, Douglas Zheng, Sirine Maleej

Abstract:

In this study, virtual meters are designed and used for energy balance measurements of an air handling unit (AHU). The method aims to replace traditional physical sensors in heating, ventilation, and air conditioning (HVAC) systems with simulated virtual meters. Because these systems are difficult to manage and monitor, many HVAC systems suffer from a high level of inefficiency and energy wastage. Virtual meters are implemented and applied in an actual HVAC system, and the results confirm the practicality of mathematical sensors as an alternative means of energy measurement. Most residential buildings and offices are not equipped with advanced sensors, and adding, operating, and monitoring sensors and measurement devices in existing systems can cost thousands of dollars. The first purpose of this study is to provide an energy consumption rate based on available sensors, without any physical energy meters, and to prove that virtual meters can serve as reliable measurement devices in HVAC systems. To demonstrate this concept, mathematical models are created for AHU-07, located in building NE01 of the British Columbia Institute of Technology (BCIT) Burnaby campus. The models are built from the system’s historical data and physical spot measurements, and actual measurements are used to verify the models’ accuracy. Based on preliminary analysis, the resulting mathematical models successfully reproduce energy consumption patterns, and it is concluded that the virtual meter’s results will be close to those that physical meters could achieve. In the second part of this study, the use of virtual meters is further assisted by artificial intelligence (AI) in the HVAC systems of buildings to improve energy management and efficiency. Using a data mining approach, virtual meters’ data is recorded as historical data, and HVAC system energy consumption prediction is implemented in order to achieve substantial energy savings and manage the demand and supply chain effectively. Energy prediction can inform energy-saving strategies and open a window for predictive control aimed at lower energy consumption. To address these challenges, energy prediction can optimize the HVAC system and automate energy consumption to capture savings. This study also investigates the possibility of AI solutions for autonomous HVAC efficiency, allowing a quick and efficient response to energy consumption and cost spikes in the energy market.
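
For illustration, a minimal sketch of how a virtual heating-energy meter might combine readings from existing airflow and temperature sensors; the constants, sensor names, and formula here are illustrative assumptions, not the study's published models:

```python
# A toy virtual heating-energy meter: estimate heating-coil power from
# airflow and temperature readings already available in a typical AHU.
AIR_DENSITY = 1.2   # kg/m^3, approximate air density at room conditions
AIR_CP = 1.006      # kJ/(kg*K), specific heat of air at constant pressure

def heating_power_kw(airflow_m3_s: float, t_mixed_c: float, t_supply_c: float) -> float:
    """Heating power (kW) inferred from existing sensors, no physical meter."""
    mass_flow = AIR_DENSITY * airflow_m3_s               # kg/s
    return mass_flow * AIR_CP * (t_supply_c - t_mixed_c) # kW

# Example: 5 m^3/s of air heated from 15 C (mixed) to 25 C (supply)
print(f"{heating_power_kw(5.0, 15.0, 25.0):.1f} kW")     # ~60.4 kW
```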

Keywords: virtual meters, HVAC, artificial intelligence, energy consumption prediction

Procedia PDF Downloads 91
970 Epigenetic and Archeology: A Quest to Re-Read Humanity

Authors: Salma A. Mahmoud

Abstract:

Epigenetics, or alteration in gene expression influenced by extragenetic factors, has emerged as one of the most promising areas that will address some of the gaps in our current knowledge of patterns of human variation. In the last decade, research investigating epigenetic mechanisms in many fields has flourished and witnessed significant progress. It has paved the way for a new era of integrated research, especially between anthropology/archeology and the life sciences. Skeletal remains are considered the most significant source of information for studying human variation across history, and by utilizing these valuable remains, we can interpret past events, cultures, and populations. In addition to their archeological, historical, and anthropological importance, studying bones has great implications in other fields such as medicine and science. Bones can also hold within them the secrets of the future, as they can act as predictive tools for health, societal characteristics, and dietary requirements. Bones in their basic form are composed of cells (osteocytes) that are affected by both genetic and environmental factors; genetics alone can explain only a small part of their variability. The primary objective of this project is to examine the epigenetic landscape/signature within the bones of archeological remains as a novel marker that could reveal new ways to conceptualize chronological events, gender differences, social status, and ecological variations. We attempt here to address discrepancies in common variants such as the methylome, as well as novel epigenetic regulators such as chromatin remodelers, which to the best of our knowledge have not yet been investigated by anthropologists/paleoepigeneticists, using a plethora of techniques (biological, computational, and statistical). Moreover, extracting epigenetic information from bones will highlight the importance of osseous material as a vector for studying human beings in several contexts (social, cultural, and environmental), and strengthen its essential role as a model system that can be used to investigate and reconstruct various cultural, political, and economic events. We also address all the steps required to plan and conduct an epigenetic analysis of bone materials (modern and ancient), as well as discussing the key challenges facing researchers aiming to investigate this field. In conclusion, this project will serve as a primer for bioarcheologists/anthropologists and human biologists interested in incorporating epigenetic data into their research programs. Understanding the roles of epigenetic mechanisms in bone structure and function will be very helpful for a better comprehension of bone biology, highlighting its essentiality as an interdisciplinary vector and a key material in archeological research.

Keywords: epigenetics, archeology, bones, chromatin, methylome

Procedia PDF Downloads 97
969 Through the Robot’s Eyes: A Comparison of Robot-Piloted, Virtual Reality, and Computer Based Exposure for Fear of Injections

Authors: Bonnie Clough, Tamara Ownsworth, Vladimir Estivill-Castro, Matt Stainer, Rene Hexel, Andrew Bulmer, Wendy Moyle, Allison Waters, David Neumann, Jayke Bennett

Abstract:

The success of global vaccination programs is reliant on the uptake of vaccines to achieve herd immunity. Yet, many individuals do not obtain vaccines or undergo venipuncture procedures when needed. Whilst health education may be effective for those individuals who are hesitant due to safety or efficacy concerns, for many the primary concern relates to blood or injection fear or phobia (BII). BII is highly prevalent and associated with a range of negative health impacts at both individual and population levels. Exposure therapy is an efficacious treatment for specific phobias, including BII, but has high patient dropout and low implementation by therapists. Whilst virtual reality approaches to exposure therapy may be more acceptable, they have similarly low rates of implementation by therapists and are often difficult to tailor to an individual client’s needs. It was proposed that a piloted robot may be able to adequately facilitate fear induction and be an acceptable approach to exposure therapy. The current study examined fear induction responses, acceptability, and feasibility of a piloted robot for BII exposure. A Nao humanoid robot was programmed to connect with a virtual reality head-mounted display, enabling live streaming and exploration of real environments from a distance. Thirty adult participants with BII fear were randomly assigned to robot-piloted or virtual reality exposure conditions in a laboratory-based fear exposure task. All participants also completed a computer-based two-dimensional exposure task, with the order of conditions counterbalanced across participants. Measures included fear (heart rate variability, galvanic skin response, stress indices, and subjective units of distress), engagement with the feared stimulus (eye gaze: time to first fixation and total number of fixations), acceptability, and perceived treatment credibility. Preliminary results indicate that fear responses can be adequately induced via a robot-piloted platform. Further results will be discussed, as will implications for the treatment of BII phobia and other fears. It is anticipated that piloted robots may provide a useful platform for facilitating exposure therapy, being more acceptable than in-vivo exposure and more flexible than virtual reality exposure.

Keywords: anxiety, digital mental health, exposure therapy, phobia, robot, virtual reality

Procedia PDF Downloads 68
968 Identification of Natural Liver X Receptor Agonists as the Treatments or Supplements for the Management of Alzheimer and Metabolic Diseases

Authors: Hsiang-Ru Lin

Abstract:

Cholesterol plays an essential role in the progression of numerous important diseases, including atherosclerosis and Alzheimer disease, so there is an urgent need to develop suitable cholesterol-lowering reagents. Liver X receptor (LXR) is a ligand-activated transcription factor whose natural ligands are cholesterols, oxysterols, and glucose. Once activated, LXR can transactivate the transcription of various genes, including CYP7A1, ABCA1, and SREBP1c, involved in lipid metabolism, glucose metabolism, and the inflammatory pathway. Essentially, the upregulation of ABCA1 facilitates cholesterol efflux from cells and attenuates the production of beta-amyloid (ABeta) 42 in the brain, so LXR is a promising target for developing cholesterol-lowering reagents and preventative treatments for Alzheimer disease. Engelhardia roxburghiana is a deciduous tree growing in India, China, and Taiwan, whose chemical constituents have so far been reported only to exhibit antitubercular and anti-inflammatory effects. In this study, four compounds isolated from the root of Engelhardia roxburghiana (engelheptanoxides A and C, and engelhardiols A and B) were evaluated for their agonistic activity against LXR by transient transfection reporter assays in HepG2 cells. Furthermore, their interaction modes with the LXR ligand-binding pocket were generated by molecular modeling programs. In the cell-based biological assays, engelheptanoxides A and C and engelhardiols A and B, which showed no cytotoxic effect against the proliferation of HepG2 cells, exerted clear LXR agonistic effects with activity similar to that of T0901317, a synthetic LXR agonist. Further modeling studies, including docking and SAR (structure-activity relationship) analyses, showed that these compounds can occupy the LXR ligand-binding pocket in a similar manner to T0901317. Thus, LXR is one of the nuclear receptors targeted by the pharmaceutical industry for developing treatments for Alzheimer disease and atherosclerosis. Importantly, the cell-based assays, together with molecular modeling studies suggesting a plausible binding mode, demonstrate that engelheptanoxides A and C and engelhardiols A and B function as LXR agonists. This is the first report to demonstrate that the extract of Engelhardia roxburghiana contains LXR agonists. As such, these active components of Engelhardia roxburghiana, or subsequent analogs, may show important therapeutic effects through selective modulation of the LXR pathway.

Keywords: Liver X receptor (LXR), Engelhardia roxburghiana, CYP7A1, ABCA1, SREBP1c, HepG2 cells

Procedia PDF Downloads 411
967 Adaptation of the Scenario Test for Greek-speaking People with Aphasia: Reliability and Validity Study

Authors: Marina Charalambous, Phivos Phylactou, Thekla Elriz, Loukia Psychogios, Jean-Marie Annoni

Abstract:

Background: Evidence-based practices for the evaluation and treatment of people with aphasia (PWA) in Greek are mainly impairment-based. Functional and multimodal communication is usually under-assessed and neglected by clinicians. This study explores the adaptation and psychometric testing of the Greek (GR) version of The Scenario Test. The Scenario Test assesses the everyday functional communication of PWA in an interactive multimodal communication setting with the support of an active communication facilitator. Aims: To define the reliability and validity of The Scenario Test-GR and discuss its clinical value. Methods & Procedures: The Scenario Test-GR was administered to 54 people with chronic stroke (6+ months post-stroke): 32 PWA and 22 people with stroke without aphasia. Participants were recruited from Greece and Cyprus. All measures were performed in an interview format. Standard psychometric criteria were applied to evaluate the reliability (internal consistency, test-retest, and interrater reliability) and validity (construct and known-groups validity) of The Scenario Test-GR. Video analysis was performed for the qualitative examination of the communication modes used. Outcomes & Results: The Scenario Test-GR shows high levels of reliability and validity. High scores of internal consistency (Cronbach’s α = .95), test-retest reliability (ICC = .99), and interrater reliability (ICC = .99) were found. Interrater agreement on individual items fell between good and excellent levels of agreement. Correlations with a tool measuring language function in aphasia (the Aphasia Severity Rating Scale of the Boston Diagnostic Aphasia Examination), a measure of functional communication (the Communicative Effectiveness Index), and two instruments examining the psychosocial impact of aphasia (the Stroke and Aphasia Quality of Life questionnaire and the Aphasia Impact Questionnaire) revealed good convergent validity (all ps < .05). Results showed good known-groups validity (Mann-Whitney U = 96.5, p < .001), with significantly higher scores for participants without aphasia compared to those with aphasia. Conclusions: The psychometric qualities of The Scenario Test-GR support the reliability and validity of the tool for the assessment of functional communication in Greek-speaking PWA. The Scenario Test-GR can be used to assess multimodal functional communication, orient aphasia rehabilitation goal-setting towards the activity and participation level, and serve as an outcome measure of everyday communication. Future studies will focus on measuring sensitivity to change in PWA with severe non-fluent aphasia.

Keywords: the scenario test GR, functional communication assessment, people with aphasia (PWA), tool validation

Procedia PDF Downloads 115
966 Cumulative Pressure Hotspot Assessment in the Red Sea and Arabian Gulf

Authors: Schröde C., Rodriguez D., Sánchez A., Abdul Malak, Churchill J., Boksmati T., Alharbi, Alsulmi H., Maghrabi S., Mowalad, Mutwalli R., Abualnaja Y.

Abstract:

Formulating a strategy for sustainable development of the Kingdom of Saudi Arabia’s coastal and marine environment is at the core of the “Marine and Coastal Protection Assessment Study for the Kingdom of Saudi Arabia Coastline (MCEP)”, which was set up in the context of Vision 2030 by the Saudi Arabian government and aimed at providing a first comprehensive ‘Status Quo Assessment’ of the Kingdom’s marine environment to inform a sustainable development strategy and serve as a baseline for future monitoring activities. This baseline assessment relied on scientific evidence of the drivers, pressures, and their impacts on the environments of the Red Sea and Arabian Gulf. A key element of the assessment was the cumulative pressure hotspot analysis developed for the Kingdom’s national waters in both seas, following the principles of the Driver-Pressure-State-Impact-Response (DPSIR) framework and using the cumulative pressure and impact assessment (CPIA) methodology. The ultimate goals of the analysis were to map and assess the main hotspots of environmental pressure and to identify priority areas for further field surveillance and urgent management action. The study identified maritime transport, fisheries, aquaculture, oil, gas, energy, coastal industry, coastal and maritime tourism, and urban development as the main drivers of pollution in Saudi Arabian marine waters. For each of these drivers, pressure indicators were defined to spatially assess the potential influence of the drivers on the coastal and marine environment. Based on the assessment, a list of 90 hotspot locations was identified; spatially grouped, the list reduces to 10 hotspot areas, two in the Arabian Gulf and eight in the Red Sea. The hotspot mapping revealed clear spatial patterns of drivers, pressures, and hotspots within the marine environment of waters under KSA’s maritime jurisdiction in the Red Sea and Arabian Gulf. The cascading assessment approach based on the DPSIR framework ensured that the root causes of the hotspot patterns, i.e., the human activities and other drivers, could be identified. The adapted CPIA methodology allowed the available data to be combined into a consistent spatial assessment of cumulative pressure, and the most critical hotspots to be identified by determining the overlap of cumulative pressure with areas of sensitive biodiversity. Further improvements are expected from enhancing the data sources for drivers and pressure indicators, fine-tuning the decay factors and distances of the pressure indicators, and including trans-boundary pressures across the regional seas.
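
As an illustration of the CPIA scoring logic, a minimal sketch that sums normalized pressure layers weighted by habitat sensitivity and flags high-scoring grid cells; the layers, weights, and threshold are hypothetical, not the study's data:

```python
import numpy as np

# Toy CPIA-style score over a 3x3 grid: for each cell, sum the normalized
# pressure layers weighted by a (hypothetical) habitat sensitivity weight.
pressures = {
    "shipping":    np.array([[0.9, 0.2, 0.0],
                             [0.4, 0.1, 0.0],
                             [0.0, 0.0, 0.0]]),
    "aquaculture": np.array([[0.0, 0.0, 0.3],
                             [0.0, 0.5, 0.6],
                             [0.0, 0.2, 0.8]]),
}
sensitivity = {"shipping": 0.7, "aquaculture": 0.9}  # illustrative weights

cumulative = sum(sensitivity[name] * layer for name, layer in pressures.items())
hotspots = cumulative > 0.6   # arbitrary threshold for flagging hotspot cells
print(cumulative.round(2))
print(hotspots)
```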

Keywords: Arabian Gulf, DPSIR, hotspot, red sea

Procedia PDF Downloads 127
965 OnabotulinumtoxinA Injection for Glabellar Frown Lines as an Adjunctive Treatment for Depression

Authors: I. Witbooi, J. De Smidt, A. Oelofse

Abstract:

Negative emotions that are common in depression are coupled with the activation of the corrugator supercilii and procerus muscles in the glabellar region of the face. This research investigated the impact of OnabotulinumtoxinA (BOTOX) on the improvement of emotional states in depressed subjects by relaxing these muscles. The aim of the study was to investigate the effectiveness of BOTOX treatment for glabellar frown lines as an adjunctive therapy for Major Depressive Disorder (MDD) and to improve the quality of life and self-esteem of the subjects. It is hypothesized that BOTOX treatment for glabellar frown lines significantly reduces depressive symptoms and therefore augments conventional antidepressant medication. Forty-five (45) subjects diagnosed with MDD were assigned to a treatment (n = 15), placebo (n = 15), or control (n = 15) group. The treatment group received BOTOX injections, while the placebo group received saline injections into the procerus and corrugator supercilii muscles, with follow-up visits at weeks 3, 6, and 12. The control group received neither BOTOX nor saline injections and was only interviewed again in the 12th week. To evaluate the effect of BOTOX treatment in the glabellar region on depressive symptoms, the Montgomery-Asberg Depression Rating Scale (MADRS) and the Beck Depression Inventory (BDI) were used. The Quality of Life Enjoyment and Satisfaction Questionnaire-Short Form (Q-LES-Q-SF) and the Rosenberg Self-Esteem Scale (RSES) were used to assess self-esteem and quality of life. Participants were followed up for a 12-week period. The expected primary outcome measure is response to treatment, defined as a ≥ 50% reduction in MADRS score from baseline. Other outcome measures include a clinically significant decrease in BDI scores and increases in quality of life and self-esteem, respectively. Initial results showed trends towards the expected differences. Patients in the BOTOX group had a mean MADRS score of 14.0 at 3 weeks compared to 20.3 in the placebo group. This trend was still visible at 6 weeks, with the BOTOX and placebo groups scoring an average of 10 vs. 18, respectively. The mean reductions in MADRS scores from baseline to 3 weeks were 9.3 and 2.0 for the BOTOX and placebo groups, respectively. Similarly, BDI scores were lower in the BOTOX group (17.25) than in the placebo group (19.43). The two self-esteem questionnaires showed the expected results at this stage, with RSES scores of 19.1 in the BOTOX group compared to 18.6 in the placebo group. Similarly, BOTOX patients had a higher Q-LES-Q-SF score of 49.2 compared to 46.1 for the placebo group. Conclusions: Initial results demonstrated that the use of BOTOX had positive effects on both depression and self-esteem scores when compared to a placebo group.

Keywords: adjunctive therapy, depression, glabellar area, OnabotulinumtoxinA

Procedia PDF Downloads 126
964 Microbiological Profile of UTI along with Their Antibiotic Sensitivity Pattern with Special Reference to Nitrofurantoin

Authors: Rupinder Bakshi, Geeta Walia, Anita Gupta

Abstract:

Introduction: Urinary tract infections (UTI) are considered to be one of the most common bacterial infections, with an estimated annual global incidence of 150 million. Antimicrobial drug resistance is one of the major threats due to the widespread use of uncontrolled antibiotics. Materials and Methods: A total of 9149 urine samples were collected from R.H. Patiala and processed in the Department of Microbiology, G.M.C. Patiala. Urine samples were inoculated on MacConkey’s and blood agar plates using a calibrated loop delivering 0.001 ml of sample and incubated at 37 °C for 24 hrs. The organisms were identified by colony characters, Gram staining, and biochemical reactions. Antimicrobial susceptibility of the isolates was determined against various antimicrobial agents (Hi-Media, Mumbai, India) by the Kirby-Bauer disk diffusion method on Mueller-Hinton agar plates. Results: Most patients were in the age group of 21-30 yrs, followed by 31-40 yrs. Males (34%) were less prone to urinary tract infections than females (66%). Out of 9149 urine samples, culture was positive in 25% (2290). Esch. coli was the most common isolate at 60.3% (n = 1378), followed by Klebsiella pneumoniae 13.5% (n = 310), Proteus spp. 9% (n = 209), Staphylococcus aureus 7.6% (n = 173), Pseudomonas aeruginosa 3.7% (n = 84), Citrobacter spp. 3.1% (n = 70), Staphylococcus saprophyticus 1.8% (n = 142), Enterococcus faecalis 0.8% (n = 19), and Acinetobacter spp. 0.2% (n = 5). Gram-negative isolates showed higher sensitivity towards piperacillin-tazobactam (67%), amikacin (80%), nitrofurantoin (82%), aztreonam (100%), imipenem (100%), and meropenem (100%), while Gram-positive isolates showed a good response towards netilmicin (69%), nitrofurantoin (79%), linezolid (98%), vancomycin (100%), and teicoplanin (100%). 465 (23%) isolates were resistant to penicillins and first- and second-generation cephalosporins; these were further tested by the double disk approximation test and combined disk method for ESBL production. Out of the 465 isolates, 375 were ESBL producers, consisting of 264 (70.6%) Esch. coli and 111 (29.4%) Klebsiella pneumoniae. Susceptibility of ESBL producers to imipenem, nitrofurantoin, and amikacin was found to be 100%, 76%, and 75%, respectively. Conclusion: Uropathogens are increasingly showing resistance to many antibiotics, making empiric management of outpatient UTIs challenging. Ampicillin, cotrimoxazole, and ciprofloxacin should not be used in empiric treatment. Nitrofurantoin could be used in lower urinary tract infection. Knowledge of uropathogens and their antimicrobial susceptibility patterns in a geographical region will help in appropriate and judicious antibiotic usage in a health care setup.

Keywords: Urinary Tract Infection, UTI, antibiotic susceptibility pattern, ESBL

Procedia PDF Downloads 333
963 Impact of Reproductive Technologies on Women's Lives in New Delhi: A Study from Feminist Perspective

Authors: Zairunisha

Abstract:

This paper is concerned with the ways in which Assisted Reproductive Technologies (ARTs) affect women’s lives and perceptions regarding their infertility, contraception, and reproductive health. Like other female animals, nature has endowed the human female with the biological potential of procreation and motherhood. However, during the last few decades, this phenomenal disposition of women has become a technological affair to achieve fertility and contraception. Medical practices in patriarchal societies are governed by male scientists and technical and medical professionals who try to control women as procreators instead of providing them with choices. The use of ARTs presents innumerable vexed ethical questions and issues, such as: the place and role of a child in a woman’s life, the freedom of women to make their own choices related to the use of ARTs, the challenges and complexities women face at social and personal levels regarding the use of ARTs, and the effect of ARTs on their lives as mothers and on other relationships. The paper is based on a survey study to explore and analyze the above ethical issues arising from the use of Assisted Reproductive Technologies (ARTs) by women in New Delhi, the capital of India, where a rapid increase in fertility clinics has been noticed recently. It is claimed that these clinics serve women by using ART procedures for infertile couples and individuals who want to have a child or terminate a pregnancy. The study is an attempt to articulate a critique of ARTs from a feminist perspective. A qualitative feminist research methodology has been adopted for conducting the survey study. An attempt has been made to identify the ways in which a woman’s life is affected in terms of her perceptions, apprehensions, choices, and decisions regarding new reproductive technologies. A sample of 18 women of New Delhi was taken for in-depth interviews to investigate their perceptions and responses concerning the use of ARTs, with a focus on (i) successful use of ARTs, (ii) unsuccessful use of ARTs, and (iii) use of ARTs in progress with results yet to be known. The survey investigated the impact of ARTs on women’s physical, emotional, and psychological conditions as well as on their social relations and choices. The complexities and challenges faced by women in the voluntary and involuntary (forced) use of ARTs in Delhi are illustrated. A critical analysis of the interviews revealed that these technologies are used and developed for making profits at the cost of women’s lives, through which economically privileged women and individuals are able to purchase services from less privileged ones. In this way, the amalgamation of technology and cultural traditions is redefining and re-conceptualising the traditional patterns of motherhood, fatherhood, kinship, and family relations within the realm of the new ways of reproduction introduced through the use of ARTs.

Keywords: reproductive technologies, infertilities, voluntary, involuntary

Procedia PDF Downloads 365
962 De novo Transcriptome Assembly of Lumpfish (Cyclopterus lumpus L.) Brain Towards Understanding their Social and Cognitive Behavioural Traits

Authors: Likith Reddy Pinninti, Fredrik Ribsskog Staven, Leslie Robert Noble, Jorge Manuel de Oliveira Fernandes, Deepti Manjari Patel, Torstein Kristensen

Abstract:

Understanding fish behavior is essential to improving animal welfare in aquaculture research. Behavioral traits can have a strong influence on fish health and habituation. To identify the genes and biological pathways responsible for lumpfish behavior, we performed an experiment to understand the interspecies relationship (mutualism) between lumpfish and salmon. We also tested the correlation between gene expression data and observational/physiological data to identify the essential genes that trigger stress and swimming behavior in lumpfish. After the de novo assembly of the brain transcriptome, all the samples were individually mapped to the available lumpfish (Cyclopterus lumpus L.) primary genome assembly (fCycLum1.pri, GCF_009769545.1). Out of ~16749 genes expressed in the brain samples, we found 267 genes to be statistically significant (P < 0.05), found only in the odor vs. control (1), model vs. control (41), and salmon vs. control (225) comparisons. However, only eight genes had |logFC| ≥ 0.5; these are considered differentially expressed genes (DEGs). Thus, we were unable to find differential genes related to the behavioral traits from the RNA-Seq data analysis alone. From the correlation analysis between the gene expression data and the observational/physiological data (serotonin (5HT), dopamine (DA), 3,4-dihydroxyphenylacetic acid (DOPAC), 5-hydroxyindoleacetic acid (5-HIAA), and noradrenaline (NORAD)), we found 2495 genes to be significant (P < 0.05), and among these, 1587 genes were positively correlated with the noradrenaline (NORAD) hormone group. This suggests that noradrenaline triggers the change in pigmentation and skin color in lumpfish. Genes related to behavioral traits, such as rhythmic, locomotory, feeding, visual, pigmentation, and stress-related genes, genes involved in response to other organisms and taxis, and genes for dopamine and other neurotransmitter synthesis, were obtained from the correlation analysis. In the KEGG pathway enrichment analysis, we found important pathways, like the calcium signaling pathway and adrenergic signaling in cardiomyocytes, both involved in cell signaling, behavior, emotion, and stress. Calcium is an essential signaling molecule in brain cells and could affect the behavior of fish. Our results suggest that changes in calcium homeostasis and adrenergic receptor binding activity lead to changes in fish behavior during stress.
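
As a minimal sketch of the DEG filter described above (p < 0.05 and |logFC| ≥ 0.5), assuming a generic differential-expression results table with hypothetical column names:

```python
import pandas as pd

# Toy results table; in practice this would come from the RNA-Seq analysis.
df = pd.DataFrame({
    "gene":   ["g1", "g2", "g3", "g4"],
    "logFC":  [0.8, -0.6, 0.1, 0.7],
    "pvalue": [0.01, 0.04, 0.20, 0.30],
})

# Keep genes that pass both the significance and effect-size thresholds.
degs = df[(df["pvalue"] < 0.05) & (df["logFC"].abs() >= 0.5)]
print(degs)   # g1 and g2 qualify as DEGs
```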

Keywords: behavior, De novo, lumpfish, salmon

Procedia PDF Downloads 162
961 Analyzing Transit Network Design versus Urban Dispersion

Authors: Hugo Badia

Abstract:

This research addresses which transit network structure is most suitable to serve specific demand requirements under an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips, an approach based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks where transferring is essential to complete most trips. To answer which of them is the better option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct-trip-based network; and a transfer-based one, the latter two representing the alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given scenario of dispersion, the best alternative is the structure with the minimum cost. This dispersion degree is defined in a simple way, considering that only a central area attracts all trips: if this area is small, we have a highly concentrated mobility pattern; if this area is large, the city is highly decentralized. In this first step, we can determine the area of applicability for each structure as a function of the urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances further, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers. The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, defined by the Gini coefficient, and centralization, defined by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology allows us to obtain the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
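
For illustration, a minimal sketch of the concentration measure mentioned above: a Gini coefficient computed over hypothetical trip attractions per zone (the data and zone definitions are illustrative):

```python
import numpy as np

# Gini coefficient of a non-negative array: 0 = perfectly even spread,
# values near 1 = demand highly concentrated in a few zones.
def gini(values):
    x = np.sort(np.asarray(values, dtype=float))  # ascending order
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * x.sum()) - (n + 1) / n

# Hypothetical trips attracted by each zone of a city
trip_ends = [120, 80, 40, 10, 5]
print(f"Gini = {gini(trip_ends):.2f}")  # ~0.47: fairly concentrated demand
```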

Keywords: analytical network design model, network structure, public transport, urban dispersion

Procedia PDF Downloads 222
960 Automatic Aggregation and Embedding of Microservices for Optimized Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed onto different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to two issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to address these issues: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding addresses communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface, without the need for a load balancer between microservices A and B (see the sketch below). Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
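
A minimal sketch of the embedding example above, pairing instances of two communicating services one-per-machine; the data structures are illustrative and not the i2kit input format:

```python
# Pair instance i of service A with instance i of service B on machine m(i+1),
# so each pair communicates over localhost with no load balancer in between.
def embed(service_a, service_b, replicas):
    return [
        {"machine": f"m{i + 1}",
         "containers": [f"{service_a}{i + 1}", f"{service_b}{i + 1}"]}
        for i in range(replicas)
    ]

for placement in embed("a", "b", replicas=2):
    print(placement)
# {'machine': 'm1', 'containers': ['a1', 'b1']}
# {'machine': 'm2', 'containers': ['a2', 'b2']}
```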

Keywords: aggregation, deployment, embedding, resource allocation

Procedia PDF Downloads 192
959 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when the algorithm performed better than the best known classical algorithm of the time for Max-Cut graphs. Whilst classical algorithms have since improved and returned to being faster and more efficient, this was a huge milestone for quantum computing, and the original work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover’s or Shor’s, highlights to the world the potential that quantum computing holds. It also presents the prospect of a real quantum advantage where, if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side of things in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate in creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can also be parallelized to speed up the search and optimization problem. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, a linear-approximation-based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
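
As an illustration of the hybrid loop described above, a minimal gradient-free evolutionary search over the 2p QAOA angles; the cost function here is a placeholder standing in for the Max-Cut expectation value that would normally be estimated by executing the QAOA circuit:

```python
import random

def qaoa_cost(params):
    # Placeholder for circuit evaluation: in the real algorithm this would
    # run the QAOA circuit with angles `params` and return -<Max-Cut value>.
    return sum((x - 0.5) ** 2 for x in params)

def evolve(n_params=4, pop_size=20, generations=50, sigma=0.1):
    # Random initial population of angle vectors (gammas and betas in [0, pi])
    pop = [[random.uniform(0, 3.14) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=qaoa_cost)              # lower cost = better individual
        parents = pop[: pop_size // 4]       # truncation selection (elitism)
        children = [
            [g + random.gauss(0, sigma) for g in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]                                    # Gaussian mutation of parents
        pop = parents + children
    return min(pop, key=qaoa_cost)

best = evolve()
print(qaoa_cost(best))
```

Because each individual's cost evaluation is independent, the inner loop parallelizes naturally across a population, which is the speedup avenue mentioned above.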

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 49
958 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation

Authors: W. Meron Mebrahtu, R. Absi

Abstract:

Velocity distribution in turbulent open-channel flows is organized in a complex manner. This is due to the large spatial and temporal variability of fluid motion resulting from the free-surface turbulent flow condition. The phenomenon is complicated further by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models suitable for engineering applications. However, predictions are often inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. Therefore, the aim of this work is to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplifications of the general transport equations and an accurate representation of the eddy viscosity. A wide rectangular open channel seems suitable to begin the study; the other assumptions are a smooth wall and sediment-free flow under steady and uniform flow conditions. These assumptions allow examining the effects of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profile: one from the Reynolds-averaged Navier-Stokes (RANS) equation, and one from the equilibrium between turbulent kinetic energy (TKE) production and dissipation. Different analytic models for the eddy viscosity, TKE, and mixing length were then assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl’s eddy viscosity model and the Van Driest mixing length gives a more precise result. For the log layer and outer region, a mixing-length equation derived from Von Karman’s similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for the eddy viscosity is used. This method yields more accurate velocity profiles with a single value of the damping coefficient that remains valid under different flow conditions. This work continues with the investigation of narrow channels, complex geometries, and the effect of solids transported in sewers.
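
For reference, the standard wall-region closures mentioned above (the Van Driest damped mixing length and the log law) take the following textbook forms in inner variables; these are the classical expressions, not the authors' final model:

```latex
\begin{align}
  l_m &= \kappa\, y \left(1 - e^{-y^{+}/A^{+}}\right), \qquad A^{+} \approx 26,\\
  \nu_t &= l_m^{2}\left|\frac{\partial u}{\partial y}\right|,\\
  u^{+} &= \frac{1}{\kappa}\,\ln y^{+} + B, \qquad \kappa \approx 0.41,\; B \approx 5.2.
\end{align}
```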

Keywords: accuracy, eddy viscosity, sewers, velocity profile

Procedia PDF Downloads 101
957 The Effects of Computer Game-Based Pedagogy on Graduate Students Statistics Performance

Authors: Eva Laryea, Clement Yeboah

Abstract:

A pretest-posttest within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the Southeastern United States. We analyzed pretest-posttest differences using paired-samples t-tests for achievement and for statistics anxiety. The results of the t-test for knowledge in statistics were statistically significant, indicating significant mean gains in statistical knowledge as a function of the game-based intervention. Likewise, the results of the t-test for statistics-related anxiety were also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help to create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help to develop important skills such as problem solving, critical thinking, and collaboration. Students can develop an interest in the subject matter and spend quality time learning the course material as they play the game, without realizing that they are learning a supposedly hard course. The future directions of the present study are promising as technology continues to advance and become more widely available. Some potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools. It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way graduate students learn basic statistics and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers, and will continue to be a dynamic and rapidly evolving field for years to come.
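
A minimal sketch of the paired-samples analysis described above, using SciPy with made-up pre/post achievement scores:

```python
from scipy import stats

# Hypothetical pre- and post-intervention scores for the same six students
pre  = [55, 60, 48, 70, 66, 52]
post = [62, 68, 55, 75, 71, 60]

# Paired-samples t-test: tests whether the mean within-student change is zero
t, p = stats.ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.4f}")
```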

Keywords: pretest-posttest within subjects, experimental design, achievement, statistics-related anxiety

Procedia PDF Downloads 51
956 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia

Authors: Jun Won Kim

Abstract:

Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to the best of our knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing resting-state TGC between the two groups and to evaluate the diagnostic utility of TGC. Method: The subjects included 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naive (first episode) or had not been taking psychoactive drugs for one month before the study, we could exclude the influence of medications. Six frequency bands were defined for the spectral analyses: delta (1–4 Hz), theta (4–8 Hz), slow alpha (8–10 Hz), fast alpha (10–13.5 Hz), beta (13.5–30 Hz), and gamma (30–80 Hz). The spectral power of the EEG data was calculated by fast Fourier transformation using the 'spectrogram.m' function of the Signal Processing Toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the discriminating ability of the TGC data for schizophrenia diagnosis. Results: The patients with schizophrenia showed a significant increase in resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, displaying an overall classification accuracy of 92.5%. Conclusion: As TGC includes phase, which contains information about neuronal interactions, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to the compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify the potential utility of TGC.
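
As an illustration, a minimal sketch of one common phase-amplitude coupling estimate (the mean-vector-length modulation index) on synthetic data; the study's exact TGC pipeline is not specified here, so the filter settings and coupling measure are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def tgc_mvl(eeg, fs):
    """Mean-vector-length coupling between theta phase and gamma amplitude."""
    theta_phase = np.angle(hilbert(bandpass(eeg, 4, 8, fs)))   # 4-8 Hz phase
    gamma_amp = np.abs(hilbert(bandpass(eeg, 30, 80, fs)))     # 30-80 Hz amplitude
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))

fs = 250                                   # sampling rate in Hz (synthetic)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 6 * t) + 0.1 * np.random.randn(t.size)
print(tgc_mvl(eeg, fs))
```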

Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility

Procedia PDF Downloads 127
955 Exploring the Differences between Self-Harming and Suicidal Behaviour in Women with Complex Mental Health Needs

Authors: Sophie Oakes-Rogers, Di Bailey, Karen Slade

Abstract:

Female offenders are a uniquely vulnerable group who are at high risk of suicide. Whilst the prevention of self-harm and suicide remains a key global priority, we need to better understand the relationship between these challenging behaviours, which constitute a pressing problem, particularly in environments designed to prioritise safety and security. Method choice is unlikely to be random; it is instead influenced by a range of cultural, social, psychological, and environmental factors, which change over time and between countries. A key aspect of self-harm and suicide in women receiving forensic care is the lack of free access to methods. At a time when self-harm and suicide rates continue to rise internationally, understanding the role of these influencing factors and the impact of current suicide prevention strategies on the use of near-lethal methods is crucial. This poster presentation will present findings from 25 interviews and 3 focus groups, which employed a Participatory Action Research approach to explore the differences between self-harming and suicidal behaviour. A key element of this research was using the lived experiences of women receiving forensic care from one forensic pathway in the UK, and of the staff who care for them, to discuss the role of near-lethal self-harm (NLSH). The findings and suggestions from the lived accounts of the women and staff will inform a draft assessment tool that better assesses the risk of suicide based on the lethality of methods. This tool will be the first of its kind to specifically capture the needs of women receiving forensic services. Preliminary findings indicate that women engage in NLSH for two key reasons, determined by their history of self-harm. Women who have a history of superficial, non-life-threatening self-harm appear to engage in NLSH in response to a significant life event, such as family bereavement or sentencing. For these women, suicide appears to be a realistic option to overcome their distress. This differs from women who appear to have a lifetime history of NLSH, who engage in such behaviour in a bid to overcome the grief and shame associated with historical abuse. NLSH in these women reflects a lifetime of suicidality and indicates that they pose the greatest risk of completed suicide. Findings also indicate differences in method selection between forensic provisions. Restriction of means appears to play a role in method selection, and findings suggest it causes method substitution. Implications will be discussed relating to the screening of female forensic patients and improvements to current suicide prevention strategies.

Keywords: forensic mental health, method substitution, restriction of means, suicide

Procedia PDF Downloads 166
954 Predictive Modelling of Curcuminoid Bioaccessibility as a Function of Food Formulation and Associated Properties

Authors: Kevin De Castro Cogle, Mirian Kubo, Maria Anastasiadi, Fady Mohareb, Claire Rossi

Abstract:

Background: The bioaccessibility of bioactive compounds is a critical determinant of the nutritional quality of various food products. Despite its importance, there are only a limited number of comprehensive studies assessing how the composition of a food matrix influences the bioaccessibility of a compound of interest. This knowledge gap has prompted a growing need to investigate the intricate relationship between food matrix formulations and the bioaccessibility of bioactive compounds. One class of bioactive compounds that has attracted considerable attention is the curcuminoids. These naturally occurring phytochemicals, extracted from the roots of Curcuma longa, have gained popularity owing to their purported health benefits, and are also well known for their poor bioaccessibility. Project aim: The primary objective of this research project is to systematically assess the influence of matrix composition on the bioaccessibility of curcuminoids. Additionally, this study aimed to develop a series of predictive models for bioaccessibility, providing valuable insights for optimising the formulation of functional foods and more descriptive nutritional information for potential consumers. Methods: Food formulations enriched with curcuminoids were subjected to in vitro digestion simulation, and their bioaccessibility was characterized with chromatographic and spectrophotometric techniques. The resulting data served as the foundation for the development of predictive models capable of estimating bioaccessibility based on specific physicochemical properties of the food matrices. Results: One striking finding of this study was the strong correlation observed between the concentration of macronutrients within the food formulations and the bioaccessibility of curcuminoids. In fact, macronutrient content emerged as a highly informative explanatory variable of bioaccessibility and was used, alongside other variables, as a predictor in a Bayesian hierarchical model that predicted curcuminoid bioaccessibility accurately (optimisation performance of 0.97 R2) for the majority of cross-validated test formulations (LOOCV of 0.92 R2). These preliminary results open the door to further exploration, enabling researchers to investigate a broader spectrum of food matrix types and additional properties that may influence bioaccessibility. Conclusions: This research sheds light on the intricate interplay between food matrix composition and the bioaccessibility of curcuminoids, and lays a foundation for future investigations, offering a promising avenue for advancing our understanding of bioactive compound bioaccessibility and its implications for the food industry and informed consumer choices.
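
To illustrate the validation scheme mentioned above, a minimal sketch of leave-one-out cross-validation for a bioaccessibility regression; the data are synthetic and a plain linear model stands in for the study's Bayesian hierarchical model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

# Synthetic formulations: three macronutrient columns (e.g., fat, protein,
# carbohydrate in grams) and a bioaccessibility response driven mostly by fat.
rng = np.random.default_rng(0)
X = rng.uniform(0, 30, size=(40, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 3, size=40)

# Leave-one-out CV: each formulation is predicted by a model fit on the rest.
y_pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print(f"LOOCV R^2 = {r2_score(y, y_pred):.2f}")
```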

Keywords: bioactive bioaccessibility, food formulation, food matrix, machine learning, probabilistic modelling

Procedia PDF Downloads 62
953 Innovating Electronics Engineering for Smart Materials Marketing

Authors: Muhammad Awais Kiani

Abstract:

The field of electronics engineering plays a vital role in the marketing of smart materials. Smart materials are innovative, adaptive materials that can respond to external stimuli, such as temperature, light, or pressure, in order to enhance performance or functionality. As the demand for smart materials continues to grow, it is crucial to understand how electronics engineering can contribute to their marketing strategies. This abstract presents an overview of the role of electronics engineering in the marketing of smart materials. It explores the various ways in which electronics engineering enables the development and integration of smart features within materials, enhancing their marketability. Firstly, electronics engineering facilitates the design and development of sensing and actuating systems for smart materials. These systems enable the detection of and response to external stimuli, providing valuable data and feedback to users. By integrating sensors and actuators into materials, their functionality and performance can be significantly enhanced, making them more appealing to potential customers. Secondly, electronics engineering enables the creation of smart materials with wireless communication capabilities. By incorporating wireless technologies such as Bluetooth or Wi-Fi, smart materials can seamlessly interact with other devices, providing real-time data and enabling remote control and monitoring. This connectivity enhances the marketability of smart materials by offering convenience, efficiency, and an improved user experience. Furthermore, electronics engineering plays a crucial role in power management for smart materials. Implementing energy-efficient systems and power-harvesting techniques ensures that smart materials can operate autonomously for extended periods. This not only increases their market appeal but also reduces the need for constant maintenance or battery replacement, thus enhancing customer satisfaction. Lastly, electronics engineering contributes to the marketing of smart materials through innovative user interfaces and intuitive control mechanisms. Designing user-friendly interfaces and integrating advanced control systems makes smart materials more accessible to a broader range of users, and clear and intuitive controls enhance the user experience and encourage wider adoption of smart materials in various industries. In conclusion, electronics engineering significantly influences the marketing of smart materials by enabling the design of sensing and actuating systems, wireless connectivity, efficient power management, and user-friendly interfaces. The integration of electronics engineering principles enhances the functionality, performance, and marketability of smart materials, making them more adaptable to the growing demand for innovative and connected materials in diverse industries.

Keywords: electronics engineering, smart materials, marketing, power management

Procedia PDF Downloads 51
952 Exploring Faculty Attitudes about Grades and Alternative Approaches to Grading: Pilot Study

Authors: Scott Snyder

Abstract:

Grading approaches in higher education have not changed meaningfully in over 100 years. While there is variation in the types of grades assigned across countries, most use approaches based on simple ordinal scales (e.g., letter grades). While grades are generally viewed as an indication of a student's performance, challenges arise regarding the clarity, validity, and reliability of letter grades. Research about grading in higher education has primarily focused on grade inflation, student attitudes toward grading, the impacts of grades, and the benefits of plus-minus letter grade systems. Little research is available about alternative approaches to grading, the varying approaches used by faculty within and across colleges, and faculty attitudes toward grades and alternative approaches to grading. To begin to address these gaps, a survey was conducted of faculty in a sample of departments at three diverse colleges in a southeastern state in the US. The survey focused on faculty experiences with and attitudes toward grading, the degree to which faculty innovate in teaching and grading practices, and faculty interest in alternatives to the point-system approach to grading. Responses were received from 104 instructors (a 21% response rate). The majority reported that teaching accounted for 50% or more of their academic duties. Almost all (92%) of respondents reported using point and percentage systems for their grading. While all respondents agreed that grades should reflect the degree to which objectives were mastered, half indicated that grades should also reflect effort or improvement. Over 60% felt that grades should be predictive of success in subsequent courses or real-life applications. Most respondents disagreed that grades should compare students to other students. About 42% worried about their own grade inflation and grade inflation in their college. Only 17% disagreed that grades mean different things based on the instructor, while 75% thought it would be good if there were agreement. Less than 50% of respondents felt that grades were directly useful for identifying students who should or should not continue, identifying strengths and weaknesses, predicting which students will be most successful, or contributing to program monitoring of student progress. Instructors were less willing to modify assessment than they were to modify instruction and curriculum. Most respondents (76%) were interested in learning about alternative approaches to grading (e.g., specifications grading). The factors most associated with willingness to adopt a new grading approach were clarity to students and simplicity of adoption. Follow-up studies are underway to investigate implementations of alternative grading approaches, expand the study to universities and departments not involved in the initial study, examine student attitudes about alternative approaches, and refine the survey's measure of attitude toward adopting alternative grading practices. Workshops about the challenges of using percentage and point systems for determining grades, and workshops on alternative approaches to grading, are being offered.

Keywords: alternative approaches to grading, grades, higher education, letter grades

Procedia PDF Downloads 89
951 Ultra-Sensitive Point-Of-Care Detection of PSA Using an Enzyme- and Equipment-Free Microfluidic Platform

Authors: Ying Li, Rui Hu, Shizhen Chen, Xin Zhou, Yunhuang Yang

Abstract:

Prostate cancer is one of the leading causes of cancer-related death among men. Prostate-specific antigen (PSA), a specific product of prostatic epithelial cells, is an important indicator of prostate cancer. Though PSA is not a specific serum biomarker for prostate cancer screening, it is recognized as an indicator of prostate cancer recurrence and response to therapy for patients post-prostatectomy. Since radical prostatectomy eliminates the source of PSA production, serum PSA levels fall below 50 pg/mL and may be below the detection limit of clinical immunoassays (the lower limit of detection of current clinical immunoassays is around 10 pg/mL). Many clinical studies have shown that intervention at low PSA levels significantly improved patient outcomes. Therefore, ultra-sensitive and precise assays that can accurately quantify extremely low levels of PSA (below 1-10 pg/mL) will facilitate the assessment of patients for the possibility of early adjuvant or salvage treatment. Currently, commercially available ultra-sensitive ELISA kits (not used clinically) can only reach a detection limit of 3-10 pg/mL. Other platforms developed by different research groups achieve detection limits as low as 0.33 pg/mL, but they rely on sophisticated instruments for the final readout. Here we report a microfluidic platform for point-of-care (POC) detection of PSA with a detection limit of 0.5 pg/mL and without the assistance of any equipment. This platform is based on a previously reported volumetric-bar-chart chip (V-Chip), which applies platinum nanoparticles (PtNPs) as the ELISA probe to convert the biomarker concentration into a volume of oxygen gas that pushes red ink to form a visualized bar chart. The length of each bar quantifies the biomarker concentration in each sample. In this work, we devised a long-reading-channel V-Chip (LV-Chip) to achieve a wide detection window. In addition, the LV-Chip employs a unique enzyme-free ELISA probe that enriches PtNPs significantly, delivering a 500-fold enhancement in catalytic ability over the previous V-Chip and a correspondingly improved detection limit. The LV-Chip completes a PSA assay for five samples in 20 min. The device was applied to detect PSA in 50 patient serum samples, and the on-chip results correlated well with a conventional immunoassay. In addition, PSA levels in finger-prick whole-blood samples from healthy volunteers were successfully measured on the device. This completely stand-alone LV-Chip platform enables convenient POC testing for patient follow-up in the physician's office and is also useful in resource-constrained settings.
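
To illustrate how a bar length can be turned into a concentration, here is a hypothetical calibration sketch. The standards, bar lengths, and the assumed log-linear response are illustrative only and are not data from the LV-Chip study.

```python
import numpy as np

# Hypothetical calibration of a bar-chart chip readout: known PSA standards
# (pg/mL) vs. measured ink-bar lengths (mm). Values are illustrative only.
std_conc = np.array([0.5, 2.0, 8.0, 32.0, 128.0])
std_length = np.array([1.2, 2.1, 3.4, 4.8, 6.3])

# Assume the bar length grows roughly log-linearly with analyte concentration
# over the detection window, so fit length against log10(concentration).
slope, intercept = np.polyfit(np.log10(std_conc), std_length, 1)

def psa_from_length(length_mm: float) -> float:
    """Invert the calibration: bar length (mm) -> PSA concentration (pg/mL)."""
    return 10 ** ((length_mm - intercept) / slope)

print(f"{psa_from_length(4.0):.1f} pg/mL")  # sample producing a 4.0 mm bar
```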

Keywords: point-of-care detection, microfluidics, PSA, ultra-sensitive

Procedia PDF Downloads 100
950 Electromagnetic-Mechanical Stimulation on PC12 for Enhancement of Nerve Axonal Extension

Authors: E. Nakamachi, K. Matsumoto, K. Yamamoto, Y. Morita, H. Sakamoto

Abstract:

Recently, electromagnetic and mechanical stimulation have been recognized as effective extracellular environmental stimulation techniques for enhancing the regeneration of damaged peripheral nerve tissue. In this study, we developed a new hybrid bioreactor combining 50 Hz uniform alternating current (AC) magnetic stimulation with 4% strain mechanical stimulation. The guide tube for nerve regeneration is a mesh-structured tube made of a biodegradable polymer such as polylactic acid (PLA). However, when neural damage is extensive, the peripheral nerve may undergo necrosis, so it is important to accelerate nerve tissue regeneration by enhancing the axonal extension rate. We therefore designed and fabricated a system that can simultaneously apply uniform AC magnetic field stimulation and stretch stimulation to cells, and we evaluated the system's performance and the effectiveness of each stimulation for rat adrenal pheochromocytoma cells (PC12). First, we designed and fabricated the uniform AC magnetic field system and the stretch stimulation system. For the AC magnetic stimulation system, we adopted a pole piece structure to allow in-situ microscopic observation, and we designed an optimum pole piece structure using magnetic field finite element analyses and the response surface methodology (see the sketch below). We fabricated the uniform AC magnetic field stimulation system as a bioreactor according to the analytically determined design specifications. Measurements of the magnetic flux density generated by the system showed good agreement with the analytical results, confirming a uniform magnetic field. Second, we fabricated the cyclic stretch stimulation device for prescribed strains, with a chamber made of polyoxymethylene (POM). We measured strains in the PC12 cell culture region to verify strain uniformity and found values slightly different from the target strain, but concluded that these differences were allowable for this mechanical stimulation system. Finally, we evaluated the effectiveness of each stimulation in enhancing nerve axonal extension using PC12 cells. The average axonal extension length of PC12 under uniform AC magnetic stimulation increased by 16% at 96 h in our bioreactor. We could not confirm axonal extension enhancement under the stretch stimulation condition, where we observed detachment of cells. The hybrid stimulation, in contrast, enhanced axonal extension, because the magnetic stimulation inhibits cell detachment. We therefore conclude that the enhancement of PC12 axonal extension is due to the magnetic stimulation rather than the mechanical stimulation, and we confirmed the effectiveness of uniform AC magnetic field stimulation for nerve axonal extension using PC12 cells.
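
The following sketch illustrates the response-surface step named above: fitting a quadratic surface to field-uniformity results over two pole-piece parameters and locating the optimum. The parameters, sample points, and responses are invented placeholders, not the study's actual FEA results.

```python
import numpy as np

# Hypothetical response-surface step for pole-piece design: fit a quadratic
# model of field non-uniformity (%) as a function of two geometric parameters
# (pole gap g, taper angle t), then minimise it over the design region.
g = np.array([10, 10, 20, 20, 15, 15, 15, 10, 20])            # gap (mm)
t = np.array([30, 60, 30, 60, 45, 30, 60, 45, 45])            # taper (deg)
y = np.array([4.1, 3.2, 3.6, 2.9, 2.2, 2.8, 2.5, 3.0, 2.7])   # non-uniformity (%)

# Design matrix for a full second-order polynomial in (g, t).
X = np.column_stack([np.ones_like(g), g, t, g * t, g**2, t**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def surface(gap, taper):
    return coef @ np.array([1.0, gap, taper, gap * taper, gap**2, taper**2])

# Brute-force search over the design region for the minimum non-uniformity.
grid_g, grid_t = np.meshgrid(np.linspace(10, 20, 101), np.linspace(30, 60, 101))
z = np.vectorize(surface)(grid_g, grid_t)
i = np.unravel_index(z.argmin(), z.shape)
print(f"optimum: gap = {grid_g[i]:.1f} mm, taper = {grid_t[i]:.1f} deg")
```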

Keywords: nerve cell PC12, axonal extension, nerve regeneration, electromagnetic-mechanical stimulation, bioreactor

Procedia PDF Downloads 258
949 A Study on Relationship between Firm Managers' Environmental Attitudes and Environment-Friendly Practices for Textile Firms in India

Authors: Anupriya Sharma, Sapna Narula

Abstract:

Over the past decade, sustainability has gone mainstream, as more people are worried about environment-related issues than ever before. These issues are of even greater concern for industries that leave a significant impact on the environment. In response to these ecological issues, corporations are beginning to comprehend the impact on their business, and many initiatives have been made to address these emerging issues in the consumer-driven textile industry. Demand from customers, local communities, and government regulations is considered among the major factors affecting environmental decision-making. Research also shows that motivations to go green are inevitably determined by the way top managers perceive environmental issues, as managers' personal values and ethical commitment act as motivating factors towards corporate social responsibility. Little empirical research has been conducted to examine the relationship between top managers' personal environmental attitudes and corporate environmental behaviors in the Indian textile industry. The primary purpose of this study is to determine the current state of environmental management in the textile industry and whether the attitudes of textile firms' top managers are significantly related to the firms' responses to environmental issues and their perceived benefits of environmental management. To achieve these objectives, the authors used a structured questionnaire based on a literature review. The questionnaire consisted of six sections with a total length of eight pages. The first section collected background information on the position of the respondents in the organization, annual turnover, year of the firm's establishment, and so on. The other five sections covered drivers, attitude and awareness, sustainable business practices, barriers to implementation, and benefits achieved. To test the questionnaire, a pretest was conducted with professionals working in corporate sustainability who had knowledge of the textile industry; the questionnaire was then mailed to various stakeholders involved in textile production, covering firms' top manufacturing officers, EHS managers, textile engineers, HR personnel, and R&D managers. The results showed that most textile firms were implementing some type of environmental management practice, even though the magnitude of the firms' involvement varied. The results also show that textile firms with a higher level of involvement in environmental management were more involved in process-driven technical environmental practices. The study further identified that top managers' environmental attitudes were correlated with the perceived advantages of environmental management, as textile firms' top managers possess the managerial discretion to formulate and decide business policies such as environmental initiatives.
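
As a minimal illustration of the correlation analysis described, the sketch below computes Spearman's rank correlation between two ordinal survey scores. The Likert-scale data are invented, and the study's actual statistical procedure may differ.

```python
import numpy as np
from scipy import stats

# Hypothetical Likert-scale scores (1-5) relating managers' environmental
# attitudes to perceived benefits of environmental management; the data
# below are invented for this sketch.
attitude = np.array([4, 5, 3, 4, 2, 5, 3, 4, 2, 1, 5, 4])
benefit = np.array([4, 4, 3, 5, 2, 5, 2, 4, 3, 2, 5, 3])

# Spearman's rank correlation is a common choice for ordinal survey items.
rho, p = stats.spearmanr(attitude, benefit)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```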

Keywords: attitude and awareness, environmental management, sustainability, textile industry

Procedia PDF Downloads 224
948 Well-Defined Polypeptides: Synthesis and Selective Attachment of Poly(ethylene glycol) Functionalities

Authors: Cristina Lavilla, Andreas Heise

Abstract:

The synthesis of sequence-controlled polymers has received increasing attention in recent years. Well-defined polyacrylates, polyacrylamides, and styrene-maleimide copolymers have been synthesized by sequential or kinetic addition of comonomers. However, this approach has not yet been introduced to the synthesis of polypeptides, which are in fact polymers developed by nature in a sequence-controlled way. Polypeptides are natural materials that possess the ability to self-assemble into complex and highly ordered structures. Their folding and properties arise from precisely controlled sequences and compositions of their constituent amino acid monomers. So far, solid-phase peptide synthesis is the only technique that allows the preparation of short peptide sequences with excellent sequence control, but it requires extensive protection/deprotection steps and is difficult to scale up. Here, a new strategy towards sequence control in the synthesis of polypeptides is introduced, based on the sequential addition of α-amino acid N-carboxyanhydrides (NCAs). The living ring-opening process is conducted to full conversion, and no purification or deprotection is needed before the addition of a new amino acid. The length of every block is predefined by the NCA:initiator ratio in every step (illustrated in the sketch below). This method yields polypeptides with a specific sequence and controlled molecular weights. A series of polypeptides with varying block sequences have been synthesized with the aim of identifying structure-property relationships. All of them are able to adopt secondary structures similar to natural polypeptides and display properties in the solid state and in solution that are characteristic of the primary structure. By design, the prepared polypeptides allow selective modification of individual block sequences, which has been exploited to introduce functionalities at defined positions along the polypeptide chain. Poly(ethylene glycol) (PEG) was the functionality chosen, as it is known to favor hydrophilicity and also to yield thermoresponsive materials. After PEGylation, the hydrophilicity of the polypeptides is enhanced, and their thermal response in H2O has been studied. Noteworthy differences in the behavior of polypeptides having different sequences were found. Circular dichroism measurements confirmed that the α-helical conformation is stable over the examined temperature range (5-90 °C). It is concluded that PEG units are mainly responsible for the changes in H-bonding interactions with H2O upon variation of temperature, and the position of these functional units along the backbone is a factor of utmost importance in the resulting properties of the α-helical polypeptides.
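
A small sketch of the stoichiometry underlying this block-length control follows. At full conversion of a living polymerization, the degree of polymerization (DP) of each block equals the NCA:initiator molar ratio charged in that step; the monomers and quantities below are illustrative, not the compositions used in the study.

```python
# Sketch of sequential NCA ring-opening stoichiometry: each addition step
# targets a block whose DP is set by the NCA:initiator ratio at full
# conversion. Monomers and amounts are illustrative placeholders.
initiator_mmol = 0.10

blocks = [
    # (amino acid NCA charged in this step, mmol of NCA)
    ("Glu(OBn)-NCA", 2.0),
    ("Lys(Z)-NCA", 1.0),
    ("Glu(OBn)-NCA", 2.0),
]

for name, nca_mmol in blocks:
    ratio = nca_mmol / initiator_mmol
    print(f"{name}: NCA:initiator = {ratio:.0f}:1 -> target DP = {ratio:.0f}")
```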

Keywords: α-amino acid N-carboxyanhydrides, multiblock copolymers, poly(ethylene glycol), polypeptides, ring-opening polymerization, sequence control

Procedia PDF Downloads 186
947 The Effectiveness of an Educational Program on Awareness of Cancer Signs, Symptoms, and Risk Factors among School Students in Oman

Authors: Khadija Al-Hosni, Moon Fai Chan, Mohammed Al-Azri

Abstract:

Background: Several studies suggest that most school-age adolescents are poorly informed about cancer warning signs and risk factors. Providing adolescents with sufficient knowledge would increase their awareness in adulthood and improve help-seeking behaviors later. Significance: The results will provide a clear vision to assist key decision-makers in formulating policies on student awareness programs towards cancer, increasing the likelihood of avoiding cancer in the future or promoting early diagnosis. Objectives: To evaluate the effectiveness of an education program designed to increase awareness of cancer signs, symptoms, and risk factors, improve help-seeking behavior among school students in Oman, and address the barriers to obtaining medical help. Methods: A randomized controlled trial with two groups was conducted in Oman. A total of 1716 students (n=886 control, n=830 education), aged 15-17 years and in 10th and 11th grade, were recruited from 12 governmental schools in 3 governorates from 20 February 2022 to 12 May 2022. Basic demographic data were collected, and the Cancer Awareness Measure (CAM) was used as the primary outcome. Data were collected at baseline (T0) and 4 weeks after (T1). The intervention group received an education program about the causes of cancer and its signs and symptoms, while the control group did not receive any education related to this issue during the study period. Non-parametric tests were used to compare outcomes between groups (a sketch of one such comparison follows). Results: At T0, a lump was the most recognized cancer warning sign in both the control (55.0%) and intervention (55.2%) groups. There were no significant changes at T1 for any sign in the control group. In contrast, all sign outcomes improved significantly (p<0.001) in the intervention group; the highest response was unexplained pain (93.3%). Smoking was the most recognized risk factor in both groups (82.8% control; 84.1% intervention) at T0. There was no significant change at T1 for the control group, but there was for the intervention group (p<0.001), where the highest identification was smoking cigarettes (96.5%). Being too scared was the largest barrier to seeking medical help for students in the control group at T0 (63.0%) and T1 (62.8%), with no significant changes in any barriers in this group. In the intervention group, being too embarrassed (60.2%) was the largest barrier at T0 and being too scared (58.6%) at T1. Although there were reductions in all barriers, significant differences were found in only six of ten (p<0.001). Conclusion: The intervention was effective in improving students' awareness of cancer symptoms and warning signs (p<0.001) and risk factors (p<0.001), and it reduced the most frequently cited barriers to seeking medical help (p<0.001) in comparison with the control group. The Ministry of Education in Oman could integrate cancer awareness within the curriculum, and more interventions are needed on the sociological side to overcome the barriers that interfere with seeking medical help.
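
As referenced in the Methods above, the code below sketches one plausible form of the between-group comparison: a chi-square test on a 2x2 contingency table of sign recognition at T1 (a common choice for categorical outcomes; the study's exact tests may differ). The control-group counts are invented; only the group sizes and the 93.3% intervention figure come from the abstract.

```python
import numpy as np
from scipy import stats

# Counts of students recognising "unexplained pain" as a warning sign at T1.
# Group sizes follow the abstract (830 intervention, 886 control); the split
# within each group is invented for this sketch.
#                recognised  not recognised
intervention = [774, 56]   # ~93.3% of 830
control = [470, 416]
table = np.array([intervention, control])

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```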

Keywords: adolescents, awareness, cancer, education, intervention, student

Procedia PDF Downloads 75
946 Dual-use UAVs in Armed Conflicts: Opportunities and Risks for Cyber and Electronic Warfare

Authors: Piret Pernik

Abstract:

Based on a strategic, operational, and technical analysis of the ongoing armed conflict in Ukraine, this paper examines the opportunities and risks of using small commercial drones (dual-use unmanned aerial vehicles, UAVs) for military purposes. The paper discusses the opportunities and risks in the information domain, encompassing both cyber and electromagnetic interference and attacks, and draws conclusions on the possible strategic impact of the widespread use of dual-use UAVs on battlefield outcomes in modern armed conflicts. The article contributes to filling a gap in the literature by examining cyberattacks and electromagnetic interference on the basis of empirical data. Today, more than one hundred states and non-state actors possess UAVs, ranging from low-cost commodity models, which are widely dual-use, available, and affordable to anyone, to high-cost combat UAVs (UCAVs) with lethal kinetic strike capabilities, which can be enhanced with Artificial Intelligence (AI) and Machine Learning (ML). Dual-use UAVs have been used by various actors for intelligence, reconnaissance, surveillance, situational awareness, geolocation, and kinetic targeting. They thus function as force multipliers enabling kinetic and electronic warfare attacks, and they provide comparative and asymmetric operational and tactical advantages. Some go as far as to argue that automated (or semi-automated) systems can change the character of warfare, while others observe that the use of small drones has not changed the balance of power or battlefield outcomes. UAVs offer considerable opportunities for commanders; for example, they can be operated without GPS navigation, which makes them less vulnerable and less dependent on satellite communications. They can be, and have been, used to conduct cyberattacks, electromagnetic interference, and kinetic attacks; however, they are highly vulnerable to those same attacks themselves. So far, strategic studies, literature, and expert commentary have overlooked the cybersecurity and electronic interference dimensions of the use of dual-use UAVs, and studies that link technical analysis of opportunities and risks with strategic battlefield outcomes are missing. The proliferation of dual-use commercial UAVs in armed and hybrid conflicts is expected to continue and accelerate in the future. It is therefore important to understand the specific opportunities and risks related to the crowdsourced use of dual-use UAVs, which can have kinetic effects. Technical countermeasures to protect UAVs differ depending on the type of UAV (small, midsize, large, stealth combat), and this paper offers a unique analysis of small UAVs from the perspective of both opportunities and risks for commanders and other actors in armed conflict.

Keywords: dual-use technology, cyberattacks, electromagnetic warfare, case studies of cyberattacks in armed conflicts

Procedia PDF Downloads 89
945 Increasing The Role of Civil Society through LAPOR!: National Complaint Handling System in Indonesia

Authors: Izzati Nabiyla Risfa

Abstract:

The role of civil society has become an important issue at both the national and international levels. Governments all over the world have started to realize that the involvement of civil society can improve public services and policy making. The Global Policy Forum has stated that there are five good reasons for civil society to be engaged in global governance: (1) conferring legitimacy on policy decisions; (2) increasing the pool of policy ideas; (3) supporting less powerful governments; (4) countering a lack of political will; and (5) helping states to put nationalism aside. Indonesia is keeping up with this trend. In November 2011, the Indonesian Government set up LAPOR! (meaning "to report" in Indonesian), an online portal for complaints about public services, accessible through its website lapor.ukp.go.id, through social media (Twitter, Facebook), and by text message. This program is a government initiative to provide an integrated and accessible portal for the Indonesian public to submit complaints and inquiries as a means of enhancing public participation in national development programs. LAPOR! aims to catalyze public participation and to create a more coordinated national complaint handling mechanism. The goal of the program is to increase the role of civil society in order to develop better public services. To this end, LAPOR! works in the simplest way possible: the public can submit complaints or report problems concerning development programs and public services through the website, by short message service to 1708, or via mobile applications for BlackBerry and Android. LAPOR! then transfers every validated input to the relevant institutions to be featured and responded to on the website. LAPOR! is now integrated with 81 ministries, 5 local governments, and 44 state-owned enterprises. The public can also comment on, like, or share reports through Facebook and Twitter to foster discussion and to ensure the completeness of the reports. LAPOR! has unexpectedly contributed to various successful cases concerning public services. So far the portal has over 280,704 registered users and receives an average of 1,000 reports every day. The government's response rate has increased over time, with 81% of complaints and inquiries solved or under investigation. This paper will examine the effectiveness of LAPOR! as a tool to increase the role of civil society in order to develop better public services in Indonesia. Despite this promising story, various difficulties still need to be solved. Using a qualitative approach as the methodology for this research, the writers will also explore potential improvements to LAPOR! so it can perform effectively as a leading national complaint handling system in Indonesia.

Keywords: civil society, government, Indonesia, public services

Procedia PDF Downloads 476
944 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization

Authors: Aitor Bilbao, Dragos Axinte, John Billingham

Abstract:

The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), needed to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting-tool geometry is well defined and material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam with particular characteristics (e.g., energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the beam vary with the dwell time, but any acceleration or deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process in which the etched material depends on the feed speed of the jet at each instant during the process. PLA processes, on the other hand, are usually described as discrete systems in which the total removed material is calculated by summation over the different pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, consecutive shots are close enough that the behaviour is similar to a continuous process. Using this approximation, a generic continuous model can be described for both processes. The inverse problem is usually solved for this kind of process by simply controlling the dwell time in proportion to the required depth of milling at each pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved here by using a discrete adjoint optimization algorithm (a simplified sketch follows below), in which the calculation of the Jacobian matrix consumes less computation time than finite-difference approaches. The influence of the dynamics of the machine on the actual movement of the jet is also important and should be taken into account; when the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
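
To make the approach concrete, below is a heavily simplified 1D sketch of the linearised inverse problem: the etched depth is modelled as the convolution of the dwell-time profile with a beam footprint, and the dwell times are recovered by projected gradient descent, with the gradient of the misfit formed by applying the adjoint of the forward operator. The beam shape, grid, target profile, and step size are illustrative assumptions and do not reproduce the paper's discrete adjoint algorithm.

```python
import numpy as np

n = 200
x = np.linspace(-1.0, 1.0, n)
beam = np.exp(-(np.linspace(-0.3, 0.3, 31) / 0.1) ** 2)  # Gaussian footprint
beam /= beam.sum()                                       # normalise removal per unit dwell

def forward(dwell):
    """Linear model: etched depth = beam footprint convolved with dwell time."""
    return np.convolve(dwell, beam, mode="same")

def adjoint(residual):
    """Adjoint of the convolution: correlation with the flipped kernel."""
    return np.convolve(residual, beam[::-1], mode="same")

target = 0.1 * 0.5 * (1 + np.cos(np.pi * x))  # prescribed freeform depth profile

dwell = np.zeros(n)
step = 1.0
for _ in range(500):
    residual = forward(dwell) - target
    # Gradient of 0.5*||A d - t||^2 is A^T(A d - t); project onto dwell >= 0.
    dwell = np.maximum(dwell - step * adjoint(residual), 0.0)

rms = np.sqrt(np.mean((forward(dwell) - target) ** 2))
print(f"RMS depth error: {rms:.2e}")
```

The physical non-negativity constraint on dwell time is enforced by the projection step; in the full discrete adjoint method, the same adjoint construction supplies the Jacobian information without finite differencing.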

Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation

Procedia PDF Downloads 270
943 Nuclear Materials and Nuclear Security in India: A Brief Overview

Authors: Debalina Ghoshal

Abstract:

Nuclear security is the ‘prevention and detection of, and response to unauthorised removal, sabotage, unauthorised access, illegal transfer or other malicious acts involving nuclear or radiological material or their associated facilities.’ Ever since the end of the Cold War, nuclear materials security has remained a concern for global security, and with the increase in terrorist attacks, not only in India, the security of nuclear materials remains a priority. India has therefore made continued efforts to tighten security around its nuclear materials to prevent nuclear theft and radiological terrorism. Nuclear security is different from nuclear safety. Physical security is also a serious concern, and India has been careful about the physical security of its nuclear materials. This is all the more important since India is expanding its nuclear power capability to generate electricity for economic development. As India targets 60,000 MW of electricity production by 2030, it has a range of reactors to help it achieve this goal: indigenous Pressurised Heavy Water Reactors, now standardized at 700 MW per reactor; Light Water Reactors; and the indigenous Fast Breeder Reactors that can generate more fuel for the future and enable the country to utilise its abundant thorium resource. Nuclear materials security can be enhanced in two important ways. One is through proliferation-resistant technologies and diplomatic efforts towards non-proliferation initiatives. The other is by developing technical means to prevent any leakage of nuclear materials into the hands of asymmetric organisations. New Delhi has already implemented IAEA Safeguards on its civilian nuclear installations. Moreover, India has ratified the IAEA Additional Protocol in order to enhance the transparency of its nuclear material and strengthen nuclear security. India is a party to the IAEA conventions on nuclear safety and security, in particular the 1980 Convention on the Physical Protection of Nuclear Material and its 2005 amendment, and the Code of Conduct on the Safety and Security of Radioactive Sources (2006), which enable the country to provide for the highest international standards of nuclear and radiological safety and security. India's nuclear security approach is driven by five key components: governance, nuclear security practice and culture, institutions, technology, and international cooperation. However, there is still scope for improvement to further strengthen nuclear materials and nuclear security. According to the NTI Report, ‘India’s improvement reflects its first contribution to the IAEA Nuclear Security Fund etc. in the future, India’s nuclear materials security conditions could be further improved by strengthening its laws and regulations for security and control of materials, particularly for control and accounting of materials, mitigating the insider threat, and for the physical security of materials during transport. India’s nuclear materials security conditions also remain adversely affected due to its continued increase in its quantities of nuclear material, and high levels of corruption among public officials.’ This paper briefly studies the progress made by India in nuclear and nuclear materials security and the steps ahead for India to further strengthen them.

Keywords: India, nuclear security, nuclear materials, non-proliferation

Procedia PDF Downloads 338
942 Internationalization Process Model for Construction Firms: Stages and Strategies

Authors: S. Ping Ho, R. Dahal

Abstract:

The global economy has drastically changed how firms operate and compete. Although the construction industry is ‘local’ by nature, the internationalization of the construction industry has become an inevitable reality. As a result of global competition, staying domestic no longer shields a firm from competition; on the contrary, growing to become an MNE (multi-national enterprise) has become an important strategy for a firm to survive in global competition. For successful entrance into competitive markets, firms need to re-define their competitive advantages and re-identify the sources of those advantages. A firm's initiation of internationalization is not necessarily a result of strategic planning; it may also involve idiosyncratic events that pave the path leading to internationalization. For example, a local firm's incidental or unintentional collaboration with an MNE can become the initiating point of its internationalization process. However, because of the intense competition in today's global environment, many firms have been compelled to initiate internationalization as a strategic response to competition. Understanding where a firm stands in the internationalization process and appropriately implementing strategies at each stage leads construction firms to a successful internationalization journey. This study develops a model of the internationalization process from which appropriate strategies that construction firms can implement at each stage are derived. The proposed model integrates two major and complementary views of internationalization and expresses the dynamic process of internationalization in three stages: the pre-international (PRE) stage, the foreign direct investment (FDI) stage, and the multi-national enterprise (MNE) stage. The strategies implied by the proposed model are derived with a focus on capability building, market locations, and entry modes, based on the resource-based view: value, rareness, inimitability, and non-substitutability (VRIN). The proposed dynamic process model can benefit construction firms willing to expand their business into new markets. Strategies for internationalization, such as core-competence strategy, market selection, partner selection, and entry-mode strategy, can be derived from the model. The internationalization process is expressed in two forms. First, we discuss the construction internationalization process, identify the driving factors of the process, and explain strategy formation within it. Second, we define the stages of internationalization along the process and the corresponding strategies in each stage. These strategies may include how to exploit existing advantages for competition at the current stage and how to develop or explore additional advantages appropriate for the next stage. In particular, the additionally developed advantages accumulate and drive forward the firm's stage of internationalization, which in turn determines the subsequent strategies, and so on, spiraling up to higher degrees of internationalization. However, the formation of additional strategies for the next stage does not happen automatically; strategy evolution depends on the firm's dynamic capabilities.

Keywords: construction industry, dynamic capabilities, internationalization process, internationalization strategies, strategic management

Procedia PDF Downloads 51