Search results for: autoregressive moving average model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20865

8565 Therapeutic Effects of Toll Like Receptor 9 Ligand CpG-ODN on Radiation Injury

Authors: Jianming Cai

Abstract:

Exposure to ionizing radiation causes severe damage to the human body, and a safe and effective radioprotector is urgently needed to alleviate radiation damage. In 2008, flagellin, an agonist of TLR5, was found to exert radioprotective effects against radiation injury by activating the NF-kB signaling pathway. Since then, the radioprotective effects of TLR ligands have shed new light on radiation protection. CpG-ODN is an unmethylated oligonucleotide that activates the TLR9 signaling pathway. In this study, we demonstrated that CpG-ODN has therapeutic effects on radiation injuries induced by γ rays and 12C6+ heavy ion particles. Our data showed that CpG-ODN increased the survival rate of mice after whole-body irradiation and increased the numbers of leukocytes and bone marrow cells. CpG-ODN also alleviated radiation damage to the intestinal crypt by regulating the apoptosis signaling pathway, including bcl2, bax, and caspase 3. Using a radiation-induced pulmonary fibrosis model, we found that CpG-ODN alleviated structural damage within 20 weeks after whole-thorax 15 Gy irradiation. In this model, the Th1/Th2 imbalance induced by irradiation was also reversed by CpG-ODN. We further found that the TGFβ-Smad signaling pathway was regulated by CpG-ODN, which accounts for its therapeutic effects in radiation-induced pulmonary injury. Regarding high-LET radiation protection, we investigated the protective effects of CpG-ODN against 12C6+ heavy ion irradiation and found that CpG-ODN treatment reduced the apoptosis and cell cycle arrest induced by 12C6+ irradiation. CpG-ODN also reduced the expression of Bax and caspase 3 while increasing the level of bcl2. We then examined the effect of CpG-ODN on heavy ion-induced immune dysfunction. Our data showed that CpG-ODN increased the survival rate of mice and the leukocyte count after 12C6+ irradiation. In addition, the structural damage to immune organs such as the thymus and spleen was alleviated by CpG-ODN treatment. In conclusion, the TLR9 ligand CpG-ODN reduced radiation injuries caused by γ ray and 12C6+ heavy ion irradiation. On one hand, CpG-ODN inhibited radiation-induced apoptosis by regulating bcl2, bax, and caspase 3. On the other hand, by activating TLR9, CpG-ODN recruits the MyD88-IRAK-TRAF6 complex, activating the TAK1, IRF5, and NF-kB pathways, and thus alleviates radiation damage. This study provides novel insights into the protection against and therapy of radiation damage.

Keywords: TLR9, CpG-ODN, radiation injury, high LET radiation

Procedia PDF Downloads 468
8564 A Numerical Investigation of Segmental Lining Joints Interactions in Tunnels

Authors: M. H. Ahmadi, A. Mortazavi, H. Zarei

Abstract:

Several authors have described the main mechanisms of crack formation in segmental linings during the construction of tunnels with tunnel boring machines. A comprehensive analysis of segmental lining joints can help guarantee safety during both the tunneling and serviceability stages. The most frequent types of segment damage are caused by uneven segment matching due to contact deficiencies. This paper investigates the interaction mechanism of precast concrete lining joints in tunnels. The Discrete Element Method (DEM) was used to analyze a typical segmental lining model consisting of six segment rings. Typical segmental lining design parameters of the Ghomrood water conveyance tunnel (Qom, Iran) were employed, and the rock mass properties of the site were adopted. The analysis considered the worst-case loading scenario encountered during the boring of the Ghomrood tunnel, associated with a crushed zone dipping at 75 degrees at the location of the key segment. The loads acting on the segment joints comprised the force of the crushed zone stratum intersecting the tunnel at this 75-degree dip, the gravity load of the segments, and earth pressures. Moreover, the effect of the horizontal stress ratio on segment loads was assessed: boundary conditions with K (the ratio of horizontal to vertical stress) values of 0.5, 1, 1.5, and 2 were applied to the model, and a separate analysis was conducted for each case, including different geological conditions of a saturated crushed zone under the critical scenario. Stresses, moments, and displacements were measured at the joint locations and in the surrounding rock, and the segment joint interactions were assessed accordingly. The numerical results demonstrate that the maximum bending moments in longitudinal joints occurred for the crushed zone with the weakest strength (sandstone). In addition, increasing the load at the segment-stratum interfaces affected the radial stress in the longitudinal joints and ultimately led to the opening of the joints.

Keywords: joint, interface, segment, contact

Procedia PDF Downloads 249
8563 Development of Thermal Regulating Textile Material Consisting of Macrocapsulated Phase Change Material

Authors: Surini Duthika Fernandopulle, Kalamba Arachchige Pramodya Wijesinghe

Abstract:

Macrocapsules containing the phase change material (PCM) PEG 4000 as the core and calcium alginate as the shell were synthesized by an in-situ polymerization process, and their suitability for textile applications was studied. The PCM macrocapsules were sandwiched between two polyurethane foams at regular intervals, and the sandwiched foams were subsequently covered with 100% cotton woven fabrics. According to the mathematical modelling and calculations, 46 capsules were required to provide cooling for a period of 2 hours at 56 °C, so a 10 cm x 10 cm panel divided into 25 parts (9 parts holding 5 capsules each and 16 parts left empty for air permeability) was merged into one textile material without changing the textile's original properties. First, the available cooling techniques related to textiles were reviewed, and the techniques best suited to Sri Lankan climatic conditions were selected using a survey of the Sri Lankan public based on the ASHRAE 55-2010 standard; the survey consisted of 19 questions under 3 sections: general information, thermal comfort sensation, and the requirement for personal cooling garments (PCGs). The results indicated that the majority of respondents feel warm during the daytime and slightly warm during the nighttime. The survey also revealed that around 85% of the respondents were willing to accept a PCG. The developed panels were characterized by Fourier-transform infrared spectroscopy (FTIR) and thermogravimetric analysis (TGA). The FTIR findings confirmed that the macrocapsules consisted of PEG 4000 as the core material and calcium alginate as the shell material, and the TGA findings showed average weight percentages of 61.9% for the core and 34.7% for the shell. After heating both control samples and samples incorporating the PCM panels, it was found that the temperature of the control sample continued to rise past 56 °C, whereas the sample incorporating the PCM panels began to regulate at 56 °C, preventing any temperature increase beyond that point.
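The capsule count above follows from a latent-heat energy balance. A rough sketch of that sizing calculation is given below; every numeric value (latent heat of PEG 4000, core mass per capsule, heat load) is an illustrative assumption, not a figure from the study.

```python
import math

# Illustrative assumptions only -- not values reported by the authors.
LATENT_HEAT_PEG4000 = 187.0   # J/g, assumed latent heat of fusion of PEG 4000
CORE_MASS_PER_CAPSULE = 1.5   # g of PCM core per macrocapsule (assumed)
COOLING_POWER = 1.8           # W of heat the panel must absorb (assumed)
DURATION = 2 * 3600           # s, the 2-hour target from the abstract

def capsules_required(power_w, duration_s, latent_j_per_g, core_mass_g):
    """Number of capsules whose combined latent heat covers the heat load."""
    heat_load = power_w * duration_s                 # total J to absorb
    heat_per_capsule = latent_j_per_g * core_mass_g  # J stored per capsule
    return math.ceil(heat_load / heat_per_capsule)

n = capsules_required(COOLING_POWER, DURATION, LATENT_HEAT_PEG4000,
                      CORE_MASS_PER_CAPSULE)
print(n)
```

With these assumed inputs the balance lands in the same range as the abstract's 46-capsule figure, which is the point of the exercise rather than a reproduction of the authors' model.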

Keywords: phase change materials, thermal regulation, textiles, macrocapsules

Procedia PDF Downloads 115
8562 Impact of Climate Variability on Household's Crop Income in Central Highlands and Arssi Grain Plough Areas of Ethiopia

Authors: Arega Shumetie Ademe, Belay Kassa, Degye Goshu, Majaliwa Mwanjalolo

Abstract:

The world economy is currently suffering from one critical problem: climate change. Earlier studies have found that its impact is region specific: in some parts of the world (the temperate zone) agricultural performance is improving, while in others, such as the tropics, crop production and crop income are falling drastically. Climate variability is becoming the dominant cause of short-term fluctuations in the rain-fed agricultural production and income of developing countries. Ethiopia's purely rain-fed agriculture is the sector most vulnerable to the risks and impacts of climate variability. This study therefore sought to identify the impact of climate variability on the crop income of smallholders in Ethiopia. The research used an eight-round unbalanced panel dataset covering 1994-2014, collected from six villages in the study area. After the relevant diagnostic tests, a fixed effects regression was estimated. The results show that deviations of rainfall and temperature from their respective long-term averages have negative and significant effects on crop income. Other devastating shocks stemming from climate variability, such as flood, storm, and frost, also have significant negative effects on households' crop income. According to the model results, indicators of rainfall inconsistency (late onset, variation during the growing season, and early cessation) are critical problems for the crop income of smallholder households. Moreover, the impact of climate variability is not uniform across the country's agro-ecologies. Rainfall variability affects crop income similarly across agro-ecologies, but temperature variation affects cold agro-ecology villages negatively and significantly while having a positive effect in warm villages. The rainfall-inconsistency indicators have similar impacts in both agro-ecologies and in the aggregate model. This implies that climate variability driven by rainfall inconsistency is the main problem of Ethiopian agriculture, especially for the crop production sub-sector of smallholder households.

Keywords: climate variability, crop income, household, rainfall, temperature

Procedia PDF Downloads 362
8561 Characterization of an Extrapolation Chamber for Dosimetry of Low Energy X-Ray Beams

Authors: Fernanda M. Bastos, Teógenes A. da Silva

Abstract:

Extrapolation chambers were designed to be used as primary standard dosimeters for measuring absorbed dose in a medium for beta radiation and low energy x-rays. The International Organization for Standardization established series of reference x-radiation qualities, to be reproduced in metrology laboratories, for calibrating dosimeters and determining their energy dependence. Standardization of low energy x-ray beams with tube potentials lower than 30 kV may be affected by the instrument used for dosimetry. In this work, parameters of a PTW model 23392 extrapolation chamber were determined with a view to its use as a reference instrument in low energy x-ray beams.
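The defining measurement of an extrapolation chamber is the slope of ionization current versus electrode gap, extrapolated toward zero chamber volume. A minimal sketch of that step follows, with synthetic readings standing in for real chamber data; the conversion of the slope to absorbed dose (via W/e and density factors) is omitted.

```python
import numpy as np

# Synthetic ionization-current readings at several electrode spacings.
# These numbers are illustrative, not measurements from the PTW 23392 study.
gaps_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])           # electrode gaps
currents_pA = np.array([10.1, 20.0, 29.8, 40.2, 50.1])  # measured currents

# Least-squares line I(d) = slope * d + intercept; the slope dI/dd is the
# quantity used to derive the absorbed dose rate, and the intercept should
# be near zero for a well-behaved chamber.
slope, intercept = np.polyfit(gaps_mm, currents_pA, 1)
print(round(slope, 2), round(intercept, 2))
```

In practice the fit would use many more gap settings and correct each reading for temperature, pressure, and polarity effects before extrapolating.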

Keywords: extrapolation chamber, low energy x-rays, x-ray dosimetry, X-ray metrology

Procedia PDF Downloads 383
8560 Heritage, Cultural Events and Promises for Better Future: Media Strategies for Attracting Tourism during the Arab Spring Uprisings

Authors: Eli Avraham

Abstract:

The Arab Spring was widely covered in the global media, and the number of Western tourists traveling to the area began to fall. The goal of this study was to analyze which media strategies marketers in Middle Eastern countries chose to employ in their attempts to repair the area's negative image in the wake of the Arab Spring. Several studies have been published on the image-restoration strategies of destinations during crises around the globe; however, these strategies were not part of an overarching theory, conceptual framework, or model from the fields of crisis communication and image repair. The conceptual framework used in the current study was the 'multi-step model for altering place image', which offers three types of strategies: source, message, and audience. Three research questions were used: 1. What public relations crisis techniques and advertising campaign components were used? 2. What media policies and relationships with the international media were adopted by Arab officials? 3. Which marketing initiatives (such as cultural and sports events) were promoted? This study is based on qualitative content analysis of four types of data: (1) advertising components (slogans, visuals, and text); (2) press interviews with Middle Eastern officials and marketers; (3) official media policies adopted by government decision-makers (e.g., boycotting or arresting newspeople); and (4) marketing initiatives (e.g., organizing heritage festivals and cultural events). The data were located in three channels from December 2010, when the events started, to September 30, 2013: (1) internet and video-sharing websites: YouTube and the national tourism board websites of Middle Eastern countries; (2) news reports from two international media outlets, The New York Times and Ha'aretz, quality newspapers that focus on foreign news and tend to criticize institutions; and (3) global tourism news websites: eTurbo News and 'Cities and Countries Branding'. Using the 'multi-step model for altering place image', the analysis reveals that Middle Eastern marketers and officials used three kinds of strategies to repair their countries' negative image: 1. source (cooperation and media relations; complying with, threatening, and blocking the media; and finding alternatives to the traditional media); 2. message (ignoring, limiting, narrowing, or reducing the scale of the crisis; acknowledging the negative effect of an event's coverage and assuring a better future; promoting multiple facets and exhibitions and softening the 'hard' image; hosting spotlight sporting and cultural events; spinning liabilities into assets; geographic dissociation from the Middle East region; ridiculing the existing stereotype); and 3. audience (changing the target audience by addressing others; emphasizing similarities and relevance to specific target audiences). It appears that dealing with their image problems will continue to be a challenge for officials and marketers of Middle Eastern countries until the region stabilizes and its regional conflicts are resolved.

Keywords: Arab spring, cultural events, image repair, Middle East, tourism marketing

Procedia PDF Downloads 269
8559 The Generalized Pareto Distribution as a Model for Sequential Order Statistics

Authors: Mahdy Esmailian, Mahdi Doostparast, Ahmad Parsian

Abstract:

In this article, sequential order statistics (SOS) samples under type II censoring, coming from the generalized Pareto distribution, are considered. Maximum likelihood (ML) estimators of the unknown parameters are derived on the basis of the available multiple SOS data. Necessary conditions for the existence and uniqueness of the derived ML estimates are given. Owing to the complexity of the proposed likelihood function, a useful re-parametrization is suggested. For illustrative purposes, a Monte Carlo simulation study is conducted and an illustrative example is analysed.
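For orientation, ML fitting of a generalized Pareto distribution can be sketched for the much simpler complete-sample case; the paper's multiple-SOS, type II censored likelihood is considerably more involved and is not reproduced here. The true parameter values below are arbitrary choices for the simulation.

```python
import numpy as np
from scipy.stats import genpareto

# Simulate a complete (uncensored) GPD sample -- a simplified stand-in for
# the sequential-order-statistics setting analysed in the paper.
rng = np.random.default_rng(42)
true_shape, true_scale = 0.25, 2.0
sample = genpareto.rvs(true_shape, loc=0.0, scale=true_scale,
                       size=5000, random_state=rng)

# Fix the location at 0 so only shape and scale are estimated by ML.
shape_hat, loc_hat, scale_hat = genpareto.fit(sample, floc=0.0)
print(round(shape_hat, 2), round(scale_hat, 2))
```

With a censored or SOS likelihood, the closed-form convenience of `genpareto.fit` disappears, which is precisely why the paper needs existence/uniqueness conditions and a re-parametrization.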

Keywords: Bayesian estimation, generalized Pareto distribution, maximum likelihood estimation, sequential order statistics

Procedia PDF Downloads 495
8558 Morphological and Molecular Characterization of Accessions of Black Fonio Millet (Digitaria iburua Stapf) Grown in Selected Regions in Nigeria

Authors: Nwogiji Cletus Olando, Oselebe Happiness Ogba, Enoch Achigan-Dako

Abstract:

Digitaria iburua, commonly known as black fonio, is a cereal crop native to Africa and extensively cultivated by smallholder farmers in Northern Benin, Togo, and Nigeria. This crop holds immense nutritional and socio-cultural value. Unfortunately, limited knowledge about its genetic diversity exists due to a lack of scientific attention. As a result, its potential for improvement in food and agriculture remains largely untapped. To address this gap, a study was conducted using 41 accessions of D. iburua stored in the genebank of the Laboratory of Genetics, Biotechnology, and Seed Science at Abomey-Calavi University, Benin. The study employed both morphological and simple sequence repeat (SSR) markers to evaluate the genetic variability of the accessions. Agro-morphological assessments were carried out during the 2020 cropping season, utilizing an alpha lattice design with three replications. The collected data encompassed qualitative and quantitative traits. Additionally, molecular variability was assessed using eleven SSR markers. The results revealed significant phenotypic variability among the evaluated accessions, leading to their classification into three main clusters. Furthermore, the eleven SSR markers identified a total of 50 alleles, averaging 4.55 alleles per locus. The primers exhibited an average polymorphic information content value of 0.43, with the DE-ARC019 primer displaying the highest value (0.59). These findings suggest a substantial degree of genetic heterogeneity within the evaluated accessions, and the SSR markers employed in the study proved highly effective in detecting and characterizing this genetic variability. In conclusion, this study highlights the presence of significant genetic diversity in black fonio and provides valuable insights for future efforts aimed at its genetic improvement and conservation.
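The per-primer statistic reported above (e.g., 0.59 for DE-ARC019) is the polymorphic information content (PIC), computed from allele frequencies. A short sketch of the standard Botstein et al. formula, with a textbook check value, follows; the frequencies used are illustrative, not from the study.

```python
def pic(freqs):
    """Botstein et al. PIC: 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    assert abs(sum(freqs) - 1.0) < 1e-9, "allele frequencies must sum to 1"
    homozygosity = sum(p * p for p in freqs)
    pairwise = sum(2 * (freqs[i] ** 2) * (freqs[j] ** 2)
                   for i in range(len(freqs))
                   for j in range(i + 1, len(freqs)))
    return 1.0 - homozygosity - pairwise

# A biallelic marker with equal allele frequencies has the textbook PIC
# value of 0.375; more alleles at equal frequency push PIC higher.
print(pic([0.5, 0.5]))          # → 0.375
print(pic([0.25, 0.25, 0.25, 0.25]))
```

A PIC above 0.5 is conventionally read as "highly informative", which is the sense in which the abstract's 0.43 average and 0.59 maximum are interpreted.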

Keywords: genetic diversity, digitaria iburua, genetic improvement, simple sequence repeat markers, Nigeria, conservation

Procedia PDF Downloads 77
8557 Economic Impacts of Sanctuary and Immigration and Customs Enforcement Policies: Inclusive and Exclusive Institutions

Authors: Alexander David Natanson

Abstract:

This paper focuses on the effects of sanctuary and Immigration and Customs Enforcement (ICE) policies on local economies. "Sanctuary cities" are municipal jurisdictions that limit their cooperation with the federal government's efforts to enforce immigration law. Using county-level data from the American Community Survey and ICE data on economic indicators from 2006 to 2018, this study isolates the effects of local immigration policies on U.S. counties by simultaneously studying counties where immigrant families are persecuted via collaboration with ICE and counties that provide protections. The analysis includes difference-in-differences and two-way fixed effects models. Results are robust to nearest-neighbor matching, to random assignment of treatment, to estimations using different cutoffs for immigration policies, and to a regression discontinuity model comparing bordering counties with opposite policies. Results also hold after restricting the data to single-year policy adoption, using the Sun and Abraham estimator, and with event-study estimation to address the staggered-treatment issue. In addition, the study reverses the estimation to understand what drives the choice of policy and to detect reverse-causality bias in the estimated policy impacts. The evidence demonstrates that providing protections to undocumented immigrants increases economic activity. The estimates show gains in per capita income of 3.1 to 7.2 percent, in median wages of 1.7 to 2.6 percent, and in GDP of 2.4 to 4.1 percent. Regarding labor, sanctuary counties saw increases in total employment of 2.3 to 4 percent, while the unemployment rate declined by 12 to 17 percent. The data further show that ICE policies have no statistically significant effects on income, median wages, or GDP, but adverse effects on total employment, with declines of 1 to 2 percent, mostly in rural counties, and an increase in unemployment of around 7 percent in urban counties. In addition, the results show a decline in the foreign-born population in ICE counties but no change in sanctuary counties. The study finds similar results for sanctuary counties when disaggregating the data by urban and rural status, educational attainment, gender, ethnic group, economic quintile, and number of business establishments. The takeaway from this study is that institutional inclusion drives the dynamic nature of an economy: inclusion allows for economic expansion by extending fundamental freedoms to newcomers. Inclusive policies show positive effects on economic outcomes with no evident increase in population. To make sense of these results, the hypothesis and theoretical model propose that inclusive immigration policies condition the effect of immigration by decreasing the uncertainties and constraints on immigrants' interactions in their communities, reducing the costs imposed by fear of deportation or of constant criminalization, and allowing immigrants to optimize their human capital.
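The identification strategy rests on the difference-in-differences contrast. A minimal synthetic sketch of the basic 2x2 version is given below; the paper's actual specification adds two-way fixed effects, matching, and the event-study machinery, none of which is reproduced here, and the numbers are invented.

```python
import numpy as np

# Synthetic county-period data with a built-in policy effect of 3.0 on top
# of a fixed group gap and a common time trend. Illustrative only.
rng = np.random.default_rng(0)
n = 4000
treated = rng.integers(0, 2, n)        # sanctuary-policy county indicator
post = rng.integers(0, 2, n)           # post-adoption period indicator
effect = 3.0
y = (10.0 + 2.0 * treated              # fixed gap between county groups
     + 1.5 * post                      # common shock in the later period
     + effect * treated * post         # the policy effect DiD recovers
     + rng.normal(0, 1.0, n))

def did(y, treated, post):
    """(treated post - treated pre) - (control post - control pre)."""
    cell = lambda t, p: y[(treated == t) & (post == p)].mean()
    return (cell(1, 1) - cell(1, 0)) - (cell(0, 1) - cell(0, 0))

print(round(did(y, treated, post), 2))
```

The double difference nets out both the permanent group gap (2.0) and the common shock (1.5), leaving an estimate near the true 3.0; staggered adoption is exactly the setting where this simple contrast breaks down, motivating the Sun and Abraham estimator the paper uses.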

Keywords: inclusive and exclusive institutions, post matching, fixed effects, time trend, regression discontinuity, difference-in-differences, randomization inference, Sun and Abraham estimator

Procedia PDF Downloads 70
8556 Kinematic Analysis of the Calf Raise Test Using a Mobile iOS Application: Validation of the Calf Raise Application

Authors: Ma. Roxanne Fernandez, Josie Athens, Balsalobre-Fernandez, Masayoshi Kubo, Kim Hébert-Losier

Abstract:

Objectives: The calf raise test (CRT) is used in rehabilitation and sports medicine to evaluate calf muscle function. For testing, individuals stand on one leg and go up on their toes and back down to volitional fatigue. The newly developed Calf Raise application (CRapp) for iOS uses computer-vision algorithms enabling objective measurement of CRT outcomes. We aimed to validate the CRapp by examining its concurrent validity and agreement levels against laboratory-based equipment and establishing its intra- and inter-rater reliability. Methods: CRT outcomes (i.e., repetitions, positive work, total height, peak height, fatigue index, and peak power) were assessed in thirteen healthy individuals (6 males, 7 females) on three occasions and both legs using the CRapp, 3D motion capture, and force plate technologies simultaneously. Data were extracted from two markers: one placed immediately below the lateral malleolus and another on the heel. Concurrent validity and agreement measures were determined using intraclass correlation coefficients (ICC₃,ₖ), typical errors expressed as coefficients of variation (CV), and Bland-Altman methods to assess biases and precision. Reliability was assessed using ICC₃,₁ and CV values. Results: Validity of CRapp outcomes was good to excellent across measures for both markers (mean ICC ≥0.878), with precision plots showing good agreement and precision. CV ranged from 0% (repetitions) to 33.3% (fatigue index) and was, on average, better for the lateral malleolus marker. Additionally, inter- and intra-rater reliability were excellent (mean ICC ≥0.949, CV ≤5.6%). Conclusion: These results confirm the CRapp is valid and reliable within and between users for measuring CRT outcomes in healthy adults. The CRapp provides a tool to objectivise CRT outcomes in research and practice, aligning with recent advances in mobile technologies and their increased use in healthcare.
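The reliability figures above are intraclass correlation coefficients. A sketch of the single-measure consistency form, ICC(3,1), computed directly from its two-way ANOVA mean squares, is shown below; this is a generic implementation for illustration, not the authors' analysis code.

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed, single-measure, consistency definition.

    `ratings` is an (n subjects x k raters) matrix.
    ICC = (MS_rows - MS_error) / (MS_rows + (k - 1) * MS_error)
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()     # between subjects
    ss_cols = n * ((col_means - grand) ** 2).sum()     # between raters
    ss_total = ((ratings - grand) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols            # residual
    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Two raters who differ only by a constant offset are perfectly consistent,
# so the consistency ICC is exactly 1 despite the systematic bias.
a = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
data = np.column_stack([a, a + 2.0])
print(icc_3_1(data))  # → 1.0
```

That constant-offset property is why consistency ICCs are paired with Bland-Altman analysis, as in the abstract: the ICC alone cannot reveal a systematic bias between instruments.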

Keywords: calf raise test, mobile application, validity, reliability

Procedia PDF Downloads 151
8555 Decentralised Edge Authentication in the Industrial Enterprise IoT Space

Authors: C. P. Autry, A.W. Roscoe

Abstract:

Authentication protocols based on public key infrastructure (PKI) and trusted third parties (TTP) are no longer adequate for industrial-scale IoT networks due to issues such as low compute and power availability, the use of widely distributed and commercial off-the-shelf (COTS) systems, and the increasingly sophisticated attackers and attacks we now have to counter. For example, there is increasing concern about nation-state interference and future quantum computing capability. We have examined this space from first principles and have developed several approaches to group and point-to-point authentication for IoT that do not depend on a centralised client-server model. We emphasise the use of quantum-resistant primitives, such as strong cryptographic hashing, and the use of multi-factor authentication.

Keywords: authentication, enterprise IoT cybersecurity, PKI/TTP, IoT space

Procedia PDF Downloads 155
8554 Maternal Perception of Using Epidural Anesthesia and the Childbirth Outcomes

Authors: Jiyoung Kim, Chae Weon Chung

Abstract:

Labor pain is one of the most common concerns of pregnant women; thus, women need to know the options available to control it. This study aimed to explore maternal perceptions of epidural anesthesia and to compare childbirth outcomes according to its use. For this descriptive study, women over 36 weeks of pregnancy were recruited from an outpatient obstetric clinic in a public hospital in Seoul. Women were included if they agreed to participate, had a singleton pregnancy without complications, and were expecting a natural birth. Data were collected twice: at a prenatal care visit and in the inpatient ward on the 2nd day postpartum. The questionnaire incorporated an instrument on beliefs about epidural anesthesia, one item asking about the intention to use epidural anesthesia, and demographic and obstetrical characteristics. One nurse researcher performed data collection with the structured questionnaire after approval by the institutional review board. At the initial data collection, 133 women were included, while 117 were retained at the second time point after excluding 13 women due to the occurrence of complications. Analyses were done with chi-square tests, t-tests, and ANOVA using the SPSS program. The women averaged 32.5 years of age, and 22.2% were over 35 years old. The average gestational age was 38.5 weeks, and 67.5% were nulliparous. Of the 38 multiparous women, 20 (52.6%) had received epidural anesthesia in a previous delivery. At the initial interview, 62.6% (n=73) of women wanted to receive epidural anesthesia, while 22.4% were undecided and 15.4% did not want the procedure. However, intentions differed from the procedures actually performed; notably, two-thirds of the undecided women (n=26) received epidural anesthesia during labor. There was a significant difference in the perception of epidural anesthesia, measured before delivery, between women who did and did not receive it (t=3.68, p < .001). Delivery outcomes differed significantly between the two groups in delivery mode (chi-square=8.64, p=.01), O₂ supply during labor (chi-square=5.01, p=.03), duration of the 2nd stage of labor (t=3.70, p < .001), and arterial cord blood pH (t=2.64, p=.01). Interestingly, there was no difference in perceived labor pain between women with and without epidural anesthesia. Considering the preference for and use of epidural anesthesia, health professionals need to assess the coping ability of women undergoing delivery and provide accurate information about pain control to support their decision-making and ultimately enhance delivery outcomes for mothers and neonates.

Keywords: epidural anesthesia, delivery outcomes, labor pain, perception

Procedia PDF Downloads 145
8553 Adaptive Approach Towards Comprehensive Urban Development Simulation in Coastal Regions: Case Study of New Alamein City, Egypt

Authors: Nada Mohamed, Abdel Aziz Mohamed

Abstract:

Climate change in coastal areas is a global issue that is felt at the local scale and will persist for decades and centuries to come; it also poses critical risks to a city's economy, communities, and natural environment. One of the changes posing a huge risk to coastal cities is sea level rise (SLR), a result of scarcity and degradation in the global environmental system. The main drivers of climate change and global warming are countries with a high human development index (HDI), such as Japan and Germany, while medium- and low-HDI countries such as Egypt lack the awareness and advanced tactics needed to adapt to changes that destroy urban areas and cause losses of land and economic activity. This is why climate resilience is among the UN Sustainable Development Goals for 2030, which call for actions to strengthen climate change resilience through mitigation and adaptation. For many reasons, adaptation has received less attention than mitigation, and only recently has it become a global focal point. Adaptation can be achieved through actions such as upgrading the use and design of land, adjusting people's businesses and activities, and increasing community understanding of climate risks. To reach these adaptation goals, a strategic pathway to climate resilience must be applied: the urban bioregionalism paradigm. Resiliency has been framed as persistence, adaptation, and transformation. A climate resilience decision support system includes a visualization platform where ecological, social, and economic information can be viewed alongside specific geographies; urban bioregionalism is accordingly a socio-ecological paradigm with the potential to move social attitudes toward environmental understanding and to deepen human-environment connections within ecological development. The research aim is to achieve an adaptive, integrated urban development model through the analysis of tactics and strategies that can be used to adapt urban areas and coastal communities to the challenges of climate change, especially SLR, together with a simulation model, built with advanced software for a coastal city corridor, that elaborates the most suitable strategy to apply.

Keywords: climate resilience, sea level rise, SLR, coastal resilience, adaptive development simulation

Procedia PDF Downloads 125
8552 Choosing the Green Energy Option: A Willingness to Pay Study of Metro Manila Residents for Solar Renewable Energy

Authors: Paolo Magnata

Abstract:

The energy market in the Philippines has among the highest electricity rates in the region, averaging US$0.16/kWh (PHP6.89/kWh) excluding VAT, against an overall regional market average of US$0.13/kWh. The movement towards renewable energy, specifically solar energy, will be an expensive one, with the country’s energy sector providing feed-in tariff rates as high as US$0.17/kWh (PHP8.69/kWh) for solar energy power plants. Increasing the share of renewables under the current regulatory framework would yield a three-fold increase in residential electricity bills. The issue lies in the uniform charge that consumers bear regardless of where the electricity is sourced, resulting in rates that reflect only costs and not consumers. But if consumers are given the option to choose where their electricity comes from, a number of them may choose economically costlier sources of electricity because of the higher utility they derive from, and their willingness to pay for, environmentally friendly sourced electricity. A contingent valuation survey was conducted on a sample representative of Metro Manila to elicit willingness to pay for solar energy, and Single Bounded and Double Bounded Dichotomous Choice analyses were used to estimate the amount respondents were willing to pay. The results showed that Metro Manila residents are willing to pay a premium on top of their current electricity bill of US$5.71 (PHP268.42) to US$9.26 (PHP435.37) per month, approximately 0.97%-1.29% of monthly household income. It was also found that, besides higher household income, a higher level of self-perceived knowledge of environmental issues significantly increased the likelihood of a consumer paying the premium. 
Shifting towards renewable energy is expensive not only for the government, because of high capital investment, but also for consumers; however, the Green Energy Option (a policy mechanism that gives consumers the option to decide where their electricity comes from) can potentially balance the shift of the economic burden by moving from a uniform electricity rate to charging consumers equitably based on their willingness to pay for renewably sourced energy.
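The dichotomous-choice responses described above can also be summarized nonparametrically. As a minimal sketch, the Turnbull lower-bound estimator computes a conservative mean willingness to pay from bid levels and acceptance shares; the bids and "yes" shares below are illustrative assumptions, not the study's actual survey data, and the study itself used parametric single- and double-bounded models.

```python
# Nonparametric (Turnbull-style) lower bound on mean willingness to pay from
# single-bounded dichotomous-choice responses. Bids and acceptance shares are
# hypothetical, for illustration only.

def turnbull_lower_bound(bids, yes_shares):
    """Sum of bid * probability mass that WTP falls in [bid, next bid),
    using the drop in acceptance between consecutive bids."""
    survival = [1.0] + list(yes_shares)   # share willing to pay at least each bid
    points = [0.0] + list(bids)
    wtp = 0.0
    for i in range(len(bids)):
        mass = survival[i] - survival[i + 1]  # prob. WTP lies between two bids
        wtp += points[i] * mass               # lower bound: value at lower bid
    wtp += points[-1] * survival[-1]          # mass above the highest bid
    return wtp

# Hypothetical monthly premium bids (USD) and acceptance rates
bids = [2.0, 5.0, 8.0, 12.0]
yes_shares = [0.85, 0.60, 0.35, 0.15]
mean_wtp = turnbull_lower_bound(bids, yes_shares)
```

With these toy numbers the lower bound comes out near the low end of the premium range reported above, which is the expected behavior of a conservative estimator.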

Keywords: contingent valuation, dichotomous choice, Philippines, solar energy

Procedia PDF Downloads 322
8551 Estimation of Maximum Earthquake for Gujarat Region, India

Authors: Ashutosh Saxena, Kumar Pallav, Ramji Dwivedi

Abstract:

The present study estimates the seismicity parameter 'b' and the maximum possible earthquake magnitude (Mmax) for the Gujarat region, treated as a combined seismic source regime, with three well-established methods: the Kijko parametric model (KP), Kijko-Sellevol-Bayern (KSB), and the Tapered Gutenberg-Richter (TGR) model. The earthquake catalogue covers the period 1330 to 2013 in the region between latitudes 20°N and 25°N and longitudes 68°E and 75°E, for moment magnitude (Mw) ≥ 4.0. The 'a' and 'b' values estimated for the region are 4.68 and 0.58, respectively. Further, Mmax is estimated as 8.54 (±0.29), 8.69 (±0.48), and 8.12 with KP, KSB, and TGR, respectively.
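The 'b' value underlying all three methods is commonly obtained with Aki's maximum-likelihood estimator. A minimal sketch, using synthetic magnitudes rather than the study's actual catalogue:

```python
import math

# Aki (1965) maximum-likelihood Gutenberg-Richter b-value:
#   b = log10(e) / (mean(M) - Mc), with Mc the completeness magnitude.
# The magnitudes below are synthetic; the study's 1330-2013, Mw >= 4.0
# catalogue is not reproduced here.

def aki_b_value(magnitudes, m_c):
    mean_m = sum(magnitudes) / len(magnitudes)
    return math.log10(math.e) / (mean_m - m_c)

def gr_a_value(n_events, b, m_c):
    # From the Gutenberg-Richter law: log10 N(>=Mc) = a - b * Mc
    return math.log10(n_events) + b * m_c

mags = [4.0, 4.2, 4.1, 4.5, 4.8, 5.1, 4.3, 4.6, 5.5, 6.2]
b = aki_b_value(mags, m_c=4.0)
a = gr_a_value(len(mags), b, m_c=4.0)
```

For this toy catalogue b comes out near 0.6, the same order as the 0.58 reported for Gujarat; a real application would also correct for magnitude binning and catalogue completeness.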

Keywords: Mmax, seismicity parameter, Gujarat, Tapered Gutenberg-Richter

Procedia PDF Downloads 527
8550 D-Epi App: Mobile Application to Control Sodium Valproate Administration in Children with Idiopathic Epilepsy in Indonesia

Authors: Nyimas Annissa Mutiara Andini

Abstract:

About 325,000 children younger than 15 in the U.S. have epilepsy. In Indonesia, 40% of 3.5 million epilepsy cases occur in children. The most common type of epilepsy, which affects 6 out of 10 people with the disorder, is idiopathic epilepsy, which has no identifiable cause. One of the medications most commonly used to treat this childhood epilepsy is sodium valproate. Its administration in children, however, is prone to failure. Nearly 60% of pediatric patients were mildly, moderately, or severely non-adherent with therapy during the first six months of treatment. Many parents or caregivers gave far less medication than prescribed, and the treatment-adherence pattern for the majority of patients was established during the first month of treatment. 42% of patients were almost always given their medications as prescribed, but 13% had very poor adherence even in the early weeks and months of treatment. About 7% of patients were initially given the medication correctly 90% of the time, but adherence dropped to around 20% within six months of starting treatment. Over the six months of observation, missed administrations totaled about four out of 14 doses in any given week. Such failures can cause the epilepsy to relapse. Children with currently reported epilepsy were significantly more likely than those never diagnosed to experience depression (8% vs 2%), anxiety (17% vs 3%), attention-deficit/hyperactivity disorder (23% vs 6%), developmental delay (51% vs 3%), autism/autism spectrum disorder (16% vs 1%), and headaches (14% vs 5%) (all P < 0.05). They had a greater risk of limitation in the ability to do things (relative risk: 9.22; 95% CI: 7.56-11.24), repeating a school grade (relative risk: 2.59; CI: 1.52-4.40), and potentially having unmet medical and mental health needs. On the other hand, technology can make our lives easier. One such technology is the mobile application. 
A mobile app is a software program we can download and access directly on our phones. Indonesians are highly mobile-centric: they use, on average, 6.7 applications over a 30-day period. This paper describes an application that could help control sodium valproate administration in children, which we call the D-Epi app. D-Epi is a downloadable application that alerts parents or caregivers with a timer to warn when it is time to administer sodium valproate. It works not only as a standard alarm but also provides important information about the drug and emergency measures to take for children with epilepsy. This application could help parents and caregivers care for a child with epilepsy in Indonesia.
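The timer logic such an app could use can be sketched briefly: given a first dose time and a dosing interval, generate the day's alarm times and find the next one. Function names and the twice-daily interval are assumptions for illustration, not D-Epi's actual design.

```python
from datetime import datetime, timedelta

# Minimal sketch of a dose-reminder scheduler (hypothetical, not D-Epi's API).

def dose_schedule(first_dose, interval_hours, doses_per_day):
    """Generate the day's dose times from the first dose and interval."""
    return [first_dose + timedelta(hours=interval_hours * i)
            for i in range(doses_per_day)]

def next_dose(schedule, now):
    """Return the next upcoming dose time, or None if the day's doses are done."""
    upcoming = [t for t in schedule if t > now]
    return min(upcoming) if upcoming else None

start = datetime(2024, 1, 1, 8, 0)      # first dose at 08:00
plan = dose_schedule(start, 12, 2)      # a twice-daily regimen, for illustration
assert next_dose(plan, datetime(2024, 1, 1, 9, 0)) == datetime(2024, 1, 1, 20, 0)
```

A production app would persist the schedule and fire platform notifications; the sketch only shows the scheduling step.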

Keywords: application, children, D-Epi, epilepsy

Procedia PDF Downloads 266
8549 Development of an Integrated System for the Treatment of Rural Domestic Wastewater: Emphasis on Nutrient Removal

Authors: Prangya Ranjan Rout, Puspendu Bhunia, Rajesh Roshan Dash

Abstract:

In a developing country like India, providing reliable and affordable wastewater treatment facilities in rural areas is a huge challenge. With the aim of enhancing nutrient removal from rural domestic wastewater while reducing the cost of the treatment process, a novel integrated treatment system consisting of a multistage bio-filter with drop aeration and a post-positioned attached-growth carbonaceous denitrifying bioreactor was designed and developed in this work. The bio-filter was packed with 'dolochar', a sponge iron industry waste, as an adsorbent, mainly for phosphate removal through a physicochemical approach. The denitrifying bioreactor was packed with waste organic solid substances (WOSS) as carbon sources and substrates for biomass attachment, mainly to remove nitrate through biological denitrification. The performance of the modular system, treating real domestic wastewater, was monitored for a period of about 60 days, and the average removal efficiencies during the period were as follows: phosphate, 97.37%; nitrate, 85.91%; ammonia, 87.85%, with mean final effluent concentrations of 0.73, 9.86, and 9.46 mg/L, respectively. The multistage bio-filter played an important role in ammonium oxidation and phosphate adsorption. The multilevel drop aeration with increasing oxygenation, and the special media used, consisting of certain oxides, were likely beneficial for nitrification and phosphorus removal, respectively, whereas nitrate was effectively reduced by biological denitrification in the carbonaceous bioreactor. This treatment system would allow multipurpose reuse of the final effluent. Moreover, the saturated dolochar can be used as a nutrient supplier in agricultural practices, and the partially degraded carbonaceous substances can be subjected to composting and subsequently used as an organic fertilizer. 
Thus, the system displays immense potential for treating domestic wastewater significantly decreasing the concentrations of nutrients and more importantly, facilitating the conversion of the waste materials into usable ones.
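The removal efficiencies and effluent concentrations reported above are linked by a simple relation, eff = (C_in - C_out) / C_in. As a quick consistency sketch, the influent concentrations below are back-calculated from the reported numbers and should be treated as approximate:

```python
# Removal efficiency as a percentage from influent/effluent concentrations.
# Influent values are implied by the abstract's reported figures, not measured
# values from the paper.

def removal_efficiency(c_in, c_out):
    return (c_in - c_out) / c_in * 100.0

# Effluent concentration (mg/L) and reported efficiency (%) per nutrient
reported = {"phosphate": (0.73, 97.37), "nitrate": (9.86, 85.91),
            "ammonia": (9.46, 87.85)}
for nutrient, (c_out, eff) in reported.items():
    c_in = c_out / (1 - eff / 100.0)     # implied influent concentration
    assert abs(removal_efficiency(c_in, c_out) - eff) < 1e-9
```

For phosphate, for instance, the implied influent concentration is roughly 27.8 mg/L.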

Keywords: nutrient removal, denitrifying bioreactor, multi-stage bio-filter, dolochar, waste organic solid substances

Procedia PDF Downloads 372
8548 Assessment Literacy Levels of Mathematics Teachers to Implement Classroom Assessment in Ghanaian High Schools

Authors: Peter Akayuure

Abstract:

One key determinant of the quality of mathematics learning is the teacher’s ability to assess students adequately and effectively and to make assessment an integral part of instructional practice. If the mathematics teacher lacks the required literacy to perform classroom assessment roles, the true trajectory of learning success and attainment of curriculum expectations might be indeterminate. It is therefore important that educators and policymakers understand and seek ways to improve the literacy level of mathematics teachers to implement classroom assessments that meet curriculum demands. This study employed a descriptive survey design to explore the perceived assessment literacy levels of mathematics teachers implementing classroom assessment within the school-based assessment framework in Ghana. A 25-item classroom assessment inventory on teachers’ assessment scenarios was adopted, modified, and administered to a purposive sample of 48 mathematics teachers from eleven senior high schools. Seven further items were included to collect data on their self-efficacy towards assessment literacy. Data were analyzed using descriptive and bivariate correlation statistics. The results show that, on average, 48.6% of the mathematics teachers attained standard levels of assessment literacy. Specifically, 50.0% met standard one in choosing appropriate assessment methods, 68.3% reached standard two in developing appropriate assessment tasks, 36.6% reached standard three in administering, scoring, and interpreting assessment results, 58.3% reached standard four in making appropriate assessment decisions, 41.7% reached standard five in developing valid grading procedures, 45.8% reached standard six in communicating assessment results, and 36.2% reached standard seven in identifying unethical, illegal, and inappropriate use of assessment results. 
Participants rated their self-efficacy in performing assessments as high, yet the relationships between participants’ assessment literacy scores and self-efficacy scores were weak and statistically insignificant. The study recommends that institutions training mathematics teachers or providing professional development should accentuate assessment literacy development to ensure standard assessment practices and quality instruction in mathematics education at senior high schools.
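The bivariate correlation statistic behind the finding above is Pearson's r. A minimal sketch with fabricated score vectors (the paper's raw data are not reproduced) illustrating how uniformly high self-ratings can yield a weak correlation with literacy scores:

```python
import math

# Pearson correlation between two score vectors. The data are fabricated for
# illustration only.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

literacy = [12, 15, 9, 18, 14, 11, 16, 10]   # hypothetical literacy scores
efficacy = [30, 28, 31, 29, 32, 27, 30, 29]  # uniformly high self-ratings
r = pearson_r(literacy, efficacy)            # weak correlation expected
```

When one variable has little spread, as with these self-ratings, r is pulled toward zero regardless of the other variable.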

Keywords: assessment literacy, mathematics teacher, senior high schools, Ghana

Procedia PDF Downloads 123
8547 Simulation of Focusing of Diamagnetic Particles in Ferrofluid Microflows with a Single Set of Overhead Permanent Magnets

Authors: Shuang Chen, Zongqian Shi, Jiajia Sun, Mingjia Li

Abstract:

Microfluidics is a technology in which small amounts of fluids are manipulated using channels with dimensions of tens to hundreds of micrometers. At present, this technology is required for applications in several fields, including disease diagnostics, genetic engineering, and environmental monitoring. Among these, the manipulation of microparticles and cells in microfluidic devices, especially their separation, has attracted broad attention. In a magnetic field, separation methods include positive and negative magnetophoresis. By comparison, negative magnetophoresis is a label-free technology with many advantages, e.g., easy operation, low cost, and simple design. Before particles or cells are separated, focusing them into a single tight stream is usually a necessary upstream operation. In this work, the focusing of diamagnetic particles in ferrofluid microflows with a single set of overhead permanent magnets is investigated numerically. The geometric model of the simulation is based on the configuration of previous experiments. The straight microchannel is 24 mm long and has a rectangular cross-section 100 μm wide and 50 μm deep. Spherical diamagnetic particles 10 μm in diameter are suspended in the ferrofluid. The initial concentration of the ferrofluid c₀ is 0.096%, and the flow rate of the ferrofluid is 1.8 mL/h. The magnetic field is induced by five identical rectangular neodymium-iron-boron permanent magnets (1/8 × 1/8 × 1/8 in.) and is calculated by the equivalent charge source (ECS) method. The flow of the ferrofluid is governed by the Navier-Stokes equations. The trajectories of particles are solved by the discrete phase model (DPM) in the ANSYS FLUENT program. The positions of diamagnetic particles are recorded by transient simulation. Our simulation shows results consistent with the mentioned experiments: diamagnetic particles are gradually focused in the ferrofluid under the magnetic field. 
In addition, diamagnetic particle focusing is studied by varying the flow rate of the ferrofluid. In agreement with the experiment, focusing improves as the flow rate increases. Furthermore, focusing is found to be affected by other factors, e.g., the width and depth of the microchannel, the concentration of the ferrofluid, and the diameter of the diamagnetic particles.
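The driving mechanism can be sketched with the standard dipole approximation for the magnetophoretic force. A minimal 1-D version, F = μ₀V(χp − χf)H·dH/dx, with parameter values that are illustrative assumptions (the study's ECS field computation and DPM tracking are not reproduced):

```python
import math

# Simplified 1-D magnetophoretic force on a small spherical particle,
# F = mu0 * V * (chi_p - chi_f) * H * dH/dx (dipole approximation, weak
# susceptibilities). All parameter values are illustrative.

MU0 = 4 * math.pi * 1e-7                 # vacuum permeability (T*m/A)

def magnetophoretic_force(radius, chi_p, chi_f, H, dH_dx):
    volume = 4.0 / 3.0 * math.pi * radius ** 3
    return MU0 * volume * (chi_p - chi_f) * H * dH_dx

# 10-um-diameter diamagnetic particle (chi ~ -1e-5) in a dilute ferrofluid
f = magnetophoretic_force(radius=5e-6, chi_p=-1e-5, chi_f=5e-4,
                          H=1e5, dH_dx=1e7)
# f < 0: the particle is pushed toward the weaker field (negative
# magnetophoresis), which is what focuses it away from the magnets.
```

The sign of χp − χf is the essential point: because the diamagnetic particle is less magnetizable than the surrounding ferrofluid, the net force points down the field gradient.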

Keywords: diamagnetic particle, focusing, microfluidics, permanent magnet

Procedia PDF Downloads 120
8546 Four-Electron Auger Process for Hollow Ions

Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola

Abstract:

A time-dependent close-coupling method is developed to calculate total, double, and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near-K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrodinger equation in imaginary time using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential due to the large memory required to store the coupled wave functions and the long run times needed to reach convergence of the ionization process. Total, double, and triple autoionization rates are obtained by propagating the time-dependent close-coupled equations in real time, using integration over bound and continuum single-particle states. These states are generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration obtained using configuration-average distorted-wave theory. As expected, we find the double and triple autoionization rates to be much smaller than the total autoionization rates. Future work can extend this approach to study electron-impact triple ionization of atoms or ions. The work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA.

Keywords: hollow atoms, autoionization, auger rates, time-dependent close-coupling method

Procedia PDF Downloads 144
8545 Value Co-Creation in Used-Car Auctions: A Service Scientific Perspective

Authors: Safdar Muhammad Usman, Youji Kohda, Katsuhiro Umemoto

Abstract:

The electronic marketplace plays an important intermediary role in connecting dealers and retail customers. The main aim of this paper is to design a value co-creation model for used-car auctions. More specifically, the study describes the process of value co-creation in used-car auctions, explores the values co-created there, and concludes by indicating future research directions. Our analysis shows that both economic and non-economic values are co-created in used-car auctions. In addition, this paper contributes to the academic community by broadening the view of value co-creation in service science.

Keywords: value co-creation, used-car auctions, non-financial values, service science

Procedia PDF Downloads 344
8544 Identification of New Familial Breast Cancer Susceptibility Genes: Are We There Yet?

Authors: Ian Campbell, Gillian Mitchell, Paul James, Na Li, Ella Thompson

Abstract:

The genetic cause of the majority of multiple-case breast cancer families remains unresolved. Next-generation sequencing has emerged as an efficient strategy for identifying predisposing mutations in individuals with inherited cancer. We are conducting whole-exome sequence analysis of germline DNA from multiple affected relatives from breast cancer families, with the aim of identifying rare protein-truncating and non-synonymous variants that are likely to include novel cancer predisposing mutations. Data from more than 200 exomes show that on average each individual carries 30-50 protein-truncating mutations and 300-400 rare non-synonymous variants. Heterogeneity among our exome data strongly suggests that numerous moderate-penetrance genes remain to be discovered, with each gene individually accounting for only a small fraction of families (~0.5%). This scenario makes validation of candidate breast cancer predisposing genes in large case-control studies the rate-limiting step in resolving the missing heritability of breast cancer. The aim of this study is to screen genes that are recurrently mutated in our exome data in a larger cohort of cases and controls to assess the prevalence of inactivating mutations that may be associated with breast cancer risk. We are using the Agilent HaloPlex Target Enrichment System to screen the coding regions of 168 genes in 1,000 BRCA1/2 mutation-negative familial breast cancer cases and 1,000 cancer-naive controls. To date, our interim analysis has identified 21 genes that carry an excess of truncating mutations in multiple breast cancer families versus controls. The established breast cancer susceptibility gene PALB2 is the most frequently mutated (13/998 cases versus 0/1009 controls), but other interesting candidates include NPSR1, GSN, POLD2, and TOX3. These and other genes are being validated in a second cohort of 1,000 cases and controls. 
Our experience demonstrates that, beyond PALB2, the prevalence of mutations in the remaining breast cancer predisposition genes is likely to be very low, making definitive validation exceptionally challenging.
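The burden comparison behind the PALB2 figure (13 carriers among 998 cases vs 0 among 1009 controls) is a textbook use of a one-sided Fisher's exact test. A minimal hand-rolled sketch; a real validation study would typically use an established statistics package:

```python
from fractions import Fraction
from math import comb

# One-sided Fisher's exact test for an excess of mutation carriers in cases.
# Exact arithmetic via Fraction avoids overflow with the huge binomials.

def fisher_one_sided(carriers_cases, cases, carriers_controls, controls):
    """P(X >= observed carriers among cases) under the hypergeometric null."""
    k_total = carriers_cases + carriers_controls   # total carriers
    n_total = cases + controls
    p = Fraction(0)
    for k in range(carriers_cases, k_total + 1):
        p += Fraction(comb(k_total, k) * comb(n_total - k_total, cases - k),
                      comb(n_total, cases))
    return float(p)

# PALB2 counts as reported in the abstract
p = fisher_one_sided(13, 998, 0, 1009)
# p is on the order of 1e-4: all 13 carriers falling in the case arm is very
# unlikely under the null of no association.
```

For rarer candidate genes with only one or two carriers, the same test illustrates why cohorts of this size are barely powered, which is the validation challenge noted above.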

Keywords: predisposition, familial, exome sequencing, breast cancer

Procedia PDF Downloads 481
8543 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g., by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and a set of spontaneous source models was then generated over a large magnitude range (Mw > 7.0). To validate the rupture models, we compare the scaling relations of the modeled rupture area S, average slip Dave, and slip asperity area Sa versus seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters that are critical for ground motion simulations, i.e., the distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on the outer edge of large slip areas, (2) ruptures tend to initiate in small-Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise time.

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 137
8542 Predictive Analytics for Theory Building

Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim

Abstract:

Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) for a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big-data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. 
For a demonstration, a total of 13,254 metabolic syndrome training records were loaded into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, for example, variables associated with sociodemographics, habits, and activities. Some, such as cancer examination, house type, and vaccination, were intentionally included to gain predictive analytics insights on variable selection. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules are then validated with an external testing dataset of 4,090 observations. The results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. In contrast, a set of rules (many estimated equations from a statistical perspective) may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the hypotheses generated are tested statistically on several thousand observations, most variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods that utilize a part of the observations, such as bootstrap resampling with an appropriate sample size.
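The surprise metric described above, observed co-occurrence frequency versus the frequency expected under independence, can be sketched in a few lines. The toy baskets are illustrative, not the study's data:

```python
from collections import Counter
from itertools import combinations

# Co-occurrence "surprise": observed pair count divided by the count expected
# if the two items occurred independently across baskets.

def pair_surprise(baskets):
    n = len(baskets)
    item_counts = Counter(i for b in baskets for i in set(b))
    pair_counts = Counter(frozenset(p) for b in baskets
                          for p in combinations(sorted(set(b)), 2))
    surprise = {}
    for pair, observed in pair_counts.items():
        a, b = tuple(pair)
        expected = item_counts[a] * item_counts[b] / n   # independence baseline
        surprise[tuple(sorted(pair))] = observed / expected
    return surprise

baskets = [{"obesity", "hypertension"}, {"obesity", "hypertension"},
           {"obesity"}, {"vaccination"}]
s = pair_surprise(baskets)
# obesity & hypertension co-occur more often than independence predicts
assert s[("hypertension", "obesity")] > 1.0
```

Ranking clusters by this ratio surfaces item groups whose co-occurrence is unlikely to be coincidental, which is what makes them candidate hypotheses.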

Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building

Procedia PDF Downloads 261
8541 Prediction of Alzheimer's Disease Based on Blood Biomarkers and Machine Learning Algorithms

Authors: Man-Yun Liu, Emily Chia-Yu Su

Abstract:

Alzheimer's disease (AD) is a public health crisis of the 21st century. AD is a degenerative brain disease and the most common cause of dementia, a costly burden on the healthcare system. Unfortunately, the cause of AD is poorly understood; furthermore, current treatments can only alleviate symptoms rather than cure or stop the progress of the disease. There are several ways to diagnose AD, including medical imaging, which can distinguish AD from other dementias and identify early-onset AD, and cerebrospinal fluid (CSF) analysis. Compared with other diagnostic tools, a blood (plasma) test has advantages as an approach to population-based disease screening because it is simpler, less invasive, and cost-effective. In our study, we used the blood biomarker dataset of the Alzheimer's Disease Neuroimaging Initiative (ADNI), funded by the National Institutes of Health (NIH), for data analysis and to develop a prediction model. We used independent analyses of the datasets to identify plasma protein biomarkers predicting early-onset AD. First, to compare basic demographic statistics between the cohorts, we used SAS Enterprise Guide for data preprocessing and statistical analysis. Second, we used logistic regression, neural networks, and decision trees to validate biomarkers with SAS Enterprise Miner. The data generated from ADNI contain 146 blood biomarkers from 566 participants, including cognitively normal (healthy) participants, participants with mild cognitive impairment (MCI), and patients with Alzheimer's disease (AD). Participants' samples were separated into two groups, healthy vs. MCI and healthy vs. AD, which we used to compare important biomarkers of AD and MCI. In preprocessing, we used a t-test to filter 41/47 features between the two groups (healthy vs. AD, healthy vs. MCI) before applying machine learning algorithms. We then built models with four machine learning methods; the best AUCs for the two groups were 0.991 and 0.709, respectively. 
We stress that a simple, less invasive, common blood (plasma) test may also enable early diagnosis of AD. In our opinion, the results provide evidence that blood-based biomarkers might be an alternative diagnostic tool before further examination with CSF and medical imaging. A comprehensive study of the differences in blood-based biomarkers between AD patients and healthy subjects is warranted. Early detection of AD progression will allow physicians the opportunity for early intervention and treatment.
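The AUC values quoted above have a simple rank-based interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (ties counted as one half). A minimal sketch with toy scores, not ADNI data:

```python
# Rank-based AUC: fraction of positive/negative pairs where the positive
# receives the higher model score. Labels and scores are toy values.

def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]   # hypothetical predicted AD probabilities
score = auc(labels, scores)
```

An AUC of 0.991, as for the healthy-vs-AD group, means almost every AD participant out-scores almost every healthy one; 0.709 for healthy-vs-MCI reflects the harder discrimination problem.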

Keywords: Alzheimer's disease, blood-based biomarkers, diagnostics, early detection, machine learning

Procedia PDF Downloads 312
8540 A Relational Approach to Adverb Use in Interactions

Authors: Guillaume P. Fernandez

Abstract:

Individual language use is a matter of choice in particular interactions. This paper proposes a conceptual and theoretical framework, with methodological considerations, for situating language produced in dyadic relations within the larger social configuration in which the interaction is embedded. An integrated and comprehensive view is taken: social interactions are expected to be ruled by a normative context defined by the chain of interdependences that structures the personal network. In this approach, the determinants of discursive practices are not constrained only by the moment of production, isolated from broader influences. Instead, the positions the individual and the dyad occupy in the personal network influence discursive practices in a twofold manner: on the one hand, the network limits access to the linguistic resources available within it; on the other hand, the structure of the network influences the agency of the individual through the social control inherent in particular network characteristics. Concretely, we investigate how consistent ego is, and to what extent, from one interaction to another in his or her use of adverbs. To do so, social network analysis (SNA) methods are mobilized. Participants (N=130) are college students recruited in the French-speaking part of Switzerland. The personal network of significant others of each individual is created using name generators and edge interpreters, with a focus on social support and conflict. For the linguistic part, respondents were asked to record themselves with five of their close relations. From the recordings, we computed an average similarity score based on the adverbs used across interactions. 
Two analyses are envisaged. First, OLS regressions including network-level measures, such as density and reciprocity, and individual-level measures, such as centralities, are performed to understand the tenets of linguistic similarity from one interaction to another. The second analysis considers each social tie as nested within ego networks. Multilevel models are performed to investigate how different types of ties may influence the likelihood of using adverbs, controlling for structural properties of the personal network. Preliminary results suggest that the more cohesive the network, the less likely the individual is to change his or her manner of speaking, and that social support increases the use of adverbs in interactions. While promising results emerge, further research should consider a longitudinal approach to enable claims of causality.
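The across-interaction similarity score can be sketched concretely. Since the abstract does not specify the measure, the average pairwise Jaccard overlap between the adverb sets ego uses with each interlocutor is one plausible choice, shown here with toy data:

```python
from itertools import combinations

# Average pairwise Jaccard similarity of the adverb sets ego uses across
# interactions. The measure and the toy adverb sets are assumptions for
# illustration, not the study's actual metric or data.

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def mean_similarity(interactions):
    pairs = list(combinations(interactions, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Adverbs ego used in recordings with three different close ties (toy data)
interactions = [{"vraiment", "toujours", "souvent"},
                {"vraiment", "souvent", "peut-etre"},
                {"vraiment", "toujours"}]
score = mean_similarity(interactions)   # 1.0 would mean fully consistent usage
```

A score near 1 would indicate a speaker whose adverb repertoire is stable across ties; lower scores indicate accommodation to particular interlocutors.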

Keywords: personal network, adverbs, interactions, social influence

Procedia PDF Downloads 47
8539 Long-Term Subcentimeter-Accuracy Landslide Monitoring Using a Cost-Effective Global Navigation Satellite System Rover Network: Case Study

Authors: Vincent Schlageter, Maroua Mestiri, Florian Denzinger, Hugo Raetzo, Michel Demierre

Abstract:

Precise landslide monitoring with differential global navigation satellite systems (GNSS) is well established, but technical or economic reasons limit its application by geotechnical companies. This study demonstrates the reliability and usefulness of Geomon (Infrasurvey Sàrl, Switzerland), a stand-alone and cost-effective rover network. The system permits deploying up to 15 rovers plus one reference station for differential GNSS. A dedicated radio link connects all modules to a base station, where an embedded computer automatically computes all relative positions (L1 phase, open-source RTKLIB software) and populates an Internet server. Each measurement also contains information from an internal inclinometer, the battery level, and position quality indices. Contrary to standard GNSS survey systems, which suffer from a limited number of beacons that must be placed in areas with good GSM signal, Geomon offers greater flexibility and permits a real overview of the whole landslide with good spatial resolution. Each module is powered by solar panels, ensuring autonomous long-term recording. In this study, we tested the system on several sites in the Swiss mountains, with up to 7 rovers per site, for an 18-month-long survey. The aim was to assess the robustness and accuracy of the system in different environmental conditions. In one case, we ran forced blind tests (vertical movements of a given amplitude) and compared various session parameters (durations from 10 to 90 minutes). The other cases were surveys of real landslide sites using fixed, optimized parameters. Sub-centimeter accuracy with few outliers was obtained using the best parameters (session duration of 60 minutes, baseline of 1 km or less), with the noise level on the horizontal component half that of the vertical one. Performance (percentage of aborted solutions, outliers) degraded with sessions shorter than 30 minutes. 
The environment also had a strong influence on the percentage of aborted solutions (ambiguity search problem), due to multiple reflections or satellites obstructed by trees and mountains. The length of the baseline (reference-rover distance, single-baseline processing) reduced accuracy above 1 km but had no significant effect below this limit. In critical weather conditions, the system's robustness was limited: snow, avalanches, and frost covered some rovers, including the antennas and vertically oriented solar panels, leading to data interruptions, and strong wind damaged a reference station. The possibility of changing session parameters remotely was very useful. In conclusion, the tested rover network provided the foreseen sub-centimeter accuracy while delivering a landslide survey with dense spatial resolution. The ease of implementation and the fully automatic long-term survey were time-saving. Performance strongly depends on surrounding conditions, but short pre-measurements should make it possible to move a rover to a better final placement. The system offers a promising hazard mitigation technique. Improvements could include data post-processing for alerts and automatic adjustment of the duration and number of sessions based on battery level and rover displacement velocity.
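The observed trade-off between session duration and accuracy follows from simple averaging statistics: a session position averaged over n epochs has a standard error of roughly σ/√n. A minimal simulation sketch, with noise levels that are illustrative assumptions rather than Geomon's actual measurement model:

```python
import math
import random

# Longer sessions average more epochs, shrinking position scatter ~ sigma/sqrt(n).
# Per-epoch noise level is an illustrative assumption.

def session_position(true_pos, sigma, n_epochs, rng):
    """Average of n_epochs noisy single-epoch position estimates."""
    samples = [true_pos + rng.gauss(0.0, sigma) for _ in range(n_epochs)]
    return sum(samples) / len(samples)

def spread(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

rng = random.Random(42)
sigma = 0.01                        # assumed 1 cm per-epoch horizontal noise
short = [session_position(0.0, sigma, 60, rng) for _ in range(200)]
long_ = [session_position(0.0, sigma, 360, rng) for _ in range(200)]

# A 6x longer session should cut the scatter by roughly sqrt(6) ~ 2.4
assert spread(long_) < spread(short)
```

In practice the gain saturates because multipath and atmospheric errors are correlated in time, which is consistent with the study fixing 60 minutes rather than ever-longer sessions.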

Keywords: GNSS, GSM, landslide, long-term, network, solar, spatial resolution, sub-centimeter.

Procedia PDF Downloads 105
8538 Trajectory Tracking of Fixed-Wing Unmanned Aerial Vehicle Using Fuzzy-Based Sliding Mode Controller

Authors: Feleke Tsegaye

Abstract:

The work in this thesis mainly focuses on trajectory tracking of a fixed-wing unmanned aerial vehicle (FWUAV) using a fuzzy-based sliding mode controller (FSMC) for surveillance applications. Unmanned Aerial Vehicles (UAVs) are general-purpose aircraft built to fly autonomously. This technology is applied in a variety of sectors, including the military, to improve defense, surveillance, and logistics. The FWUAV model is complex due to its high nonlinearity and coupling effects. In this thesis, input decoupling is achieved by extracting the dominant inputs during controller design and treating the remaining inputs as uncertainty. Proper and steady flight maneuvering of UAVs under uncertain and unstable circumstances is the most critical problem for researchers studying UAVs. An FSMC technique is suggested to tackle the complexity of FWUAV systems. The trajectory tracking control algorithm primarily uses the sliding-mode (SM) variable structure control method to address the system’s control problem. In the SM control, a fuzzy logic control (FLC) algorithm is utilized in place of the discontinuous switching phase of the SM controller to reduce the chattering effect. In the reaching and sliding stages of SM control, Lyapunov theory is used to assure finite-time convergence. A comparison between the conventional SM controller and the suggested controller is made with respect to both the chattering effect and tracking performance. It is evident that the chattering is effectively reduced, the suggested controller provides a quick response with minimum steady-state error, and the controller is robust in the face of unknown disturbances. The designed control strategy is simulated with the nonlinear model of the FWUAV in the MATLAB®/Simulink® environment.
The simulation results show that the suggested controller operates effectively, maintains the aircraft’s stability, and holds the aircraft’s targeted flight path despite the presence of uncertainty and disturbances.
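The chattering-reduction idea above can be sketched on a toy first-order plant: the discontinuous sign(s) switching term is replaced by a smooth interpolation inside a boundary layer, which is the behavior a simple triangular-membership fuzzy rule base reduces to. The plant, gains, disturbance, and boundary-layer width below are illustrative assumptions, not the thesis's FWUAV model or its exact fuzzy controller.

```python
import math

# Sliding-mode control (SMC) chattering sketch: compare the discontinuous
# sign(s) switching term with a fuzzy-like smooth interpolation.

def sign_switch(s):
    return math.copysign(1.0, s) if s != 0 else 0.0

def fuzzy_switch(s, phi=0.5):
    """Interpolated switching term: linear inside the boundary layer |s| < phi."""
    return max(-1.0, min(1.0, s / phi))

def simulate(switch, x0=1.0, k=2.0, dt=0.01, steps=500):
    """First-order plant dx/dt = u + d regulated to x = 0; returns trajectory."""
    x, traj = x0, []
    for i in range(steps):
        d = 0.3 * math.sin(0.05 * i)      # bounded unknown disturbance
        u = -k * switch(x)                # sliding surface s = x for regulation
        x += (u + d) * dt
        traj.append(x)
    return traj

def chatter(traj):
    """Total variation of the trajectory tail, as a crude chattering measure."""
    tail = traj[-100:]
    return sum(abs(b - a) for a, b in zip(tail, tail[1:]))
```

Near s = 0 the sign-based controller switches every step, while the interpolated term settles smoothly, so the tail variation of the fuzzy-switched trajectory is far smaller.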

Keywords: fixed-wing UAVs, sliding mode controller, fuzzy logic controller, chattering, coupling effect, surveillance, finite-time convergence, Lyapunov theory, flight path

Procedia PDF Downloads 42
8537 Description of Reported Foodborne Diseases in Selected Communities within the Greater Accra Region-Ghana: Epidemiological Review of Surveillance Data

Authors: Benjamin Osei-Tutu, Henrietta Awewole Kolson

Abstract:

Background: Acute gastroenteritis is one of the most frequently reported Out-Patient Department (OPD) cases. However, the causative pathogens of these cases are rarely identified at the OPD due to delays in laboratory results or failure to obtain specimens before antibiotics are administered. Method: A retrospective review of surveillance data from the Adentan Municipality, Accra, Ghana, recorded in the National Foodborne Disease Surveillance System of Ghana, was conducted with the main aim of describing the epidemiology and food practices of cases reported from the Adentan Municipality. The study involved a retrospective review of surveillance data kept on patients who visited health facilities involved in foodborne disease surveillance in Ghana from January 2015 to December 2016. Results: A total of 375 cases were reviewed, and these were classified as viral hepatitis (hepatitis A and E), cholera (Vibrio cholerae), dysentery (Shigella sp.), typhoid fever (Salmonella sp.), or gastroenteritis. All recorded cases were suspected cases, and an average of 3 cases was recorded per week. Typhoid fever and dysentery were the two main clinically diagnosed foodborne illnesses. The highest number of cases was observed during the late dry season (Feb to April), which marks the end of the dry season and the beginning of the rainy season. A relatively high number of cases was also observed during the late wet season (Jul to Oct), when the rainfall is heaviest. Home-made food and street-vended food were the major sources of suspected etiological food, accounting for 49.01% and 34.87% of the cases, respectively. Conclusion: The majority of cases recorded were classified as gastroenteritis due to the absence of laboratory confirmation. Few cases were classified as typhoid fever and dysentery based on the clinical symptoms presented. Patients reporting with foodborne diseases were found to consume home-made meals and street-vended foods as their predominant sources of food.
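The descriptive summary above (shares of cases by suspected food source) is the kind of tally that can be sketched in a few lines. The records and field name below are mock data and assumptions for illustration, not the actual surveillance line list.

```python
from collections import Counter

# Illustrative tally of suspected etiological food sources, mirroring the
# kind of percentage breakdown reported (e.g., 49.01% home-made food and
# 34.87% street-vended food among 375 reviewed cases).

def source_percentages(records):
    """Percentage of cases by suspected food source, rounded to 2 d.p."""
    counts = Counter(r["source"] for r in records)
    total = sum(counts.values())
    return {src: round(100 * n / total, 2) for src, n in counts.items()}
```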

Keywords: Accra, etiologic food, food poisoning, gastroenteritis, illness, surveillance

Procedia PDF Downloads 203
8536 On the Utility of Bidirectional Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires discovering the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expression in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains, which depends heavily on the spatial arrangement of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from vanishing gradients and low efficiency, as they sequentially process past states and compress contextual information into a bottleneck for long input sequences.
In other words, these architectures are not equipped with the attention mechanisms necessary to follow a long chain of genes spanning thousands of tokens. To address the above-mentioned limitations, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with an attention mechanism. In previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machines, and Artificial Neural Networks, were able to achieve reasonably high R2 accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier does not depend on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
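The attention mechanism invoked above can be illustrated with a minimal single-head scaled dot-product attention over a toy token-embedding sequence: every output position is a weighted mixture of all positions, so the first token can attend to the last in one step, without an RNN's sequential bottleneck. The dimensions and identity query/key/value projections are toy assumptions, not DNABERT's architecture.

```python
import math

# Minimal single-head scaled dot-product self-attention (pure Python),
# with identity projections so queries = keys = values = the inputs.

def softmax(xs):
    m = max(xs)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """X: list of token vectors; returns one mixed vector per position."""
    d = len(X[0])
    out = []
    for q in X:
        # similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in X]
        w = softmax(scores)           # attention weights sum to 1
        # output = attention-weighted average of all value vectors
        out.append([sum(wj * v[i] for wj, v in zip(w, X)) for i in range(d)])
    return out
```

Because the weights form a convex combination over all positions, identical tokens at distant positions (e.g., the first and last) receive identical outputs, regardless of how far apart they sit in the sequence.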

Keywords: machine learning, classification and regression, gene circuit design, bidirectional transformers

Procedia PDF Downloads 49