Search results for: high quality planting materials
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 30254

1844 Musically Yours: Impact of Social Media Advertisement Music per the Circadian Rhythm

Authors: Payal Bose

Abstract:

The impact of music on consumers' attention and emotions at different times of day has rarely, if ever, been studied. Music has been widely studied along other parameters, such as in-store music and its atmospheric effects, to understand consumer arousal, in-store traffic, perceptions of visual stimuli, and actual time spent in the store. Further parameters, such as tempo, shopper's age, volume, music preference, and the use of music as foreground or background, acting as mediators of consumer behavior, are also well researched. However, no study has explored the influence of music in social media advertisements and its impact on the consumer's mind; most studies have addressed the influence of music on the consumer's conscious mind. A recent study found that playing pleasant music is more effective at enhancing supermarket sales on weekdays than on weekends. This raises a more pertinent question: how does music affect attention and emotion at different times of day? The question is especially relevant given the high day-to-day consumption of social media advertisements in the recent past. This study would help brands on social media structure their advertisements and engage more consumers with their products. Prior literature has examined the effects of music on consumers largely in the retail, brick-and-mortar format; hence, most of the outcomes are favorable for physical retail environments. However, with the rise of Web 3.0 and social media marketing, it would be interesting to study how consumers' attention and emotion respond to music embedded in an advertisement during different parts of the day. A smartphone is considered a personal gadget, and viewing social media advertisements on it is mostly an intimate experience. Hence, in a social media advertisement, most viewing happens one-on-one between the consumer and the brand advertisement. To the best of our knowledge, little or no work in advertising research has explored the influence of music at different times of day (per the circadian rhythm). Previous work on social media advertising has explored the timing of social media posts, targeted content advertising, appropriate content, reallocation of time, and advertising expenditure. Hence, I propose studying advertisements embedded with music during different parts of the day and their influence on consumers' attention and emotions. To address the research objectives and knowledge gap, a neuroscientific approach using fMRI and eye-tracking is intended. The influence of music embedded in social media advertisements during different parts of the day would be assessed.

Keywords: music, neuromarketing, circadian rhythm, social media, engagement

Procedia PDF Downloads 55
1843 Long Time Oxidation Behavior of Machined 316 Austenitic Stainless Steel in Primary Water Reactor

Authors: Siyang Wang, Yujin Hu, Xuelin Wang, Wenqian Zhang

Abstract:

Austenitic stainless steels are widely used in the nuclear industry to manufacture critical components owing to their excellent corrosion resistance at high temperatures. Almost all components used in nuclear power plants are produced by surface finishing (surface cold work) such as milling, grinding, and so on. The change in surface state induced by machining has a great influence on corrosion behavior. In the present study, the long-term oxidation behavior of machined 316 austenitic stainless steel exposed to a simulated pressurized water reactor environment was investigated for different surface states. Four surface finishes were produced by electro-polishing (P), grinding (G), and two milling processes (M and M1). Before oxidation, the surface Vickers micro-hardness and surface roughness of each type of sample were measured. The corrosion behavior of the four sample types was studied using the oxidation weight gain method over six oxidation periods of 120 h, 216 h, 336 h, 504 h, 672 h, and 1344 h, respectively. SEM was used to observe the surface morphology of the oxide film in several periods. The results showed that the oxide film on austenitic stainless steel has a duplex-layer structure: the inner oxide film is continuous and compact, while the outer layer is composed of oxide particles. The oxide particles consisted of large particles (nearly micron size) and small particles (dozens to a few hundred nanometers). The formation of oxide particles could be significantly affected by the machined surface state. Large particles appeared earlier on the cold-worked (ground and milled) samples than on the electro-polished one, and the milled sample had the largest particle size, followed by the ground and electro-polished ones. For the machined samples, the large particles were mostly distributed along the direction of the machining marks. Severe exfoliation was observed on one milled surface (M), which had the most heavily cold-worked layer, while rare, local exfoliation occurred on the ground sample (G) and the other milled sample (M1). The electro-polished sample (P) did not exfoliate at all.

Keywords: austenitic stainless steel, oxidation, machining, SEM

Procedia PDF Downloads 273
1842 Impact of Instrument Transformer Secondary Connections on Performance of Protection System: Experiences from Indian POWERGRID

Authors: Pankaj Kumar Jha, Mahendra Singh Hada, Brijendra Singh, Sandeep Yadav

Abstract:

Protective relays are commonly connected to the secondary windings of instrument transformers, i.e., current transformers (CTs) and/or capacitive voltage transformers (CVTs). The purpose of CTs and CVTs is to provide galvanic isolation from high voltages and to reduce primary currents and voltages to nominal quantities recognized by the protective relays. Selecting the correct instrument transformers for an application is imperative: failing to do so may compromise the relay's performance, as the output of the instrument transformer may no longer be an accurately scaled representation of the primary quantity. Having an accurately rated instrument transformer is of no use if these devices are not properly connected. The performance of a protective relay relies on its programmed settings and on the current and voltage inputs from the instrument transformer secondaries. This paper explains the fundamental concepts of connecting instrument transformers to protection relays and the effect of incorrect connections on the performance of protective relays. Multiple case studies of protection system mal-operations due to incorrect connections of instrument transformers are discussed in detail. Apart from connection issues, this paper also discusses the effect of multiple earthing of CT and CVT secondaries on the performance of the protection system. The case studies presented here will help readers analyse problems through real-world challenges in complex power system networks, and will also help protection engineers better analyse disturbance records. CT and CVT connection errors can lead to undesired operation of protection systems; however, many of these operations can be avoided by adhering to industry standards and implementing tried-and-true field testing and commissioning practices. Understanding the effects of a missing CVT neutral, multiple earthing of CVT secondaries, and multiple grounding of CT star points on the performance of the protection system through real-world case studies will help protection engineers better commission and maintain the protection system.

Keywords: bus reactor, current transformer, capacitive voltage transformer, distance protection, differential protection, directional earth fault, disturbance report, instrument transformer, ICT, REF protection, shunt reactor, voltage selection relay, VT fuse failure

Procedia PDF Downloads 68
1841 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model

Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung

Abstract:

The aim of the present study was to explore dermal exposure assessment models for chemicals developed abroad and to evaluate the feasibility of a chemical dermal exposure assessment model for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: UK – Control of Substances Hazardous to Health (COSHH); Europe – Risk Assessment of Occupational Dermal Exposure (RISKOFDERM); Netherlands – Dose-Related Effect Assessment Model (DREAM); Netherlands – Stoffenmanager (STOFFEN); Nicaragua – Dermal Exposure Ranking Method (DERM); and USA/Canada – Public Health Engineering Department (PHED). Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to identify the important evaluation indicators of the dermal exposure assessment model. To assess the effectiveness of the semi-quantitative assessment models, this study also produced quantitative dermal exposure results using a prediction model and verified the correlation via Pearson's test. Results show that the decisive factors of COSHH could not be determined because the results for all evaluated industries fell into the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive emission, the operation, the near-field and far-field concentrations, and the operating time and frequency are positively correlated. Skin exposure, relative working time, and working environment are positively correlated in the DREAM model. In the RISKOFDERM model, the actual exposure situation and the exposure time are positively correlated. We also found high correlations for the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p < 0.05), respectively. The STOFFEN and DREAM models showed poor correlation, with coefficients of 0.24 and 0.29 (p > 0.05), respectively. According to these results, both the DERM and RISKOFDERM models are suitable for the selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated in the future to reduce uncertainty and enhance applicability.
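The model-versus-prediction comparison above relies on Pearson's correlation coefficient. As a minimal illustration of the statistic itself (not the study's actual analysis code), a plain-Python sketch:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linear agreement between two score series gives r = 1.0
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```

In practice a library routine (e.g. a statistics package's Pearson test) would also report the p-value, as quoted in the abstract.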

Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation

Procedia PDF Downloads 152
1840 A Comprehensive Review on Structural Properties and Erection Benefits of Large Span Stressed-Arch Steel Truss Industrial Buildings

Authors: Anoush Saadatmehr

Abstract:

The design and construction of large clear-span structures has always been in demand in the construction industry for industrial and commercial buildings around the world. These spectacular structures serve distinct building types such as aircraft and airship hangars, warehouses, bulk storage buildings, and sports and recreation facilities. From an engineering point of view, various steel structural systems are often adopted in large-span buildings, such as conventional trusses, space frames, and cable-supported roofs. However, this paper investigates and reviews an innovative, light, economical, and quickly erected large-span steel structure known as the "Stressed-Arch", which has several advantages over the other common types of structures. This patented system integrates cold-formed hollow-section steel with high-strength pre-stressing strands and concrete grout to establish an arch-shaped truss frame wherever a cost-effective, column-free space is required for spans in the range of 60 m to 180 m. In this study, the main structural properties of the stressed-arch system and its components are first discussed technically. These include the nonlinear behavior of the truss chords during stress-erection, the effect of the erection method on member compressive strength, the rigidity of pre-stressed trusses in meeting strict deflection criteria for cases with roof-suspended cranes or specialized front doors, and, more importantly, the prominent lightness of the steel structure. Then, the role of the pre-stressing strands in safeguarding a smooth installation of the main steel members, roof components, and cladding is investigated. In conclusion, it is shown that the stressed-arch system not only provides an optimized steel structure up to 30% lighter than its conventional competitors but also streamlines the process of building erection and minimizes construction time while preventing the risks of working at height.

Keywords: large span structure, pre-stressed steel truss, stressed-arch building, stress-erection, steel structure

Procedia PDF Downloads 135
1839 Loss of Control Eating as a Key Factor of the Psychological Symptomatology Related to Childhood Obesity

Authors: L. Beltran, S. Solano, T. Lacruz, M. Blanco, M. Rojo, M. Graell, A. R. Sepulveda

Abstract:

Introduction and Objective: Given the difficulties of assessing binge eating disorder during childhood, episodes of loss of control (LOC) eating can be a key symptom. The objectives are to determine the prevalence of eating psychopathology depending on the type of evaluation and to identify which psychological characteristics differentiate children with overweight or obesity who present LOC from those who do not. Material and Methods: 170 children aged 8 to 12 years with overweight or obesity (P > 85) were evaluated through the primary care centers of Madrid. Sociodemographic data and psychological measures were collected through the Kiddie Schedule for Affective Disorders and Schizophrenia, Present and Lifetime Version (K-SADS-PL) diagnostic interview and self-report questionnaires: children's eating attitudes (ChEAT), depressive symptomatology (CDI), anxiety (STAIC), general self-esteem (LAWSEQ), body self-esteem (BES), perceived teasing (POTS), and perfectionism (CAPS). Results: 15.2% of the sample exceeded the ChEAT cut-off point, indicating a risk of pathological eating; 5.88% presented an eating disorder according to the diagnostic interview (2.35% binge eating disorder), and 33.53% had LOC episodes. No relationship was found between the presence of LOC and a clinical diagnosis of an eating disorder according to DSM-5; however, the group with LOC presented a higher risk of eating psychopathology on the ChEAT (p < .02). Compared with their peers without LOC, the group with LOC showed significant differences (p < .02): higher z-BMI, lower body self-esteem, greater anxious symptomatology, more frequent weight-related teasing, and a greater effect of teasing related to both weight and competencies. Conclusion: Consistent with previous studies in samples of children with overweight, in this Spanish sample of children with obesity we found a moderate prevalence of eating disorders and a high presence of LOC episodes, which is related to both eating and general psychopathology. These findings confirm that excluding LOC episodes as a diagnostic criterion can underestimate the presence of eating psychopathology during this developmental stage. According to these results, it is highly recommended to promote school-based programs that address LOC episodes in order to reduce associated symptoms. This study is part of a project funded by the Ministry of Innovation and Science (PSI2011-23127).

Keywords: childhood obesity, eating psychopathology, loss-of-control eating, psychological symptomatology

Procedia PDF Downloads 93
1838 Clinical Risk Score for Mortality and Predictors of Severe Disease in Adult Patients with Dengue

Authors: Siddharth Jain, Abhenil Mittal, Surendra Kumar Sharma

Abstract:

Background: With its recent emergence and re-emergence, dengue has become a major international public health concern, imposing a significant financial burden, especially in developing countries. Despite aggressive control measures, India experienced one of its largest outbreaks in 2015, with Delhi most severely affected. There is a lack of reliable predictors of disease severity and mortality in dengue; the present study was carried out to identify such predictors during the 2015 outbreak. Methods: This prospective observational study, conducted at an apex tertiary care center in Delhi, India, included confirmed adult dengue patients admitted between August and November 2015. Patient demographics, clinical details, and laboratory findings were recorded in a predesigned proforma. Appropriate statistical tests were used to summarize and compare the clinical and laboratory characteristics and to derive predictors of mortality and severe disease, while developing a clinical risk score for mortality. Serotype analysis was also done for 75 representative samples to identify the dominant serotypes. Results: Data from 369 patients were analyzed (mean age 30.9 years; 67% males). Of these, 198 (54%) patients had dengue fever, 125 (34%) had dengue hemorrhagic fever (DHF grades 1 and 2), and 46 (12%) developed dengue shock syndrome (DSS). Twenty-two (6%) patients died. Late presentation to the hospital (≥5 days after onset) and dyspnoea at rest were identified as independent predictors of severe disease. Age ≥ 24 years, dyspnoea at rest, and altered sensorium were identified as independent predictors of mortality. A clinical risk score was developed (12 × age + 14 × sensorium + 10 × dyspnoea) which, if ≥ 22, predicted mortality with high sensitivity (81.8%) and specificity (79.2%). The predominant serotypes in Delhi in 2015 were DENV-2 and DENV-4. Conclusion: Age ≥ 24 years, dyspnoea at rest, and altered sensorium were independent predictors of mortality. Platelet counts did not determine outcome in dengue patients. Timely referral and access to health care are important. The development and use of validated predictors of disease severity and simple clinical risk scores, applicable in all healthcare settings, can help minimize mortality and morbidity, especially in resource-limited settings.
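The risk score described above can be illustrated with a short sketch. The binary 0/1 coding of each predictor (age dichotomized at 24 years) is an assumption made here for illustration; the abstract reports only the weights and the threshold of 22:

```python
def dengue_mortality_risk_score(age_years, altered_sensorium, dyspnoea_at_rest):
    """Clinical risk score from the abstract: 12*age + 14*sensorium + 10*dyspnoea.
    Assumes (for illustration) each predictor is a 0/1 flag, with age coded 1
    when >= 24 years. A score >= 22 flags high mortality risk."""
    age_flag = 1 if age_years >= 24 else 0
    score = (12 * age_flag
             + 14 * int(altered_sensorium)
             + 10 * int(dyspnoea_at_rest))
    return score, score >= 22

# A 30-year-old with altered sensorium but no dyspnoea at rest: 12 + 14 = 26
score, high_risk = dengue_mortality_risk_score(30, True, False)
```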

Keywords: dengue, mortality, predictors, severity

Procedia PDF Downloads 292
1837 Impact of Short-Term Drought on Vegetation Health Condition in the Kingdom of Saudi Arabia Using Space Data

Authors: E. Ghoneim, C. Narron, I. Iqbal, I. Hassan, E. Hammam

Abstract:

The scarcity of water is becoming a more prominent threat, especially in areas that are already arid. Although the Kingdom of Saudi Arabia (KSA) is an arid country, its southwestern region offers a wide variety of botanical landscapes, many of which are wooded forests, while the eastern and northern regions contain large areas of groundwater-irrigated farmland. At present, some parts of KSA, including forests and farmlands, have witnessed protracted and severe drought due to changes in rainfall patterns resulting from global climate change. Prolonged droughts lasting several consecutive years are expected to cause deterioration of forested and pastured lands as well as crop failure (e.g., of the wheat yield) in KSA. An analysis of vegetation drought vulnerability and severity during the growing season (September-April) over a fourteen-year period (2000-2014) in KSA was conducted using MODIS Terra imagery. The Vegetation Condition Index (VCI), derived from the Normalized Difference Vegetation Index (NDVI), and the Temperature Condition Index (TCI), derived from Land Surface Temperature (LST) data, were extracted from the MODIS Terra images. The VCI and TCI were then combined to compute the Vegetation Health Index (VHI), which revealed the overall vegetation health of the area under investigation. A preliminary outcome of the modeled VHI over KSA, using monthly vegetation data averaged over the 14-year period, revealed that vegetation health is deteriorating over time in both naturally vegetated areas and irrigated farmlands. The derived drought map for KSA indicates that both extreme and severe drought occurrences increased considerably over the same period. Moreover, based on the cumulative average of drought frequency in each governorate of KSA, it was determined that the Makkah and Jizan governorates, to the east and southwest, experience extreme drought most frequently, whereas Tabuk, to the northwest, exhibits the lowest frequency of extreme drought. Areas where drought is extreme or severe would most likely suffer negative impacts on agriculture, ecosystems, tourism, and even human welfare. With the drought risk map, the Kingdom could make informed land management decisions, including where to continue agricultural endeavors, where to protect forested areas, and even where to develop new settlements.
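The index chain described above (NDVI → VCI, LST → TCI, combined into VHI) follows the standard Kogan formulation. A minimal sketch; note the equal 0.5 weighting of VCI and TCI is the common default, not a value stated in the abstract:

```python
def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index: NDVI scaled to 0-100 against its
    multi-year minimum and maximum for the same pixel and period."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

def tci(lst, lst_min, lst_max):
    """Temperature Condition Index: inverted scaling, since a higher land
    surface temperature indicates greater thermal stress on vegetation."""
    return 100.0 * (lst_max - lst) / (lst_max - lst_min)

def vhi(vci_value, tci_value, alpha=0.5):
    """Vegetation Health Index: weighted combination of VCI and TCI."""
    return alpha * vci_value + (1.0 - alpha) * tci_value

# NDVI and LST each midway between their multi-year extremes give VHI = 50
health = vhi(vci(0.5, 0.2, 0.8), tci(310.0, 290.0, 330.0))
```

Low VHI values over successive composites are what the drought map in the abstract classifies as severe or extreme drought.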

Keywords: drought, vegetation health condition, TCI, Saudi Arabia

Procedia PDF Downloads 369
1836 Mesoporous Titania Thin Films for Gentamicin Delivery and Bone Morphogenetic Protein-2 Immobilization

Authors: Ane Escobar, Paula Angelomé, Mihaela Delcea, Marek Grzelczak, Sergio Enrique Moya

Abstract:

The antibacterial capacity of bone-anchoring implants can be improved by the use of antibiotics that are released into the medium after surgery. Mesoporous films have shown great potential in drug delivery for orthopedic applications, since pore size and thickness can be tuned to produce different surface areas and free volumes inside the material. This work shows the synthesis of mesoporous titania films (MTFs) by sol-gel chemistry and evaporation-induced self-assembly (EISA) on top of glass substrates. Pores with a diameter of 12 nm were observed by transmission electron microscopy (TEM), and a film thickness of 100 nm was measured by scanning electron microscopy (SEM). Gentamicin was used to study antibiotic delivery from the film by means of high-performance liquid chromatography (HPLC). A Staphylococcus aureus strain was used to evaluate the effectiveness of the gentamicin-loaded films in inhibiting bacterial colonization. MC3T3-E1 pre-osteoblast cell proliferation experiments proved that MTFs have good biocompatibility and are a suitable surface for MC3T3-E1 cell proliferation. Moreover, images taken by confocal fluorescence microscopy using labeled vinculin showed good adhesion of the MC3T3-E1 cells to the MTFs, as well as complex actin filament arrangements. To further improve cell proliferation, bone morphogenetic protein-2 (BMP-2) was adsorbed on top of the mesoporous film. Deposition of the protein was confirmed by contact angle measurements, which showed increasing hydrophobicity at higher protein concentrations. By measuring the dehydrogenase activity of MC3T3-E1 cells cultured on mesoporous titania films dually functionalized with gentamicin and BMP-2, an improvement in cell proliferation can be found. For this purpose, the absorption of a yellow formazan dye, the product of the reduction of a water-soluble salt (WST-8) by the dehydrogenases, is measured. In summary, this study shows that by modifying the surface of MTFs with proteins and loading them with gentamicin, it is possible to achieve an antibacterial effect and improved cell growth.

Keywords: antibacterial, biocompatibility, bone morphogenetic protein-2, cell proliferation, gentamicin, implants, mesoporous titania films, osteoblasts

Procedia PDF Downloads 155
1835 Examination of Recreation Possibilities and Determination of Efficiency Zone in Bursa, Province Nilufer Creek

Authors: Zeynep Pirselimoglu Batman, Elvan Ender Altay, Murat Zencirkiran

Abstract:

Water and water resources are characteristic areas with special ecosystems; their natural, cultural, and economic value and their recreation opportunities are high. Recreational activities differ according to the natural, cultural, and socio-economic resource values of an area. In this sense, water and waterside areas, which are important for their resource values, are also important landscape values for recreational activities. Among these landscape values, creeks and their surrounding areas have been a major part of daily life in the past, as well as a major attraction for people's leisure time. However, their quality and quantity must be sufficient for these areas to be used effectively in a recreational sense and to fulfill their recreational functions. The purpose of the study is to identify recreational water-based activities and to determine effective service areas in densely urbanized zones along the creek and the green spaces around them. For this purpose, the study was carried out in the vicinity of Nilufer Creek in Bursa. The study area and its immediate surroundings lie within the boundaries of the Osmangazi and Nilufer districts. The study was carried out in the green spaces along a 17,930 m stretch of the creek: Hudavendigar Urban Park, Atatürk Urban Forest, Bursa Zoo, Soganli Botanical Park, Mihrapli Park, and Nilufer Valley Park. In the first phase of the study, the efficiency zones of these locations were calculated according to international standards: zones of 3,200 m serve the city population, and zones of 800 m serve the district and neighborhood population. The calculations were processed on a map digitized from satellite imagery in AutoCAD. The total efficiency zone of these green spaces in the city was calculated as 71.04 km². In the second phase, current water-based activities were determined by evaluating the recreational potential of the green spaces along Nilufer Creek for which efficiency zones had been identified. Water-based activities were found to be used intensively in Hudavendigar Urban Park, which interacts with Nilufer Creek. Within the effective zones of the study area, appropriate recreational planning proposals were developed and water-based activities were suggested.
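The service distances above (3,200 m for city-scale zones, 800 m for district/neighborhood zones) can be read as circular buffer radii around each green space. A single 3,200 m buffer covers about 32 km² on its own, so the study's total of 71.04 km² for six parks reflects merged, overlapping buffers; a minimal sketch of the per-park figure (the union step itself would be done in the GIS/CAD environment):

```python
import math

def service_area_km2(radius_m):
    """Area of one circular efficiency (service) zone of the given radius."""
    return math.pi * (radius_m / 1000.0) ** 2

city_zone = service_area_km2(3200)     # city-scale zone, ~32.2 km2
district_zone = service_area_km2(800)  # district/neighborhood zone, ~2.0 km2
```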

Keywords: Bursa, efficiency zone, Nilufer Creek, recreation, water-based activities

Procedia PDF Downloads 145
1834 A Modified Estimating Equations in Derivation of the Causal Effect on the Survival Time with Time-Varying Covariates

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

A systematic observation from a defined time origin up to failure or censoring is known as survival data. Survival analysis is a major area of interest in biostatistics and biomedical research. At the heart of most scientific and medical research inquiries lies causality analysis. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. The theory of causality often differs from the simple association between the response variable and predictors: causal estimation is a scientific concept for comparing a pragmatic effect between two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semiparametric transformation models. The proposed model yields consistent estimators for the unknown parameters and the unspecified monotone transformation function. In this article, the proposed method estimates an unbiased average causal effect of treatment on the survival time of interest. The modified estimating equations of semiparametric transformation models have the advantage of including time-varying effects in the model. Finally, the finite-sample performance of the estimators is demonstrated through simulation and the Stanford heart transplant data. To this end, the average effect of a treatment on survival time was estimated after adjusting for biases arising from the high correlation between left-truncation and the possibly time-varying covariates. The bias in the covariates was corrected by estimating the density function of the left-truncation variable. Besides, to relax the independence assumption between failure time and truncation time, the model incorporates the left-truncation variable as a covariate. Moreover, the expectation-maximization (EM) algorithm iteratively obtains the unknown parameters and the unspecified monotone transformation function. To summarize the idea, the ratio of cumulative hazard functions between the treated and untreated experimental groups gives a sense of the average causal effect for the entire population.
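For readers unfamiliar with the class, a semiparametric (linear) transformation model is conventionally written as below; this is the generic formulation, not the abstract's specific modified estimating equation:

```latex
H(T) = -\beta^{\top} Z + \varepsilon
```

where H(·) is an unspecified, monotone increasing transformation of the survival time T, Z is the (possibly time-varying) covariate vector, β the unknown regression parameters, and ε an error term with a known distribution; an extreme-value ε recovers the proportional hazards model, while a standard logistic ε gives the proportional odds model.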

Keywords: a modified estimation equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate

Procedia PDF Downloads 163
1833 Effectiveness of the Lacey Assessment of Preterm Infants to Predict Neuromotor Outcomes of Premature Babies at 12 Months Corrected Age

Authors: Thanooja Naushad, Meena Natarajan, Tushar Vasant Kulkarni

Abstract:

Background: The Lacey Assessment of Preterm Infants (LAPI) is used in clinical practice to identify premature babies at risk of neuromotor impairments, especially cerebral palsy. This study attempted to establish the validity of the Lacey assessment for predicting the neuromotor outcomes of premature babies at 12 months corrected age and to compare its predictive ability with that of brain ultrasound. Methods: This prospective cohort study included 89 preterm infants (45 females and 44 males) born below 35 weeks gestation who were admitted to the neonatal intensive care unit of a government hospital in Dubai. The initial assessment was done using the Lacey assessment after the babies reached 33 weeks postmenstrual age. Follow-up assessment of neuromotor outcomes was done at 12 months (± 1 week) corrected age using two standardized outcome measures, the Infant Neurological International Battery and the Alberta Infant Motor Scale. Brain ultrasound data were collected retrospectively. The data were statistically analyzed, and the diagnostic accuracy of the LAPI was calculated both when used alone and in combination with brain ultrasound. Results: Compared with brain ultrasound, the Lacey assessment showed superior specificity (96% vs. 77%), a higher positive predictive value (57% vs. 22%), and a higher positive likelihood ratio (18 vs. 3) in predicting neuromotor outcomes at one year of age. The sensitivity of the Lacey assessment was lower than that of brain ultrasound (66% vs. 83%), whereas specificity was similar (97% vs. 98%). A combination of Lacey assessment and brain ultrasound results showed higher sensitivity (80%), positive (66%) and negative (98%) predictive values, positive likelihood ratio (24), and test accuracy (95%) than the Lacey assessment alone in predicting neurological outcomes. The negative predictive value of the Lacey assessment was similar to that of its combination with brain ultrasound (96%).
Conclusion: Results of this study suggest that the Lacey assessment of preterm infants can be used as a supplementary assessment tool for premature babies in the neonatal intensive care unit. Due to its high specificity, Lacey assessment can be used to identify those babies at low risk of abnormal neuromotor outcomes at a later age. When used along with the findings of the brain ultrasound, Lacey assessment has better sensitivity to identify preterm babies at particular risk. These findings have applications in identifying premature babies who may benefit from early intervention services.
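The accuracy statistics quoted above (sensitivity, specificity, predictive values, likelihood ratio) all derive from a 2×2 table of test results against the 12-month outcome. A generic sketch; the counts below are illustrative, not the study's data:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard screening-test statistics from a 2x2 confusion table:
    tp/fp/fn/tn = true positives, false positives, false negatives,
    true negatives against the reference outcome."""
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    ppv = tp / (tp + fp)                    # positive predictive value
    npv = tn / (tn + fn)                    # negative predictive value
    lr_positive = sensitivity / (1 - specificity)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"sens": sensitivity, "spec": specificity, "ppv": ppv,
            "npv": npv, "lr+": lr_positive, "accuracy": accuracy}

# Illustrative counts only: 80 true positives, 20 false negatives,
# 4 false positives, 96 true negatives
stats = diagnostic_accuracy(tp=80, fp=4, fn=20, tn=96)
```

A high specificity with a modest sensitivity, as reported for the LAPI, is exactly the profile of a tool better suited to ruling in risk than to screening it out.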

Keywords: brain ultrasound, lacey assessment of preterm infants, neuromotor outcomes, preterm

Procedia PDF Downloads 128
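The diagnostic-accuracy figures quoted in the abstract above (sensitivity, specificity, predictive values, likelihood ratio, accuracy) all derive from a 2x2 table of assessment result versus observed outcome. A minimal sketch of those calculations, with illustrative counts rather than the study's actual data:

```python
# Hypothetical sketch: diagnostic-accuracy metrics from a 2x2 table.
# The counts below are illustrative, not the study's data.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV, NPV, LR+ and accuracy."""
    sensitivity = tp / (tp + fn)          # true positives among affected
    specificity = tn / (tn + fp)          # true negatives among unaffected
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    lr_pos = sensitivity / (1 - specificity)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, npv, lr_pos, accuracy

sens, spec, ppv, npv, lr, acc = diagnostic_metrics(tp=8, fp=2, fn=2, tn=77)
```

In a combined-test analysis such as the one above, the same function would simply be applied to the 2x2 counts of the combined decision rule.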
1832 Non-Invasive Viscosity Determination of Liquid Organic Hydrogen Carriers by Alteration of Temperature and Flow Velocity Using Cavity Based Permittivity Measurement

Authors: I. Wiemann, N. Weiß, E. Schlücker, M. Wensing, A. Kölpin

Abstract:

Chemical storage of hydrogen in liquid organic hydrogen carriers (LOHC) is a very promising alternative to compression or cryogenics. These carriers have high energy density and at the same time allow efficient and safe storage of hydrogen under ambient conditions and without leakage losses. Another benefit of LOHC is that it can be transported using the infrastructure already available for fossil fuels. Efficient use of LOHC requires precise process control, which in turn requires a number of sensors to measure all relevant process parameters, for example, the level of hydrogen loading of the carrier. The degree of loading determines the energy content of the storage carrier and simultaneously represents the modification in the chemical structure of the carrier molecules. This variation can be detected in different physical properties such as viscosity, permittivity, or density; each degree of loading corresponds to a different viscosity value. Conventional approaches currently use invasive or near-line viscosity measurements to obtain quantitative information. Avoiding invasive measurements has several significant advantages, and efforts are currently being made to provide a precise, non-invasive measurement method with equal or higher precision. This study investigates a method for determining the viscosity of LOHC. Since viscosity can be derived from the degree of loading, permittivity is a suitable target parameter, as it characterizes the degree of hydrogenation. This research analyses the influence of common physical properties on permittivity. The permittivity measurement system is based on a cavity resonator, an electromagnetic resonant structure whose resonant frequency depends on its dimensions as well as on the permittivity of the medium inside. For known resonator dimensions, the resonant frequency therefore directly characterizes the permittivity.
In order to determine the dependency of the permittivity on temperature and flow velocity, an experimental setup with a heating device and a flow test bench was designed. By varying the temperature in the range of 293.15 K to 393.15 K and the flow velocity up to 140 mm/s, corresponding changes in the resonant frequency of a few hundredths of a GHz were measured.

Keywords: liquid organic hydrogen carriers, measurement, permittivity, viscosity, temperature, flow process

Procedia PDF Downloads 76
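The cavity-resonator principle described above can be sketched numerically: for fixed cavity dimensions, the resonant frequency scales with 1/sqrt(relative permittivity), so a measured frequency shift maps back to the permittivity of the medium. The empty-cavity resonance value below is an assumed illustration, not a figure from the study:

```python
# Sketch of the cavity-resonator measurement principle (assumed values).
import math

F_VACUUM_HZ = 2.45e9  # assumed empty-cavity resonant frequency, illustrative

def resonant_frequency(eps_r):
    """Resonant frequency of the cavity filled with a medium of permittivity eps_r."""
    return F_VACUUM_HZ / math.sqrt(eps_r)

def permittivity_from_frequency(f_hz):
    """Invert the measurement: recover permittivity from the measured frequency."""
    return (F_VACUUM_HZ / f_hz) ** 2

f = resonant_frequency(2.3)           # LOHC-like permittivity, hypothetical
eps = permittivity_from_frequency(f)  # round-trips back to 2.3
```

Temperature and flow dependence would enter as corrections to this mapping, which is what the experimental setup above is designed to calibrate.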
1831 Comparison of Iodine Density Quantification through Three Material Decomposition between Philips iQon Dual Layer Spectral CT Scanner and Siemens Somatom Force Dual Source Dual Energy CT Scanner: An in vitro Study

Authors: Jitendra Pratap, Jonathan Sivyer

Abstract:

Introduction: Dual energy/spectral CT scanning permits simultaneous acquisition of two x-ray spectral datasets and can complement radiological diagnosis by allowing tissue characterisation (e.g., uric acid vs. non-uric acid renal stones), enhancing structures (e.g., boosting the iodine signal to improve contrast resolution), and quantifying substances (e.g., iodine density). However, the latter has shown inconsistent results between the two main modes of dual energy scanning (i.e., dual source vs. dual layer). Therefore, the present study aimed to determine which technology is more accurate in quantifying iodine density. Methods: Twenty vials with known concentrations of iodine solution were made using Optiray 350 contrast media diluted in sterile water. The iodine concentrations ranged from 0.1 mg/ml to 1.0 mg/ml in 0.1 mg/ml increments and from 1.5 mg/ml to 4.5 mg/ml in 0.5 mg/ml increments, followed by further concentrations at 5.0 mg/ml, 7 mg/ml, 10 mg/ml, and 15 mg/ml. The vials were scanned using the dual energy scan mode on a Siemens Somatom Force at the 80kV/Sn150kV and 100kV/Sn150kV kilovoltage pairings. The same vials were scanned using the spectral scan mode on a Philips iQon at 120 kVp and 140 kVp. The images were reconstructed at 5 mm thickness and 5 mm increment using the Br40 kernel on the Siemens Force and the B filter on the Philips iQon. Post-processing was performed on vendor-specific software: Siemens Syngo VIA (VB40) for the dual energy data and Philips Intellispace Portal (Ver. 12) for the spectral data. For each vial and scan mode, the iodine concentration was measured by placing an ROI in the coronal plane. Intraclass correlation analysis was performed on both datasets. Results: The iodine concentrations were reproduced with a high degree of accuracy by the dual layer CT scanner. Although the dual source images showed a greater degree of deviation in measured iodine density for all vials, the dataset acquired at 80kV/Sn150kV had higher accuracy.
Conclusion: Spectral CT scanning by the dual layer technique has higher accuracy for quantitative measurements of iodine density compared to the dual source technique.

Keywords: CT, iodine density, spectral, dual-energy

Procedia PDF Downloads 109
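One simple way to express the accuracy comparison above is the mean absolute percentage error of measured versus known vial concentrations. A hedged sketch with synthetic readings (not the study's measurements):

```python
# Illustrative accuracy comparison for known-concentration phantoms.
# All concentration values below are synthetic, not the study's data.

def mean_abs_pct_error(known, measured):
    """Mean absolute percentage error of measured vs. known concentrations."""
    errors = [abs(m - k) / k * 100.0 for k, m in zip(known, measured)]
    return sum(errors) / len(errors)

known = [0.5, 1.0, 2.0, 5.0, 10.0]            # mg/ml, hypothetical vials
dual_layer = [0.49, 1.02, 1.98, 5.05, 9.90]   # hypothetical readings
dual_source = [0.60, 1.20, 1.70, 5.60, 11.0]  # hypothetical readings

err_layer = mean_abs_pct_error(known, dual_layer)
err_source = mean_abs_pct_error(known, dual_source)
```

The study itself reports intraclass correlation; percentage error is shown here only as a transparent companion metric for the same vial-by-vial comparison.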
1830 Optimizing AI Voice for Adolescent Health Education: Preferences and Trustworthiness Across Teens and Parents

Authors: Yu-Lin Chen, Kimberly Koester, Marissa Raymond-Flesh, Anika Thapar, Jay Thapar

Abstract:

Purpose: Effectively communicating adolescent health topics to teens and their parents is crucial. This study critically evaluates the optimal use of artificial intelligence (AI) tools, which are increasingly prevalent in disseminating health information. By fostering a deeper understanding of AI voice preference in the context of health, the research aspires to have a ripple effect, enhancing the collective health literacy and decision-making capabilities of both teenagers and their parents. This study explores the potential of AI voices within health learning modules for annual well-child visits. We aim to identify preferred voice characteristics and understand factors influencing perceived trustworthiness, ultimately aiming to improve health literacy and decision-making in both demographics. Methods: A cross-sectional study assessed preferences and trust perceptions of AI voices in learning modules among teens (11-18) and their parents/guardians in Northern California. The study involved the development of four distinct learning modules covering various adolescent health-related topics, including general communication, sexual and reproductive health communication, parental monitoring, and well-child check-ups. Participants evaluated eight AI voices across the modules on six factors: intelligibility, naturalness, prosody, social impression, trustworthiness, and overall appeal, using Likert scales ranging from 1 to 10 (the higher, the better). They were also asked to select their preferred voice for each module. Descriptive statistics summarized participant demographics. Chi-square/t-tests explored differences in voice preferences between groups. Regression models identified factors impacting the perceived trustworthiness of the top-selected voice per module. Results: Data from 104 participants (teens = 63; adult guardians = 41) were included in the analysis.
The mean age was 14.9 for teens (54% male) and 41.9 for parents/guardians (12% male). While voice quality ratings were similar across groups, preferences varied by topic. For instance, in general communication, teens leaned towards young female voices, while parents preferred mature female tones. Interestingly, this trend reversed for parental monitoring, with teens favoring mature male voices and parents opting for mature female ones. Both groups, however, converged on mature female voices for sexual and reproductive health topics. Beyond preferences, the study delved into factors influencing perceived trustworthiness. Social impression and sound appeal emerged as the most significant contributors across all modules, jointly explaining 71-75% of the variance in trustworthiness ratings. Conclusion: The study emphasizes the importance of catering AI voices to specific audiences and topics. Social impression and sound appeal were the critical factors influencing perceived trustworthiness across all modules. These findings highlight the need to tailor AI voices by age and by the specific health information being delivered. Ensuring AI voices resonate with both teens and their parents can foster engagement and trust, ultimately leading to improved health literacy and decision-making for both groups. Limitations and future research: This study lays the groundwork for understanding AI voice preferences of teenagers and their parents in healthcare settings. However, limitations exist. The sample represents a specific geographic location, and cultural variations might influence preferences. Additionally, the modules focused on topics related to well-child visits, and preferences might differ for more sensitive health topics. Future research should explore these limitations and investigate the long-term impact of AI voice on user engagement, health outcomes, and health behaviors.

Keywords: artificial intelligence, trustworthiness, voice, adolescent

Procedia PDF Downloads 34
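The "71-75% of variance explained" figure above comes from regressing trustworthiness ratings on the contributing factors and reading off R². A hedged sketch of that calculation with synthetic data (the coefficients and sample are illustrative, not the study's):

```python
# Sketch: ordinary least squares with two predictors (stand-ins for
# "social impression" and "sound appeal") and R-squared as the share of
# variance in trustworthiness explained. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
social = rng.uniform(1, 10, 200)   # hypothetical 1-10 ratings
appeal = rng.uniform(1, 10, 200)
trust = 0.5 * social + 0.4 * appeal + rng.normal(0, 1, 200)

# OLS fit with an intercept column
X = np.column_stack([np.ones_like(social), social, appeal])
beta, *_ = np.linalg.lstsq(X, trust, rcond=None)

resid = trust - X @ beta
r_squared = 1 - resid.var() / trust.var()
```

With the assumed effect sizes and noise level, the R² lands in the same general range as the study reports, which is why those two factors dominate the trustworthiness ratings.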
1829 Classification of Emotions in Emergency Call Center Conversations

Authors: Magdalena Igras, Joanna Grzybowska, Mariusz Ziółko

Abstract:

The study of emotions expressed in emergency phone calls is presented, covering both a statistical analysis of emotion configurations and an attempt to classify emotions automatically. An emergency call is a situation usually accompanied by intense, authentic emotions, which influence (and may inhibit) the communication between caller and responder. In order to support responders in their responsible and psychologically exhausting work, we studied when and in which combinations emotions appeared in calls. A corpus of 45 hours of conversations (about 3300 calls) from an emergency call center was collected. Each recording was manually tagged with labels of emotion valence (positive, negative, or neutral), type (sadness, tiredness, anxiety, surprise, stress, anger, fury, calm, relief, compassion, satisfaction, amusement, joy), and arousal (weak, typical, varying, high) on the basis of the perceptual judgment of two annotators. We concluded that basic emotions tend to appear in specific configurations depending on the overall situational context and the attitude of the speaker. After performing statistical analysis, we distinguished four main types of emotional behavior of callers: worry/helplessness (sadness, tiredness, compassion), alarm (anxiety, intense stress), mistake or neutral request for information (calm, surprise, sometimes with amusement), and pretension/insisting (anger, fury). The frequencies of these profiles were 51%, 21%, 18%, and 8% of recordings, respectively. A model presenting the complex emotional profiles on a two-dimensional (tension-insecurity) plane was introduced. In the acoustic analysis stage, a set of prosodic parameters as well as Mel-Frequency Cepstral Coefficients (MFCC) were used. Using these parameters, complex emotional states were modeled with machine learning techniques including Gaussian mixture models, decision trees, and discriminant analysis.
Results of classification with several methods will be presented and compared with the state of the art results obtained for classification of basic emotions. Future work will include optimization of the algorithm to perform in real time in order to track changes of emotions during a conversation.

Keywords: acoustic analysis, complex emotions, emotion recognition, machine learning

Procedia PDF Downloads 383
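The Gaussian modelling idea above can be reduced to a minimal sketch: fit one Gaussian per emotion class over acoustic feature values (stand-ins for MFCC/prosodic features) and classify by maximum likelihood. A real system would use full multi-component GMMs over feature vectors; the one-dimensional features, class names, and means here are synthetic illustrations only:

```python
# Minimal class-conditional Gaussian classifier (a simplification of the
# GMM approach described above). All data and class means are synthetic.
import numpy as np

rng = np.random.default_rng(1)
class_means = {"calm": 0.0, "anger": 3.0}  # hypothetical 1-D feature means
train = {c: rng.normal(mu, 1.0, 100) for c, mu in class_means.items()}

# "Training": estimate mean and std per class from the samples
params = {c: (x.mean(), x.std()) for c, x in train.items()}

def classify(x):
    """Assign x to the class with the highest Gaussian log-likelihood."""
    def log_lik(c):
        mu, sd = params[c]
        return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)
    return max(params, key=log_lik)

label = classify(2.9)  # a feature value near the "anger" training mean
```

Extending this to mixtures (several Gaussians per class) and multidimensional MFCC vectors recovers the GMM classifier named in the abstract.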
1828 Neologisms and Word-Formation Processes in Board Game Rulebook Corpus: Preliminary Results

Authors: Athanasios Karasimos, Vasiliki Makri

Abstract:

This research focuses on the design and development of the first text corpus based on board game rulebooks (BGRC), with direct application to the morphological analysis of neologisms and tendencies in word-formation processes. Corpus linguistics is a dynamic field that examines language through the lens of vast collections of texts. These corpora consist of diverse written and spoken materials, ranging from literature and newspapers to transcripts of everyday conversations. By morphologically analyzing such extensive datasets, morphologists can gain valuable insights into how language functions and evolves, as the data reflect the byproducts of inflection, derivation, blending, clipping, compounding, and neology. This entails scrutinizing how words are created, modified, and combined to convey meaning in a corpus of challenging, creative, and straightforward texts that include rules, examples, tutorials, and tips. Board games teach players how to strategize, consider alternatives, and think flexibly, which are critical elements in language learning. Their rulebooks reflect not only their weight (complexity) but also the language properties of each genre and subgenre of these games. Board games are a captivating realm where strategy, competition, and creativity converge. Beyond the excitement of gameplay, board games also spark the art of word creation. Word games like Scrabble, Codenames, Bananagrams, Wordcraft, Alice in the Wordland, and Once Upon a Time challenge players to construct words from a pool of letters, thus encouraging linguistic ingenuity and vocabulary expansion. These games foster a love for language, motivating players to unearth obscure words and devise clever combinations.
On the other hand, designers and creators produce rulebooks in which they include their joy of discovering the hidden potential of language, igniting the imagination, and playing with the beauty of words, making these games a delightful fusion of linguistic exploration and leisurely amusement. In this research, more than 150 rulebooks in English from all types of modern board games, either language-independent or language-dependent, are used to create the BGRC. A representative sample of each genre (family, party, worker placement, deckbuilding, dice and chance games, strategy, eurogames, thematic, and role-playing, among others) was selected based on the score from BoardGameGeek, the size of the texts, and the level of complexity (weight) of the game. A morphological model with morphological networks, multi-word expressions, and word-creation mechanics based on the complexity of the textual structure, difficulty, and board game category will be presented. By enabling the identification of patterns, trends, and variations in word formation and other morphological processes, this research aspires to make use of this creative yet strict text genre so as to (a) give invaluable insight into the morphological creativity and innovation that (re)shape the lexicon of the English language and (b) test morphological theories. Overall, it is shown that corpus linguistics empowers us to explore the intricate tapestry of language, and of morphology in particular, revealing its richness, flexibility, and adaptability in the ever-evolving landscape of human expression.

Keywords: board game rulebooks, corpus design, morphological innovations, neologisms, word-formation processes

Procedia PDF Downloads 74
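A typical first step in the neologism analysis described above is flagging candidate neologisms as tokens absent from a base lexicon, for later manual inspection. A hedged sketch (the tiny lexicon and sample sentence are illustrative; a real pipeline would use a full dictionary and lemmatization):

```python
# Sketch: flag candidate neologisms in rulebook text as tokens missing
# from a base lexicon. Lexicon and example text are illustrative only.
import re

BASE_LEXICON = {"each", "player", "draws", "two", "cards", "then",
                "begin", "places", "a", "worker", "on", "an", "action",
                "space"}

def candidate_neologisms(text):
    """Return sorted unique lowercase tokens not found in the base lexicon."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return sorted({t for t in tokens if t not in BASE_LEXICON})

hits = candidate_neologisms("Each player draws two cards, then megaturns begin.")
```

Candidates surfaced this way would then be classified by word-formation process (compounding, blending, clipping, and so on), which is the analysis the BGRC is built to support.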
1827 Evaluating Daylight Performance in an Office Environment in Malaysia, Using Venetian Blind Systems

Authors: Fatemeh Deldarabdolmaleki, Mohamad Fakri Zaky Bin Ja'afar

Abstract:

This paper presents a fenestration analysis studying the balance between utilizing daylight and eliminating disturbing parameters in a private office room with interior venetian blinds, taking into account different slat angles. The mean luminance of the scene and window, the luminance ratio of the workplane and window, workplane illuminance, and daylight glare probability (DGP) were calculated as a function of venetian blind design properties. Recently developed software for analyzing high dynamic range images (HDRI captured by a CCD camera), such as the Radiance-based evalglare and hdrscope, helps investigate such luminance-based metrics. An eight-day measurement experiment was conducted to investigate the impact of different venetian blind angles in an office environment under daylight conditions in Serdang, Malaysia. Detailed results for the selected case study showed that artificial lighting is necessary during the morning session for Malaysian buildings with southwest windows regardless of the venetian blind's slat angle. However, in some afternoon-session conditions, such as the 10° and 40° slat angles, the workplane illuminance exceeds the maximum of 2000 lx. Generally, a rising trend in mean window luminance is observed during the day. All conditions have less than 10% of pixels exceeding 2000 cd/m² before 1:00 P.M., whereas 40% of the selected hours have more than 10% of scene pixels above 2000 cd/m² after 1:00 P.M. Surprisingly, the no-blind condition shows no extreme case of window/task ratio; the extreme cases occur for the 20°, 30°, 40°, and 50° slat angles. As expected, mean window luminance is higher than 2000 cd/m² after 2:00 P.M. for most cases except the 60° slat angle condition. Regarding daylight glare probability, no DGP value higher than 0.35 occurred in this experiment, due to the window's direction, the location of the building, and the studied workplane.
Specifically, this paper reviews the different blind angles' responses to the metrics suggested by previous standards; finally, conclusions and knowledge gaps are summarized, and next steps for research are suggested. Addressing these gaps is critical for the continued progress of the energy efficiency movement.

Keywords: daylighting, office environment, energy simulation, venetian blind

Procedia PDF Downloads 212
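The pixel-based screening reported above (the share of pixels exceeding 2000 cd/m²) is a straightforward reduction over an HDR luminance map. A minimal sketch with a synthetic "image" standing in for a captured HDR scene:

```python
# Sketch: percentage of pixels in an HDR luminance map above a glare
# threshold. The synthetic scene below stands in for a real capture.
import numpy as np

LUMINANCE_LIMIT = 2000.0  # cd/m², the threshold used in the analysis above

def pct_pixels_over(luminance, limit=LUMINANCE_LIMIT):
    """Share (in %) of pixels whose luminance exceeds the limit."""
    return 100.0 * np.count_nonzero(luminance > limit) / luminance.size

scene = np.full((100, 100), 500.0)  # background at 500 cd/m²
scene[:10, :] = 2500.0              # a bright window band: 10% of pixels
share = pct_pixels_over(scene)
```

Tools such as evalglare perform this kind of per-pixel evaluation (plus DGP) directly on the calibrated HDR capture; the sketch only shows the thresholding step.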
1826 Afrikan Natural Medicines: An Innovation-Based Model for Medicines Production, Curriculum Development and Clinical Application

Authors: H. Chabalala, A. Grootboom, M. Tang

Abstract:

The innovative development, production, and clinical utilisation of African natural medicines require frameworks for systematisation, innovation, and registration. Afrika faces challenges in all of these sectors. The opposite is the case in the ancient Asian medical systems (Traditional Chinese Medicine and Indian Ayurveda and Siddha), which are interfaced with their respective national health and educational systems. Afrikan Natural Medicines (ANMs) are yet to develop systematisation frameworks, i.e., disease characterisation and medicines classification. This paper explores classical medical systems, drawing on Afrikan and Chinese experts in natural medicines. An Afrikological research methodology was used to conduct in-depth interviews with 20 key respondents selected through a purposive sampling technique. Data were summarised into systematisation frameworks for classical disease theories, patient categorisation, medicine classification, aetiology and pathogenesis of disease, diagnosis and prognosis techniques, and treatment methods. It was discovered that ancient Afrika had systematic medical cosmologies, remnants of which are evident in most Afrikan cultural health practices. Parallels could be drawn with classical medical concepts of antiquity, like Chinese Taoist and Indian tantric health systems. Data revealed that both ancient and contemporary ANM systems were based on living medical cosmologies. The study showed that African natural healing systems have etiological systems, general pathogenesis knowledge, differential diagnostic techniques, comprehensive prognosis, and holistic treatment regimes. Systematisation models were developed out of these frameworks, which could be used for the evaluation of clinical research and medical application, including curriculum development for higher education. It is envisaged that these frameworks will pave the way towards the development, production, and commercialisation of ANMs.
This was piloted in the inclusive innovation, technology transfer, and commercialisation of South African natural medicines, cosmeceuticals, nutraceuticals, and health infusions. The central model presented herein will assist in curriculum development and the establishment of Afrikan medicines hospitals and pharmaceutical industries.

Keywords: African Natural Medicines, Indigenous Knowledge Systems, Medical Cosmology, Clinical Application

Procedia PDF Downloads 113
1825 Influence of Wind Induced Fatigue Damage in the Reliability of Wind Turbines

Authors: Emilio A. Berny-Brandt, Sonia E. Ruiz

Abstract:

Steel tubular towers serving as support structures for large wind turbines are subject to several hundred million stress cycles arising from the turbulent nature of the wind. This causes high-cycle fatigue, which can govern tower design. The practice of maintaining the support structure after wind turbines reach their typical 20-year design life has become common, but without quantifying the changes in the reliability of the tower. There are several studies on this topic, but most are based on the S-N curve approach using Miner's rule damage summation, the de facto standard in the wind industry. However, the qualitative nature of Miner's method makes the use of fracture mechanics desirable to measure the effects of fatigue on the capacity curve of the structure, which is important in order to evaluate the integrity and reliability of these towers. Temporally and spatially varying wind speed time histories are simulated based on power spectral density and coherence functions. The simulations are then applied to a SAP2000 finite element model, and step-by-step analysis is used to obtain stress time histories for a range of representative wind speeds expected during service conditions of the wind turbine. The rainflow method is then used to obtain cycle and stress range information for each of these time histories, and a statistical analysis is performed to obtain the distribution parameters of each variable. Monte Carlo simulation is used to evaluate crack growth over time at the tower base using the Paris-Erdogan equation. A nonlinear static pushover analysis is performed to assess the capacity curve of the structure after a number of years. The capacity curves are then used to evaluate the changes in reliability of a steel tower located in Oaxaca, Mexico, where wind energy facilities are expected to grow in the near future.
Results show that fatigue on the tower base can have significant effects on the structural capacity of the wind turbine, especially after the 20-year design life when the crack growth curve starts behaving exponentially.

Keywords: crack growth, fatigue, Monte Carlo simulation, structural reliability, wind turbines

Procedia PDF Downloads 506
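The crack-growth step described above can be sketched as a block-by-block integration of the Paris-Erdogan law, da/dN = C·(ΔK)^m with ΔK = Δσ·Y·sqrt(π·a). The material constants, geometry factor, and stress ranges below are illustrative assumptions, not the study's values:

```python
# Hedged sketch of Paris-Erdogan crack-growth integration.
# C, M, Y and the stress ranges are illustrative, not the study's data.
import math

C, M, Y = 1e-12, 3.0, 1.0  # assumed Paris constants and geometry factor

def grow_crack(a0, stress_ranges_mpa, cycles_per_block):
    """Integrate da/dN = C * (dK)^M over successive cycle blocks."""
    a = a0
    for ds in stress_ranges_mpa:
        dk = ds * Y * math.sqrt(math.pi * a)   # stress intensity range, MPa*sqrt(m)
        a += C * dk ** M * cycles_per_block    # crack increment for this block
    return a

# initial 1 mm crack grown through three rainflow-derived stress-range blocks
a_final = grow_crack(a0=0.001, stress_ranges_mpa=[40, 60, 80], cycles_per_block=1e6)
```

In the Monte Carlo setting above, the stress ranges and cycle counts would be sampled from the distributions fitted to the rainflow output, and this integration repeated per sample to build the crack-size distribution over time.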
1824 The Accuracy of an 8-Minute Running Field Test to Estimate Lactate Threshold

Authors: Timothy Quinn, Ronald Croce, Aliaksandr Leuchanka, Justin Walker

Abstract:

Many endurance athletes train at or just below an intensity associated with their lactate threshold (LT), and often the heart rate (HR) these athletes use for their LT is above their true LT-HR measured in a laboratory. Training above the true LT-HR may lead to overtraining and injury. Few athletes have the capability of measuring their LT in a laboratory and rely on perception to guide them, as accurate field tests to determine LT are limited. Therefore, the purpose of this study was to determine if an 8-minute field test could accurately define the HR associated with LT as measured in the laboratory. On Day 1, fifteen male runners (mean ± SD; age, 27.8 ± 4.1 years; height, 177.9 ± 7.1 cm; body mass, 72.3 ± 6.2 kg; body fat, 8.3 ± 3.1%) performed a discontinuous treadmill LT/maximal oxygen consumption (LT/VO2max) test using a portable metabolic gas analyzer (Cosmed K4b2) and a lactate analyzer (Analox GL5). The LT (and associated HR) was determined using the 1/+1 method, where blood lactate increased by 1 mmol·L⁻¹ over baseline followed by an additional 1 mmol·L⁻¹ increase. Days 2 and 3 were randomized, and the athletes performed either an 8-minute run on the treadmill (TM) or on a 160-m indoor track (TR) in an effort to cover as much distance as possible while maintaining a high intensity throughout the entire 8 minutes. VO2, HR, ventilation (VE), and respiratory exchange ratio (RER) were measured using the Cosmed system, and rating of perceived exertion (RPE; 6-20 scale) was recorded every minute. All variables were averaged over the 8 minutes. The total distance covered over the 8 minutes was measured in both conditions. At the completion of the 8-minute runs, blood lactate was measured. Paired-sample t-tests and pairwise Pearson correlations were computed to determine the relationship between variables measured in the field tests and those obtained in the laboratory at LT. An alpha level of <0.05 was required for statistical significance.
The HR (mean ± SD) during the TM (167 ± 9 bpm) and TR (172 ± 9 bpm) tests was strongly correlated with the HR measured during the laboratory LT test (169 ± 11 bpm) (r=0.68; p<0.03 and r=0.88; p<0.001, respectively). Blood lactate values during the TM and TR tests were not different from each other but were strongly correlated with the laboratory LT values (r=0.73; p<0.04 and r=0.66; p<0.05, respectively). VE was significantly greater during the TR (134.8 ± 11.4 L·min⁻¹) than the TM (123.3 ± 16.2 L·min⁻¹), with moderately strong correlations to the laboratory threshold values (r=0.38; p=0.27 and r=0.58; p=0.06, respectively). VO2 was higher during the TR (51.4 ml·kg⁻¹·min⁻¹) than the TM (47.4 ml·kg⁻¹·min⁻¹), with correlations of 0.33 (p=0.35) and 0.48 (p=0.13), respectively, to threshold values. Total distance run was significantly greater during the TR (2331.6 ± 180.9 m) than the TM (2177.0 ± 232.6 m), but the two were strongly correlated with each other (r=0.82; p<0.002). These results suggest that an 8-minute running field test can accurately predict the HR associated with the LT and may be a simple test that athletes and coaches could implement to aid in training.

Keywords: blood lactate, heart rate, running, training

Procedia PDF Downloads 242
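The field-versus-laboratory comparison above rests on pairwise Pearson correlations. A self-contained sketch of that calculation with synthetic paired heart rates (not the study's data):

```python
# Sketch: Pearson correlation between field-test and laboratory LT heart
# rates. The paired values below are synthetic illustrations.

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

lab_hr = [160, 165, 170, 175, 180]    # hypothetical laboratory LT-HR, bpm
field_hr = [162, 166, 173, 174, 183]  # hypothetical 8-minute field test, bpm
r = pearson_r(lab_hr, field_hr)
```

A strong r here, as in the study's TR condition (r=0.88), is what justifies using the field-test HR as a stand-in for the laboratory LT-HR.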
1823 Minimizing Unscheduled Maintenance from an Aircraft and Rolling Stock Maintenance Perspective: Preventive Maintenance Model

Authors: Adel A. Ghobbar, Varun Raman

Abstract:

Corrective maintenance of components and systems is a problem plaguing almost every industry in the world today. Train operators and the maintenance, repair, and overhaul subsidiary of the Dutch railway company also face this problem. A considerable portion of the maintenance activities carried out by the company is unscheduled. This, in turn, severely stresses and stretches the workforce and resources available. One possible solution is a robust preventive maintenance plan. Another is to plan maintenance based on real-time data obtained from sensor-based health and usage monitoring systems. The former is investigated in this paper. The preventive maintenance model developed for the train operator will subsequently be extended to tackle the unscheduled maintenance problem also affecting the aerospace industry. The extension of the model to the aerospace sector will be dealt with in the second part of the research and would, in turn, validate the soundness of the model developed. Thus, there are distinct areas addressed in this paper, including the mathematical modelling of preventive maintenance and optimization based on cost and system availability. The results of this research will help an organization choose the right maintenance strategy, allowing it to save considerable sums of money as opposed to overspending under the guise of maintaining high asset availability. The concept of delay time modelling was used to address the practical problem of unscheduled maintenance. Delay time modelling can be used to support maintenance planning for a given asset. The model was run using MATLAB, and the results show that the ideal inspection interval computed with the extended model was 29 days from a minimal-cost perspective and 14 days from a minimum-downtime perspective.
A risk matrix was constructed to represent the risk in terms of the probability of a fault leading to breakdown maintenance and its consequences in terms of maintenance cost. The choice of an optimal inspection interval of 29 days resulted in a cost of approximately 50 Euros, and the corresponding value of b(T) was 0.011. These values ensure that the risk associated with component X being maintained at an inspection interval of 29 days is more than acceptable. Thus, a switch in maintenance frequency from 90 days to 29 days would be optimal from the point of view of cost, downtime, and risk.

Keywords: delay time modelling, unscheduled maintenance, reliability, maintainability, availability

Procedia PDF Downloads 123
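The delay-time idea above can be sketched in its textbook form: faults arrive at some rate, each has a random "delay time" between becoming detectable and causing a breakdown, and b(T) is the probability a fault turns into a breakdown when inspections happen every T days. Minimizing expected cost per day over T then yields the kind of optimal interval reported above. All parameter values below are illustrative assumptions, not the study's:

```python
# Hedged sketch of delay-time modelling: exponential delay times (mean h),
# fault arrivals at rate lam, inspections every T days. Parameters are
# illustrative, not the study's values.
import math

def b_of_T(T, h):
    """P(a fault becomes a breakdown) for inspection interval T, mean delay h."""
    return 1.0 - (h / T) * (1.0 - math.exp(-T / h))

def cost_per_day(T, lam, h, c_inspect, c_repair, c_breakdown):
    """Expected cost per day: one inspection per cycle plus fault outcomes."""
    b = b_of_T(T, h)
    faults = lam * T  # expected faults arising per inspection cycle
    return (c_inspect + faults * (b * c_breakdown + (1 - b) * c_repair)) / T

# scan candidate intervals (1..90 days) for the cheapest one
best_T = min(range(1, 91),
             key=lambda T: cost_per_day(T, lam=0.01, h=30, c_inspect=20,
                                        c_repair=100, c_breakdown=2000))
```

A minimum-downtime variant replaces the cost terms with inspection and repair durations, which is why the two perspectives in the abstract can disagree (29 vs. 14 days).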
1822 The Willingness to Pay of People in Taiwan for Flood Protection Standard of Regions

Authors: Takahiro Katayama, Hsueh-Sheng Chang

Abstract:

Due to global climate change, extreme rainfall has increased, leading to serious floods around the world. In recent years, urbanization and population growth have also increased the extent of impervious surfaces, resulting in significant loss of life and property during floods, especially in the urban areas of Taiwan. In the past, the primary governmental response to floods was structural flood control, and the only flood protection standards in use were design standards. However, these design standards for flood control facilities are generally calculated based on current hydrological conditions. In the face of future extreme events, existing design standards are highly likely to be surpassed, causing direct and indirect damage to the public. To cope with the frequent occurrence of floods in recent years, it has been pointed out that a different standard, called FPSR (Flood Protection Standard of Regions), is needed in Taiwan. FPSR is mainly used for disaster reduction and ensures that hydraulic facilities can drain a regional flood promptly for a specific return period. FPSR conveys a level of flood risk that is useful for land use planning and reflects the disaster conditions that a region can bear. However, little has been reported on FPSR and its impacts on the public in Taiwan. Hence, this study proposes a quantitative procedure to evaluate the FPSR. The study aims to examine the FPSR of a region, public perceptions of and knowledge about FPSR, and the public's WTP (willingness to pay) for FPSR. The research is conducted via literature review and a questionnaire survey. Firstly, this study reviews the domestic and international research on FPSR and provides its theoretical framework.
Secondly, CVM (Contingent Valuation Method) is employed to conduct the survey, using a double-bounded dichotomous-choice, close-ended format to elicit households' WTP for raising the protection level and thereby understand the social costs. The sample consists of citizens living in Taichung City, Taiwan, with 700 respondents chosen for this study. Finally, this research will continue the survey work, identify which factors determine WTP, and provide recommendations for flood adaptation policies in the future.

Keywords: climate change, CVM (Contingent Valuation Method), FPSR (Flood Protection Standard of Regions), urban flooding

Procedia PDF Downloads 234
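In the double-bounded dichotomous-choice format mentioned above, each respondent sees an initial bid and then a higher or lower follow-up bid depending on the first answer, which places their WTP in one of four intervals. A minimal sketch of that bid logic (bid amounts and the doubling/halving rule are illustrative assumptions):

```python
# Sketch of double-bounded dichotomous-choice WTP elicitation.
# The bid structure (double up / halve down) is an illustrative convention.

def wtp_interval(initial_bid, first_yes, second_yes):
    """Map the two yes/no answers to the implied WTP interval."""
    higher, lower = initial_bid * 2, initial_bid / 2
    if first_yes and second_yes:
        return (higher, float("inf"))   # yes-yes: WTP above the higher bid
    if first_yes and not second_yes:
        return (initial_bid, higher)    # yes-no: between initial and higher
    if not first_yes and second_yes:
        return (lower, initial_bid)     # no-yes: between lower and initial
    return (0.0, lower)                 # no-no: below the lower bid

interval = wtp_interval(initial_bid=500, first_yes=True, second_yes=False)
```

Mean WTP is then estimated by maximizing an interval-censored likelihood over all respondents' intervals, which is the statistical core of the CVM analysis described above.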
1821 LWD Acquisition of Caliper and Drilling Mechanics in a Geothermal Well, A Case Study in Sorik Marapi Field – Indonesia

Authors: Vinda B. Manurung, Laila Warkhaida, David Hutabarat, Sentanu Wisnuwardhana, Christovik Simatupang, Dhani Sanjaya, Ashadi, Redha B. Putra, Kiki Yustendi

Abstract:

The geothermal drilling environment presents many obstacles that have limited the use of directional drilling and logging-while-drilling (LWD) technologies, such as borehole washout, mud losses, severe vibration, and high temperature. The case study presented in this paper demonstrates a practice for enhancing data logging in geothermal drilling by deploying advanced telemetry and LWD technologies, aiming at continuous improvement in geothermal drilling operations. The case study covers the 12.25-in. hole section of well XX-05 in Pad XX of the Sorik Marapi Geothermal Field. The LWD string consisted of electromagnetic (EM) telemetry, pressure while drilling (PWD), vibration (DDSr), and acoustic caliper (ACAL) sensors. Through this tool configuration, the operator acquired drilling mechanics and caliper logs in both real-time and recorded mode, enabling effective monitoring of wellbore stability. Throughout the real-time acquisition, EM-PPM telemetry provided a three times faster data rate to the surface unit. With the integration of caliper data and drilling mechanics data (vibration and ECD, equivalent circulating density), borehole conditions were more visible to the directional driller, allowing better control of drilling parameters to minimize vibration and achieve optimum hole cleaning in washed-out or tight formation sequences. After reaching well TD, the recorded data from the caliper sensor indicated an average of 8.6% washout for the entire 12.25-in. interval. Washout intervals were compared with loss occurrences, showing the caliper's potential as an indirect indicator of fractured intervals and validating the fault trend prognosis. This LWD case study has added value to geothermal borehole characterization for both drilling operations and the subsurface. Challenges identified while running LWD in this geothermal environment, such as the effect of tool eccentricity and the impact of vibration, need to be addressed for future improvements.
A perusal of both real-time and recorded drilling mechanics and caliper data has opened various possibilities for maximizing sensor usage in future wells.
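As a minimal sketch of the washout metric quoted above, assuming washout is computed as hole enlargement relative to bit size (the paper does not state the exact formula, and the caliper readings below are illustrative):

```python
# Hedged sketch: average borehole washout from acoustic caliper readings.
# The 12.25-in. bit size is from the abstract; the log values are invented.

BIT_SIZE_IN = 12.25

def washout_pct(caliper_in, bit_in=BIT_SIZE_IN):
    """Enlargement of the measured hole diameter relative to bit size, in %."""
    return 100.0 * (caliper_in - bit_in) / bit_in

# Illustrative caliper log: (depth in m, measured hole diameter in inches)
caliper_log = [(850, 12.3), (900, 13.9), (950, 12.25), (1000, 13.2)]
per_depth = [washout_pct(d) for _, d in caliper_log]
avg_washout = sum(per_depth) / len(per_depth)

# Flag enlarged intervals that might correlate with losses / fractured zones
enlarged = [z for (z, d), w in zip(caliper_log, per_depth) if w > 5.0]
print(round(avg_washout, 1), enlarged)
```

Cross-plotting the flagged depths against loss occurrences is the kind of comparison the study uses to treat the caliper as an indirect fracture indicator.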

Keywords: geothermal drilling, geothermal formation, geothermal technologies, logging-while-drilling, vibration, caliper, case study

Procedia PDF Downloads 112
1820 The Shrinking of the Pink Wave and the Rise of the Right-Wing in Latin America

Authors: B. M. Moda, L. F. Secco

Abstract:

Through free and fair elections and other less democratic processes, Latin America has been gradually turning into a right-wing political region. In order to understand these recent changes, this paper discusses the origin and traits of the pink wave in the subcontinent, the reasons for its current rollback, and future projections for the left-wing in the region. The methodology is descriptive and analytical, drawing on secondary sources mainly from the social and political sciences. The canons of the Washington Consensus were implemented by the majority of Latin American governments in the 80s and 90s under social democratic and right-wing parties. The neoliberal agenda caused political, social, and economic dissatisfaction, bursting into a new political configuration for the region. It started in 1998, when Hugo Chávez took office in Venezuela through the Fifth Republic Movement under the socialist flag. From there on, Latin America was swept by the so-called 'pink wave', a term adopted to describe the rise of self-designated left-wing or center-left parties with a progressive agenda. After Venezuela, countries like Chile, Brazil, Argentina, Uruguay, Bolivia, Ecuador, Nicaragua, Paraguay, El Salvador, and Peru joined the pink wave. The success of these governments was due to a post-neoliberal agenda focused on cash transfer programs, increased public spending, and the strengthening of the national market. The reversal of the preference for the left-wing started in 2012 with the coup against Fernando Lugo in Paraguay. In 2015, chavismo in Venezuela lost the majority of the legislative seats. In 2016, an impeachment removed the Brazilian president Dilma Rousseff from office; she was replaced by the center-right vice-president Michel Temer. In the same year, Mauricio Macri, representing the right-wing party Propuesta Republicana, was elected in Argentina.
In 2016, the center-right liberal Pedro Pablo Kuczynski was elected in Peru, and in 2017, Sebastián Piñera was elected in Chile through the center-right party Renovación Nacional. The current rollback of the pink wave points toward findings that can be arranged in two fields. Economically, the 2008 financial crisis affected the majority of Latin American countries; left-wing economic policies, together with the end of the raw materials boom and the subsequent shrinking of economic performance, opened a flank for popular dissatisfaction. In Venezuela, the 2014 oil crisis reduced state revenues by more than 50%, shrinking social spending, creating an inflationary spiral, and consequently eroding popular support. Politically, the death of Hugo Chávez in 2013 weakened the ideal of 'socialism of the twenty-first century', and it was followed by the death of Fidel Castro, the last bastion of communism in the subcontinent. In addition, several cases of corruption revealed during the pink wave governments made traditional politics unpopular. These issues challenge the left-wing to develop a future agenda based on innovating its economic program, improving its legal and political compliance practices, and regrouping its electoral forces amid the social movements that supported its ascension back in the early 2000s.

Keywords: Latin America, political parties, left-wing, right-wing, pink wave

Procedia PDF Downloads 227
1819 Spectrophotometric Detection of Histidine Using Enzyme Reaction and Examination of Reaction Conditions

Authors: Akimitsu Kugimiya, Kouhei Iwato, Toru Saito, Jiro Kohda, Yasuhisa Nakano, Yu Takano

Abstract:

The measurement of amino acid content is reported to be useful for the diagnosis of several types of diseases, including lung cancer, gastric cancer, colorectal cancer, breast cancer, prostate cancer, and diabetes. The conventional detection methods for amino acids are high-performance liquid chromatography (HPLC) and liquid chromatography-mass spectrometry (LC-MS), but both have drawbacks: the equipment is cumbersome and the techniques are costly in both time and money. In contrast, biosensors and biosensing methods provide more rapid and facile detection strategies using simple equipment. The authors have reported a novel approach for the detection of individual amino acids that uses aminoacyl-tRNA synthetase (aaRS) as the molecular recognition element, because each aaRS is expected to bind selectively to its corresponding amino acid. The consecutive enzymatic reactions used in this study are as follows: the aaRS binds to its cognate amino acid and releases inorganic pyrophosphate; hydrogen peroxide (H₂O₂) is then produced by the enzymatic reactions of inorganic pyrophosphatase and pyruvate oxidase. Trinder's reagent was added to the reaction mixture, and the absorbance change at 556 nm was measured using a microplate reader. In this study, an amino acid-sensing method using histidyl-tRNA synthetase (HisRS; the histidine-specific aaRS) as the molecular recognition element, in combination with the Trinder's reagent spectrophotometric method, was developed. The quantitative performance and selectivity of the method were evaluated, and the optimal enzyme reaction and detection conditions were determined. The authors developed a simple and rapid method for detecting histidine through a combination of enzymatic reaction and spectrophotometric detection, with reaction and detection conditions optimized for quantitation in the range of 1–100 µM histidine.
The detection limits are sufficient to analyze these amino acids in biological fluids. This work was partly supported by Hiroshima City University Grant for Special Academic Research (General Studies).
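Quantitation over the 1–100 µM range implies a calibration curve relating absorbance at 556 nm to histidine concentration. A minimal sketch, assuming a linear (Beer-Lambert-type) response with wholly illustrative absorbance values:

```python
# Hedged sketch: linear calibration of a colorimetric readout (A556) against
# histidine standards. The concentration range is from the abstract; the
# absorbance values and the linearity assumption are ours.

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Illustrative standards: histidine (uM) vs absorbance at 556 nm
conc = [1, 10, 25, 50, 100]
a556 = [0.015, 0.105, 0.255, 0.505, 1.005]

slope, intercept = linfit(conc, a556)

def histidine_um(absorbance):
    """Invert the calibration line to estimate concentration in uM."""
    return (absorbance - intercept) / slope

print(round(histidine_um(0.505), 1))
```

A real assay would also report the limit of detection from the blank's standard deviation and check linearity across replicates.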

Keywords: amino acid, aminoacyl-tRNA synthetase, biosensing, enzyme reaction

Procedia PDF Downloads 271
1818 An Analysis of Emmanuel Macron's Campaign Discourse

Authors: Robin Turner

Abstract:

In the context of strengthening conservative movements such as 'Brexit' and the election of US President Donald Trump, the global political stage was shaken by the election of Emmanuel Macron to the French presidency, defeating the far-right candidate Marine Le Pen. The election itself was a first for the Fifth Republic in that neither final candidate came from the two traditional major political parties: the left-wing Parti Socialiste (PS) and the right-wing Les Républicains (LR). Macron, who served as Minister of the Economy under his predecessor, founded the centrist liberal political party En Marche! in April 2016 before resigning from his post in August to launch his bid for the presidency. Between the party's creation and the first round of elections a year later, Emmanuel Macron and En Marche! garnered enough support to reach the run-off election, finishing far ahead of many seasoned national political figures. Now months into his presidency, the youngest President of the Republic shows no sign of losing momentum. His unprecedented success raises many questions with respect to international relations, economics, and the evolving relationship between the French government and its citizens. The effectiveness of Macron's campaign relies on many factors, one of which is his manner of communicating his platform to French voters. Using data from oral discourse and primary material from Macron and En Marche! in sources such as party publications and Twitter, the study categorizes linguistic instruments (address, lexicon, tone, register, and syntax) to identify prevailing patterns of speech and communication. The linguistic analysis in this project is two-fold. In addition to the stand-alone value of these findings, the discourse patterns are contextualized against comparable discourse of other 2017 presidential candidates, with particular emphasis on that of Marine Le Pen.
Secondly, to provide an alternative approach, the study contextualizes Macron's discourse against that of two immediate predecessors representing the traditional stronghold parties, François Hollande (PS) and Nicolas Sarkozy (LR). These comparative methods produce an analysis that gives insight not only into a contributing factor to Macron's successful 2017 campaign but also into how his platform presented itself differently from previous presidential platforms. Furthermore, the study supplies data that contributes to a wider analysis of the defeat of the 'traditional' French political parties by the 'start-up' movement En Marche!.

Keywords: Emmanuel Macron, French, discourse analysis, political discourse

Procedia PDF Downloads 247
1817 Relationship between Readability of Paper-Based Braille and Character Spacing

Authors: T. Nishimura, K. Doi, H. Fujimoto, T. Wada

Abstract:

The number of people with acquired visual impairments has increased in recent years. In specialized courses at schools for the blind and in Braille lessons offered by social welfare organizations, many people with acquired visual impairments cannot adequately learn to read Braille. One reason is that the common Braille patterns, designed for people with visual impairments who already have mature Braille reading skills, are difficult for Braille reading beginners to read. In addition, Braille book manufacturers have scant knowledge of which Braille patterns are easy for beginners to read. It is therefore necessary to investigate Braille patterns suitable for beginners. To obtain such knowledge, this study aimed to elucidate the relationship between the readability of paper-based Braille and its patterns. It focused on character spacing, which readily affects Braille reading ability, to determine a character spacing ratio (the ratio of character spacing to dot spacing) suitable for beginners. Specifically, considering beginners with acquired visual impairments who are unfamiliar with reading Braille, we quantitatively evaluated the effect of the character spacing ratio on Braille readability through an experiment using sighted subjects with no experience of reading Braille. In this experiment, ten blindfolded sighted adults were asked to read a test piece (three Braille characters); the Braille used in each test piece was composed of five dots. They were asked to touch the Braille by sliding their forefinger along the test piece immediately after the examiner gave the start signal, and to lift their forefinger from the test piece when they had perceived the Braille characters.
Seven conditions varied the character spacing ratio (1.2, 1.4, 1.5, 1.6, 1.8, 2.0, 2.2), and four varied the dot spacing (2.0, 2.5, 3.0, 3.5 mm). Ten trials were conducted for each condition. The test pieces were created using NISE Graphic, which can print Braille with arbitrary character spacing and dot spacing at high accuracy. We adopted correct rate, reading time, and subjective readability as evaluation indices to investigate how the character spacing ratio affects Braille readability. The results showed that Braille reading beginners could read Braille accurately and quickly when the character spacing ratio was more than 1.8 and the dot spacing was more than 3.0 mm. Conversely, it was difficult for beginners to read Braille accurately and quickly when both character spacing and dot spacing were small. This study thus reveals a character spacing ratio that makes reading easy for Braille beginners.
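Aggregating the evaluation indices per experimental condition (character spacing ratio x dot spacing) reduces to grouping trials and averaging. A minimal sketch with invented trial data, showing how correct rate and mean reading time per condition might be tabulated:

```python
# Hedged sketch: per-condition summary of two of the study's three indices.
# Trial records are illustrative; the real experiment had ten trials per
# condition across ten subjects.

from collections import defaultdict

# Each trial: (char_spacing_ratio, dot_spacing_mm, correct?, reading_time_s)
trials = [
    (1.8, 3.0, True, 4.2), (1.8, 3.0, True, 3.9), (1.8, 3.0, False, 6.1),
    (1.2, 2.0, False, 8.3), (1.2, 2.0, True, 7.5), (1.2, 2.0, False, 9.0),
]

by_cond = defaultdict(list)
for ratio, dot, ok, t in trials:
    by_cond[(ratio, dot)].append((ok, t))

summary = {
    cond: {
        "correct_rate": sum(ok for ok, _ in rows) / len(rows),
        "mean_time_s": sum(t for _, t in rows) / len(rows),
    }
    for cond, rows in by_cond.items()
}
print(summary[(1.8, 3.0)])
```

The third index, subjective readability, would be averaged the same way from rating-scale responses.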

Keywords: Braille, character spacing, people with visual impairments, readability

Procedia PDF Downloads 275
1816 Maintenance Wrench Time Improvement Project

Authors: Awadh O. Al-Anazi

Abstract:

As part of the organizational need for successful maintenance activities, a proper management system must be put in place to ensure their effectiveness. The management system shall clearly describe the process of identifying, prioritizing, planning, scheduling, and executing all maintenance activities, and of providing valuable feedback on them. A complete, accurate, and properly implemented system gives the organization a strong platform for effective maintenance activities that result in efficient outcomes and business success. The purpose of this research was to introduce a practical tool for measuring the maintenance efficiency level within Saudi organizations. A comprehensive study was launched across many maintenance professionals throughout leading Saudi organizations. The study covered five main categories: work process, identification, planning and scheduling, execution, and performance monitoring. Each category was evaluated across many dimensions to determine its current effectiveness on a five-level scale from 'process is not there' to 'mature implementation'. Wide participation was received, responses were analyzed, and the study concluded by highlighting major gaps and improvement opportunities within Saudi organizations. One effective implementation of the efficiency enhancement efforts was deployed in Saudi Kayan (one of the SABIC affiliates). The project outcomes are described below. SK's overall maintenance wrench time was measured at 20% (on average) of the total daily working time. The assessment indicated several organizational gaps, such as a high amount of reactive work, poor coordination and teamwork, unclear roles and responsibilities, and underutilization of resources.
A multidisciplinary team was assigned to design and implement an appropriate work process capable of governing the execution process, improving maintenance workforce efficiency, and maximizing wrench time (targeting > 50%). The enhanced work process was introduced through brainstorming and wide benchmarking, incorporated with a proper change management plan and leadership sponsorship. The project was completed in 2018. Achieved results: SK wrench time was improved to 50%, which resulted in 1) reducing the average notification completion time and 2) reducing maintenance expenses on overtime and manpower support (3.6 MSAR actual savings from budget within 6 months).
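Wrench time is simply hands-on tool time as a share of the working day. A minimal sketch of the arithmetic behind the 20% baseline and 50% target quoted above, assuming an 8-hour day (the shift length is our assumption):

```python
# Hedged sketch: wrench-time arithmetic. The 20% baseline and 50% target
# are from the abstract; the 8-hour day and hour figures are illustrative.

def wrench_time_pct(wrench_hours, total_hours):
    """Hands-on tool time as a percentage of total working time."""
    return 100.0 * wrench_hours / total_hours

baseline = wrench_time_pct(1.6, 8.0)   # ~20% of an 8-hour day on the tools
target = 50.0

# Extra hands-on hours per technician per day needed to close the gap
extra_hours = (target - baseline) / 100.0 * 8.0
print(round(baseline), extra_hours)
```

Framed this way, the improvement from 20% to 50% recovers roughly 2.4 hands-on hours per technician per day, which is what drives the reported overtime and manpower savings.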

Keywords: efficiency, enhancement, maintenance, work force, wrench time

Procedia PDF Downloads 126
1815 Nondecoupling Signatures of Supersymmetry and an Lμ-Lτ Gauge Boson at Belle-II

Authors: Heerak Banerjee, Sourov Roy

Abstract:

Supersymmetry, one of the most celebrated frameworks for explaining experimental observations where the standard model (SM) falls short, is reeling from the lack of experimental vindication. At the same time, the idea of additional gauge symmetry, in particular gauged Lμ-Lτ symmetric models, has also generated significant interest. Such models have been extensively proposed to explain the tantalizing discrepancy between the predicted and measured values of the muon anomalous magnetic moment, alongside several other issues plaguing the SM. While very little parameter space within these models remains unconstrained, this work finds that the γ + Missing Energy (ME) signal at the Belle-II detector will be a smoking gun for supersymmetry (SUSY) in the presence of a gauged U(1)Lμ-Lτ symmetry. A remarkable consequence of breaking the enhanced symmetry appearing in the limit of degenerate (s)leptons is the nondecoupling of the radiative contribution of heavy charged sleptons to the γ-Z΄ kinetic mixing. The signal process, e⁺e⁻ → γZ΄ → γ + ME, is an outcome of this ubiquitous feature. Taking into account the severe constraints on gauged Lμ-Lτ models from several low-energy observables, it is shown that any significant excess in all but the highest photon energy bin would be an undeniable signature of such heavy scalar fields in SUSY coupling to the additional gauge boson Z΄. The number of signal events depends crucially on the logarithm of the ratio of the stau to smuon masses in the presence of SUSY. In addition, the number is inversely proportional to the e⁺e⁻ collision energy, making a low-energy, high-luminosity collider like Belle-II an ideal testing ground for this channel. This process can probe large swathes of the hitherto free slepton mass ratio vs. additional gauge coupling (gₓ) parameter space. More importantly, it can explore the narrow slice of Z΄ mass (MZ΄) vs. gₓ parameter space still allowed in gauged U(1)Lμ-Lτ models for superheavy sparticles.
The finding that the signal significance is independent of the individual slepton masses is an exciting prospect. Further, the possibility that signatures of even superheavy SUSY particles that may have escaped detection at the LHC could show up at the Belle-II detector is an invigorating revelation.
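Schematically, and in our own notation rather than the authors', the two scaling features stated above can be sketched as follows: the slepton loops induce a kinetic mixing that depends only on the logarithm of the slepton mass ratio, and the mixing-mediated production cross-section falls with the collision energy.

```latex
% Schematic scaling only (our assumption): loop prefactors, form factors,
% and the exact coefficient of the generic loop factor 16\pi^2 are omitted.
\epsilon_{\gamma Z'} \;\sim\; \frac{e\, g_X}{16\pi^2}\,
    \ln\frac{m_{\tilde{\tau}}^{2}}{m_{\tilde{\mu}}^{2}},
\qquad
\sigma\!\left(e^{+}e^{-} \to \gamma Z'\right) \;\propto\;
    \frac{\epsilon_{\gamma Z'}^{2}}{s}
```

On this reading, the rate is fixed by the mass ratio rather than the individual slepton masses, and the 1/s falloff favors a low-energy, high-luminosity machine such as Belle-II over the LHC.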

Keywords: additional gauge symmetry, electron-positron collider, kinetic mixing, nondecoupling radiative effect, supersymmetry

Procedia PDF Downloads 116