Search results for: LCD panel deviation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1940

140 Solar Electric Propulsion: The Future of Deep Space Exploration

Authors: Abhishek Sharma, Arnab Banerjee

Abstract:

This research studies solar electric propulsion (SEP) technology for planetary missions. The main benefits of using solar electric propulsion for such missions are shorter flight times, more frequent target accessibility, and the use of a smaller launch vehicle than that required by a comparable chemical propulsion mission. Energized by electric power from on-board solar arrays, an electrically propelled system uses about ten times less propellant than a conventional chemical propulsion system, yet the reduced fuel mass can still deliver the sustained thrust needed to propel robotic and crewed missions beyond Low Earth Orbit (LEO). The thrusters used in SEP are gridded ion thrusters and Hall-effect thrusters. This research focuses on ion thrusters, the complications associated with them, and how those complications can be overcome. Ion thrusters are favoured because they have a lower total propellant requirement and a substantially longer operating life. In an ion thruster, the anode attracts and directs the electrons emitted by the cathode. When the anode is not maintained at a sufficiently high potential, the electron paths diverge, and the diverging charges interact with the thruster surfaces. Just as the charges ionize the xenon gas, they can ionize the surface material, eroding and contaminating it over time and thereby limiting the thruster's lifetime. One solution to this problem is to use surface materials that are difficult to ionize. Another approach is to raise the anode potential so that the electrons deviate less, or to shorten the thruster so that the positive anode is more effective. The aim is to investigate how the deviation of charges can be constrained while keeping the input power constant, and hence to increase the lifetime of the thruster.
Ring-cusp magnets are predominantly used in ion thrusters, where the magnetic field in the discharge chamber prevents electrons from interacting with the chamber walls. This study also examines the effect of adding a solenoid to produce a micro-solenoidal magnetic field alongside the ring-cusp field. A further area of interest is how power can be provided to the solar electric propulsion vehicle for lowering and boosting the spacecraft's orbit while also supplying a substantial amount of power to the solenoid for producing stronger magnetic fields. This can be achieved using an electrodynamic tether, which can power both the vehicle and the solenoids in the ion thruster, eliminating the need to carry extra propellant, reducing spacecraft weight, and hence reducing the cost of space propulsion.
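The electrodynamic-tether power source mentioned above rests on the standard motional-EMF relation, emf = B · L · v for the ideal geometry in which the tether, the orbital velocity, and the geomagnetic field are mutually perpendicular. A minimal first-order sizing sketch follows; the field strength, tether length, and current are illustrative assumptions, not figures from the study.

```python
# Hypothetical first-order sizing of an electrodynamic tether as a power
# source; the numbers below are illustrative assumptions, not mission data.

def tether_emf(b_field_t, length_m, velocity_ms):
    """Motional EMF of a conducting tether crossing a magnetic field,
    assuming ideal perpendicular geometry: emf = B * L * v."""
    return b_field_t * length_m * velocity_ms

def tether_power(emf_v, current_a):
    """Electrical power delivered at a given tether current: P = emf * I."""
    return emf_v * current_a

# Representative low-Earth-orbit values (assumed for illustration):
B = 3.0e-5      # geomagnetic field strength, tesla
L = 5_000.0     # tether length, metres
v = 7_800.0     # orbital velocity, metres per second

emf = tether_emf(B, L, v)       # ~1170 V for these assumed values
power = tether_power(emf, 1.0)  # ~1170 W at an assumed 1 A
```

Real tethers fall short of this ideal because the field is inclined to the velocity vector and the circuit has resistive losses, so the figures above are an upper bound for the assumed geometry.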

Keywords: electro-dynamic tether, ion thruster, lifetime of thruster, solar electric propulsion vehicle

Procedia PDF Downloads 195
139 Identifying Confirmed Resemblances in Problem-Solving Engineering, Both in the Past and Present

Authors: Colin Schmidt, Adrien Lecossier, Pascal Crubleau, Simon Richir

Abstract:

Introduction: The widespread availability of artificial intelligence, exemplified by Generative Pre-trained Transformers (GPT) built on large language models (LLMs), has caused a seismic shift in the realm of knowledge. Everyone now has the capacity to learn swiftly whether these models serve them well or not. Today, conversational AI such as ChatGPT is grounded in neural transformer models, a significant advance in natural language processing enabled by the emergence of renowned LLMs constructed on the transformer architecture. Inventiveness of an LLM: OpenAI's GPT-3 stands as a premier LLM, capable of handling a broad spectrum of natural language processing tasks without fine-tuning and reliably producing text that reads as if authored by humans. However, even with an understanding of how LLMs respond to questions, there may lurk behind OpenAI's seemingly endless responses an inventive model yet to be uncovered; some unforeseen reasoning may emerge from the interconnection of the neural networks. Just as a Soviet researcher in the 1940s asked whether inventions share common factors that make it possible to understand how, and according to what principles, humans create them, it is equally legitimate today to explore whether the solutions provided by LLMs to complex problems also share common denominators. Theory of Inventive Problem Solving (TRIZ): We revisit some fundamentals of TRIZ and how Genrich Altshuller was inspired by the idea that inventions and innovations are essential means of solving societal problems. It is crucial to note that traditional problem-solving methods often fall short of discovering innovative solutions: design teams are frequently hampered by psychological barriers stemming from confinement within a highly specialized knowledge domain that is difficult to question. We presume that ChatGPT utilizes TRIZ 40.
Hence, the objective of this research is to decipher the inventive model of LLMs, particularly that of ChatGPT, through a comparative study. This will improve the efficiency of sustainable innovation processes and shed light on how the solution to a complex problem was constructed. Description of the experimental protocol: To confirm or reject our main hypothesis, namely that ChatGPT uses TRIZ, we follow a stringent protocol, detailed in the paper, drawing on insights from a panel of two TRIZ experts. Conclusion and future directions: In this endeavour, we sought to comprehend how an LLM such as GPT addresses complex challenges. Our goal was to analyse the inventive model of the responses provided by an LLM, specifically ChatGPT, by comparing it to an existing standard model, TRIZ 40. Problem solving remains the main focus of our endeavours.

Keywords: artificial intelligence, Triz, ChatGPT, inventiveness, problem-solving

Procedia PDF Downloads 39
138 Determination of Genetic Markers, Microsatellite Type, Linked to Milk Production Traits in Goats

Authors: Mohamed Fawzy Elzarei, Yousef Mohammed Al-Dakheel, Ali Mohamed Alseaf

Abstract:

Modern molecular techniques, such as single-marker analysis of traits linked to those markers, can provide rapid and accurate genetic results. In the last two decades of the twentieth century, the application of molecular techniques reached an advanced stage in cattle, sheep, and pigs; in goats, especially in our region, it still lags far behind other species. As many researchers have reported, microsatellite markers are among the markers best suited to linkage studies. Single markers linked to traits of interest allow animals to be selected early without the need to map the entire genome. The simplicity, applicability, and low cost of this technique have given it a wide range of applications in genetics and molecular biology. It also provides a useful approach for evaluating genetic differentiation, particularly in populations that are genetically poorly characterized. The expected breeding value (EBV) and yield deviation (YD) are the parameters most commonly used to study linkage between quantitative traits and molecular markers, since these values are raw data corrected for non-genetic factors. A total of 17 microsatellite markers (from chromosomes 6, 14, 18, 20, and 23) were used in this study to search for chromosomal regions that could be responsible for genetic variability in milk traits and that explain part of the phenotypic variance. Single-marker analyses were used to identify linkage between the microsatellite markers and variation in the EBVs of the following traits: milk yield, protein percentage, fat percentage, litter size and weight at birth, and litter size and weight at weaning. In the parameter estimates from the forward and backward solutions of the stepwise regression procedure for the milk yield trait, only two markers, OARCP9 and AGLA29, showed a highly significant effect (p≤0.01) in both solutions.
The forward solutions of the different equations indicated that the R² of these equations depended strongly on only two partial regression coefficients (βi) for these markers. For the milk protein trait, four markers showed a significant effect: BMS2361 and CSSM66 (p≤0.01), and BMS2626 and OARCP9 (p≤0.05). Likewise, four markers (MCM147, BM1225, INRA006, and INRA133) showed a highly significant effect (p≤0.01) in both the backward and forward solutions in association with the milk fat trait. For litter size at birth and at weaning, only one marker each (BM143, p≤0.01, and RJH1, p≤0.05, respectively) showed a significant effect in the backward and forward solutions. In the parameter estimates from the forward and backward solutions of the stepwise regression procedure for the litter weight at birth (LWB) trait, one marker (MCM147) showed a highly significant effect (p≤0.01) and two markers (ILSTS011, CSSM66) showed a significant effect (p≤0.05) in both solutions.
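The forward-selection side of the stepwise regression described above can be sketched as a greedy search that, at each step, adds the predictor that most increases R². The marker names and data below are synthetic placeholders for illustration, not the study's 17 microsatellites or its EBV records.

```python
# A minimal forward stepwise selection sketch over marker predictors,
# analogous in spirit to the procedure described; markers and phenotypes
# here are synthetic illustrations, not the study's data.
import numpy as np

def forward_stepwise(X, y, names, max_terms=2):
    """Greedily add the predictor that most increases R^2 at each step."""
    selected, remaining = [], list(range(X.shape[1]))
    best_r2 = -np.inf
    for _ in range(max_terms):
        best_r2, best_j = -np.inf, None
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([np.ones(len(y)), X[:, cols]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
            if r2 > best_r2:
                best_r2, best_j = r2, j
        selected.append(best_j)
        remaining.remove(best_j)
    return [names[j] for j in selected], best_r2

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))                  # 5 synthetic "markers"
y = 2.0 * X[:, 1] - 1.5 * X[:, 3] + rng.normal(scale=0.1, size=60)
markers = ["M1", "M2", "M3", "M4", "M5"]
chosen, r2 = forward_stepwise(X, y, markers, max_terms=2)
# With this synthetic data the two informative markers, M2 and M4, are chosen.
```

A backward solution works in reverse, starting from the full model and dropping the weakest term; agreement between the two directions, as reported for OARCP9 and AGLA29, is the usual robustness check.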

Keywords: microsatellites marker, estimated breeding value, stepwise regression, milk traits

Procedia PDF Downloads 66
137 Development and Validation of a Turbidimetric Bioassay to Determine the Potency of Ertapenem Sodium

Authors: Tahisa M. Pedroso, Hérida R. N. Salgado

Abstract:

The microbiological turbidimetric assay determines the potency of a drug by measuring the turbidity (absorbance) caused by the inhibition of microorganisms by ertapenem sodium. Ertapenem sodium (ERTM), a synthetic antimicrobial agent of the carbapenem class, acts against Gram-negative, Gram-positive, aerobic, and anaerobic microorganisms. Turbidimetric assays are described in the literature for some antibiotics, but not for ertapenem. The objective of the present study was to develop and validate a simple, sensitive, precise, and accurate microbiological turbidimetric assay to quantify injectable ertapenem sodium as an alternative to the physicochemical methods described in the literature. Several preliminary tests were performed to choose the following parameters: Staphylococcus aureus ATCC 25923, IAL 1851, 8% inoculum, BHI culture medium, and an aqueous solution of ertapenem sodium. 10.0 mL of sterile BHI culture medium was distributed into 20 tubes. 0.2 mL of the standard and test solutions was added to tubes S1, S2, S3 and T1, T2, T3, respectively, and 0.8 mL of inoculated culture medium was transferred to each tube, according to the 3 x 3 parallel-lines design. The tubes were incubated in a Marconi MA 420 shaker at 35.0 °C ± 2.0 °C for 4 hours. After this period, microbial growth was halted by adding 0.5 mL of 12% formaldehyde solution to each tube. The absorbance was determined in a Quimis Q-798DRM spectrophotometer at a wavelength of 530 nm. An analytical curve was constructed to obtain the equation of the line by the least-squares method, and linearity and parallelism were verified by ANOVA. The specificity of the method was proven by comparing the responses obtained for the standard and the finished product. Precision was checked by determining ertapenem sodium on three days. Accuracy was determined by a recovery test.
Robustness was determined by comparing the results obtained when varying the wavelength, the brand of culture medium, and the volume of culture medium in the tubes. Statistical analysis showed no deviation from linearity in the analytical curves of the standard and test samples; the correlation coefficients were 0.9996 and 0.9998, respectively. Specificity was confirmed by comparing the absorbances of the reference substance and the test samples. The values obtained for intraday, interday, and between-analyst precision were 1.25%, 0.26%, and 0.15%, respectively. The amount of ertapenem sodium present in the samples analysed, 99.87%, is consistent. Accuracy was proven by the recovery test, with a value of 98.20%. The varied parameters did not affect the analysis of ertapenem sodium, confirming the robustness of the method. The turbidimetric assay is more versatile, faster, and easier to apply than the agar diffusion assay. The method is simple, rapid, and accurate, and can be used in the routine quality-control analysis of formulations containing ertapenem sodium.
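The calibration step described above (a least-squares line through the standard responses, then inversion to estimate the test potency) can be sketched in a few lines. The concentrations and absorbances below are invented placeholders, not the paper's measured values.

```python
# A sketch of the analytical-curve step: fit absorbance vs. log-concentration
# by least squares and read an estimated test potency off the line.
# All numbers are illustrative placeholders, not the study's measurements.
import numpy as np

std_conc = np.array([1.0, 2.0, 4.0])     # standard concentrations (S1-S3)
std_abs = np.array([0.20, 0.35, 0.65])   # mean absorbances at 530 nm

# Fit: absorbance = slope * log2(concentration) + intercept
x = np.log2(std_conc)
slope, intercept = np.polyfit(x, std_abs, 1)

# Correlation coefficient of the fit (linearity check)
r = np.corrcoef(x, std_abs)[0, 1]

def estimated_conc(absorbance):
    """Invert the calibration line to estimate a test concentration."""
    return 2.0 ** ((absorbance - intercept) / slope)

# Potency of a test sample relative to an assumed nominal of 2.0 units
test_potency_pct = 100.0 * estimated_conc(0.35) / 2.0
```

In the actual 3 x 3 parallel-lines design the standard and test lines are fitted together and checked for parallelism by ANOVA; the sketch above shows only the single-line calibration idea.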

Keywords: ertapenem sodium, turbidimetric assay, quality control, validation

Procedia PDF Downloads 375
136 Climate Change Law and Transnational Corporations

Authors: Manuel Jose Oyson

Abstract:

The Intergovernmental Panel on Climate Change (IPCC) warned in its most recent report that the entire world must "both mitigate and adapt to climate change if it is to effectively avoid harmful climate impacts." The IPCC observed "with high confidence" a more rapid rise in total anthropogenic greenhouse gas (GHG) emissions from 2000 to 2010 than in the past three decades, which "were the highest in human history", and which, if left unchecked, will entail a continuing process of global warming and can alter the climate system. Current efforts to respond to the threat of global warming, however, such as the United Nations Framework Convention on Climate Change and the Kyoto Protocol, have focused on states and fail to involve transnational corporations (TNCs), which are responsible for a vast amount of GHG emissions. Involving TNCs in the search for solutions to climate change is consistent with contemporary international law's acknowledgment of an international role for other international persons, including TNCs, and departs from the traditional "state-centric" response to climate change. Shifting the focus of GHG emissions away from states recognises that the activities of TNCs "are not bound by national borders" and that the international movement of goods meets the needs of consumers worldwide. Although no legally binding instrument covers TNC activities or legal responsibilities generally, TNCs have increasingly been held legally responsible under international law for violations of human rights, exploitation of workers, and environmental damage, but not for climate change damage. Imposing on TNCs a legally binding obligation to reduce their GHG emissions, or a legal liability for climate change damage, is arguably formidable and unlikely in the absence of a recognisable source of obligation in international or municipal law.
Instead, recourse to "soft law" and non-legally-binding instruments may be a way forward for TNCs to reduce their GHG emissions and help address climate change. Various studies have noted the positive effects of voluntary approaches, and TNCs have in recent decades voluntarily committed to "soft law" international agreements. This development reflects a growing recognition among corporations in general, and TNCs in particular, of their corporate social responsibility (CSR). While CSR used to be the domain of "small, offbeat companies", it has now become mainstream. The paper argues that TNCs must voluntarily commit to reducing their GHG emissions and to helping address climate change as part of their CSR. First, as a serious "global commons problem", climate change requires international cooperation from multiple actors, including TNCs. Second, TNCs are not innocent bystanders but are responsible for a large share of GHG emissions across their vast global operations. Third, TNCs have the capability to help solve the problem of climate change. Even assuming, arguendo, that TNCs did not strongly contribute to the problem, society would still have valid expectations that they use their capabilities, knowledge base, and advanced technologies to help address it. It would seem unthinkable for TNCs to do nothing while the global environment fractures.

Keywords: climate change law, corporate social responsibility, greenhouse gas emissions, transnational corporations

Procedia PDF Downloads 329
135 “Divorced Women are Like Second-Hand Clothes” - Hate Language in Media Discourse

Authors: Sopio Totibadze

Abstract:

Although the legal framework of Georgia reflects the main principles of gender equality and is in line with international standards, Georgia remains a male-dominated society. Men prevail in many areas of social, economic, and political life, which frequently gives women a subordinate status in society and in the family. According to the latest studies, "violence against women and girls in Georgia is also recognized as a public problem, and it is necessary to focus on it". Moreover, the Public Defender's report (2019) reveals that "in the last five years, 151 women were killed in Georgia due to gender and family violence". Unfortunately, crimes based on gender-based oppression are frequent in Georgia, and they pose a threat not only to women but to people of any gender whose desires and aspirations do not correspond to the gender norms and roles prevailing in society. It is well known that language is often used as a tool of gender oppression; feminist and gender studies in linguistics ultimately serve to represent the problem, reflect on it, and propose ways to solve it. Together with technical advances in communication, a new form of discrimination has arisen: hate language against women in electronic media discourse. Owing to the nature of social media and the internet, messages containing hate language can spread in seconds and reach millions of people, yet few people know about the detrimental effects they may have on the addressee and on society. This paper analyses hateful comments directed at women on various media platforms to determine the linguistic strategies used to attack women and the reasons why women may fall victim to this type of hate language. The data were collected over six months; overall, 500 comments will be examined for the paper. Qualitative and quantitative analyses were chosen as the methodology of the study.
The comments posted on various media platforms were selected manually for several reasons, the most important being the difficulty of identifying hate speech, which can disguise itself in different ways, such as humour and memes. The comments on the articles, posts, pictures, and videos selected for sociolinguistic analysis depict a woman, a taboo topic, or a scandalous event centred on a woman that triggered hate language towards the person to whom the post or article was dedicated. The study has revealed that a woman can become a victim of hatred if she does something considered a deviation from a societal norm, namely, getting a divorce, being sexually active, being vocal about feminist values, or talking about taboos. Interestingly, those who use hate language are not only men trying to "normalize" prejudiced patriarchal values but also women, who are equally active in bringing down a "strong" woman. The paper also aims to raise awareness of the hate language directed at women, as knowledge of the issue is the first step towards tackling it.

Keywords: femicide, hate language, media discourse, sociolinguistics

Procedia PDF Downloads 65
134 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate over time owing to traffic wheel loads, environment, and climate conditions. Pavement deterioration reduces serviceability and structural capacity. Consequently, proper maintenance and rehabilitation (M&R) are necessary to keep the in-service pavement network at the desired level of serviceability. Because of resource and financial constraints, the pavement management system (PMS) prioritizes the roads most in need of M&R action and recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by various roughness-based and distress-based indices, such as the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range from 0 to 100, where 0 represents a highly deteriorated pavement and 100 a newly constructed one. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area), and is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANNs), has become increasingly popular in modeling engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, owing to their efficiency in capturing non-linear relationships and handling large amounts of uncertain data.
Typical regression models, which require a pre-defined relationship, can be replaced by an ANN, which has been found to be an appropriate tool for predicting the various pavement performance indices from different factors. Accordingly, the objective of the present study is to develop and train an ANN model that predicts PCI values. The model's input consists of the percentage areas of 11 damage types (alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off), each at three severity levels (low, medium, high). The developed model was trained on 536,000 samples and tested on 134,000 samples, collected and prepared by the National Transport Infrastructure Company. The predicted results showed satisfactory agreement with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are the damages related to alligator cracking, swelling, rutting, and potholes.
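The input layout described above (11 damage types at 3 severity levels, giving 33 percentage-area features mapped to a PCI in [0, 100]) can be sketched with an off-the-shelf neural network. The data here is synthetic, and the architecture and hyperparameters are illustrative assumptions, not the authors' trained model.

```python
# A minimal sketch of the setup described: a neural network mapping 33
# distress inputs (11 damage types x 3 severity levels, as % of total area)
# to a PCI value. Synthetic data; architecture is an assumption, not the
# authors' network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
n_features = 11 * 3                     # 11 damage types x 3 severity levels
X = rng.uniform(0.0, 20.0, size=(2000, n_features))   # % area per distress

# Synthetic "ground truth": PCI falls as the weighted distress area grows
weights = rng.uniform(0.2, 1.0, size=n_features)
y = np.clip(100.0 - X @ weights / 3.0, 0.0, 100.0)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=300,
                     random_state=0)
model.fit(X, y)
pred = model.predict(X[:5])             # predicted PCI for five sections
```

The real model was trained on 536,000 field samples; the point of the sketch is only the shape of the input vector and the regression target, not the reported accuracy.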

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 116
133 Impact of Ethiopia's Productive Safety Net Program on Household Dietary Diversity and Child Nutrition in Rural Ethiopia

Authors: Tagel Gebrehiwot, Carolina Castilla

Abstract:

Food insecurity and child malnutrition are among the most critical issues in Ethiopia, and different reform programs have been carried out to improve household food security. The Food Security Program (FSP), among others, was introduced to combat the country's persistent food insecurity; it includes a safety net component, the Productive Safety Net Program (PSNP), started in 2005. The goal of PSNP is to offer multi-annual transfers, such as food, cash, or a combination of both, to chronically food-insecure households to break the cycle of food aid. Food or cash transfers are the main elements of PSNP. The case for cash transfers builds on Sen's analysis of 'entitlement to food', in which he argues that restoring access to food by improving demand is a more effective and sustainable response to food insecurity than food aid. Cash-based schemes offer greater choice in the use of the transfer and can allow a greater diversity of food choices. Dietary diversity has been shown to be positively associated with the key pillars of food security, and it is therefore considered a measure of a household's capacity to access a variety of food groups. Studies of dietary diversity among Ethiopian rural households are rare, and there is still a dearth of evidence on the impact of PSNP on household dietary diversity. In this paper, we examine the impact of Ethiopia's PSNP on household dietary diversity and child nutrition using panel household surveys, employing several identification strategies. We exploit the exogenous increase in kebeles' PSNP budgets to identify the effect of the change in the amount households received in transfers between 2012 and 2014 on the change in dietary diversity, using three approaches: two-stage least squares, reduced-form IV, and generalized propensity score matching with a continuous treatment.
The results indicate that the increase in PSNP transfers between 2012 and 2014 had no effect on household dietary diversity. Estimates for different household dietary indicators reveal that the effect of the change in the cash transfer received by the household is statistically and economically insignificant. This finding is robust to the different identification strategies and to the inclusion of control variables that determine eligibility to become a PSNP beneficiary. To identify the effect of PSNP participation on children's height-for-age and stunting, we use a difference-in-differences approach. We use children between 2 and 5 years old in 2012 as a baseline because by then they have experienced long-term growth failure; the treatment group comprises children aged 2 to 5 in 2014 in PSNP-participant households. Although changes in height-for-age take time, two years of additional transfers among children who were not yet born, or were under the age of 2-3, in 2012 have the potential to make a considerable impact on reducing the prevalence of stunting. The results indicate that participation in PSNP had no effect on child nutrition measured as height-for-age or the probability of being stunted, suggesting that PSNP should be designed in a more nutrition-sensitive way.
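The difference-in-differences logic used for the child-nutrition outcome reduces to a single arithmetic identity: the change in the treated group minus the change in the comparison group. A stylised computation follows; the z-score values are invented placeholders, not the survey estimates.

```python
# A stylised difference-in-differences computation matching the design
# described (2012 baseline cohort vs. 2014 PSNP cohort). The height-for-age
# z-scores below are invented placeholders for illustration only.

def diff_in_diff(treat_post, treat_pre, control_post, control_pre):
    """DiD estimate: (change in treated group) - (change in control group)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical mean height-for-age z-scores:
effect = diff_in_diff(treat_post=-1.6, treat_pre=-1.8,
                      control_post=-1.5, control_pre=-1.7)
# effect is ~0 here, mirroring the paper's null finding
```

In practice the estimate comes from a regression of the outcome on treatment, period, and their interaction (with controls), which yields the same quantity plus standard errors.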

Keywords: continuous treatment, dietary diversity, impact, nutrition security

Procedia PDF Downloads 310
132 Defining the Tipping Point of Tolerance to CO₂-Induced Ocean Acidification in Larval Dusky Kob Argyrosomus japonicus (Pisces: Sciaenidae)

Authors: Pule P. Mpopetsi, Warren M. Potts, Nicola James, Amber Childs

Abstract:

Increased CO₂ production and the consequent ocean acidification (OA) have been identified as among the greatest threats to both calcifying and non-calcifying marine organisms. Traditionally, marine fishes, as non-calcifying organisms, were considered relatively tolerant of near-future OA conditions owing to their well-developed ion-regulatory mechanisms. However, recent studies suggest that they may not be as resilient as previously thought, and the early life stages of marine fishes are thought to be less tolerant than juveniles and adults of the same species because they lack well-developed ion-regulatory mechanisms for maintaining homeostasis. This study focused on the effects of near-future OA on larval Argyrosomus japonicus, an estuarine-dependent marine fish, in order to identify the tipping point of tolerance for the larvae of this species. Larvae were reared from the egg up to 22 days after hatching (DAH) under three treatments: pCO₂ 353 µatm (pH 8.03), pCO₂ 451 µatm (pH 7.93), and pCO₂ 602 µatm (pH 7.83), corresponding to the levels predicted for the years 2050, 2068, and 2090, respectively, under the Intergovernmental Panel on Climate Change Representative Concentration Pathway (IPCC RCP) 8.5 model. Size-at-hatch, growth, development, and metabolic responses (standard and active metabolic rates and metabolic scope) were assessed and compared between the three treatments throughout the rearing period. Five early larval stages (hatchling to flexion/post-flexion) were identified by the end of the experiment. There were no significant differences in size-at-hatch (p > 0.05), development, active metabolic rate (p > 0.05), or metabolic scope (p > 0.05) between the three treatments throughout the study.
However, the standard metabolic rate was significantly higher in the year-2068 treatment, but only at the flexion/post-flexion stage, which could be attributed to differences in developmental rates (including the development of the gills) between the 2068 treatment and the other two. Overall, the metabolic scope was narrowest in the 2090 treatment but varied with life stage. Although not significantly different, metabolic scope in the 2090 treatment was noticeably lower at the flexion stage than in the other two treatments, and development appeared slower, suggesting that this may be the stage most vulnerable to OA. The study concluded that, in isolation, the OA levels predicted to occur between 2050 and 2090 will not negatively affect size-at-hatch, growth, development, or the metabolic responses of larval A. japonicus up to 22 DAH (flexion/post-flexion stage). The present study also placed the tipping point of tolerance (where negative impacts begin) for larvae of this species between the years 2090 and 2100.
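The metabolic scope compared across treatments above is, by definition, the difference between the active and standard metabolic rates (a factorial version, the ratio, is also common). A tiny sketch makes the relationship explicit; the rates and units are illustrative, not the study's measurements.

```python
# Aerobic (metabolic) scope as used in the abstract: the gap between active
# and standard metabolic rates. Numbers and units are illustrative only.

def aerobic_scope(active_mr, standard_mr):
    """Absolute aerobic scope: AMR - SMR (e.g. mg O2 / kg / h)."""
    return active_mr - standard_mr

def factorial_scope(active_mr, standard_mr):
    """Factorial aerobic scope: AMR / SMR (dimensionless)."""
    return active_mr / standard_mr

scope = aerobic_scope(active_mr=900.0, standard_mr=300.0)    # 600.0
ratio = factorial_scope(active_mr=900.0, standard_mr=300.0)  # 3.0
```

This is why a higher SMR at constant AMR, as reported for the 2068 treatment at flexion/post-flexion, directly narrows the scope available for growth and activity.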

Keywords: climate change, ecology, marine, ocean acidification

Procedia PDF Downloads 115
131 Comparison of Two Transcranial Magnetic Stimulation Protocols on Spasticity in Multiple Sclerosis - Pilot Study of a Randomized and Blind Cross-over Clinical Trial

Authors: Amanda Cristina da Silva Reis, Bruno Paulino Venâncio, Cristina Theada Ferreira, Andrea Fialho do Prado, Lucimara Guedes dos Santos, Aline de Souza Gravatá, Larissa Lima Gonçalves, Isabella Aparecida Ferreira Moretto, João Carlos Ferrari Corrêa, Fernanda Ishida Corrêa

Abstract:

Objective: To compare two protocols of transcranial magnetic stimulation (TMS) on quadriceps muscle spasticity in individuals diagnosed with multiple sclerosis (MS). Method: A clinical crossover study in which six adults diagnosed with MS and with spasticity in the lower limbs were randomized to receive one session each of high-frequency (≥5 Hz) and low-frequency (≤1 Hz) TMS over the motor cortex (M1) hotspot for the quadriceps muscle, with a one-week interval between sessions. Spasticity was assessed with the Ashworth scale, and the latency (ms) of the motor evoked potential (MEP) and the central motor conduction time (CMCT) of the bilateral quadriceps muscle were analysed. Assessments were performed before and after each intervention. Differences between groups were analysed using the Friedman test, with a significance level of p<0.05. Results: All statistical analyses were performed in SPSS Statistics version 26, with significance set at p<0.05; normality was checked with the Shapiro-Wilk test. Parametric data are presented as mean and standard deviation, non-parametric variables as median and interquartile range, and categorical variables as frequency and percentage. There was no clinical change in quadriceps spasticity on the Ashworth scale for either the 1 Hz (p=0.813) or the 5 Hz (p=0.232) protocol in either limb. MEP latency: in the 5 Hz protocol there was no significant change on the side contralateral to the stimulus (p>0.05), while on the ipsilateral side latency decreased by 0.07 seconds (p<0.05); in the 1 Hz protocol latency increased by 0.04 seconds (p<0.05) on the contralateral side and decreased by 0.04 seconds (p<0.05) on the ipsilateral side, with significant differences between the contralateral (p=0.007) and ipsilateral (p=0.014) groups.
Central motor conduction time: in the 1 Hz protocol, there was no change for either the contralateral side (p>0.05) or the ipsilateral side (p>0.05). In the 5 Hz protocol, there was a small decrease in conduction time for the contralateral side (p<0.05) and a decrease of 0.6 seconds for the ipsilateral side (p<0.05), with a significant difference between groups (p=0.019). Conclusion: A single high- or low-frequency session does not change spasticity, but when the low-frequency protocol was performed, latency time increased on the stimulated side and decreased on the non-stimulated side, suggesting that inhibiting the motor cortex increases cortical excitability on the opposite side.
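The repeated-measures comparison described above can be sketched with SciPy's Friedman test. The latency values below are hypothetical placeholders (not the study's data), and the assumed shifts merely mimic the direction of the reported effects:

```python
import numpy as np
from scipy import stats

# Hypothetical MEP latency times (ms) for six participants measured
# under four repeated conditions: pre and post for each TMS protocol.
rng = np.random.default_rng(42)
pre_1hz = rng.normal(30.0, 2.0, size=6)
post_1hz = pre_1hz + 0.4          # assumed latency increase after 1 Hz
pre_5hz = rng.normal(30.0, 2.0, size=6)
post_5hz = pre_5hz - 0.7          # assumed latency decrease after 5 Hz

# Friedman test across the four conditions, blocked by participant
stat, p = stats.friedmanchisquare(pre_1hz, post_1hz, pre_5hz, post_5hz)
```

With only six participants, as in the study, a non-parametric rank-based test like this is the natural choice over a repeated-measures ANOVA.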

Keywords: multiple sclerosis, spasticity, motor evoked potential, transcranial magnetic stimulation

Procedia PDF Downloads 65
130 Applying Biosensors’ Electromyography Signals through an Artificial Neural Network to Control a Small Unmanned Aerial Vehicle

Authors: Mylena McCoggle, Shyra Wilson, Andrea Rivera, Rocio Alba-Flores

Abstract:

This work introduces the use of EMG (electromyography) signals from muscle sensors to develop an Artificial Neural Network (ANN) for pattern recognition to control a small unmanned aerial vehicle. The objective of this endeavor is to demonstrate drone interfaces that go beyond direct manual control. The MyoWare muscle sensor contains three EMG electrodes (dual and single type) used to collect signals from the posterior (extensor) and anterior (flexor) forearm and the bicep. Raw voltages from each sensor were acquired through an Arduino Uno, and a data processing algorithm was developed to interpret the voltage signals produced when flexing, resting, and moving the arm. Each sensor collected eight values over a two-second period for the duration of one minute per assessment. During each two-second interval, the movements alternated between a resting reference class and an active motion class, which was used to control the motion of the drone with left and right movements. This paper further investigated adding up to three sensors to differentiate between hand gestures controlling the principal motions of the drone (left, right, up, and land). The hand gestures chosen to execute these movements were: a resting position, a thumbs up, a hand swipe right motion, and a flexing position. MATLAB software was used to collect, process, and analyze the signals from the sensors and to classify the hand gestures. To generate the input vector to the ANN, the mean, root mean square, and standard deviation were computed for every two-second interval of the hand gestures. The neuromuscular information was then used to train an artificial neural network with one hidden layer of 10 neurons to categorize the four targets, one for each hand gesture. Once the machine learning training was completed, the resulting network interpreted the processed inputs and returned the probabilities of each class. 
During application, once an output probability was greater than or equal to 80% for a specific target class, the drone performed the expected motion. Each movement command was sent from the computer to the drone through a Wi-Fi network connection. These procedures have been successfully tested and integrated into trial flights, where the drone responded successfully in real time to predefined command inputs processed by the machine learning algorithm through the MyoWare sensor interface. The full paper will describe in detail the database of hand gestures, the ANN architecture, and the confusion matrix results.
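The feature-extraction step described above (mean, RMS, and standard deviation per two-second interval) can be sketched as follows. The function name, sampling rate, and signal are illustrative assumptions, not the authors' MATLAB code or data; the 4 Hz rate simply matches the reported eight samples per two-second window:

```python
import numpy as np

def window_features(signal, fs, window_s=2.0):
    """Compute mean, RMS, and standard deviation for each non-overlapping
    window. signal: 1-D array of EMG voltages; fs: sampling rate in Hz.
    Returns an (n_windows, 3) feature matrix suitable as ANN input."""
    n = int(fs * window_s)
    windows = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    feats = [[w.mean(), np.sqrt(np.mean(w ** 2)), w.std()] for w in windows]
    return np.array(feats)

# Example: a 1-minute recording at an assumed 4 Hz (8 values per 2 s window)
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 1.0, size=240)
X = window_features(sig, fs=4)   # 30 windows x 3 features
```

Each row of `X` would then be one input vector to the four-class network.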

Keywords: artificial neural network, biosensors, electromyography, machine learning, MyoWare muscle sensors, Arduino

Procedia PDF Downloads 150
129 Consumer Behavior and Attitudes of Green Advertising: A Collaborative Study with Three Companies to Educate Consumers

Authors: Mokhlisur Rahman

Abstract:

Consumers' understanding of products depends on the level of information an advertisement contains. Consumers' attitudes vary widely depending on factors such as their level of environmental awareness, their perception of the company's motives, and the perceived effectiveness of the advertising campaign. Given the growing eco-consciousness among consumers and their concern for the environment, green advertising strategies have become equally significant for companies seeking to attract new consumers. Because the market offers a nearly limitless choice of products, it is important to understand consumers' purchasing habits, knowledge, and attitudes toward eco-friendly products as a function of promotion. Additionally, encouraging consumers to buy sustainable products requires a platform that can communicate that being a stakeholder in sustainability is possible if consumers show eco-friendly behavior on a larger scale. Social media platforms provide an excellent environment for companies to promote their sustainability efforts and engage with potential consumers. Green advertising uses distinctive techniques to carry information and rewards to consumers. This study aims to understand consumer behavior and the effectiveness of green advertising through an experiment conducted in collaboration with three companies promoting their eco-friendly products using green designs on the products. The experiment uses three sustainable personalized offerings: Nike shoes, H&M t-shirts, and Patagonia school bags. The experiment uses a pretest-posttest design. 300 randomly selected participants take part in the experiment and survey through Facebook, Twitter, and Instagram. Nike, H&M, and Patagonia share the experiment post on their social media homepages with a video advertisement for the three products. 
Consumers complete a pre-experiment online survey before making a purchase decision to assess their attitudes and behavior toward eco-friendly products. While the consumer watches the product video, an audio track explains product information such as the use of recycled materials, manufacturing methods, sustainable packaging, and environmental impact. After making a purchase, consumers take a post-experiment survey to capture their perception of and behavior toward eco-friendly products. For the data analysis, the descriptive statistics mean, standard deviation, and frequency summarize the pre- and post-experiment survey data. A paired-sample t-test measures the difference in consumers' behavior and attitudes between the pre-purchase and post-experiment survey results. This experiment gives consumers ample time to consider many aspects rather than acting on impulse. The research provides valuable insights into how companies can promote sustainable and eco-friendly products. The results set a target for companies to achieve sustainable production goals that ultimately support profit-making and promote consumers' well-being, while empowering consumers to make informed choices about the products they purchase and the companies they support.
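The paired-sample analysis described above can be sketched with SciPy. The attitude scores below are hypothetical (a 5-point scale with an assumed post-experiment shift), not the study's survey data:

```python
import numpy as np
from scipy import stats

# Hypothetical 5-point attitude scores for the same 300 respondents
# before (pre) and after (post) the purchase experiment.
rng = np.random.default_rng(1)
pre = rng.integers(2, 5, size=300).astype(float)
post = pre + rng.normal(0.5, 0.6, size=300)   # assumed modest positive shift

# Descriptive statistics, then the paired-sample t-test
pre_mean, pre_sd = pre.mean(), pre.std(ddof=1)
t_stat, p_value = stats.ttest_rel(pre, post)
```

The pairing matters: `ttest_rel` tests the mean of the within-respondent differences, which removes between-respondent variability that an unpaired test would leave in the noise term.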

Keywords: green-advertising, sustainability, consumer-behavior, social media

Procedia PDF Downloads 64
128 Sorption Properties of Hemp Cellulosic Byproducts for Petroleum Spills and Water

Authors: M. Soleimani, D. Cree, C. Chafe, L. Bates

Abstract:

The accidental release of petroleum products into the environment can have harmful consequences for our ecosystem. Different techniques, such as mechanical separation, membrane filtration, incineration, treatment processes using enzymes and dispersants, bioremediation, and sorption using sorbents, have been applied to oil spill remediation. Most of the techniques investigated are too costly or not sufficiently efficient. This study was conducted to determine the sorption performance of hemp byproducts (cellulosic materials) in terms of sorption capacity and kinetics for hydrophobic and hydrophilic fluids. Heavy oil, light oil, diesel fuel, and water/water vapor were used as sorbate fluids. Hemp stalk in different forms, including loose material (hammer-milled (HM) and shredded (Sh), with low bulk densities) and densified forms (pellets (P) and crumbled pellets (CP), with high bulk densities), was used as the sorbent. The sorption/retention tests were conducted according to the ASTM 726 standard. For quick-response applications of the sorbents, the sorption tests were conducted for 15 min; for the ideal sorption capacity of the materials, the tests were carried out for 24 h. During the test, the sorbent material was exposed to the fluid by immersion, followed by filtration through a stainless-steel wire screen. Water vapor adsorption was carried out in a controlled environment chamber capable of controlling relative humidity (RH) and temperature. To determine the kinetics of sorption for each fluid and sorbent, the retention capacity was also measured at intervals for up to 24 h. To analyze the kinetics of sorption, pseudo-first-order, pseudo-second-order, and intraparticle diffusion models were employed, with the objective of minimizing the deviation of the models from the experimental results. 
The results indicated that the HM and Sh materials had the highest sorption capacity for the hydrophobic fluids, approximately 6 times that of the P and CP materials. For example, the average retention of heavy oil on HM and Sh was 560% and 470% of the sorbent mass, respectively, whereas the retention of heavy oil on P and CP was up to 85% of the sorbent mass. This lower sorption capacity for P and CP can be attributed to the smaller exposed surface area of these materials and to compacted voids or capillary tubes in their structures. For water uptake, HM and Sh showed at least 40% higher sorption capacity than P and CP. On average, sorbate uptake ranked from high to low as follows: water, heavy oil, light oil, diesel fuel. The kinetic analysis indicated that the pseudo-second-order model describes the sorption of oil and diesel better than the other models, whereas the kinetics of water absorption was better described by the pseudo-first-order model. Acetylation of the HM material improved its oil and diesel sorption to some extent. Water vapor adsorption of hemp fiber was a function of temperature and RH, and among the models studied, the modified Oswin model best described this phenomenon.
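As a sketch of the kinetic modeling step, the pseudo-second-order model q(t) = k·qe²·t / (1 + k·qe·t) can be fitted by nonlinear least squares. The retention values below are illustrative numbers loosely shaped around the reported heavy-oil retention on HM hemp, not the authors' measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k):
    """Sorbed amount at time t: q(t) = k*qe^2*t / (1 + k*qe*t),
    where qe is the equilibrium capacity and k the rate constant."""
    return k * qe ** 2 * t / (1.0 + k * qe * t)

# Hypothetical retention data (% of sorbent mass) over 24 h
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 24.0])   # hours
q = np.array([210, 310, 400, 470, 520, 545, 560.0])   # heavy oil on HM

(qe_fit, k_fit), _ = curve_fit(pseudo_second_order, t, q, p0=(600.0, 0.01))
residuals = q - pseudo_second_order(t, qe_fit, k_fit)
```

Fitting the pseudo-first-order model q(t) = qe·(1 − e^(−k·t)) the same way and comparing residual sums of squares is one simple way to decide which model "better describes" a given fluid, as the abstract reports.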

Keywords: environment, fiber, petroleum, sorption

Procedia PDF Downloads 107
127 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models

Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg

Abstract:

Storm surge is an abnormal water level caused by a storm. Accurate prediction of storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques to combine several individual forecasts into an overall, presumably better forecast. Some simple ensemble modeling techniques exist in the literature; for instance, Model Output Statistics (MOS) and running mean-bias removal are widely used in the storm surge prediction domain. However, these methods have drawbacks: MOS, for instance, is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that creates a better forecast of sea level using several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether ensemble models perform better than any single forecast; to do so, we must identify the single best forecast, and we present a methodology based on a simple Bayesian selection method to select it. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights in combining different forecast models. Third, we use these ensembles to forecast storm surge level and compare them with several existing models in the literature. We then investigate whether developing a complex ensemble model is indeed needed; to this end, we use the simple average (one of the simplest and most widely used ensemble models) as a benchmark. 
Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial, so we develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its time. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, we treat them as a single contiguous hurricane event. The data set used for this study is generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.
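The weighting idea described above (correlation or standard deviation of past errors as combination weights) can be sketched as follows. The surge series are invented toy numbers, not NYHOPS data, and the function is a generic illustration rather than the authors' exact scheme:

```python
import numpy as np

def weighted_ensemble(forecasts, observed, weight="corr"):
    """Combine model forecasts using either correlation with past
    observations or the inverse of the error standard deviation."""
    forecasts = np.asarray(forecasts, float)      # (n_models, n_times)
    if weight == "corr":
        w = np.array([np.corrcoef(f, observed)[0, 1] for f in forecasts])
    else:                                         # inverse of error std
        w = 1.0 / np.array([np.std(f - observed) for f in forecasts])
    w = np.clip(w, 0.0, None)                     # drop anti-correlated models
    w /= w.sum()                                  # normalize to a convex combination
    return w @ forecasts

# Hypothetical surge levels (m) from three models and a tide gauge
obs = np.array([0.2, 0.5, 1.1, 1.8, 1.2, 0.6])
models = [obs + 0.1,                              # biased model
          obs * 0.9,                              # under-scaled model
          obs + np.array([0.3, -0.2, 0.2, -0.3, 0.1, 0.0])]  # noisy model

ens = weighted_ensemble(models, obs)
rmse = float(np.sqrt(np.mean((ens - obs) ** 2)))
```

Because the individual models' errors partly cancel, the combined RMSE here comes out below that of any single member, mirroring the paper's finding that even simple ensembles beat single forecasts.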

Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction

Procedia PDF Downloads 290
126 Development of a Risk Governance Index and Examination of Its Determinants: An Empirical Study in Indian Context

Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav

Abstract:

Risk management has been gaining extensive focus from international organizations like the Committee of Sponsoring Organizations and the Financial Stability Board, and the foundation of an effective and efficient risk management system lies in a strong risk governance structure. In view of this, an attempt (perhaps a first of its kind) has been made to develop a risk governance index that could serve as a proxy for the quality of risk governance structures. The index (normative framework) is based on eleven variables, namely: size of board, board diversity in terms of gender, proportion of executive directors, executive/non-executive status of the chairperson, proportion of independent directors, CEO duality, chief risk officer (CRO), risk management committee, mandatory committees, voluntary committees, and existence/non-existence of a whistle-blower policy. These variables are scored on a scale of 1 to 5, with the exception of the status of the chairperson and CEO duality, which are scored on a dichotomous scale (3 or 5). Where there is a legal/statutory requirement in respect of the above-mentioned variables and it has not been complied with, a score of one is assigned. Although, for the larger part of the study period, there was no legal requirement for a CRO, a risk management committee, or a whistle-blower policy, a score of 1 was still assigned in the event of their absence: recognizing the importance of these variables to the risk governance structure, and the fact that the study focuses on risk governance, their absence has been equated with non-compliance with a legal/statutory requirement. On this basis, the minimum possible score is 15 and the maximum is 55. In addition, an attempt has been made to explore the determinants of this index. For this purpose, the sample consists of the 429 non-financial companies that constitute the S&P CNX500 index. 
The study covers a 10-year period from April 1, 2005 to March 31, 2015. Given the panel nature of the data, the Hausman test was applied; it suggested that fixed-effects regression would be appropriate. The results indicate that the age and size of firms have a significant positive impact on their risk governance structures. Further, the post-recession period (2009-2015) witnessed a significant improvement in the quality of governance structures. In contrast, profitability (positive relationship), leverage (negative relationship) and growth (negative relationship) do not have a significant impact on the quality of risk governance structures. The value of rho indicates that about 77.74% of the variation in risk governance structures is due to firm-specific factors. Given that each firm is unique in terms of its risk exposure, risk culture, risk appetite, and risk tolerance levels, it appears reasonable to assume that the specific conditions and circumstances a company faces could be the biggest determinants of its risk governance structure. Given the recommendations put forth in the paper (particularly for regulators and companies), the study is expected to be of immense utility in an important yet neglected aspect of risk management.
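The scoring scheme above can be sketched as a simple additive index. The variable names are illustrative labels for the eleven components named in the abstract; the 15/55 bounds follow from nine variables scored 1-5 plus two dichotomous variables scored 3 or 5:

```python
# Nine variables scored 1-5; chairperson status and CEO duality scored
# 3 or 5, per the scheme described in the abstract. Names are illustrative.
FIVE_POINT = ["board_size", "gender_diversity", "exec_directors",
              "independent_directors", "cro", "risk_committee",
              "mandatory_committees", "voluntary_committees",
              "whistle_blower_policy"]
DICHOTOMOUS = ["chair_status", "ceo_duality"]

def risk_governance_index(scores):
    """Sum the eleven component scores into a single index value."""
    assert all(scores[v] in range(1, 6) for v in FIVE_POINT)
    assert all(scores[v] in (3, 5) for v in DICHOTOMOUS)
    return sum(scores[v] for v in FIVE_POINT + DICHOTOMOUS)

worst = {**{v: 1 for v in FIVE_POINT}, **{v: 3 for v in DICHOTOMOUS}}
best = {v: 5 for v in FIVE_POINT + DICHOTOMOUS}
```

Evaluating the two extreme profiles reproduces the stated range: the minimum index value is 15 and the maximum is 55.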

Keywords: corporate governance, ERM, risk governance, risk management

Procedia PDF Downloads 235
125 Flexural Performance of the Sandwich Structures Having Aluminum Foam Core with Different Thicknesses

Authors: Emre Kara, Ahmet Fatih Geylan, Kadir Koç, Şura Karakuzu, Metehan Demir, Halil Aykul

Abstract:

Structures obtained using sandwich technologies combine low weight with high energy absorbing capacity and load carrying capacity. Hence, there is a growing and marked interest in sandwiches with an aluminium foam core because of their very good properties, such as flexural rigidity and energy absorption capability. Static (bending and penetration) and dynamic (dynamic bending and low velocity impact) tests have already been performed on aluminum foam cored sandwiches with different types of outer skins by some of the authors. In the current investigation, static three-point bending tests were carried out on sandwiches with an aluminum foam core and glass fiber reinforced polymer (GFRP) skins at different support span distances (L = 55, 70, 80, 125 mm) with the aim of analyzing their flexural performance. The influence of the core thickness and the GFRP skin type is reported in terms of peak load, energy absorption capacity, and energy efficiency. For this purpose, skins with two different fabric types ([0°/90°] cross-ply E-Glass Woven and [0°/90°] cross-ply S-Glass Woven, both 1.5 mm thick) and aluminum foam cores with two different thicknesses (h = 10 and 15 mm) were bonded with a commercial polyurethane-based flexible adhesive to assemble the composite sandwich panels. The GFRP skins, fabricated via the Vacuum Assisted Resin Transfer Molding (VARTM) technique, can be easily bonded to the aluminum foam core, and it is possible to configure the base materials (skin, adhesive and core), fiber angle orientation, and number of layers for a specific application. The main results of the bending tests are: force-displacement curves, peak force values, absorbed energy, energy efficiency, collapse mechanisms, and the effect of support span length and core thickness. 
The experimental results showed that the sandwiches with skins made of S-Glass Woven fabric and with the thicker foam core presented higher mechanical values, such as load carrying and energy absorption capacities. Increasing the support span distance decreased the mechanical values for each type of panel, as expected, because of the inverse proportion between force and span length. The most common failure types of the sandwiches are debonding of the upper or lower skin and core shear. The results are of particular importance for applications that require lightweight structures with a high capacity for energy dissipation, such as the transport industry (automotive, aerospace, shipbuilding and marine), where collision and crash problems have increased in recent years.
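The absorbed energy reported from the bending tests is, in the usual convention, the area under the force-displacement curve; a minimal sketch using the trapezoidal rule follows. The curve below is invented for illustration, not measured data from these panels:

```python
import numpy as np

def absorbed_energy(force_n, disp_mm):
    """Area under the force-displacement curve by the trapezoidal rule.
    Displacement is converted from mm to m, so the result is in joules."""
    f = np.asarray(force_n, float)
    d_m = np.asarray(disp_mm, float) * 1e-3
    return float(np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(d_m)))

# Hypothetical three-point-bending curve for a foam-cored sandwich
disp = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]        # mm
force = [0.0, 400, 800, 1100, 900, 700, 600]      # N, peak then core collapse

E = absorbed_energy(force, disp)                   # joules
# One common energy-efficiency definition: absorbed energy relative to
# the rectangle peak_force x max_displacement (an assumption here).
efficiency = E / (max(force) * max(disp) * 1e-3)
```

For this toy curve the absorbed energy is 4.2 J and the efficiency about 0.64; a flatter post-peak plateau (typical of good foam cores) pushes the efficiency toward 1.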

Keywords: aluminum foam, composite panel, flexure, transport application

Procedia PDF Downloads 309
124 Accounting and Prudential Standards of Banks and Insurance Companies in EU: What Stakes for Long Term Investment?

Authors: Sandra Rigot, Samira Demaria, Frederic Lemaire

Abstract:

The starting point of this research is a contemporary capitalist paradox: there is a real scarcity of long term investment despite the boom of potential long term investors. This gap represents a major challenge: there are important needs for long term financing in developed and emerging countries in strategic sectors such as energy, transport infrastructure, and information and communication networks. Moreover, the recent financial and sovereign debt crises, which have respectively reduced the ability of banking intermediaries and governments to provide long term financing, raise the question of which actors are able to provide long term financing, their methods of financing, and the most appropriate forms of intermediation. The issue of long term financing is deemed very important by the EU Commission, which issued a 2013 Green Paper (GP) on long-term financing of the EU economy. Among other topics, the paper discusses the impact of recent regulatory reforms on long-term investment, both in terms of accounting (in particular fair value) and prudential standards for banks. For banks, prudential and accounting standards are crucial. Fair value is well adapted to the trading book in a short term view, but this method hardly suits a medium or long term portfolio. Banks' ability to finance the economy and long term projects depends on their ability to distribute credit, and the way credit is valued (fair value or amortised cost) leads to different banking strategies. Furthermore, in the banking industry, accounting standards are directly connected to prudential standards, as the regulatory requirements of Basel III use accounting figures with a prudential filter to define capital needs and to compute regulatory ratios. The objective of these regulatory requirements is to prevent insolvency and financial instability. At the same time, they can represent regulatory constraints on long term investing. 
The balance between financial stability and the need to stimulate long term financing is a key question raised by the EU GP. Does fair value accounting contribute to short-termism in investment behaviour? Should prudential rules be “appropriately calibrated” and “progressively implemented” so as not to prevent banks from providing long-term financing? These issues raised by the EU GP lead us to question to what extent the main regulatory requirements incite or constrain banks to finance long term projects. To that end, we study the 292 responses received by the EU Commission during the public consultation, focusing on the questions related to fair value accounting and prudential norms. We conduct a two-stage content analysis of the responses: first, a qualitative coding to identify the respondents' arguments, and subsequently a quantitative coding in order to conduct statistical analyses. This paper provides a better understanding of the position that a large panel of European stakeholders hold on these issues. Moreover, it adds to the debate on fair value accounting and its effects on prudential requirements for banks. This analysis allows us to identify some short term bias in banking regulation.

Keywords: basel 3, fair value, securitization, long term investment, banks, insurers

Procedia PDF Downloads 268
123 Fast Detection of Local Fiber Shifts by X-Ray Scattering

Authors: Peter Modregger, Özgül Öztürk

Abstract:

Glass fabric reinforced thermoplastics (GFRT) are composite materials that combine low weight with resilient mechanical properties, rendering them especially suitable for automobile construction. However, defects in the glass fabric as well as in the polymer matrix can occur during manufacturing, which may compromise component lifetime or even safety. One type of defect is the local fiber shift, which can be difficult to detect. Recently, we experimentally demonstrated the reliable detection of local fiber shifts by X-ray scattering based on the edge-illumination (EI) principle. EI is a novel X-ray imaging technique that utilizes two slit masks, one in front of the sample and one in front of the detector, to simultaneously provide absorption, phase, and scattering contrast. The principle of contrast formation is as follows. The sample mask splits the incident X-ray beam into smaller beamlets. These are distorted by interaction with the sample, and the distortions are scaled up by the detector mask, rendering them visible to a pixelated detector. In the experiment, the sample mask is laterally scanned, resulting in Gaussian-like intensity distributions in each pixel. The area under the curve represents absorption, the peak offset represents refraction, and the width of the curve represents the scattering occurring in the sample. Here, scattering is caused by the numerous glass fiber/polymer matrix interfaces. In our recent publication, we showed that the standard deviation of the absorption and scattering values over a selected field of view can be used to distinguish between intact samples and samples with local fiber shift defects. Defect detection performance was quantified using p-values (p=0.002 for absorption and p=0.009 for scattering) and contrast-to-noise ratios (CNR=3.0 for absorption and CNR=2.1 for scattering) between the two groups of samples. 
This was further improved for the scattering contrast to p=0.0004 and CNR=4.2 by utilizing a harmonic decomposition analysis of the images. Thus, we concluded that local fiber shifts can be reliably detected by the X-ray scattering contrasts provided by EI. However, potential applications in, for example, production monitoring require fast data acquisition. For the results above, the sample mask was scanned over 50 individual steps, which resulted in long total scan times. In this paper, we will demonstrate that reliable detection of local fiber shift defects is also possible using single images, which implies a speed-up of the total scan time by a factor of 50. Additional performance improvements will also be discussed, which opens the possibility of real-time acquisition. This constitutes a vital step in translating EI to industrial applications for a wide variety of materials consisting of numerous interfaces on the micrometer scale.
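The contrast-to-noise ratio used above to separate intact from defective samples can be sketched as the group-mean difference over pooled noise. The per-sample values below are invented placeholders, and this particular pooled-variance CNR definition is an assumption, since the abstract does not spell out its formula:

```python
import numpy as np

def contrast_to_noise(group_a, group_b):
    """CNR between two sample groups: absolute difference of the group
    means divided by the pooled (root-mean) sample standard deviation."""
    a = np.asarray(group_a, float)
    b = np.asarray(group_b, float)
    noise = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return abs(a.mean() - b.mean()) / noise

# Hypothetical per-sample scattering-width statistics (arbitrary units)
intact = [1.00, 1.05, 0.98, 1.02, 0.99]
shifted = [1.30, 1.26, 1.35, 1.28, 1.31]
cnr = contrast_to_noise(intact, shifted)
```

A CNR well above the commonly quoted detectability thresholds (2-4) means the two groups barely overlap, which is the regime the paper reports for the scattering channel after harmonic decomposition.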

Keywords: defects in composites, X-ray scattering, local fiber shifts, X-ray edge Illumination

Procedia PDF Downloads 41
122 Coupled Field Formulation – A Unified Method for Formulating Structural Mechanics Problems

Authors: Ramprasad Srinivasan

Abstract:

Engineers create inventions and put their ideas into concrete terms to design new products. Design drivers must be established, which requires, among other things, a complete understanding of the product design, load paths, etc. For aerospace vehicles, the weight/strength ratio, strength, stiffness, and stability are the important design drivers. A complex built-up structure is an assemblage of primitive structural forms of arbitrary shape, including 1D structures like beams and frames, 2D structures like membranes, plates and shells, and 3D solid structures. Justification through simulation involves a check of all the quantities of interest, namely stresses, deformation, frequencies, and buckling loads, and is normally achieved through the finite element (FE) method. Over the past few decades, fiber-reinforced composites have been fast replacing traditional metallic structures in the weight-sensitive aerospace and aircraft industries due to their high specific strength, high specific stiffness, anisotropic properties, design freedom for tailoring, etc. Composite panel constructions are used in aircraft to design primary structural components like wings, empennage, and ailerons, while thin-walled composite beams (TWCB) are used to model slender structures like stiffened panels and helicopter and wind turbine rotor blades. TWCBs exhibit many non-classical effects like torsional and constrained warping, transverse shear, coupling effects, and heterogeneity, which make the analysis of composite structures far more complex. Conventional FE formulations for 1D structures suffer from many limitations, such as shear locking (particularly in slender beams), lower convergence rates due to material coupling in composites, and the inability to satisfy equilibrium in the domain and natural boundary conditions (NBC). 
For 2D structures, the limitations of conventional displacement-based FE formulations include the inability to satisfy NBC explicitly and many pathological problems such as shear and membrane locking, spurious modes, stress oscillations, and lower convergence due to mesh distortion. This mandates frequent re-meshing just to achieve an acceptable mesh (one satisfying stringent quality metrics) for analysis, leading to significant cycle time. Besides, separate formulations (u/p) are currently needed to model incompressible materials, and a single unified formulation is missing from the literature. Hence, the coupled field formulation (CFF) is proposed by the author as a unified formulation for the solution of complex 1D and 2D structures, addressing the gaps in the literature mentioned above. The salient features of CFF and its many advantages over other conventional methods are presented in this paper.

Keywords: coupled field formulation, kinematic and material coupling, natural boundary condition, locking free formulation

Procedia PDF Downloads 54
121 Trainability of Executive Functions during Preschool Age Analysis of Inhibition of 5-Year-Old Children

Authors: Christian Andrä, Pauline Hähner, Sebastian Ludyga

Abstract:

Introduction: In the recent past, discussions on the importance of physical activity for child development have contributed to a growing interest in executive functions, the cognitive processes that make it possible to achieve superior goals by controlling, modulating and coordinating sub-processes. Major components include working memory, inhibition and cognitive flexibility. While executive functions can be trained easily in school children, there are still research deficits regarding their trainability during preschool age. Methodology: This quasi-experimental study with pre- and post-design analyzes 23 children [age: 5.0 (mean value) ± 0.7 (standard deviation)] from four different sports groups. The intervention group was made up of 13 children (IG: 4.9 ± 0.6), while the control group consisted of ten children (CG: 5.1 ± 0.9). Between pre-test and post-test, children from the intervention group participated in special games that train executive functions (e.g., changing the rules of a game, introducing new stimuli into familiar games) for ten units of their weekly sports program. The sports program of the control group was not modified. A computer-based version of the Eriksen flanker task was employed to analyze the participants’ inhibition ability. In two rounds, the participants had to respond 50 times, as fast as possible, to a certain target (the direction of sight of a fish; the target was always placed in a central position among five fish). Congruent (all fish have the same direction of sight) and incongruent (the central fish faces the opposite direction) stimuli were used. The relevant parameters were response time and accuracy. The main objective was to investigate whether children from the intervention group show greater improvement in the two parameters than children from the control group. 
Major findings: The intervention group revealed significant improvements in congruent response time (pre: 1.34 s, post: 1.12 s, p<.01), while the control group did not show any statistically relevant difference (pre: 1.31 s, post: 1.24 s). Likewise, the comparison of incongruent response times indicates a comparable result (IG: pre: 1.44 s, post: 1.25 s, p<.05 vs. CG: pre: 1.38 s, post: 1.38 s). In terms of accuracy for congruent stimuli, the intervention group showed significant improvements (pre: 90.1 %, post: 95.9 %, p<.01). In contrast, no significant improvement was found for the control group (pre: 88.8 %, post: 92.9 %). Conversely, the intervention group did not display any significant results for incongruent stimuli (pre: 74.9 %, post: 83.5 %), while the control group revealed a significant difference (pre: 68.9 %, post: 80.3 %, p<.01). The analysis of three out of four criteria demonstrates that children who took part in a special sports program improved more than children who did not. The contrary results for the last criterion could be caused by the control group’s low results from the pre-test. Conclusion: The findings illustrate that inhibition can be trained as early as preschool age. The combination of familiar games with increased requirements for attention and control processes appears to be particularly suitable.
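The response-time and accuracy parameters reported above can be derived mechanically from raw trial logs. A minimal sketch (not the authors' analysis code; the trial tuple format is an assumption):

```python
def flanker_summary(trials):
    """Summarize mean response time (s) and accuracy (%) per condition.

    trials: iterable of (condition, response_time_s, correct) where
    condition is 'congruent' or 'incongruent' and correct is a bool.
    """
    stats = {}
    for cond in ("congruent", "incongruent"):
        subset = [t for t in trials if t[0] == cond]
        rts = [t[1] for t in subset]
        acc = [t[2] for t in subset]
        stats[cond] = {
            "mean_rt": sum(rts) / len(rts),
            "accuracy_pct": 100.0 * sum(acc) / len(acc),
        }
    # Flanker effect: the response-time cost of suppressing the
    # incongruent distractors, the usual index of inhibition.
    stats["flanker_effect"] = (stats["incongruent"]["mean_rt"]
                               - stats["congruent"]["mean_rt"])
    return stats
```

Comparing the pre-test and post-test summaries per group then yields exactly the four criteria (two response times, two accuracies) evaluated in the study.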

Keywords: executive functions, flanker task, inhibition, preschool children

Procedia PDF Downloads 233
120 Augmenting Navigational Aids: The Development of an Assistive Maritime Navigation Application

Authors: A. Mihoc, K. Cater

Abstract:

On the bridge of a ship the officers are looking for visual aids to guide navigation in order to reconcile the outside world with the position communicated by the digital navigation system. Aids to navigation include lighthouses, lightships, sector lights, beacons, buoys, and others. They are designed to help navigators calculate their position, establish their course or avoid dangers. In poor visibility and dense traffic areas, it can be very difficult to identify these critical aids to navigation. The paper presents the usage of Augmented Reality (AR) as a means to present digital information about these aids to support navigation. To date, nautical-navigation-related mobile AR applications have been limited to the leisure industry. If proven viable, this prototype can facilitate the creation of other similar applications that could help commercial officers with navigation. While adopting a user-centered design approach, the team has developed the prototype based on insights from initial research carried out on board several ships. The prototype, built on a Nexus 9 tablet with Wikitude, features a head-up display of the navigational aids (lights) in the area, presented in AR, and a bird’s eye view mode presented on a simplified map. The application employs the aids to navigation data managed by Hydrographic Offices and the tablet’s sensors: GPS, gyroscope, accelerometer, compass and camera. Sea trials on board a Navy ship and a commercial ship revealed the end-users’ interest in using the application and the possibility of further data being presented in AR. The application calculates the GPS position of the ship, and the bearing and distance to the navigational aids, all with a high level of accuracy. However, during testing several issues were highlighted which need to be resolved as the prototype is developed further. The prototype stretched the capabilities of Wikitude, loading over 500 objects during tests in a major port.
This overloaded the display and required over 45 seconds to load the data. Therefore, extra filters for the navigational aids are being considered in order to declutter the screen. At night, the camera is not powerful enough to distinguish all the lights in the area. Also, magnetic interference from the bridge of the ship generated a continuous compass error in the AR display that varied between 5 and 12 degrees. The deviation of the compass was consistent over the whole testing duration, so the team is now looking at the possibility of allowing users to manually calibrate the compass. It is expected that for the usage of AR in professional maritime contexts, further development of existing AR tools and hardware is needed. Designers will also need to implement a user-centered design approach in order to create better interfaces and display technologies for enhanced solutions to aid navigation.
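Because the compass deviation was consistent, the manual calibration being considered could be as simple as storing one user-derived bearing offset and applying it to every reading. A sketch of that idea; the function names and the single-sighting calibration method are assumptions, not the prototype's actual implementation:

```python
def apply_compass_offset(raw_bearing_deg, offset_deg):
    """Correct a raw compass bearing by a user-calibrated offset,
    wrapping the result back into [0, 360)."""
    return (raw_bearing_deg + offset_deg) % 360.0


def estimate_offset(raw_bearing_deg, true_bearing_deg):
    """Derive the offset from one sighting of an aid to navigation
    whose true bearing is known (e.g., computed from the GPS positions
    of the ship and the aid). Returns the smallest signed correction
    in (-180, 180]."""
    diff = (true_bearing_deg - raw_bearing_deg) % 360.0
    return diff - 360.0 if diff > 180.0 else diff
```

A crew member would align the AR overlay with one visible light once; the derived offset then corrects every subsequent bearing until the magnetic environment changes.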

Keywords: compass error, GPS, maritime navigation, mobile augmented reality

Procedia PDF Downloads 306
119 Understanding the Role of Social Entrepreneurship in Building Mobility of a Service Transportation Models

Authors: Liam Fassam, Pouria Liravi, Jacquie Bridgman

Abstract:

Introduction: The way we travel is rapidly changing: car ownership and use are declining among young people and urban residents. Also, the increasing role and popularity of sharing economy companies like Uber highlight a movement towards consuming transportation solutions as a service [Mobility of a Service]. This research looks to bridge the knowledge gap that exists between city mobility, smart cities, the sharing economy and social entrepreneurship business models. Understanding of this subject is crucial for smart city design, as access to affordable transport has been identified as a contributing factor to social isolation, leading to issues around health and wellbeing. Methodology: To explore the current fit vis-a-vis transportation business models and social impact, this research undertook a comparative analysis between a systematic literature review and a Delphi study. The systematic literature review was undertaken to gain an appreciation of the current academic thinking on ‘social entrepreneurship and smart city mobility’. The second phase of the research initiated a Delphi study across a group of 22 participants to review future opinion on ‘how can social entrepreneurship assist city mobility sharing models?’. The Delphi delivered an initial 220 results, which once cross-checked for duplication were reduced to 130. These 130 answers were sent back to participants to score for importance against a 5-point Likert scale, enabling a top-10 listing of areas for shared-user transport in society to be gleaned. One further round (round 4) identified no change in the coefficient of variation, thus no further rounds were required. Findings: Initial results of the literature review returned 1,021 journals using the search criteria ‘social entrepreneurship and smart city mobility’. Filtering by ‘peer review’, ‘date’, ‘region’ and Chartered Association of Business Schools ranking produced a resultant journal list of 75.
Of these, 58 focused on smart city design, 9 on social enterprise in cityscapes, 6 on smart city network design and 3 on social impact, with no journals purporting the need for social entrepreneurship to be allied to city mobility. The future inclusion factors from the Delphi expert panel indicated that smart cities needed to include shared economy models in their strategies. Furthermore, social isolation born of infrastructure costs needed addressing through holistic, apolitical social enterprise models, and a better understanding of social benefit measurement is needed. Conclusion: In investigating the collaboration between key public transportation stakeholders, a theoretical model of social enterprise transportation that positively impacts upon the smart city needs of reduced transport poverty and social isolation was formed. As such, the research has identified how a revised Mobility of a Service business model allied to social entrepreneurship can deliver impactful, measurable social benefits associated with smart city design, extending existent research.
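The Delphi stopping rule described in the methodology (no change in the coefficient of variation between rounds) can be checked mechanically. A sketch; the per-item data layout and the tolerance value are assumptions:

```python
import math


def coefficient_of_variation(scores):
    """CV = standard deviation / mean of one item's Likert scores."""
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return math.sqrt(var) / mean


def consensus_reached(prev_round, this_round, tol=0.01):
    """Stop the Delphi when the CV of every item is stable between
    consecutive rounds (within a small tolerance)."""
    return all(
        abs(coefficient_of_variation(a) - coefficient_of_variation(b)) <= tol
        for a, b in zip(prev_round, this_round)
    )
```

Applied after each round, this reproduces the decision reported above: round 4 showed no CV change, so no further rounds were required.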

Keywords: social enterprise, collaborative transportation, new models of ownership, transport social impact

Procedia PDF Downloads 125
118 Examination of Porcine Gastric Biomechanics in the Antrum Region

Authors: Sif J. Friis, Mette Poulsen, Torben Strom Hansen, Peter Herskind, Jens V. Nygaard

Abstract:

Gastric biomechanics governs a large range of scientific and engineering fields, from gastric health issues to interaction mechanisms between external devices and the tissue. Determination of mechanical properties of the stomach is, thus, crucial, both for understanding gastric pathologies as well as for the development of medical concepts and device designs. Although the field of gastric biomechanics is emerging, advances within medical devices interacting with the gastric tissue could greatly benefit from an increased understanding of tissue anisotropy and heterogeneity. Thus, in this study, uniaxial tensile tests of gastric tissue were executed in order to study biomechanical properties within the same individual as well as across individuals. With biomechanical tests in the strain domain, tissue from the antrum region of six porcine stomachs was tested using eight samples from each stomach (n = 48). The samples were cut so that they followed dominant fiber orientations. Accordingly, from each stomach, four samples were longitudinally oriented, and four samples were circumferentially oriented. A step-wise stress relaxation test with five incremental steps up to 25 % strain with 200 s rest periods for each step was performed, followed by a 25 % strain ramp test with three different strain rates. Theoretical analysis of the data provided stress-strain/time curves as well as 20 material parameters (e.g., stiffness coefficients, dissipative energy densities, and relaxation time coefficients) used for statistical comparisons between samples from the same stomach as well as in between stomachs. Results showed that, for the 20 material parameters, heterogeneity across individuals, when extracting samples from the same area, was in the same order of variation as the samples within the same stomach. 
For samples from the same stomach, the mean deviation percentage for all 20 parameters was 21 % and 18 % for longitudinal and circumferential orientations, compared to 25 % and 19 %, respectively, for samples across individuals. This observation was also supported by a nonparametric one-way ANOVA analysis, where results showed that the 20 material parameters from each of the six stomachs came from the same distribution with a level of statistical significance of P > 0.05. Direction-dependency was also examined, and it was found that the maximum stress for longitudinal samples was significantly higher than for circumferential samples. However, there were no significant differences in the 20 material parameters, with the exception of the equilibrium stiffness coefficient (P = 0.0039) and two other stiffness coefficients found from the relaxation tests (P = 0.0065, 0.0374). Nor did the stomach tissue show any significant differences between the three strain-rates used in the ramp test. Heterogeneity within the same region has not been examined earlier, yet, the importance of the sampling area has been demonstrated in this study. All material parameters found are essential to understand the passive mechanics of the stomach and may be used for mathematical and computational modeling. Additionally, an extension of the protocol used may be relevant for compiling a comparative study between the human stomach and the pig stomach.
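The relaxation-time and stiffness coefficients extracted from the step-wise tests are commonly the parameters of a Prony-series model of stress decay during each strain hold. A minimal sketch of evaluating such a model; the parameter values below are illustrative, not the paper's fitted coefficients:

```python
import math


def prony_stress(t, sigma_inf, terms):
    """Stress during a relaxation hold at constant strain: an
    equilibrium (long-time) term plus a sum of decaying exponentials.

    t         : time since the start of the hold (s)
    sigma_inf : equilibrium stress (kPa)
    terms     : list of (sigma_i, tau_i) pairs, one per relaxation mode,
                where tau_i is the relaxation time constant (s)
    """
    return sigma_inf + sum(s * math.exp(-t / tau) for s, tau in terms)
```

Fitting sigma_inf and the (sigma_i, tau_i) pairs to each 200 s hold is one way the stiffness coefficients, dissipative energy densities and relaxation time coefficients compared across samples could be obtained.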

Keywords: antrum region, gastric biomechanics, loading-unloading, stress relaxation, uniaxial tensile testing

Procedia PDF Downloads 404
117 The Influence of Active Breaks on the Attention/Concentration Performance in Eighth-Graders

Authors: Christian Andrä, Luisa Zimmermann, Christina Müller

Abstract:

Introduction: The positive relation between physical activity and cognition is commonly known. Relevant studies show that in everyday school life active breaks can lead to improvement in certain abilities (e.g. attention and concentration). A beneficial effect is attributed in particular to moderate activity. It is still unclear whether active breaks are beneficial after relatively short phases of cognitive load and whether the postulated effects of activity really have an immediate impact. The objective of this study was to verify whether an active break after 18 minutes of cognitive load leads to enhanced attention/concentration performance, compared to inactive breaks with voluntary mobile phone activity. Methodology: For this quasi-experimental study, 36 students [age: 14.0 (mean value) ± 0.3 (standard deviation); male/female: 21/15] of a secondary school were tested. In week 1, every student’s maximum heart rate (Hfmax) was determined through maximum effort tests conducted during physical education classes. The task was to run 3 laps of 300 m with increasing subjective effort (lap 1: 60%, lap 2: 80%, lap 3: 100% of the maximum performance capacity). Furthermore, first attention/concentration tests (D2-R) took place (pretest). The groups were matched on the basis of the pretest results. During weeks 2 and 3, crossover testing was conducted, comprising 18 minutes of cognitive preload (test for concentration performance, KLT-R), a break and an attention/concentration test after a 2-minute transition. Different 10-minute breaks (active break: moderate physical activity at 65% Hfmax, or inactive break: mobile phone activity) took place between preloading and transition. Major findings: In general, there was no impact of the different break interventions on the concentration test results (symbols processed after physical activity: 185.2 ± 31.3 / after inactive break: 184.4 ± 31.6; errors after physical activity: 5.7 ± 6.3 / after inactive break: 7.0 ± 7.2).
There was, however, a noticeable development of the values over the testing periods. Although no difference in the number of processed symbols was detected (active/inactive break: period 1: 49.3 ± 8.8/46.9 ± 9.0; period 2: 47.0 ± 7.7/47.3 ± 8.4; period 3: 45.1 ± 8.3/45.6 ± 8.0; period 4: 43.8 ± 7.8/44.6 ± 8.0), error rates decreased successively after physical activity and increased gradually after an inactive break (active/inactive break: period 1: 1.9 ± 2.4/1.2 ± 1.4; period 2: 1.7 ± 1.8/1.5 ± 2.0; period 3: 1.2 ± 1.6/1.8 ± 2.1; period 4: 0.9 ± 1.5/2.5 ± 2.6; p = .012). Conclusion: Taking into consideration only the study’s overall results, the hypothesis must be dismissed. However, a more differentiated evaluation shows that the error rates decreased after active breaks and increased after inactive breaks. Evidently, the effects of active intervention occur with a delay. The 2-minute transition (regeneration time) used for this study seems to be insufficient due to the longer adaptation time of the cardio-vascular system in untrained individuals, which might initially affect the concentration capacity. To use the positive effects of physical activity for teaching and learning processes, physiological characteristics must also be considered. Only this will ensure optimum ability to perform.

Keywords: active breaks, attention/concentration test, cognitive performance capacity, heart rate, physical activity

Procedia PDF Downloads 294
116 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, it is strongly advised by international recommendations to set up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated phantom, the CIRS 062QA, and a QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens SOMATOM Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software which enables fast and simple evaluation of CT QA parameters using the phantom provided with the CT simulator. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated through the study were the following: CT number accuracy, field uniformity, complete CT to ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy limits are +/- 5 HU of the value at commissioning. Field uniformity: +/- 10 HU in selected ROIs. The complete CT to ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%.
Spatial and contrast resolution tests must comply with the results obtained at commissioning; otherwise, the machine requires service. The result of the image noise test must fall within 20% of the base value. Slice thickness must meet manufacturer specifications, and patient table stability with longitudinal transfer of the loaded table must not differ by more than 2 mm vertical deviation. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator's functionality and its clinical effectiveness in radiation treatment planning. The legal requirement on the clinic is to set up its own QA programme with minimum testing, but it remains the user’s decision whether additional testing, as recommended by international organizations, will be implemented, so as to improve the overall quality of the radiation treatment planning procedure, as the quality of the CT images used for radiation treatment planning influences the delineation of a tumor and the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to a patient.
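Tolerances of this kind lend themselves to a simple automated daily check against the commissioning baselines. A sketch; all field names are hypothetical and only three of the listed tolerances are shown:

```python
def ct_qc_checks(measured, baseline):
    """Pass/fail results for three of the tolerances described:
    CT number within +/- 5 HU of the commissioning value, field
    uniformity within +/- 10 HU across ROIs, and image noise within
    20% of the base value."""
    results = {}
    # CT number accuracy: water HU compared to commissioning value.
    results["ct_number"] = abs(measured["water_hu"] - baseline["water_hu"]) <= 5.0
    # Field uniformity: each peripheral ROI compared to the center value.
    center = measured["water_hu"]
    results["uniformity"] = all(abs(r - center) <= 10.0 for r in measured["roi_hu"])
    # Image noise: relative deviation from the baseline noise.
    results["noise"] = (abs(measured["noise"] - baseline["noise"])
                        / baseline["noise"]) <= 0.20
    results["overall"] = all(results.values())
    return results
```

Logging these boolean results per day gives the trail that both the legislation and the international recommendations ask for.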

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 504
115 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test

Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston

Abstract:

The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI test's performance was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild TBI (traumatic brain injury) with a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem; an estimated 69 million people globally experience a TBI annually [1]. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility to predict acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study. Testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%).
The analytical measuring interval (AMI) extends from the lower limit of quantitation (LoQ) to the upper limit of quantitation and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall, within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1, when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting the utility to assist in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range of GFAP and UCH-L1 and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
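The sensitivity, specificity and NPV reported above follow directly from the study's 2×2 counts; a sketch reproducing them:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and negative predictive value from the
    counts of a 2x2 contingency table (true/false positives/negatives)."""
    return {
        "sensitivity": tp / (tp + fn),   # positives caught among CT-positive
        "specificity": tn / (tn + fp),   # negatives caught among CT-negative
        "npv": tn / (tn + fn),           # true negatives among test-negative
    }


# Counts from the pivotal study: 120 CT-positive subjects, 116 of whom
# tested positive; 1779 CT-negative subjects, 713 of whom tested negative.
m = diagnostic_metrics(tp=116, fn=4, tn=713, fp=1066)
```

Evaluating `m` reproduces the reported 96.7% sensitivity, 40.1% specificity and 99.4% NPV (before confidence intervals, which need a separate interval method).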

Keywords: biomarker, diagnostic, neurology, TBI

Procedia PDF Downloads 45
114 Ahmad Sabzi Balkhkanloo, Motahareh Sadat Hashemi, Seyede Marzieh Hosseini, Saeedeh Shojaee-Aliabadi, Leila Mirmoghtadaie

Authors: Elyria Kemp, Kelly Cowart, My Bui

Abstract:

According to the National Institute of Mental Health, an estimated 31.9% of adolescents have had an anxiety disorder. Several environmental factors may contribute to high levels of anxiety and depression in young people (i.e., Generation Z, Millennials). However, as young people negotiate life on social media, they may begin to evaluate themselves using excessively high standards and adopt self-perfectionism tendencies. Broadly defined, self-perfectionism involves very critical evaluations of the self. Perfectionism may also come from others and may manifest as socially prescribed perfectionism, and young adults are reporting higher levels of socially prescribed perfectionism than previous generations. This rising perfectionism is also associated with anxiety, greater physiological reactivity, and a sense of social disconnection. However, theories from psychology suggest that improvement in emotion regulation can contribute to enhanced psychological and emotional well-being. Emotion regulation refers to the ways people manage how and when they experience and express their emotions. Cognitive reappraisal and expressive suppression are common emotion regulation strategies. Cognitive reappraisal involves construing a potentially emotion-eliciting situation in a way that changes its emotional impact. By contrast, expressive suppression involves inhibiting the behavioral expression of emotion. The purpose of this research is to examine the efficacy of social marketing initiatives which promote emotion regulation strategies to help young adults regulate their emotions. In Study 1, a single-factor (emotion regulation strategy: cognitive reappraisal, expressive suppression, control) between-subjects design was conducted using an online, non-student consumer panel (n=96). Sixty-eight percent of participants were male, and 32% were female.
Study participants belonged to the Millennial and Gen Z cohorts, ranging in age from 22 to 35 (M=27). Participants were first told to spend at least three minutes writing about a public speaking appearance which made them anxious. The purpose of this exercise was to induce anxiety. Next, participants viewed one of three advertisements (randomly assigned) which promoted an emotion regulation strategy: cognitive reappraisal, expressive suppression, or an advertisement non-emotional in nature. After being exposed to one of the ads, participants responded to a measure composed of two items to assess their emotional state and the efficacy of the messages in fostering emotion management. Findings indicated that individuals in the cognitive reappraisal condition (M=3.91) exhibited the most positive feelings and more effective emotion regulation than the expressive suppression (M=3.39) and control conditions (M=3.72, F(1,92) = 3.3, p<.05). Results from this research can be used by institutions (e.g., schools) in taking a leadership role in addressing anxiety and other mental health issues. Social stigmas regarding mental health can be removed, and a more proactive stance can be taken in promoting healthy coping behaviors and strategies to manage negative emotions.

Keywords: emotion regulation, anxiety, social marketing, generation z

Procedia PDF Downloads 182
113 An Unusual Case of Wrist Pain: Idiopathic Avascular Necrosis of the Scaphoid, Preiser’s Disease

Authors: Adae Amoako, Daniel Montero, Peter Murray, George Pujalte

Abstract:

We present a case of a 42-year-old, right-handed Caucasian male who presented to a medical orthopedics clinic with left wrist pain. The patient indicated that the pain started two months prior to the visit. He could only remember helping a friend move furniture prior to the onset of pain. Examination of the left wrist showed limited extension compared to the right. There was clicking with flexion and extension of the wrist on the dorsal aspect. Mild tenderness was noted over the distal radioulnar joint. There was pain on ulnar and radial deviation on provocation. Initial 4-view x-rays of the left wrist showed mild radiocarpal and scapho-trapezium-trapezoid (ST-T) osteoarthritis, with subchondral cysts seen in the lunate and scaphoid, and no obvious fractures. The patient was initially put in a wrist brace, and diclofenac topical gel was prescribed for pain control, as the patient could not take non-steroidal anti-inflammatory drugs (NSAIDs) orally due to gastritis. Despite diclofenac topical gel use and bracing, symptoms remained, and a steroid injection with 1 mL of lidocaine and 10 mg of triamcinolone acetonide was performed under fluoroscopy. He obtained some relief, but after 3 months the injection had to be repeated. On 2-month follow-up after the initial evaluation, symptoms persisted. Magnetic resonance imaging (MRI) was obtained, which showed an abnormal T1 hypointense signal involving the proximal pole of the scaphoid and articular collapse of the proximal scaphoid, with marked irregularity of the overlying cartilage, suggesting a remote injury; these findings are consistent with avascular necrosis of the proximal pole of the scaphoid. A month after that, the patient underwent debridement of the proximal pole of the left scaphoid and an intercompartmental supraretinacular artery vascularized pedicle bone graft reconstruction of the proximal pole. A non-vascularized autograft from the left radius was also applied.
He was put in a thumb spica cast with the interphalangeal joint free for 6 weeks. At 6-week follow-up after surgery, the patient was healing well and could make a composite fist with his left hand. The diagnosis of Preiser’s disease is primarily based on radiological findings. Because necrosis develops over a period of time, most cases of AVN are diagnosed at the late stages of the disease. There appear to be no specific guidelines on the management of AVN of the scaphoid. In the past, immobilization and arthroscopic debridement have been used. Radial osteotomy has also been tried. Vascularized bone grafts have also been used to treat Preiser’s disease. In our patient, we used three of these treatment modalities, starting with conservative management with topical NSAIDs and immobilization, then debridement with vascularized bone grafts.

Keywords: wrist pain, avascular necrosis of the scaphoid, Preiser’s disease, vascularized bone grafts

Procedia PDF Downloads 277
112 Influence of Counter-Face Roughness on the Friction of Bionic Microstructures

Authors: Haytam Kasem

Abstract:

The problem of quick and easily reversible attachment has become of great importance in different fields of technology. For this reason, during the last decade, a new field of adhesion science has emerged, essentially inspired by animals and insects which, during their natural evolution, have developed remarkable biological attachment systems allowing them to adhere to and run on walls and ceilings of uneven surfaces. Potential applications of engineering bio-inspired solutions include climbing robots, handling systems for wafers in nanofabrication facilities, and mobile sensor platforms, to name a few. However, despite the efforts made to apply bio-inspired patterned adhesive surfaces to the biomedical field, they are still in the early stages compared with their conventional uses in the other industries mentioned above. In fact, there are some critical issues that still need to be addressed for the wide usage of bio-inspired patterned surfaces as advanced biomedical platforms. For example, the surface durability and long-term stability of surfaces with high adhesive capacity should be improved, as should the friction and adhesion capacities of these bio-inspired microstructures when contacting rough surfaces. One of the well-known prototypes for bio-inspired attachment systems is the biomimetic wall-shaped hierarchical microstructure for gecko-like attachment. Although the physical background of these attachment systems is widely understood, the influence of counter-face roughness and its relationship with the friction force generated when sliding against a wall-shaped hierarchical microstructure have yet to be fully analyzed and understood. To elucidate the effect of counter-face roughness on the friction of the biomimetic wall-shaped hierarchical microstructure, we replicated the isotropic topography of 12 different surfaces using replicas made of the same epoxy material.
The different counter-faces were fully characterized under a 3D optical profilometer to measure roughness parameters. The friction forces generated by the spatula-shaped microstructure in contact with the tested counter-faces were measured on a home-made tribometer and compared with the friction forces generated by the spatulae in contact with a smooth reference. It was found that classical roughness parameters, such as the average roughness Ra and others, could not explain the topography-related variation in friction force. This led us to the development of an integrated roughness parameter obtained by combining different parameters, namely the mean asperity radius of curvature (R), the asperity density (η), the standard deviation of asperity heights (σ) and the mean asperity slope (SDQ). This new integrated parameter is capable of explaining the variation in the friction measurement results. Based on the experimental results, we developed and validated an analytical model to predict the variation of the friction force as a function of the roughness parameters of the counter-face and the applied normal load as well.
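The abstract does not state the rule used to combine R, η, σ and SDQ, so the grouping below is purely an illustrative placeholder, not the authors' integrated parameter; it merely shows the kind of dimensionless composite such a parameter could be:

```python
import math


def integrated_roughness(r_mean, eta, sigma, sdq_deg):
    """Illustrative composite of the four roughness descriptors named in
    the text: mean asperity radius of curvature R, asperity density eta,
    standard deviation of asperity heights sigma, and mean asperity
    slope SDQ (here taken in degrees). The actual combination used by
    the authors is not given in the abstract; this dimensionless
    grouping is an assumption for illustration only.
    r_mean and sigma must share the same length unit; eta is taken per
    unit area so that the square root has units of 1/length... units are
    deliberately ignored in this placeholder."""
    return (sigma / r_mean) * math.sqrt(eta) * math.tan(math.radians(sdq_deg))
```

Whatever the true formula, the point of the paragraph stands: a single classical parameter such as Ra discards asperity shape and density, which is why a multi-parameter composite was needed to correlate with friction.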

Keywords: friction, bio-mimetic micro-structure, counter-face roughness, analytical model

Procedia PDF Downloads 224
111 Bisphenol-A Concentrations in Urine and Drinking Water Samples of Adults Living in Ankara

Authors: Hasan Atakan Sengul, Nergis Canturk, Bahar Erbas

Abstract:

Drinking water is indispensable for life. With the increasing awareness of communities, the content of drinking water and tap water has become a matter of curiosity, and the presence of Bisphenol-A tops the list of concerns. Bisphenol-A is the chemical most used worldwide for the production of polycarbonate plastics and epoxy resins. People are exposed almost every day to Bisphenol-A, a chemical which disrupts the endocrine system. An average of 5.4 billion kilograms of Bisphenol-A is manufactured each year. The linear formula of Bisphenol-A is (CH₃)₂C(C₆H₄OH)₂, its molecular weight is 228.29, and its CAS number is 80-05-7. Bisphenol-A is known to be used in the manufacturing of plastics, along with various other chemicals. Bisphenol-A, an industrial chemical, is used as a raw material of packaging materials in the monomers of polycarbonate and epoxy resins. Bisphenol-A passes into food through packaging; it contaminates the food and enters the body through consumption. International research shows that BPA is transported through body fluids, leading to hormonal disorders in animals. Experimental studies on animals report that BPA exposure also affects the gender of the newborn and its time to reach adolescence. The extent to which similar endocrine-disrupting effects occur in humans is a topic of debate in much research. In our country, detailed studies on BPA have not been done. However, it is observed that 'BPA-free' phrases are beginning to appear on plastic packaging such as baby products and water carboys. Accordingly, this situation increases the interest of society in the subject, yet it also causes information pollution. In our country, all national and international studies on exposure to BPA have been examined, and Ankara province has been designated as the testing region.
To assess the effects of plastic use in people's daily habits and the amounts of plastic-derived BPA removed from the body, the results of the survey conducted with volunteers living in Ankara were analyzed together with laboratory measurements performed on a Sciex instrument by means of LC-MS/MS, and the amounts of BPA exposure and removal were determined by comparing the results. The results have been compared with similar studies done internationally, and the relation between them has been exhibited. When the amount of BPA in drinking water is considered, a minimum of 0.028 µg/L, a maximum of 1.136 µg/L, a mean of 0.29194 µg/L and SD (standard deviation) = 0.199 were detected. For the amount of BPA in urine, a minimum of 0.028 µg/L, a maximum of 0.48 µg/L, a mean of 0.19181 µg/L and SD = 0.099 were detected. In conclusion, no linear correlation was found between the amount of BPA in drinking water and the amount of BPA in urine (r = -0.151). The p value of the comparison between the BPA amounts in drinking water and urine is 0.004, which indicates a statistically significant difference between the two sets of measurements (p < 0.05). This reveals that environmental exposure and daily plastic use habits also have direct effects on the human body.
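The reported r = -0.151 is a standard Pearson coefficient over the paired water/urine measurements; a sketch of the computation (the study's raw paired data are not given here):

```python
import math


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

With each volunteer's drinking-water BPA as xs and urine BPA as ys, a value near zero (such as -0.151) indicates no linear relationship, consistent with the conclusion that other exposure routes dominate.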

Keywords: analyze of bisphenol-A, BPA, BPA in drinking water, BPA in urine

Procedia PDF Downloads 111