Search results for: mean bias error

376 Accounting and Prudential Standards of Banks and Insurance Companies in EU: What Stakes for Long Term Investment?

Authors: Sandra Rigot, Samira Demaria, Frederic Lemaire

Abstract:

The starting point of this research is the contemporary capitalist paradox: there is a real scarcity of long term investment despite the boom of potential long term investors. This gap represents a major challenge: there are important needs for long term financing in developed and emerging countries in strategic sectors such as energy, transport infrastructure, information and communication networks. Moreover, the recent financial and sovereign debt crises, which have respectively reduced the ability of financial banking intermediaries and governments to provide long term financing, question the identity of the actors able to provide long term financing, their methods of financing and the most appropriate forms of intermediation. The issue of long term financing is deemed to be very important by the EU Commission, as it issued a 2013 Green Paper (GP) on long-term financing of the EU economy. Among other topics, the paper discusses the impact of the recent regulatory reforms on long-term investment, both in terms of accounting (in particular fair value) and prudential standards for banks. For banks, prudential and accounting standards are both crucial. Fair value is indeed well adapted to the trading book in a short term view, but this method hardly suits a medium and long term portfolio. Banks’ ability to finance the economy and long term projects depends on their ability to distribute credit, and the way credit is valued (fair value or amortised cost) leads to different banking strategies. Furthermore, in the banking industry, accounting standards are directly connected to the prudential standards, as the regulatory requirements of Basel III use accounting figures with prudential filters to define the needs for capital and to compute regulatory ratios. The objective of these regulatory requirements is to prevent insolvency and financial instability. At the same time, they can represent regulatory constraints on long term investing. The balance between financial stability and the need to stimulate long term financing is a key question raised by the EU GP. Does fair value accounting contribute to short-termism in investment behaviour? Should prudential rules be “appropriately calibrated” and “progressively implemented” so as not to prevent banks from providing long-term financing? These issues raised by the EU GP lead us to question to what extent the main regulatory requirements incite or constrain banks to finance long term projects. To that purpose, we study the 292 responses received by the EU Commission during the public consultation. We analyze these contributions focusing on particular questions related to fair value accounting and prudential norms. We conduct a two-stage content analysis of the responses. First, we proceed to qualitative coding to identify the arguments of respondents, and subsequently we run quantitative coding in order to conduct statistical analyses. This paper provides a better understanding of the positions that a large panel of European stakeholders hold on these issues. Moreover, it adds to the debate on fair value accounting and its effects on prudential requirements for banks. This analysis allows us to identify some short term bias in banking regulation.

Keywords: basel 3, fair value, securitization, long term investment, banks, insurers

Procedia PDF Downloads 291
375 The Artificial Intelligence Driven Social Work

Authors: Avi Shrivastava

Abstract:

Our world continues to grapple with a lot of social issues. Economic growth and scientific advancements have not completely eradicated poverty, homelessness, discrimination and bias, gender inequality, health issues, mental illness, addiction, and other social issues. So, how do we improve the human condition in a world driven by advanced technology? The answer is simple: we will have to leverage technology to address some of the most important social challenges of the day. AI, or artificial intelligence, has emerged as a critical tool in the battle against issues that deprive marginalized and disadvantaged groups of the right to enjoy benefits that a society offers. Social work professionals can transform lives by harnessing it. The lack of reliable data is one of the reasons why a lot of social work projects fail. Social work professionals continue to rely on expensive and time-consuming primary data collection methods, such as observation, surveys, questionnaires, and interviews, instead of tapping into AI-based technology to generate useful, real-time data and necessary insights. By leveraging AI’s data-mining ability, we can gain a deeper understanding of how to solve complex social problems and change people’s lives. We can do the right work for the right people and at the right time. For example, AI can enable social work professionals to focus their humanitarian efforts on some of the world’s poorest regions, where there is extreme poverty. An interdisciplinary team of Stanford scientists, Marshall Burke, Stefano Ermon, David Lobell, Michael Xie, and Neal Jean, used AI to spot global poverty zones – identifying such zones is a key step in the fight against poverty. The scientists combined daytime and nighttime satellite imagery with machine learning algorithms to predict poverty in Nigeria, Uganda, Tanzania, Rwanda, and Malawi. In an article published by Stanford News, “Stanford researchers use dark of night and machine learning”, Ermon explained that they provided the machine-learning system, an application of AI, with the high-resolution satellite images and asked it to predict poverty in the African region. “The system essentially learned how to solve the problem by comparing those two sets of images [daytime and nighttime].” This is one example of how AI can be used by social work professionals to reach regions that need their aid the most. It can also help identify sources of inequality and conflict, which could reduce inequalities, according to Nature’s study, titled “The role of artificial intelligence in achieving the Sustainable Development Goals”, published in 2020. The report also notes that AI can help achieve 79 percent of the United Nations’ (UN) Sustainable Development Goals (SDGs). AI is impacting our everyday lives in multiple amazing ways, yet some people do not know much about it. If someone is not familiar with this technology, they may be reluctant to use it to solve social issues. So, before we talk more about the use of AI to accomplish social work objectives, let’s put the spotlight on how AI and social work can complement each other.

Keywords: social work, artificial intelligence, AI based social work, machine learning, technology

Procedia PDF Downloads 102
374 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder

Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh

Abstract:

In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for the identification of activities in the human brain remains a big challenge because of the random nature of the signals. The feature extraction method is a key issue in solving this problem. Finding features that give distinctive pictures for different activities and similar pictures for the same activities is very difficult, especially as the number of activities increases. Classifier accuracy depends on the quality of the feature set. Further, more features result in higher computational complexity, while fewer features compromise performance. In this paper, a novel idea for the selection of an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the autoencoder deep neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing gradient problem and the need for normalization of the dataset, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set into a smaller one, 4 hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layers. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with the regular autoencoder to test the performance of both. The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better than the other two in terms of classification accuracy.
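
To illustrate the core mechanism described above (an autoencoder whose weights are tuned by a meta-heuristic search that minimizes the reconstruction MSE, with the bottleneck code serving as the reduced feature set), here is a minimal NumPy sketch. The layer sizes, the simple (1+1) evolution-strategy search standing in for the unspecified meta-heuristic, and the toy feature matrix are all assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_weights(sizes):
    # One weight matrix (with an extra bias row) per layer transition.
    return [rng.normal(0, 0.1, (m + 1, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(weights, X):
    a = X
    for W in weights:
        a = np.hstack([a, np.ones((a.shape[0], 1))])  # append bias input
        a = np.tanh(a @ W)
    return a

def mse(weights, X):
    # Reconstruction error between encoder input and decoder output.
    return float(np.mean((forward(weights, X) - X) ** 2))

def heuristic_train(X, sizes, iters=2000, step=0.05):
    # (1+1) evolution strategy: perturb all weights, keep improvements.
    best = init_weights(sizes)
    best_err = mse(best, X)
    for _ in range(iters):
        cand = [W + rng.normal(0, step, W.shape) for W in best]
        err = mse(cand, X)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

# Toy data standing in for a vector of extracted EEG features per epoch.
X = rng.normal(size=(200, 32))
sizes = [32, 24, 16, 8, 16, 24, 32]   # 4 hidden layers around an 8-unit code
weights, err = heuristic_train(X, sizes)
codes = forward(weights[:3], X)       # bottleneck output = reduced feature set
print(f"reconstruction MSE: {err:.4f}, reduced feature shape: {codes.shape}")
```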

Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization

Procedia PDF Downloads 114
373 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling

Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci

Abstract:

Poor air quality is one of the main environmental causes of premature deaths worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and use changes, as a result of the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, the use of air quality models is very useful to simulate the physical and chemical processes that affect the dispersion and reaction of chemical species in the atmosphere. However, the modelling performance should always be evaluated since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, the updated LC is an important parameter to be considered in atmospheric models, since it takes into account the Earth’s surface changes due to natural and anthropic actions, and regulates the exchanges of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) the CLC (Corine Land Cover) and specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of the LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km, 5 km and 1 km horizontal resolution). Based on the 33-class LC approach, particular emphasis was attributed to Portugal, given the detail and higher LC spatial resolution (100 m x 100 m) compared with the CLC data (5000 m x 5000 m). As regards air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, in particular during the spring/summer, and there are few research works relating this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, a better model performance was achieved at rural stations: moderate correlation (0.4 – 0.7), bias (10 – 21 µg.m-3) and RMSE (20 – 30 µg.m-3), and where higher average ozone concentrations were estimated. Comparing both simulations, small differences, associated with the Leaf Area Index and air temperature values, were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of the LC in the exchange of atmospheric fluxes and stresses the need to consider a high-resolution LC characterization combined with other detailed model inputs, such as the emission inventory, to improve air quality assessment.
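
Since the evaluation above reports correlation, bias and RMSE between simulated and observed ozone, a short sketch of these standard statistics may be useful. It is illustrative only: it assumes NumPy and synthetic arrays standing in for station observations and WRF-Chem output, and is not part of the authors' workflow.

```python
import numpy as np

def evaluation_stats(obs, sim):
    """Mean bias, RMSE and Pearson correlation between observed and
    simulated concentrations (e.g. hourly O3 in ug/m3). Pairs with
    missing observations are dropped first."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    ok = ~np.isnan(obs) & ~np.isnan(sim)
    obs, sim = obs[ok], sim[ok]
    bias = np.mean(sim - obs)                    # mean bias error
    rmse = np.sqrt(np.mean((sim - obs) ** 2))    # root mean square error
    r = np.corrcoef(sim, obs)[0, 1]              # correlation coefficient
    return bias, rmse, r

# Illustrative use with synthetic values standing in for one rural station.
rng = np.random.default_rng(1)
observed = 60 + 15 * rng.standard_normal(500)
simulated = observed + 15 + 10 * rng.standard_normal(500)   # model with a positive bias
print("bias=%.1f ug/m3, RMSE=%.1f ug/m3, r=%.2f" % evaluation_stats(observed, simulated))
```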

Keywords: land use, spatial resolution, WRF-Chem, air quality assessment

Procedia PDF Downloads 158
372 EFL Teachers’ Sequential Self-Led Reflection and Possible Modifications in Their Classroom Management Practices

Authors: Sima Modirkhameneh, Mohammad Mohammadpanah

Abstract:

In the process of EFL teachers’ development, self-led reflection (SLR) is thought to have an important role because it may help teachers analyze, evaluate, and contemplate what is happening in their classes. Such contemplation can not only enhance the quality of their instruction and provide better learning environments for learners but also improve the quality of their classroom management (CM). Accordingly, understanding the effect of teachers’ SLR practices may help us gain valuable insights into what possible modifications SLR may bring about in all aspects of EFL teachers' practice, especially their CM. The main purpose of this case study was, thus, to investigate the impact of the SLR practices of 12 Iranian EFL teachers on their CM based on the universal classroom management checklist (UCMC). In addition, another objective of the current study was to have a clear image of EFL teachers’ perceptions of their own SLR practices and their possible outcomes. By conducting repeated reflective interviews, observations, and feedback from the participants over five teaching sessions, the researcher analyzed the outcomes qualitatively through the process of meaning categorization and data interpretation based on the principles of Grounded Theory. The results demonstrated that EFL teachers utilized SLR practices to improve different aspects of their language teaching skills and CM in different contexts. Almost all participants had positive comments and reactions about the effect of SLR on their CM procedures in different aspects (expectations and routines, behavior-specific praise, error corrections, prompts and precorrections, opportunity to respond, strengths and weaknesses of CM, teachers’ perception, CM ability, and learning process). Stated otherwise, the results implied that familiarity with the UCMC criteria and reflective practices contributes to modifying teacher participants’ perceptions about their CM procedures and to utilizing reflective practices in their teaching styles. The results are thought to be valuable for teachers, teacher educators, and policymakers, who are recommended to pay special attention to the contributions as well as the complexity of reflective teaching. The study concludes with more detailed results and implications and useful directions for future research.

Keywords: classroom management, EFL teachers, reflective practices, self-led reflection

Procedia PDF Downloads 54
371 Assessment of Ocular Morbidity, Knowledge and Barriers to Access Eye Care Services among the Children Live in Offshore Island, Bangladesh

Authors: Abir Dey, Shams Noman

Abstract:

Introduction: An offshore island is a remote area, isolated from the terrestrial mainland, and its inhabitants are often deprived of basic needs. Children from an offshore island are usually underserved in terms of health care because it is a remote area where the health care systems are quite poor compared to the mainland. Therefore, proper information is required for appropriate planning to reduce the underlying causes of visual deprivation among the children of the offshore island. Purpose: The purpose of this study was to determine ocular morbidities, knowledge, and barriers to eye care services among children on an offshore island. Methods: The study team visited different rural communities at Sandwip Upazila, Chittagong district, and collected all data by screening children aged 5-16 years through spot examination. The study was conducted using both qualitative and quantitative methods. To determine the ocular status of the children, examinations were done by skilled ophthalmologists and optometrists. A focus group discussion was held. The sample size was 490. It was a community-based descriptive study, and the sampling method was purposive sampling. Results: Of the 490 children in total, about 56.90% were female and 43.10% were male. Among them, 456 were school-going children (93.1%) and 34 were non-school-going children (6.9%). In this study, the most common ocular morbidity was allergic conjunctivitis (35.2%). Other notable ocular morbidities were refractive error (27.7%), blepharitis (13.8%), meibomian gland dysfunction (7.5%), strabismus (6.3%) and amblyopia (6.3%). Most of the non-school-going children were involved in different types of domestic work like farming, fishing, etc. About 90.04% of the children who had ocular abnormalities could not attend a doctor due to various reasons. Conclusions: The rate of ocular morbidity was high on the offshore island, and eye health care facilities were not well established there. Awareness should be raised about the necessity of maintaining hygiene and eye health care among the island people. Timely intervention through available eye care facilities and management can reduce the ocular morbidity rate in that area.

Keywords: morbidities, screening, barriers, offshore island, knowledge

Procedia PDF Downloads 160
370 Exclusive Breastfeeding Abandonment among Adolescent Mothers: A Cohort Study

Authors: Maria I. Nuñez-Hernández, Maria L. Riesco

Abstract:

Background: Exclusive breastfeeding (EBF) up to 6 months of age has been considered one of the most important factors in the overall development of children. Nevertheless, as resources are scarce, it is essential to identify the most vulnerable groups that have a major risk of EBF abandonment, in order to deliver the best strategies. Children of adolescent mothers are within these groups. Aims: To determine the EBF abandonment rate among adolescent mothers and to analyze the associated factors. Methods: Prospective cohort study of adolescent mothers in the southern area of Santiago, Chile, conducted in primary care services of the public health system. The cohort was established from 2014 to 2015, with a sample of 105 adolescent mothers and their children at 2 months of life. The inclusion criteria were: adolescent mother from 14 to 19 years old; no twin babies; mother and baby leaving the hospital together after childbirth; correct attachment of the baby to the breast; no difficulty understanding the Spanish language or communicating. Follow-up was performed at 4 and 6 months of age. Data were collected by interviews, considering EBF as breastfeeding only, without adding other milk, tea, juice, water or any other product that is not breast milk, except medications. Data were analyzed by descriptive and inferential statistics, using the Kaplan-Meier estimator and the log-rank test, admitting a probability of type I error of 5% (α = 0.05). Results: The cumulative EBF abandonment rate at 2, 4 and 6 months was 33.3%, 52.2% and 63.8%, respectively. Factors associated with EBF abandonment were maternal perception of the quality of milk as poor (p < 0.001), maternal perception that the child was not satisfied after breastfeeding (p < 0.001), use of a pacifier (p < 0.001), maternal consumption of illicit drugs after delivery (p < 0.001), mother returning to school (p = 0.040) and presence of nipple trauma (p = 0.045). Conclusion: The EBF abandonment rate was higher in the first 4 months of life and is higher than that of the general population of women who breastfeed. Among the EBF abandonment factors, one is related to the adolescent condition, and two are related to the maternal subjective perception.
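
For readers who want to reproduce this kind of survival analysis, the sketch below shows a Kaplan-Meier estimate and a log-rank comparison at α = 0.05 using the lifelines package. The package choice, the column names, and the tiny illustrative dataset are assumptions, not the authors' actual data or code.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: months of exclusive breastfeeding (time),
# whether EBF was abandoned (event=1) or censored at 6 months (event=0),
# and a binary exposure such as pacifier use.
df = pd.DataFrame({
    "time":     [2, 4, 6, 3, 6, 2, 5, 6, 4, 6],
    "event":    [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "pacifier": [1, 1, 0, 1, 0, 1, 0, 0, 1, 0],
})

km = KaplanMeierFitter()
km.fit(df["time"], event_observed=df["event"])
print(km.survival_function_)          # 1 - cumulative abandonment over time

# Log-rank comparison between pacifier users and non-users (alpha = 0.05).
a, b = df[df.pacifier == 1], df[df.pacifier == 0]
res = logrank_test(a["time"], b["time"],
                   event_observed_A=a["event"], event_observed_B=b["event"])
print("log-rank p-value:", res.p_value)
```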

Keywords: adolescent, breastfeeding, midwifery, nursing

Procedia PDF Downloads 322
369 Comparison of Feedforward Back Propagation and Self-Organizing Map for Prediction of Crop Water Stress Index of Rice

Authors: Aschalew Cherie Workneh, K. S. Hari Prasad, Chandra Shekhar Prasad Ojha

Abstract:

Due to the increase in water scarcity, the crop water stress index (CWSI) is receiving significant attention these days, especially in arid and semiarid regions, for quantifying water stress and effective irrigation scheduling. Nowadays, machine learning techniques such as neural networks are being widely used to determine CWSI. In the present study, the performance of two artificial neural networks, namely Self-Organizing Maps (SOM) and Feed Forward-Back Propagation Artificial Neural Networks (FF-BP-ANN), is compared in determining the CWSI of the rice crop. Irrigation field experiments with varying degrees of irrigation were conducted at the irrigation field laboratory of the Indian Institute of Technology, Roorkee, during the growing season of the rice crop. The CWSI of rice was computed empirically by measuring key meteorological variables (relative humidity, air temperature, wind speed, and canopy temperature) and crop parameters (crop height and root depth). The empirically computed CWSI was compared with the SOM- and FF-BP-ANN-predicted CWSI. The upper and lower CWSI baselines were computed using multiple regression analysis. The regression analysis showed that the lower CWSI baseline for rice is a function of crop height (h), air vapor pressure deficit (AVPD), and wind speed (u), whereas the upper CWSI baseline is a function of crop height (h) and wind speed (u). The performance of SOM and FF-BP-ANN was compared by computing the Nash-Sutcliffe efficiency (NSE), index of agreement (d), root mean squared error (RMSE), and coefficient of correlation (R²). It is found that FF-BP-ANN performs better than SOM in predicting the CWSI of the rice crop.
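
As a quick illustration of the four comparison statistics named above (NSE, index of agreement, RMSE and R²), the following sketch computes them with NumPy on synthetic CWSI values; the arrays and noise levels are invented for demonstration and are not the study's data.

```python
import numpy as np

def performance_metrics(obs, pred):
    """NSE, index of agreement (d), RMSE and R^2 between empirically
    computed and ANN-predicted CWSI values (arrays of equal length)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    nse = 1 - np.sum(err**2) / np.sum((obs - obs.mean())**2)
    d = 1 - np.sum(err**2) / np.sum((np.abs(pred - obs.mean()) +
                                     np.abs(obs - obs.mean()))**2)
    rmse = np.sqrt(np.mean(err**2))
    r2 = np.corrcoef(obs, pred)[0, 1]**2
    return {"NSE": nse, "d": d, "RMSE": rmse, "R2": r2}

# Illustrative comparison with synthetic CWSI values in [0, 1].
rng = np.random.default_rng(2)
cwsi_obs = rng.uniform(0, 1, 60)
cwsi_ffbp = cwsi_obs + rng.normal(0, 0.05, 60)   # tighter fit
cwsi_som = cwsi_obs + rng.normal(0, 0.12, 60)    # looser fit
print("FF-BP-ANN:", performance_metrics(cwsi_obs, cwsi_ffbp))
print("SOM:      ", performance_metrics(cwsi_obs, cwsi_som))
```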

Keywords: artificial neural networks, crop water stress index, canopy temperature, prediction capability

Procedia PDF Downloads 117
368 Dutch Disease and Industrial Development: An Investigation of the Determinants of Manufacturing Sector Performance in Nigeria

Authors: Kayode Ilesanmi Ebenezer Bowale, Dominic Azuh, Busayo Aderounmu, Alfred Ilesanmi

Abstract:

There has been a debate among scholars and policymakers about the effects of oil exploration and production on industrial development. In Nigeria, many reforms have resulted in an increase in crude oil production in the recent past. There is controversy over the importance of oil production in the development of the manufacturing sector in Nigeria. Some scholars claim that oil has been a blessing to the development of the manufacturing sector, while others regard it as a curse. The objective of the study is to determine whether empirical analysis supports the presence of Dutch Disease and de-industrialisation in the Nigerian manufacturing sector between 2019 and 2022. The study employed data sourced from the World Development Indicators, the Nigeria Bureau of Statistics, and the Central Bank of Nigeria Statistical Bulletin on manufactured exports, manufacturing employment, agricultural employment, and service employment, in line with the theory of Dutch Disease, using unit root tests to establish the level of stationarity of the series and the Engle and Granger cointegration test to check their long-run relationship. An Autoregressive Distributed Lag (ARDL) bounds test was also used. A Vector Error Correction Model was estimated to determine the speed of adjustment of manufactured exports and the resource movement effect. The results showed that the Nigerian manufacturing industry suffered from both direct and indirect de-industrialisation over the period. The findings also revealed that there was resource movement as labour moved away from the manufacturing sector to both the oil sector and the services sector. The study concluded that there was the presence of Dutch Disease in the manufacturing industry, and the problem of de-industrialisation led to the crowding out of manufacturing output. The study recommends that efforts should be made to diversify the Nigerian economy. Furthermore, a conducive business environment should be provided to encourage more involvement of the private sector in the agriculture and manufacturing sectors of the economy.
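
To make the estimation pipeline concrete, here is a minimal, hedged sketch of the first two steps named above (a unit-root test and the Engle-Granger cointegration test) using statsmodels. The ADF test is used as one common unit-root test since the abstract does not name a specific one, and the series are synthetic placeholders rather than the WDI/NBS/CBN data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint

# Placeholder annual series standing in for manufactured exports and an
# oil-sector driver; replace with the actual data before interpreting.
rng = np.random.default_rng(3)
n = 40
oil_output = np.cumsum(rng.normal(0.5, 1.0, n))            # I(1) driver
manuf_exports = 2.0 + 0.8 * oil_output + rng.normal(0, 0.5, n)
df = pd.DataFrame({"manuf_exports": manuf_exports, "oil_output": oil_output})

# 1) Unit-root (stationarity) test on each series in levels.
for col in df:
    stat, pval, *_ = adfuller(df[col])
    print(f"ADF {col}: stat={stat:.2f}, p={pval:.3f}")

# 2) Engle-Granger test for a long-run (cointegrating) relationship.
stat, pval, _ = coint(df["manuf_exports"], df["oil_output"])
print(f"Engle-Granger: stat={stat:.2f}, p={pval:.3f}")
```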

Keywords: Dutch disease, resource movement, manufacturing sector performance, Nigeria

Procedia PDF Downloads 79
367 Curvature Based-Methods for Automatic Coarse and Fine Registration in Dimensional Metrology

Authors: Rindra Rantoson, Hichem Nouira, Nabil Anwer, Charyar Mehdi-Souzani

Abstract:

Multiple measurements by means of various data acquisition systems are generally required to measure the shape of freeform workpieces for accuracy, reliability and completeness. The obtained data are aligned and fused into a common coordinate system within a registration technique involving coarse and fine registrations. Standardized iterative methods have been established for fine registration, such as Iterative Closest Points (ICP) and its variants. For coarse registration, no conventional method has been adopted yet, despite a significant number of techniques which have been developed in the literature to supply an automatic rough matching between data sets. Two main issues are addressed in this paper: the coarse registration and the fine registration. For coarse registration, two novel automated methods based on the exploitation of discrete curvatures are presented: an enhanced Hough Transformation (HT) and an improved RANSAC transformation. The use of curvature features in both methods aims to reduce computational cost. For fine registration, a new variant of the ICP method is proposed in order to reduce registration error using curvature parameters. A specific distance considering the curvature similarity has been combined with the Euclidean distance to define the distance criterion used for correspondence searching. Additionally, the objective function has been improved by combining the point-to-point (P-P) minimization and the point-to-plane (P-Pl) minimization with automatic weights. These weights are determined from the curvature features calculated beforehand at each point of the workpiece surface. The algorithms are applied on simulated data and on real data acquired with a computed tomography (CT) system. The obtained results reveal the benefit of the proposed novel curvature-based registration methods.
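
To illustrate the general idea of a curvature-aware correspondence search inside an ICP-style loop, the sketch below augments each point with a scaled curvature value before the nearest-neighbour query, so the matching distance combines Euclidean and curvature terms. It is a simplified stand-in, not the authors' algorithm: the weighting constant, the single point-to-point update, and the synthetic curvatures are all assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst (SVD/Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def curvature_icp_step(src, src_curv, dst, dst_curv, lam=0.5):
    """One ICP iteration where matching uses augmented points [x, y, z, lam*curv],
    i.e. squared matching distance = ||p - q||^2 + lam^2 * (curv_p - curv_q)^2
    (lam is an assumed curvature weight)."""
    tree = cKDTree(np.hstack([dst, lam * dst_curv[:, None]]))
    _, idx = tree.query(np.hstack([src, lam * src_curv[:, None]]))
    R, t = rigid_transform(src, dst[idx])
    return src @ R.T + t, R, t

# Toy example: a rotated/translated copy of a point set with fake curvatures.
rng = np.random.default_rng(4)
dst = rng.uniform(0, 1, (300, 3))
curv = rng.uniform(0, 1, 300)            # same curvatures for both clouds
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = dst @ Rz.T + np.array([0.05, -0.02, 0.01])
aligned, R, t = curvature_icp_step(src, curv, dst, curv, lam=0.3)
print("mean residual:", np.mean(np.linalg.norm(aligned - dst, axis=1)))
```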

Keywords: discrete curvature, RANSAC transformation, Hough transformation, coarse registration, ICP variant, point-to-point and point-to-plane minimization combination, computed tomography

Procedia PDF Downloads 424
366 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test

Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston

Abstract:

The Alinity i TBI test is Therapeutic Goods Administration (TGA) registered and is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate the capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild TBI (traumatic brain injury) with a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem. An estimated 69 million people globally experience a TBI annually. Blood-based biomarkers such as glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) have shown utility to predict acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study. Testing of the specimens was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of the 120 specimens had a positive TBI interpretation (Sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (Specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717, 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall, within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1, when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting the utility to assist in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range of GFAP and UCH-L1 and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
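
The headline figures above follow directly from the 2x2 counts reported in the abstract. The short sketch below reproduces them, with Wilson score intervals from statsmodels used as a stand-in for whatever interval method the study actually applied; the package and interval choice are assumptions.

```python
from statsmodels.stats.proportion import proportion_confint

def summarize(k, n, label):
    lo, hi = proportion_confint(k, n, alpha=0.05, method="wilson")
    print(f"{label}: {100*k/n:.1f}% (95% CI {100*lo:.1f}%, {100*hi:.1f}%)  [{k}/{n}]")

# Counts taken from the abstract: 1899 mild TBI subjects, 120 CT-positive
# (116 TBI-test positive) and 1779 CT-negative (713 TBI-test negative).
tp, fn = 116, 4          # CT-positive subjects: test positive / test negative
tn, fp = 713, 1066       # CT-negative subjects: test negative / test positive

summarize(tp, tp + fn, "Sensitivity")
summarize(tn, tn + fp, "Specificity")
summarize(tn, tn + fn, "NPV")   # negative predictive value = TN / (TN + FN)
```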

Keywords: biomarker, diagnostic, neurology, TBI

Procedia PDF Downloads 66
365 Spatial Variation of WRF Model Rainfall Prediction over Uganda

Authors: Isaac Mugume, Charles Basalirwa, Daniel Waiswa, Triphonia Ngailo

Abstract:

Rainfall is a major climatic parameter affecting many sectors such as health, agriculture and water resources. Its quantitative prediction remains a challenge to weather forecasters, although numerical weather prediction models are increasingly being used for rainfall prediction. The performance of six convective parameterization schemes, namely the Kain-Fritsch scheme, the Betts-Miller-Janjic scheme, the Grell-Devenyi scheme, the Grell-3D scheme, the Grell-Freitas scheme, and the New Tiedtke scheme of the Weather Research and Forecasting (WRF) model, regarding quantitative rainfall prediction over Uganda is investigated using the root mean square error for the March-May (MAM) 2013 season. The MAM 2013 seasonal rainfall amount ranged from 200 mm to 900 mm over Uganda, with the northern region receiving comparatively lower rainfall amounts (200–500 mm); western Uganda (270–550 mm); eastern Uganda (400–900 mm) and the Lake Victoria basin (400–650 mm). A spatial variation in the rainfall amount simulated by the different convective parameterization schemes was noted, with the Kain-Fritsch scheme overestimating the rainfall amount over northern Uganda (300–750 mm) but presenting comparable rainfall amounts over eastern Uganda (400–900 mm). The Betts-Miller-Janjic, Grell-Devenyi, and Grell-3D schemes underestimated the rainfall amount over most parts of the country, especially the eastern region (300–600 mm). The Grell-Freitas scheme captured the rainfall amount over the northern region (250–450 mm) but underestimated rainfall over the Lake Victoria basin (150–300 mm), while the New Tiedtke scheme generally underestimated the rainfall amount over many areas of Uganda. For deterministic rainfall prediction, the Grell-Freitas scheme is recommended for rainfall prediction over northern Uganda, while the Kain-Fritsch scheme is recommended over the eastern region.

Keywords: convective parameterization schemes, March-May 2013 rainfall season, spatial variation of parameterization schemes over Uganda, WRF model

Procedia PDF Downloads 311
364 Influence of Stress Relaxation and Hysteresis Effect for Pressure Garment Design

Authors: Chia-Wen Yeh, Ting-Sheng Lin, Chih-Han Chang

Abstract:

Pressure garments have been used to prevent and treat hypertrophic scars following serious burns since the 1970s. The use of pressure garments is believed to hasten the maturation process and decrease the height of scars. A pressure garment is custom made by reducing the circumferential measurement of the patient by 10%~20%, called the Reduction Factor. However, the exact reduction value used depends on the subjective judgment of the therapist and the feedback of patients throughout a trial-and-error process. The Laplace Law can be applied to calculate the pressure from the dimensions of the pressure garment, using the circumferential measurements of the patient and the tension profile of the fabric. However, the tension profile currently obtained neglects the stress relaxation and hysteresis effects present in most elastic fabrics. The purpose of this study was to investigate the influence of tension attenuation arising from the stress relaxation and hysteresis effects of the fabrics. Samples of pressure garments were obtained from the Sunshine Foundation Organization, a nonprofit organization for burn patients in Taiwan. The wall tension profile of the pressure garments was measured on a material testing system. Specimens were extended to 10% of the original length and held for 1 hour to allow the stress relaxation effect to take place. Then, specimens were extended to 15% of the original length for 10 seconds and reduced to 10% to simulate the donning movement and allow the hysteresis effect to take place. The load history was recorded. The stress relaxation effect is obvious from the load curves: the wall tension decreased by 8.5%~10% after 60 minutes of holding. The hysteresis effect is also obvious from the load curves: the wall tension increased slightly, then decreased by 1.5%~2.5%, ending lower than the stress relaxation results after 60 minutes of holding. The wall tension attenuation of the fabric exists due to the stress relaxation and hysteresis effects, and the influence of hysteresis is greater than that of stress relaxation. These effects should be considered in order to design and evaluate the pressure of pressure garments more accurately.
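
As a rough illustration of the Laplace-law calculation mentioned above, the sketch below converts a measured wall tension and a limb circumference into an interface pressure and shows how a percentage tension attenuation of the order reported here lowers it. All numeric values, and the 10% attenuation applied, are illustrative assumptions rather than the study's measurements.

```python
import math

def garment_pressure(limb_circumference_cm, tension_n_per_cm, attenuation=0.0):
    """Laplace-law estimate of interface pressure.

    P = T / r, with the limb radius r taken from the circumference and the
    fabric wall tension T (per unit garment width) optionally reduced by a
    fractional attenuation to account for stress relaxation / hysteresis.
    Returns pressure in mmHg; all numeric inputs here are illustrative."""
    r_m = (limb_circumference_cm / 100.0) / (2 * math.pi)        # radius in m
    t_eff = tension_n_per_cm * 100.0 * (1 - attenuation)         # N/m, attenuated
    pressure_pa = t_eff / r_m
    return pressure_pa / 133.322                                  # Pa -> mmHg

circ = 30.0          # limb circumference in cm (assumed)
tension = 1.5        # measured wall tension in N per cm of garment width (assumed)
print("nominal pressure:   %.1f mmHg" % garment_pressure(circ, tension))
print("after ~10%% losses: %.1f mmHg" % garment_pressure(circ, tension, attenuation=0.10))
```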

Keywords: hypertrophic scars, hysteresis, pressure garment, stress relaxation

Procedia PDF Downloads 512
363 Investigation of Scaling Laws for Stiffness and Strength in Bioinspired Glass Sponge Structures Produced by Fused Filament Fabrication

Authors: Hassan Beigi Rizi, Harold Auradou, Lamine Hattali

Abstract:

Various industries, including civil engineering, automotive, aerospace, and biomedical fields, are currently seeking novel and innovative high-performance lightweight materials to reduce energy consumption. Inspired by the structure of Euplectella aspergillum glass sponges (EA-sponge), 2D unit cells were created and fabricated using a Fused Filament Fabrication (FFF) process with polylactic acid (PLA) filaments. The stiffness and strength of bio-inspired EA-sponge lattices were investigated both experimentally and numerically under uniaxial tensile loading and compared to three standard square lattices with diagonal strut (Designs B and C) and non-diagonal strut (Design D) reinforcements. The aim is to establish predictive scaling-law models and examine the deformation mechanisms involved. The results indicated that, for the EA-sponge structure, the relative moduli and yield strength scaled linearly with relative density, suggesting that the deformation mechanism is stretching-dominated. The finite element analysis (FEA), with periodic boundary conditions for volumetric homogenization, confirms these trends and goes beyond the experimental limits imposed by the FFF printing process. Therefore, the stretching-dominated behavior, investigated from 0.1 to 0.5 relative density, demonstrates that the EA-sponge structure can be exploited for the realization of square lattice topologies that are stiff and strong and have attractive potential for lightweight structural applications. However, the FFF process introduces an accuracy limitation, with approximately 10% error, making it challenging to print structures with a relative density below 0.2. Future work could focus on exploring the impact of different printing materials on the performance of EA-sponge structures.
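
The scaling-law claim above (relative modulus and strength varying roughly linearly with relative density for a stretching-dominated lattice) can be checked by fitting the exponent of a power law on a log-log scale. The sketch below does this with NumPy on hypothetical data points; the values are invented for illustration, and the interpretation of the exponent follows the usual cellular-solids argument rather than a result of this paper.

```python
import numpy as np

# Hypothetical (relative density, relative modulus) pairs such as those
# obtained from tensile tests / FEA of the EA-sponge unit cells.
rel_density = np.array([0.10, 0.20, 0.30, 0.40, 0.50])
rel_modulus = np.array([0.035, 0.072, 0.110, 0.150, 0.185])

# Fit E/Es = C * (rho/rhos)^n  <=>  log E* = log C + n * log rho*
n, logC = np.polyfit(np.log(rel_density), np.log(rel_modulus), 1)
print(f"scaling exponent n = {n:.2f}, prefactor C = {np.exp(logC):.2f}")
# n close to 1 suggests stretching-dominated behaviour; a markedly higher
# exponent would point to bending-dominated behaviour instead.
```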

Keywords: bio-inspiration, lattice structures, fused filament fabrication, scaling laws

Procedia PDF Downloads 6
362 Exploring the Carer Gender Support Gap: Results from Freedom of Information Requests to Adult Social Services in England

Authors: Stephen Bahooshy

Abstract:

Our understanding of gender inequality has advanced in recent years. Differences in pay and societal gendered behaviour expectations have been emphasized. It is acknowledged globally that gender shapes everyone’s experiences of health and social care, including access to care, use of services and products, and the interaction with care providers. NHS Digital in England collects data from local authorities on the number of carers and people with support needs and the services they access. This data does not provide a gender breakdown. Caring can have many positive and negative impacts on carers’ health and wellbeing. For example, caring can improve physical health, provide a sense of pride and purpose, and reduce stress levels for those who undertake a caring role by choice. Negatives of caring include financial concerns, social isolation, a reduction in earnings, and not being recognized as a carer or involved and consulted by health and social care professionals. Treating male and female carers differently is by definition inequitable and precludes one gender from receiving the benefits of caring whilst potentially overburdening the other with the negatives of caring. In order to explore the issue on a preliminary basis, five local authorities who provide statutory adult social care services in England were sent Freedom of Information requests in 2019. The authorities were selected to include county councils and London boroughs. The authorities were asked to provide data on the amount of money spent on care at home packages for people over 65 years, broken down by gender and carer gender for each financial year between 2013 and 2019. Results indicated that in each financial year, female carers supporting someone over 65 years received less financial support for care at home packages than male carers. Over the six-year period, this difference equated to a £9.5k deficit in financial support received on average per female carer when compared to male carers. As an example, the London borough with the highest disparity presented an average weekly spend on care at home for people over 65 with a carer of £261.35 for male carers and £165.46 for female carers. Consequently, female carers in this borough received on average £95.89 less per week in care at home support than male carers. This highlights a real and potentially detrimental disparity in the care support received by female carers to help them continue to care in parts of England. More research should be undertaken in this area to better explore this issue and to understand if these findings are unique to these social care providers or part of a wider phenomenon. NHS Digital should request that local authorities collect data on gender in the same way that large employers in the United Kingdom are required by law to provide data on staff salaries by gender. People who allocate social care support packages should consider the impact of gender when allocating packages to people with support needs who have carers, in order to reduce any potential impact of gender bias on their decision-making.

Keywords: caregivers, carers, gender equality, social care

Procedia PDF Downloads 165
361 The Efficacy of Video Education to Improve Treatment or Illness-Related Knowledge in Patients with a Long-Term Physical Health Condition: A Systematic Review

Authors: Megan Glyde, Louise Dye, David Keane, Ed Sutherland

Abstract:

Background: Typically patient education is provided either verbally, in the form of written material, or with a multimedia-based tool such as videos, CD-ROMs, DVDs, or via the internet. By providing patients with effective educational tools, this can help to meet their information needs and subsequently empower these patients and allow them to participate within medical-decision making. Video education may have some distinct advantages compared to other modalities. For instance, whilst eHealth is emerging as a promising modality of patient education, an individual’s ability to access, read, and navigate through websites or online modules varies dramatically in relation to health literacy levels. Literacy levels may also limit patients’ ability to understand written education, whereas video education can be watched passively by patients and does not require high literacy skills. Other benefits of video education include that the same information is provided consistently to each patient, it can be a cost-effective method after the initial cost of producing the video, patients can choose to watch the videos by themselves or in the presence of others, and they can pause and re-watch videos to suit their needs. Health information videos are not only viewed by patients in formal educational sessions, but are increasingly being viewed on websites such as YouTube. Whilst there is a lot of anecdotal and sometimes misleading information on YouTube, videos from government organisations and professional associations contain trustworthy and high-quality information and could enable YouTube to become a powerful information dissemination platform for patients and carers. This systematic review will examine the efficacy of video education to improve treatment or illness-related knowledge in patients with various long-term conditions, in comparison to other modalities of education. Methods: Only studies which match the following criteria will be included: participants will have a long-term physical health condition, video education will aim to improve treatment or illness related knowledge and will be tested in isolation, and the study must be a randomised controlled trial. Knowledge will be the primary outcome measure, with modality preference, anxiety, and behaviour change as secondary measures. The searches have been conducted in the following databases: OVID Medline, OVID PsycInfo, OVID Embase, CENTRAL and ProQuest, and hand searching for relevant published and unpublished studies has also been carried out. Screening and data extraction will be conducted independently by 2 researchers. Included studies will be assessed for their risk of bias in accordance with Cochrane guidelines, and heterogeneity will also be assessed before deciding whether a meta-analysis is appropriate or not. Results and Conclusions: Appropriate synthesis of the studies in relation to each outcome measure will be reported, along with the conclusions and implications.

Keywords: long-term condition, patient education, systematic review, video

Procedia PDF Downloads 115
360 Effect of Deficit Irrigation on Barley Yield and Water Productivity through Field Experiment and Modeling at Koga Irrigation Scheme, Amhara Region, Ethiopia

Authors: Bekalu Melis Alehegn, Dagnenet Sultan Alemu

Abstract:

The insufficiency of water is the most severe constraint on the expansion of agriculture in arid and semi-arid areas. Deficit irrigation at different growth stages is an important strategy for increasing water productivity and advancing the yield of barley in water-scarce areas. A field experiment was conducted at the Koga irrigation scheme in Ethiopia to examine barley yield response to different irrigation regimes and validate the AquaCrop model. The experimental setup comprised six randomized treatments (T) with three replications for one irrigation season because of financial limitations. The irrigation regimes were 100%, 75%, and 50% of the gross irrigation requirement applied at different growth stages, selected by trial and error in order to identify the optimal water application level. The treatments were: no stress at all (T1), 25% stressed during all crop stages (T2), 50% stressed at all stages (T3), 50% stressed at the development stage (T4), 50% stressed at mid-stage (T5) and 50% stressed at the initial and late season (T6). The agronomic parameters, including canopy cover, biomass, and grain yield, were collected to compare the ground-based crop yield with the AquaCrop model. The results showed that stressing at the initial and late stages, and a 25% stress throughout the whole season, were suitable options for practising deficit irrigation without significant yield reduction. The highest (2.62 kg/m³) and the lowest (2.03 kg/m³) water productivity were found under T3 and T4, respectively. A 50% stress at the mid-growth stage and a 50% stress of the full irrigation water requirement at all growth stages significantly (α=5%) affected canopy expansion, biomass and yield production. The AquaCrop model performed well in simulating the yield of barley for most of the treatments (R² = 0.84 and RMSE = 0.7 t ha⁻¹).

Keywords: AquaCrop, barley, deficit irrigation, irrigation regimes, water productivity

Procedia PDF Downloads 26
359 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines

Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka

Abstract:

To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to deal with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. First, we introduce a criterion to be optimized, that aims at defining simultaneously the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, which is in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
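
To make the alternating idea concrete (learn the map from the currently imputed data, then re-impute the missing entries from each item's best-matching prototype), here is a heavily simplified NumPy sketch. It is not the missSOM package or its R implementation; the grid size, learning schedule, and toy data are assumptions made only to illustrate the alternating principle.

```python
import numpy as np

def miss_som(X, grid=(5, 5), epochs=30, lr=0.5, sigma=1.0, seed=0):
    """Simplified alternating scheme in the spirit of missSOM: learn Kohonen
    prototypes from the (initially imputed) data, then re-impute the missing
    entries of X (marked with NaN) from each item's best-matching prototype."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    miss = np.isnan(X)
    Xi = np.where(miss, np.nanmean(X, axis=0), X)          # start from column means
    proto = rng.normal(size=(grid[0] * grid[1], p))        # map prototypes
    gy, gx = np.divmod(np.arange(grid[0] * grid[1]), grid[1])

    for epoch in range(epochs):
        lr_t = lr * (1.0 - epoch / epochs)                  # decaying learning rate
        for i in rng.permutation(n):
            bmu = np.argmin(((proto - Xi[i]) ** 2).sum(1))  # best-matching unit
            d2 = (gy - gy[bmu]) ** 2 + (gx - gx[bmu]) ** 2  # grid distance to BMU
            h = np.exp(-d2 / (2 * sigma ** 2))[:, None]     # neighbourhood function
            proto += lr_t * h * (Xi[i] - proto)             # Kohonen update
        # Imputation step: replace missing entries by the BMU prototype values.
        bmus = np.argmin(((Xi[:, None, :] - proto[None]) ** 2).sum(2), axis=1)
        Xi[miss] = proto[bmus][miss]
    return Xi, proto

# Toy example: three clusters of "geometrical measurements" with 15% missing.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, (50, 4)) for c in (0.0, 2.0, 4.0)])
X[rng.random(X.shape) < 0.15] = np.nan
X_imputed, prototypes = miss_som(X)
print("remaining NaNs:", int(np.isnan(X_imputed).sum()))
```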

Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps

Procedia PDF Downloads 151
358 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Israel: Time Series Analysis, 1980-2010

Authors: Jinhoa Lee

Abstract:

The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of CO2 emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive case study at the country level using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), carbon dioxide (CO2) emissions and gross domestic product (GDP) for Israel using time series analysis over the period 1980-2010. To investigate the relationships between the variables, this paper employs the Phillips–Perron (PP) test for stationarity, the Johansen maximum likelihood method for cointegration and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables for the sample. The long-run equilibrium in the VECM suggests significant positive impacts of coal and natural gas consumption on GDP in Israel. In the short run, GDP positively affects coal consumption. While there exists a positive unidirectional causality running from coal consumption to the consumption of petroleum products and the direct combustion of crude oil, there exists a negative unidirectional causality running from natural gas consumption to the consumption of petroleum products and the direct combustion of crude oil in the short run. Overall, the results support arguments that there are relationships among environmental quality, energy use and economic output, but the associations can differ by the source of energy in the case of Israel over the period 1980-2010.
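
A minimal sketch of the cointegration-rank and VECM stages of such an analysis with statsmodels is shown below. The Phillips-Perron test is omitted (it is not in statsmodels; the separate arch package provides one), the deterministic-term and lag choices are assumptions, and the series are synthetic placeholders for the 1980-2010 data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Placeholder annual series (31 observations, 1980-2010) standing in for GDP,
# CO2 emissions and one energy source; replace with the actual data.
rng = np.random.default_rng(5)
n = 31
trend = np.cumsum(rng.normal(0.3, 1.0, n))
data = pd.DataFrame({
    "gdp":  10 + trend + rng.normal(0, 0.3, n),
    "co2":   5 + 0.6 * trend + rng.normal(0, 0.3, n),
    "coal":  3 + 0.4 * trend + rng.normal(0, 0.3, n),
})

# Johansen test for the number of cointegrating relations (constant term,
# one lagged difference); compare trace statistics with 5% critical values.
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace stats:", np.round(jres.lr1, 2))
print("5% critical:", np.round(jres.cvt[:, 1], 2))

# VECM with one cointegrating relation: alpha holds the short-run adjustment
# speeds, beta the long-run equilibrium coefficients.
vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print("adjustment coefficients (alpha):\n", vecm.alpha)
print("long-run coefficients (beta):\n", vecm.beta)
```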

Keywords: CO2 emissions, energy consumption, GDP, Israel, time series analysis

Procedia PDF Downloads 651
357 Odor-Color Association Stroop-Task and the Importance of an Odorant in an Odor-Imagery Task

Authors: Jonathan Ham, Christopher Koch

Abstract:

There are consistently observed associations between certain odors and colors, and there is an association between the ability to imagine vivid visual objects and the ability to imagine vivid odors. However, little has been done to investigate how the associations between odors and visual information affect visual processes. This study seeks to understand the relationship between odor imaging, color associations, and visual attention by utilizing a Stroop task based on common odor-color associations. This Stroop task was designed using three fruits with distinct odors that are associated with the color of the fruit: lime with green, strawberry with red, and lemon with yellow. Each possible word-color combination was presented in the experimental trials. When the word matched the associated color (lime written in green), it was considered congruent; if it did not, it was considered incongruent (lime written in red or yellow). In Experiment I (n = 34), participants were asked to both imagine the odor of the fruit on the screen and identify which fruit it was, and each word-color combination was presented 20 times (a total of 180 trials, with 60 congruent and 120 incongruent instances). The response time and error rate of the participant responses were recorded. There was no significant difference in either measure between the congruent and incongruent trials. In Experiment II, participants (n = 18) followed the same procedure as in the previous experiment, with the addition of an odorant in the room. The odorant (orange) was not one of the fruits or colors used in the experimental trials. With a fruit-based odorant in the room, the response times (measured in milliseconds) between congruent and incongruent trials were significantly different, with incongruent trials (M = 755.919, SD = 239.854) having significantly longer response times than congruent trials (M = 690.626, SD = 198.822), t(17) = 4.154, p < 0.01. This suggests that odor imagery does affect visual attention to colors and the ability to inhibit odor-color associations; however, odor imagery is difficult and appears to be facilitated in the presence of a related odorant.
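
The comparison reported for Experiment II is a within-subject contrast of mean response times, which corresponds to a paired t-test with 17 degrees of freedom. The sketch below runs such a test with SciPy on hypothetical per-participant means; the arrays are invented, and only the general design (18 participants, one congruent and one incongruent mean each) follows the abstract.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant mean RTs (ms) for 18 participants; each
# participant contributes one congruent and one incongruent mean, so the
# comparison is a paired (within-subject) t-test.
rng = np.random.default_rng(6)
congruent = rng.normal(690, 200, 18)
incongruent = congruent + rng.normal(65, 40, 18)   # built-in congruency cost

t, p = stats.ttest_rel(incongruent, congruent)
print(f"t(17) = {t:.3f}, p = {p:.4f}")
print(f"mean difference = {np.mean(incongruent - congruent):.1f} ms")
```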

Keywords: odor-color associations, odor imagery, visual attention, inhibition

Procedia PDF Downloads 176
356 Examining the Development of Complexity, Accuracy and Fluency in L2 Learners' Writing after L2 Instruction

Authors: Khaled Barkaoui

Abstract:

Research on second-language (L2) learning tends to focus on comparing students with different levels of proficiency at one point in time. However, to understand L2 development, we need more longitudinal research. In this study, we adopt a longitudinal approach to examine changes in three indicators of L2 ability, complexity, accuracy, and fluency (CAF), as reflected in the writing of L2 learners on different tasks before and after a period of L2 instruction. Each of 85 Chinese learners of English at three levels of English language proficiency responded to two writing tasks (independent and integrated) before and after nine months of English-language study in China. Each essay (N = 276) was analyzed in terms of numerous CAF indices using both computer coding and human rating: number of words written, number of errors per 100 words, ratings of error severity, global syntactic complexity (MLS), complexity by coordination (T/S), complexity by subordination (C/T), clausal complexity (MLC), phrasal complexity (NP density), syntactic variety, lexical density, lexical variation, lexical sophistication, and lexical bundles. Results were then compared statistically across tasks, L2 proficiency levels, and time. Overall, task type had significant effects on fluency, some syntactic complexity indices (complexity by coordination, structural variety, clausal complexity, phrase complexity), and lexical density, sophistication, and bundles, but not accuracy. L2 proficiency had significant effects on fluency, accuracy, and lexical variation, but not syntactic complexity. Finally, fluency, frequency of errors (but not accuracy ratings), syntactic complexity indices (clausal complexity, global complexity, complexity by subordination, phrase complexity, structural variety) and lexical complexity (lexical density, variation, and sophistication) exhibited significant changes after instruction, particularly for the independent task. We discuss the findings and their implications for assessment, instruction, and research on CAF in the context of L2 writing.

Keywords: second language writing, fluency, accuracy, complexity, longitudinal

Procedia PDF Downloads 153
355 Modelling of Exothermic Reactions during Carbon Fibre Manufacturing and Coupling to Surrounding Airflow

Authors: Musa Akdere, Gunnar Seide, Thomas Gries

Abstract:

Carbon fibres are fibrous materials with a carbon atom content of more than 90%. They combine excellent mechanical properties with a very low density. Thus, carbon fibre reinforced plastics (CFRP) are very often used in lightweight design and construction. The precursor material is usually polyacrylonitrile (PAN) based and wet-spun. During the production of carbon fibre, the precursor has to be stabilized thermally to withstand the high temperatures of up to 1500 °C which occur during carbonization. Even though carbon fibre has been used since the late 1970s in aerospace applications, there is still no general method available to find the optimal production parameters, and the trial-and-error approach is most often the only recourse. To gain better insight into the process, the chemical reactions during stabilization have to be analyzed in detail. Therefore, a model of the chemical reactions (cyclization, dehydration, and oxidation) based on the research of Dunham and Edie has been developed. With the presented model, it is possible to perform a complete simulation of the fibre undergoing all zones of stabilization. The fibre bundle is modelled as several circular fibres with a layer of air in between. Two thermal mechanisms are considered to be the most important: the exothermic reactions inside the fibre and the convective heat transfer between the fibre and the air. The exothermic reactions inside the fibres are modelled as a heat source. Differential scanning calorimetry measurements have been performed to estimate the heat of the reactions. To shorten the required time of a simulation, the number of fibres is decreased by similitude theory. Experiments were conducted to validate the simulation results of the fibre temperature during stabilization. The validation experiments were conducted on a pilot-scale stabilization oven. To measure the fibre bundle temperature, a new measuring method was developed. The comparison of the results shows that the developed simulation model gives good approximations for the temperature profile of the fibre bundle during the stabilization process.
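
To illustrate the coupling of the two mechanisms named above (an internal exothermic heat source and convective exchange with the surrounding air), here is a lumped-parameter sketch integrated with SciPy. It deliberately replaces the Dunham-Edie reaction model with a single generic Arrhenius conversion term, and every parameter value is an illustrative assumption rather than a value from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lumped energy balance for a fibre bundle element:
#   m*cp*dT/dt = h*A*(T_air - T) + dH * dX/dt * m     (convection + exotherm)
#   dX/dt      = k0 * exp(-Ea / (R*T)) * (1 - X)      (generic conversion term)
# X is the degree of stabilization; every value below is illustrative.
m, cp = 1e-3, 1500.0           # bundle mass (kg), heat capacity (J/(kg K))
h, A = 100.0, 2e-2             # heat transfer coeff. (W/(m2 K)), surface (m2)
dH = 1.0e6                     # heat released by the reactions (J/kg)
k0, Ea, R = 1.0e8, 1.0e5, 8.314
T_air = 250.0 + 273.15         # oven air temperature (K)

def rhs(t, y):
    T, X = y
    rate = k0 * np.exp(-Ea / (R * T)) * (1.0 - X)              # reaction progress
    dTdt = (h * A * (T_air - T) + dH * rate * m) / (m * cp)    # energy balance
    return [dTdt, rate]

sol = solve_ivp(rhs, (0.0, 1800.0), [298.15, 0.0], max_step=1.0)
print("peak fibre bundle temperature: %.1f K" % sol.y[0].max())
print("final degree of stabilization: %.2f" % sol.y[1][-1])
```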

Keywords: carbon fibre, coupled simulation, exothermic reactions, fibre-air-interface

Procedia PDF Downloads 273
354 Corruption, Institutional Quality and Economic Growth in Nigeria

Authors: Ogunlana Olarewaju Fatai, Kelani Fatai Adeshina

Abstract:

The interplay of corruption and institutional quality determines how effectively and efficiently an economy progresses, and sound institutional quality is a key requirement for economic stability. Institutional quality is often used interchangeably with governance, which has allowed governance indicators to serve as proxies for institutional quality. Weak institutions penalize economic growth, and defective institutional quality breeds corruption. Corruption is a hydra-headed phenomenon that manifests in different forms. Its most celebrated definition is "the use or abuse of public office for private benefits or gains"; it also denotes an arrangement between two parties to determine and allocate state resources for pecuniary benefit, circumventing state efficiency. This study employed a Barro (1990)-type augmented model to analyze the nexus among corruption, institutional quality, and economic growth in Nigeria using annual time-series data spanning the period 1996-2019. Within the analytical framework of the Johansen cointegration technique, an error correction mechanism (ECM), and Granger causality tests, the findings revealed a long-run relationship between economic growth, corruption, and the selected measures of institutional quality. The long-run results suggested that all measures of institutional quality except voice & accountability and regulatory quality are positively related to economic growth. The short-run estimation reconciled the divergent "sand the wheels" and "grease the wheels" views of corruption and growth. In addition, regulatory quality and the rule of law indicated a negative influence on economic growth in Nigeria, while government effectiveness and voice & accountability indicated a positive influence. The Granger causality tests suggested one-way causality between GDP and corruption and also between corruption and institutional quality. The policy implications point to checking corruption and streamlining the institutional quality framework for better and sustained economic development.
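A hedged sketch of the testing pipeline described above is given below, using the Johansen cointegration and Granger causality routines in statsmodels on synthetic series; the variables and their dynamics are placeholders, not the study's data.

```python
# Minimal sketch (synthetic data): Johansen cointegration and Granger causality tests.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 24  # annual observations, 1996-2019
trend = np.cumsum(rng.normal(size=n))            # shared stochastic trend
df = pd.DataFrame({
    "log_gdp": trend + rng.normal(scale=0.3, size=n),
    "corruption_index": 0.5 * trend + rng.normal(scale=0.3, size=n),
    "rule_of_law": -0.4 * trend + rng.normal(scale=0.3, size=n),
})

# Johansen test: trace statistics vs. critical values give the cointegration rank
jres = coint_johansen(df, det_order=0, k_ar_diff=1)
print("trace statistics:   ", jres.lr1)
print("95% critical values:", jres.cvt[:, 1])

# Granger causality: does lagged corruption help predict GDP (in first differences)?
gc = grangercausalitytests(df[["log_gdp", "corruption_index"]].diff().dropna(), maxlag=2)
print("p-value (lag 2, ssr F-test):", gc[2][0]["ssr_ftest"][1])
```

The trace statistics are compared against their 95% critical values to choose the cointegration rank, and the Granger test asks whether lagged corruption improves the prediction of GDP growth.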

Keywords: institutional quality, corruption, economic growth, public policy

Procedia PDF Downloads 170
353 Modern Seismic Design Approach for Buildings with Hysteretic Dampers

Authors: Vanessa A. Segovia, Sonia E. Ruiz

Abstract:

The use of energy dissipation systems for seismic applications has increased worldwide, so it is necessary to develop practical and modern criteria for their optimal design. Here, a direct displacement-based seismic design approach for frame buildings with hysteretic energy dissipation systems (HEDS) is applied. The building consists of two individual structural systems: 1) a main elastic structural frame designed for service loads, and 2) a secondary system, corresponding to the HEDS, that controls the effects of lateral loads. The procedure controls two design parameters: A) the stiffness ratio (α = K_frame / K_(total system)) and B) the strength ratio (γ = V_damper / V_(total system)). The proposed damage-controlled approach contributes to a more sustainable and resilient building because the structural damage is concentrated in the HEDS. The design displacement spectrum is reduced by means of a recently published damping factor for elastic structural systems with HEDS located in Mexico City. Two limit states are verified: serviceability and near collapse. Instead of the traditional trial-and-error approach, a procedure is proposed that allows the designer to establish the preliminary sizes of the structural elements of both systems. The design methodology is applied to an 8-story steel building with buckling-restrained braces located in the soft soil of Mexico City. With the aim of choosing the optimal design parameters, a parametric study is developed considering different values of α and γ. The simplified methodology is intended for preliminary sizing, design, and evaluation of the effectiveness of HEDS, and it constitutes a modern and practical tool that enables the structural designer to select the best design parameters.
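A minimal sketch of the two design ratios is shown below, sweeping hypothetical frame and damper stiffness and strength values; the numbers are placeholders, not values from the 8-story design example.

```python
# Minimal sketch (hypothetical values): the two design parameters alpha and gamma.
import numpy as np

def design_ratios(k_frame, k_damper, v_frame, v_damper):
    """Stiffness ratio alpha = K_frame / K_total and strength ratio
    gamma = V_damper / V_total for a dual frame-plus-damper system."""
    alpha = k_frame / (k_frame + k_damper)
    gamma = v_damper / (v_frame + v_damper)
    return alpha, gamma

# Parametric sweep over candidate damper sizes (illustrative units: kN/mm and kN)
k_frame, v_frame = 120.0, 4200.0
for k_damper in np.linspace(40.0, 240.0, 6):
    v_damper = 35.0 * k_damper          # assumed proportionality, for the sketch only
    a, g = design_ratios(k_frame, k_damper, v_frame, v_damper)
    print(f"K_damper = {k_damper:6.1f}   alpha = {a:.2f}   gamma = {g:.2f}")
```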

Keywords: damage-controlled buildings, direct displacement-based seismic design, optimal hysteretic energy dissipation systems, hysteretic dampers

Procedia PDF Downloads 483
352 Middle School as a Developmental Context for Emergent Citizenship

Authors: Casta Guillaume, Robert Jagers, Deborah Rivas-Drake

Abstract:

Civically engaged youth are critical to maintaining and/or improving the functioning of local, national, and global communities and their institutions. The present study investigated how school climate and academic beliefs (academic self-efficacy and school belonging) may inform emergent civic behaviors (emergent citizenship) among self-identified middle school youth of color (African American, Multiracial or Mixed, Latino, Asian American or Pacific Islander, Native American, and other). The study aims were: 1) to understand whether and how school climate is associated with civic engagement behaviors, directly and indirectly, by fostering a positive sense of connection to the school and/or engendering feelings of self-efficacy in the academic domain; accordingly, 2) we examined the association of youths' sense of school connection and academic self-efficacy with their personally responsible and participatory civic behaviors in school and community contexts, both concurrently and longitudinally. Data from two subsamples of a larger study of social/emotional development among middle school students were used for longitudinal and cross-sectional analyses. The cross-sectional sample included 324 6th-8th grade students, of whom 43% identified as African American, 20% as Multiracial or Mixed, 18% as Latino, 12% as Asian American or Pacific Islander, 6% as Other, and 1% as Native American. Ages ranged from 11 to 15 (M = 12.33, SD = .97). For the longitudinal test of our mediation model, we drew on data from the 6th and 7th grade cohorts only (n = 232); the ethnic and racial diversity of this longitudinal subsample was virtually identical to that of the cross-sectional sample. For both the cross-sectional and longitudinal analyses, full information maximum likelihood was used to handle missing data. Fit indices were inspected to determine whether they met the recommended thresholds of RMSEA below .05 and CFI and TLI values of at least .90. To determine whether particular mediation pathways were significant, the bias-corrected bootstrap confidence intervals for each indirect pathway were inspected. Fit indices for the latent variable mediation model using the cross-sectional data suggest that the hypothesized model fit the observed data well (CFI = .93; TLI = .92; RMSEA = .05, 90% CI = [.04, .06]). In the model, students' perceptions of school climate were significantly and positively associated with greater feelings of school connectedness, which were in turn significantly and positively associated with civic engagement. In addition, school climate was significantly and positively associated with greater academic self-efficacy, but academic self-efficacy was not significantly associated with civic engagement. Tests of mediation indicated one significant indirect pathway between school climate and civic engagement behavior: an indirect association via sense of school connectedness, indirect association estimate = .17 [95% CI: .08, .32]. This indirect association via school connectedness accounted for 50% (.17/.34) of the total effect. Partial support was found for the prediction that students' perceptions of a positive school climate are linked to civic engagement in part through their role in students' sense of connection to school.
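A simplified, hedged illustration of the indirect-effect logic is given below: a percentile bootstrap of a*b on synthetic data using plain regressions, whereas the study estimated a latent-variable model with bias-corrected bootstrap confidence intervals.

```python
# Minimal sketch (synthetic data): bootstrap of an indirect (mediation) effect,
# analogous to climate -> connectedness -> civic engagement.
import numpy as np

rng = np.random.default_rng(1)
n = 324
climate = rng.normal(size=n)
connect = 0.5 * climate + rng.normal(scale=0.8, size=n)                 # path a
civic = 0.4 * connect + 0.1 * climate + rng.normal(scale=0.8, size=n)   # paths b and c'

def indirect_effect(x, m, y):
    """a*b from two regressions: m on x, then y on m controlling for x."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)          # resample cases with replacement
    boot.append(indirect_effect(climate[idx], connect[idx], civic[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(climate, connect, civic):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```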

Keywords: civic engagement, early adolescence, school climate, school belonging, developmental niche

Procedia PDF Downloads 370
351 Effect of Climate Change on the Genomics of Invasiveness of the Whitefly Bemisia tabaci Species Complex by Estimating the Effective Population Size via a Coalescent Method

Authors: Samia Elfekih, Wee Tek Tay, Karl Gordon, Paul De Barro

Abstract:

Invasive species represent an increasing threat to food biosecurity, causing significant economic losses in agricultural systems. An example is the sweet potato whitefly, Bemisia tabaci, a complex of morphologically indistinguishable species causing average annual global damage estimated at US$2.4 billion. The Bemisia complex represents an interesting model for evolutionary studies because of its extensive distribution and potential for invasiveness and population expansion. Within this complex, two species, Middle East-Asia Minor 1 (MEAM1) and Mediterranean (MED), have invaded well beyond their home ranges, whereas others, such as Indian Ocean (IO) and Australia (AUS), have not. In order to understand why some Bemisia species have become invasive, genome-wide sequence scans were used to estimate population dynamics over time and relate these to climate. The Bayesian Skyline Plot (BSP) method, as implemented in BEAST, was used to infer the historical effective population size. To overcome sampling bias, populations were combined based on geographical origin. The datasets used for this analysis are genome-wide SNPs (single nucleotide polymorphisms) called separately in each of the following groups: Sub-Saharan Africa (Burkina Faso), Europe (Spain, France, Greece, and Croatia), USA (Arizona), Mediterranean-Middle East (Israel, Italy), Middle East-Central Asia (Turkmenistan, Iran), and Reunion Island. The non-invasive AUS species, endemic to Australia, was used as an outgroup. The main findings show that the BSP for the Sub-Saharan African MED population differs from that observed in MED populations from the Mediterranean Basin, suggesting evolution under a different set of environmental conditions. For MED, the effective size of the African (Burkina Faso) population showed a rapid expansion ≈250,000-310,000 years ago (YA), preceded by a period of slower growth. The European MED populations (Spain, France, Croatia, and Greece) showed a single burst of expansion at ≈160,000-200,000 YA. The MEAM1 populations from Israel and Italy and those from Iran and Turkmenistan are similar, as both show the earlier expansion at ≈250,000-300,000 YA. The single IO population lacked the later expansion but had the earlier one, a pattern shared with the Sub-Saharan African (Burkina Faso) MED population, suggesting that IO faced a similar history of environmental change, which seems plausible given their relatively close geographical distributions. In conclusion, populations within the invasive species MED and MEAM1 exhibited signatures of population expansion during the Pleistocene, a geological epoch marked by repeated climatic oscillations with cycles of glacial and interglacial periods, that were lacking in the non-invasive species (IO and AUS). These expansions strongly suggest that the genomes of some Bemisia species shape their adaptability and invasiveness.
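For readers unfamiliar with SNP-based demographic inference, the sketch below shows a far cruder summary than the Bayesian Skyline Plot used in the study: a single Watterson's-theta point estimate of effective population size from toy SNP counts, included only to illustrate the link between segregating sites and Ne. None of the numbers come from the study.

```python
# Minimal sketch (toy numbers): Watterson's theta and a point estimate of Ne.
S = 120_000       # segregating SNP sites observed in the sample        (assumed)
L = 300_000_000   # callable genome length in bp                        (assumed)
n = 20            # number of sampled chromosomes                       (assumed)
mu = 3e-9         # per-site, per-generation mutation rate              (assumed)

a_n = sum(1.0 / i for i in range(1, n))   # harmonic number a_{n-1}
theta_w = S / (a_n * L)                   # Watterson's theta per site
Ne = theta_w / (4 * mu)                   # theta = 4 * Ne * mu
print(f"theta_W = {theta_w:.2e} per site, Ne ≈ {Ne:,.0f}")
```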

Keywords: whitefly, RADseq, invasive species, SNP, climate change

Procedia PDF Downloads 126
350 Sider Bee Honey: Antitumor Effect in Some Experimental Tumor Cell Lines

Authors: Aliaa M. Issa, Mahmoud N. ElRouby, Sahar A. S. Ahmad, Mahmoud M. El-Merzabani

Abstract:

Sider honey is a type of honey produced by bees feeding on the nectar of the Sider tree, Ziziphus spina-christi (L.) Desf. Honey is an effective agent for preventing, inhibiting, and treating the growth of human and animal cancer cell lines in vitro and in vivo. The aim of the present study was to evaluate the impact of different dilutions of crude Sider honey and different exposure durations on the growth of six tumor cell lines (human cervical cancer cell line, HeLa; human hepatocellular carcinoma cell line, HepG-2; human larynx carcinoma cell line, Hep-2; brain tumor cell line, U251) as well as one animal cancer cell line (Ehrlich ascites carcinoma cell line, EAC) and one normal human cell line (WISH, CCL-25). Different concentrations of and treatment durations with Sider honey were tested on the growth of these cell lines. Histopathological changes in the tumor masses, animal survival, and apoptosis and necrosis of the cancer cell lines (using flow cytometry) were evaluated. Sider honey was administered either into the tumor mass itself by intratumoral injection or via drinking water. One-way ANOVA was used to analyze the means ± standard error of the optical density obtained from the ELISA reader and the flow cytometry data. The study revealed that different concentrations of Sider honey affected the growth patterns and histopathological changes of all the studied cancer cell lines, depending on the nature of the cell line and the concentration of honey used. The relative survival percentage of animals bearing Ehrlich ascites carcinoma (EAC) cells increased proportionally with the honey concentration used. The study of apoptosis and necrosis using flow cytometry corroborated the viability results. In conclusion, Sider honey was effective as an antitumor agent at the concentrations used.
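A minimal sketch of the statistical comparison described above is shown below: a one-way ANOVA across honey-concentration groups on fabricated optical-density readings that merely stand in for the ELISA viability data.

```python
# Minimal sketch (fabricated OD readings, illustration only): one-way ANOVA
# across a control and three honey-concentration groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(1.00, 0.05, 6)
honey_10 = rng.normal(0.85, 0.05, 6)
honey_25 = rng.normal(0.65, 0.05, 6)
honey_50 = rng.normal(0.45, 0.05, 6)

f_stat, p_value = stats.f_oneway(control, honey_10, honey_25, honey_50)
print(f"F = {f_stat:.2f}, p = {p_value:.3g}")
for name, grp in [("control", control), ("10%", honey_10),
                  ("25%", honey_25), ("50%", honey_50)]:
    se = grp.std(ddof=1) / np.sqrt(len(grp))
    print(f"{name:>7}: mean OD = {grp.mean():.3f} ± {se:.3f} (SE)")
```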

Keywords: antitumor, honey, sider, tumor cell lines

Procedia PDF Downloads 537
349 Dynamic Modeling of Advanced Wastewater Treatment Plants Using BioWin

Authors: Komal Rathore, Aydin Sunol, Gita Iranipour, Luke Mulford

Abstract:

Advanced wastewater treatment plants have complex biological kinetics, time-variant influent flow rates, and long processing times. These factors complicate the modeling and operational control of advanced wastewater treatment plants. However, a robust model of such plants has become necessary in order to increase plant efficiency, reduce energy costs, and meet the discharge limits set by the government. A dynamic model was developed in BioWin, the simulation platform from EnviroSim (Canada), for several wastewater treatment plants in Hillsborough County, Florida. Control strategies for parameters such as mixed liquor suspended solids, recycle activated sludge, and waste activated sludge were developed so that the models matched plant performance. The models were tuned using both influent and effluent data from the plants and their laboratories. Plant SCADA data were used to predict the influent wastewater flow rates and concentration profiles as a function of time. The kinetic parameters were tuned based on sensitivity analysis and trial-and-error methods. The dynamic models were validated using experimental data for influent and effluent parameters, and dissolved oxygen measurements were used to validate the models by coupling them with Computational Fluid Dynamics (CFD) models. The BioWin models closely reproduced plant performance and predicted effluent behavior over extended periods. The models are useful for plant engineers and operators, who can make decisions in advance by predicting plant performance with them. One of the important findings from the model was the effect of recycle and wastage ratios on the mixed liquor suspended solids. The model was also useful in determining the significant kinetic parameters for biological wastewater treatment systems.
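As a hedged illustration of what a dynamic plant model integrates, the sketch below solves a simplified Monod-kinetics substrate/biomass balance for a completely mixed aeration basin; it is not BioWin's proprietary biological model, and all parameters are assumed rather than plant-calibrated.

```python
# Minimal sketch (simplified Monod kinetics, illustrative parameters only).
from scipy.integrate import solve_ivp

mu_max, Ks, Y, kd = 4.0, 20.0, 0.6, 0.1   # 1/d, mg/L, mg VSS / mg BOD, 1/d  (assumed)
V, Q, S_in = 4000.0, 8000.0, 200.0        # basin volume m^3, flow m^3/d, influent BOD mg/L
SRT = 10.0                                # solids retention time set by sludge wastage, d

def activated_sludge(t, y):
    S, X = y                                   # substrate (mg/L), biomass MLVSS (mg/L)
    mu = mu_max * S / (Ks + S)                 # Monod growth rate, 1/d
    dS = Q / V * (S_in - S) - mu * X / Y       # substrate balance (hydraulic dilution)
    dX = (mu - kd) * X - X / SRT               # biomass balance (wastage sets the SRT)
    return [dS, dX]

sol = solve_ivp(activated_sludge, [0.0, 10.0], [S_in, 1500.0], method="BDF")
print(f"after 10 d: effluent BOD ≈ {sol.y[0, -1]:.1f} mg/L, MLVSS ≈ {sol.y[1, -1]:.0f} mg/L")
```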

Keywords: BioWin, kinetic modeling, flowsheet simulation, dynamic modeling

Procedia PDF Downloads 154
348 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach

Authors: Jianli Jiang, Bai-Chen Xie

Abstract:

The climate issue has aroused global concern. Achieving sustainable development is a good path for countries to mitigate environmental and climatic pressures, although there are many difficulties, and the first step is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO2, SO2, and NOx emissions, and evaluating the environmental efficiency (EE) of power systems is a prerequisite for alleviating energy and environmental pressures. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. Classic DEA treats the decision-making units (DMUs) as independent, which neglects the interactions between DMUs. Ignoring these inter-regional links may introduce systematic bias into the efficiency analysis; for instance, the renewable power generated in a certain region may benefit adjacent regions, while its SO2 and CO2 emissions act in the opposite way. This study proposes a spatial network DEA (SNDEA) with a slack measure that captures the spatial spillover effects of inputs/outputs among DMUs. This approach is used to study the EE of China's power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO2 output, and the renewable energy (RE) power variables are tested for significant spatial spillover effects. Compared with the classic network DEA, the SNDEA results show an obvious difference, as tested by the global Moran's I index. From a dynamic perspective, the EE of the power system surges from 2015 and then declines sharply from 2019, mirroring the trend of the power transmission department. The surge benefits from the market-oriented reform of the Chinese power grid enacted in 2015, while the rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the Covid-19 epidemic, which severely hindered economic development. The EE of the power generation department shows an overall declining trend, which is reasonable once RE power is taken into account: the installed capacity of RE power in 2020 is 4.40 times that of 2014, while power generation is only 3.97 times higher; in other words, power generation per unit of installed capacity shrank. In addition, the cost of consuming renewable power increases rapidly with the growth of RE power generation. These two aspects explain the declining EE of the power generation department. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method within the DEA framework that sheds light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields, such as industry- and country-level studies.
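For context, the sketch below solves the classic input-oriented CCR DEA model as a linear program on toy data; the spatial-spillover and network structure that define the proposed SNDEA are deliberately not included.

```python
# Minimal sketch (toy data): classic input-oriented CCR DEA via linear programming.
import numpy as np
from scipy.optimize import linprog

# Hypothetical regions: two inputs (energy, patents), one output (electricity delivered)
X = np.array([[100.0, 20.0], [80.0, 35.0], [120.0, 15.0], [90.0, 25.0]])  # inputs
Y = np.array([[300.0], [320.0], [280.0], [310.0]])                        # outputs

def ccr_efficiency(o, X, Y):
    """Efficiency of DMU o: minimise theta over variables [theta, lambda_1..lambda_n]."""
    n_dmu = X.shape[0]
    c = np.r_[1.0, np.zeros(n_dmu)]
    A_inputs = np.c_[-X[o], X.T]                   # sum_j lam_j x_ij <= theta * x_io
    A_outputs = np.c_[np.zeros(Y.shape[1]), -Y.T]  # sum_j lam_j y_rj >= y_ro
    A_ub = np.vstack([A_inputs, A_outputs])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
    bounds = [(None, None)] + [(0.0, None)] * n_dmu
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(X.shape[0]):
    print(f"region {o}: efficiency = {ccr_efficiency(o, X, Y):.3f}")
```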

Keywords: spatial network DEA, environmental efficiency, sustainable development, power system

Procedia PDF Downloads 108
347 Improving Lane Detection for Autonomous Vehicles Using Deep Transfer Learning

Authors: Richard O’Riordan, Saritha Unnikrishnan

Abstract:

Autonomous Vehicles (AVs) are incorporating an increasing number of ADAS features, including automated lane-keeping systems. In recent years, many papers on lane detection algorithms have been published, ranging from classical computer vision techniques to deep learning methods. The progression from the lower levels of autonomy defined in the SAE framework to higher autonomy levels requires increasingly complex models and algorithms that must be highly reliable in their operation and functionality; such algorithms have no room for error when operating at high levels of autonomy. The existing research details computer vision and deep learning algorithms, their methodologies and individual results, as well as the challenges they face, the resources they need to operate, and their shortcomings when detecting lanes in certain weather and lighting conditions. This paper explores these shortcomings and implements a lane detection algorithm that could be used to improve AV lane detection systems. A pre-trained LaneNet model is used to classify lane and non-lane pixels via binary segmentation as the base detection method, first on the existing BDD100k dataset and then on a custom dataset generated locally. The custom dataset covers two road networks: modern, well-laid roads with up-to-date infrastructure and lane markings, and an older network whose infrastructure and lane markings reflect its age. The performance of the proposed method is evaluated on the custom dataset and compared against its performance on the BDD100k dataset. In summary, this paper uses transfer learning to provide a fast and robust lane detection algorithm that can handle various road conditions and provide accurate lane detection.
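A minimal, hedged sketch of the transfer-learning recipe follows: start from a pre-trained segmentation backbone, replace the head with a single-channel lane/non-lane output, freeze the backbone, and fine-tune on new data. A generic torchvision FCN stands in for LaneNet here, and the random tensors stand in for BDD100k or custom-road batches.

```python
# Minimal sketch (PyTorch, illustrative only): transfer learning for binary lane segmentation.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(weights="DEFAULT")                  # downloads pre-trained weights
model.classifier[4] = nn.Conv2d(512, 1, kernel_size=1)   # single-channel lane-mask head

for p in model.backbone.parameters():                    # freeze backbone: transfer learning
    p.requires_grad = False

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)

def train_step(images, lane_masks):
    """images: (B, 3, H, W) floats; lane_masks: (B, 1, H, W) binary floats."""
    model.train()
    logits = model(images)["out"]                        # (B, 1, H, W) raw scores
    loss = criterion(logits, lane_masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for real image/mask batches
images = torch.rand(2, 3, 256, 512)
masks = (torch.rand(2, 1, 256, 512) > 0.9).float()
print(train_step(images, masks))
```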

Keywords: ADAS, autonomous vehicles, deep learning, LaneNet, lane detection

Procedia PDF Downloads 104