Search results for: systematic literature reviews
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8419

979 Measuring the Effect of a Music Therapy Intervention in a Neonatal Intensive Care Unit in Spain

Authors: Pablo González Álvarez, Anna Vinaixa Vergés, Paula Sol Ventura, Paula Fernández, Mercè Redorta, Gemma Ginovart Galiana, Maria Méndez Hernández

Abstract:

Context: The use of music therapy is gaining popularity worldwide, and it has shown positive effects in neonatology. Hospital Germans Trias i Pujol has recently established a music therapy unit and initiated a project in its neonatal intensive care unit (NICU). Research Aim: The aim of this study is to measure the effect of a music therapy intervention in the NICU of Hospital Germans Trias i Pujol in Spain. Methodology: The study will be an observational analytical case-control study. All newborns admitted to the neonatology unit, both term and preterm, and their parents will be offered a session of music therapy. Data will be collected from families who receive at least two music therapy sessions. Maternal and paternal anxiety levels will be measured through a pre- and post-intervention test. Findings: The study aims to demonstrate the benefits and acceptance of music therapy by patients, parents, and healthcare workers in the neonatal unit. The findings are expected to show a reduction in maternal and paternal anxiety levels following the music therapy sessions. Theoretical Importance: This study contributes to the growing body of literature on the effectiveness of music therapy in neonatal care. It will provide evidence of the acceptance and potential benefits of music therapy in reducing anxiety levels in both parents and babies in the NICU setting. Data Collection: Data will be collected from families who receive at least two music therapy sessions. This will include pre- and post-intervention test results to measure anxiety levels. Analysis Procedures: The collected data will be analyzed using appropriate statistical methods to determine the impact of music therapy on reducing anxiety levels in parents. Questions Addressed: - What is the effect of music therapy on maternal anxiety levels? - What is the effect of music therapy on paternal anxiety levels? - What are the acceptability and perceived benefits of music therapy among patients and healthcare workers in the NICU? Conclusion: The study aims to provide evidence supporting the value of music therapy in the neonatal intensive care unit. It seeks to demonstrate the positive effect of music therapy on reducing anxiety levels among parents.

Keywords: neonatology, music therapy, neonatal intensive care unit, babies, parents

Procedia PDF Downloads 38
978 The Effect of Ambient Temperature on the Performance of the Simple and Modified Cycle Gas Turbine Plants

Authors: Ogbe E. E., Ossia. C. V., Saturday. E. G., Ezekwe M. C.

Abstract:

The disparity in power output between a simple and a modified gas turbine plant is noticeable when the gas turbine functions under local environmental conditions that deviate from the standard ISO specifications. Extensive research and literature have demonstrated a well-known direct correlation between ambient temperature and the power output of a gas turbine plant. In this study, the Omotosho gas turbine plant was modified into three different configurations. The reason for the modification is to improve its performance and reduce the fuel consumption and emission rate. Aspen HYSYS software was used to simulate both the simple (Omotosho) and the three modified gas turbine plants. The input parameters considered include ambient temperature, air mass flow rate, fuel mass flow rate, water mass flow rate, turbine inlet temperature, compressor efficiency, and turbine efficiency, while the output parameters considered are thermal efficiency, specific fuel consumption, heat rate, emission rate, compressor power, turbine power and power output. The three modified gas turbine power plants incorporate an inlet air cooling system and a heat recovery steam generator. The variations between the modifications are due to additional components or enhancements alongside the inlet air cooling system and heat recovery steam generator: the first modification has an additional turbine, the second modification has an additional combustion chamber, and the third modification has an additional turbine and combustion chamber. This paper clearly shows the effects of ambient temperature on both the simple and the three modified gas turbine plants. For every 10 K increase in ambient temperature, there is an approximate reduction in power output of 3977 kW, 4795 kW, 4681 kW, and 4793 kW for the simple gas turbine and the first, second, and third modifications, respectively. Also, for every 10 K increase in temperature, there is a thermal efficiency decrease of 1.22%, 1.45%, 1.43%, and 1.44% for the simple gas turbine and the first, second, and third modifications, respectively. A low ambient temperature will also help save fuel, which matters given the currently high price of fuel in Nigeria: for every 10 K increase in temperature, there is a specific fuel consumption increase of 0.0074 kg/kWh, 0.0051 kg/kWh, 0.0061 kg/kWh, and 0.0057 kg/kWh for the simple gas turbine and the first, second, and third modifications, respectively. These findings will aid in accurately evaluating local power generating plants, particularly in hotter regions, for installing gas turbine inlet air cooling (GTIAC) systems.
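
As an illustration of how these sensitivities can be applied, the sketch below linearly extrapolates plant performance from a baseline at the ISO reference temperature; the per-10 K coefficients are those reported above, while the baseline values in the example call are hypothetical placeholders, not results from the study.

```python
# Illustrative sketch: linear sensitivity of gas turbine performance to ambient
# temperature, using the per-10 K decrements reported in the abstract.
# Baseline outputs at the reference temperature are hypothetical placeholders.

REFERENCE_T = 288.15  # K, ISO reference ambient temperature

# Per 10 K rise in ambient temperature (from the abstract):
#   power output drop [kW], thermal efficiency drop [%-points], SFC rise [kg/kWh]
SENSITIVITY = {
    "simple":        {"dP": 3977, "d_eta": 1.22, "d_sfc": 0.0074},
    "modification1": {"dP": 4795, "d_eta": 1.45, "d_sfc": 0.0051},
    "modification2": {"dP": 4681, "d_eta": 1.43, "d_sfc": 0.0061},
    "modification3": {"dP": 4793, "d_eta": 1.44, "d_sfc": 0.0057},
}

def estimate(plant, ambient_T, base_power_kw, base_eta_pct, base_sfc):
    """Linearly extrapolate performance from a baseline defined at REFERENCE_T."""
    steps = (ambient_T - REFERENCE_T) / 10.0  # number of 10 K increments
    s = SENSITIVITY[plant]
    return {
        "power_kW": base_power_kw - s["dP"] * steps,
        "thermal_eff_pct": base_eta_pct - s["d_eta"] * steps,
        "sfc_kg_per_kWh": base_sfc + s["d_sfc"] * steps,
    }

# Example with placeholder baselines (not values from the study):
print(estimate("simple", ambient_T=308.15, base_power_kw=120_000,
               base_eta_pct=33.0, base_sfc=0.210))
```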

Keywords: Aspen HYSYS software, Brayton cycle, modified gas turbine, power plant, simple gas turbine, thermal efficiency

Procedia PDF Downloads 15
977 A Study for Area-level Mosquito Abundance Prediction by Using Supervised Machine Learning Point-level Predictor

Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes

Abstract:

In the literature, data-driven approaches for mosquito abundance prediction rely on supervised machine learning models trained with historical in-situ measurements. The drawback of this approach is that once the model is trained on point-level (specific x,y coordinates) measurements, its predictions again refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early warning and mitigation applications need predictions at an area level, such as a municipality or village. In this study, we apply a data-driven predictive model, which relies on public, open satellite Earth Observation and geospatial data and is trained with historical point-level in-situ measurements of mosquito abundance. We then propose a methodology to extend the information from a point-level predictive model to a broader area-level prediction. Our methodology relies on random spatial sampling of the area of interest (similar to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making the point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze it in order to understand which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level prediction and to provide qualitative insights regarding the expected performance of the area-level prediction. We applied our methodology to historical data (of Culex pipiens) from two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent. The mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on performance, whereas the absolute number of sampling points is not informative about performance unless the size of the area is taken into account. Additionally, we saw that the distance between the sampling points and the real in-situ measurements used for training did not strongly affect the performance.
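
A minimal sketch of the point-to-area aggregation described above is given below; the point-level predictor, the hard-core spacing, and the rectangular area bounds are hypothetical placeholders standing in for the trained model and the real areas of interest.

```python
import random

def point_level_predict(lon, lat):
    """Hypothetical placeholder for a trained point-level abundance model that
    would internally fetch EO/geomorphological features for (lon, lat)."""
    return 10.0 + 5.0 * random.random()

def area_level_abundance(bounds, n_samples=200, min_dist=0.005, seed=42):
    """Estimate the mean mosquito abundance of an area by random spatial sampling
    (hard-core style: a point is rejected if it lies closer than min_dist to an
    accepted sample) and averaging the point-level predictions."""
    rng = random.Random(seed)
    lon_min, lat_min, lon_max, lat_max = bounds
    accepted, attempts = [], 0
    while len(accepted) < n_samples and attempts < 100_000:
        attempts += 1
        p = (rng.uniform(lon_min, lon_max), rng.uniform(lat_min, lat_max))
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= min_dist ** 2 for q in accepted):
            accepted.append(p)
    preds = [point_level_predict(lon, lat) for lon, lat in accepted]
    return sum(preds) / len(preds)

# Illustrative rectangular area (placeholder coordinates):
print(area_level_abundance(bounds=(22.9, 40.5, 23.1, 40.7)))
```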

Keywords: mosquito abundance, supervised machine learning, Culex pipiens, spatial sampling, West Nile virus, Earth observation data

Procedia PDF Downloads 138
976 Demographic Determinants of Spatial Patterns of Urban Crime

Authors: Natalia Sypion-Dutkowska

Abstract:

The main research objective of the paper is to discover the relationship between the age groups of residents and crime in particular districts of a large city. The basic analytical tool is specific crime rates, calculated not in relation to the total population but for age groups that are in different social situations (property, housing, work) and represent different generations with different behavior patterns. These are the communities from which both offenders and victims of crime come. The analysis of the literature and national police reports gives rise to hypotheses about the ability of a given age group to generate crime, both as a source of offenders and as a group of victims. These specific indicators are spatially differentiated, which makes it possible to detect socio-demographic determinants of spatial patterns of urban crime. A multi-feature classification of districts was also carried out, in which the specific crime rates are the diagnostic features. In this way, areas with a similar structure of socio-demographic determinants of spatial patterns of urban crime were designated. The case study is the city of Szczecin in Poland. It has about 400,000 inhabitants, and its area is about 300 sq km. Szczecin is located in the immediate vicinity of Germany and is the economic, academic and cultural capital of the region. It also has a seaport and an airport. Moreover, according to ESPON 2007, Szczecin is a Transnational and National Functional Urban Area. Szczecin is divided into 37 districts - auxiliary administrative units of the municipal government. The population of each of them in 2015-17 was divided into 8 age groups: babies (0-2 yrs.), children (3-11 yrs.), teens (12-17 yrs.), younger adults (18-30 yrs.), middle-age adults (31-45 yrs.), older adults (46-65 yrs.), early older (66-80 yrs.) and late older (from 81 yrs.). The crimes reported in 2015-17 in each of the districts were divided into 10 groups: fights and beatings, other theft, car theft, robbery offenses, burglary into an apartment, break-in into a commercial facility, car break-in, break-in into other facilities, drug offenses, and property damage. In total, 80 specific crime rates were calculated for each of the districts. The analysis was carried out on an intra-city scale, which is a novel approach, as this type of analysis is usually carried out at the national or regional level. Another innovative research approach is the use of specific crime rates in relation to age groups instead of standard crime rates. Acknowledgments: This research was funded by the National Science Centre, Poland, registration number 2019/35/D/HS4/02942.
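
As a small illustration of the specific-rate construction described above, the sketch below computes crime rates per 1,000 residents of each age group rather than per total population; all counts are hypothetical, not Szczecin data.

```python
# Illustrative sketch: age-group-specific crime rates for one district.
# All counts are hypothetical placeholders, not actual Szczecin data.

age_group_population = {        # residents of the district by age group
    "teens (12-17)": 1800,
    "younger adults (18-30)": 5200,
    "older adults (46-65)": 7400,
}

crimes_by_group = {             # offenses attributed to each group (as offenders or victims)
    "teens (12-17)": 27,
    "younger adults (18-30)": 96,
    "older adults (46-65)": 41,
}

def specific_rate(crimes, population, per=1000):
    """Crime rate per `per` residents of a given age group."""
    return per * crimes / population

for group, pop in age_group_population.items():
    rate = specific_rate(crimes_by_group[group], pop)
    print(f"{group}: {rate:.1f} crimes per 1,000 residents of the group")
```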

Keywords: age groups, determinants of crime, spatial crime pattern, urban crime

Procedia PDF Downloads 166
975 Safer Staff: A Survey of Staff Experiences of Violence and Aggression at Work in Coventry and Warwickshire Partnership National Health Service Trust

Authors: Rupinder Kaler, Faith Ndebele, Nadia Saleem, Hafsa Sheikh

Abstract:

Background: Workplace-related violence and aggression seem to be considered an acceptable occupational hazard for staff in mental health services. There is literature evidence that healthcare workers in mental health settings are at higher risk of aggression from patients. Aggressive behaviours pose a physical and psychological threat to psychiatric staff and can result in stress, burnout, sickness, and exhaustion. Further evidence indicates that health professionals are among the most exposed to psychological disorders such as anxiety, depression and post-traumatic stress disorder. Fear that results from working in a dangerous environment, together with exhaustion, can have a damaging impact on patient care and the healthcare relationship. Aim: The aim of this study is to investigate the prevalence and impact of aggressive behaviour on staff working at Coventry and Warwickshire Partnership Trust (CWPT). Methodology: The study methodology included a manual, anonymised, multi-disciplinary cross-sectional survey questionnaire administered to all clinical and non-clinical staff at CWPT from both inpatient and community settings. Findings: The unsurprising finding was a higher prevalence of aggressive behaviours in inpatient settings in comparison to community settings. Conclusion: There is a high rate of verbal and physical aggression at work, and this has a negative impact on staff emotional and physical well-being. There is also a higher reliance on colleagues for support on an informal basis than on formal organisational support systems. Recommendations: A workforce that is well and functioning is the biggest resource for an organisation. Staff safety during working hours is everyone's responsibility and sits with both individual staff members and the organisation. The authors recommend the development of preventative and practical protocols for aggression, with patient and carer involvement. Post-incident organisational support needs to be consolidated, and hands-on, timely support offered to help maintain emotionally well staff at CWPT.

Keywords: safer staff, survey of staff experiences, violence and aggression, mental health

Procedia PDF Downloads 196
974 On the Dwindling Supply of the Observable Cosmic Microwave Background Radiation

Authors: Jia-Chao Wang

Abstract:

The cosmic microwave background radiation (CMB) freed during the recombination era can be considered as a photon source of short duration; a one-time event that happened everywhere in the universe simultaneously. If space is divided into concentric shells centered at an observer’s location, one can imagine that the CMB photons originating from the nearby shells reach and pass the observer first, and those in shells farther away follow as time goes forward. In the Big Bang model, space expands rapidly in a time-dependent manner as described by the scale factor. This expansion results in an event horizon coincident with one of the shells, and its radius can be calculated using cosmological calculators available online. Using Planck 2015 results, its value during the recombination era at cosmological time t = 0.379 million years (My) is calculated to be Revent = 56.95 million light-years (Mly). The event horizon sets a boundary beyond which the freed CMB photons will never reach the observer. The photons within the event horizon also exhibit a peculiar behavior. Calculated results show that the CMB observed today was freed in a shell located 41.8 Mly away (inside the boundary set by Revent) at t = 0.379 My. These photons traveled 13.8 billion years (Gy) to reach here. Similarly, the CMB reaching the observer at t = 1, 5, 10, 20, 40, 60, 80, 100 and 120 Gy is calculated to originate from shells at R = 16.98, 29.96, 37.79, 46.47, 53.66, 55.91, 56.62, 56.85 and 56.92 Mly, respectively. The results show that as time goes by, the R value approaches Revent = 56.95 Mly but never exceeds it, consistent with the earlier statement that beyond Revent the freed CMB photons will never reach the observer. The difference Revent - R can be used as a measure of the remaining observable CMB photons. Its value becomes smaller and smaller as R approaches Revent, indicating a dwindling supply of the observable CMB radiation. In this paper, detailed dwindling effects near the event horizon are analyzed with the help of online cosmological calculators based on the lambda cold dark matter (ΛCDM) model. It is demonstrated in the literature that if the CMB is assumed to be a blackbody at recombination (about 3000 K), then it will remain so over time under cosmological redshift and homogeneous expansion of space, but with the temperature lowered (2.725 K now). The present result suggests that the observable CMB photon density, besides changing with space expansion, can also be affected by the dwindling supply associated with the event horizon. This raises the question of whether the blackbody spectrum of the CMB at recombination can remain so over time. Being able to explain the blackbody nature of the observed CMB is an important part of the success of the Big Bang model. The present results cast some doubts on that and suggest that the model may have an additional challenge to deal with.
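
For reference, the two proper distances discussed above can be written compactly in the standard ΛCDM form, with a(t) the scale factor and c the speed of light:

$$ R_{\mathrm{event}}(t) \;=\; a(t)\int_{t}^{\infty}\frac{c\,dt'}{a(t')}, \qquad R(t_e;\,t_r) \;=\; a(t_e)\int_{t_e}^{t_r}\frac{c\,dt'}{a(t')}, $$

where R(t_e; t_r) is the proper radius, at emission time t_e, of the shell whose photons arrive at reception time t_r. As t_r tends to infinity, R(t_e; t_r) approaches R_event(t_e), which is the limiting behaviour (R approaching Revent = 56.95 Mly) tabulated above.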

Keywords: blackbody of CMB, CMB radiation, dwindling supply of CMB, event horizon

Procedia PDF Downloads 114
973 Multiscale Modelling of Textile Reinforced Concrete: A Literature Review

Authors: Anicet Dansou

Abstract:

Textile reinforced concrete (TRC) is increasingly used nowadays in various fields, in particular civil engineering, where it is mainly used for the reinforcement of damaged reinforced concrete structures. TRC is a composite material composed of multi- or uni-axial textile reinforcements coupled with a fine-grained cementitious matrix. The TRC composite is an alternative solution to the traditional Fiber Reinforced Polymer (FRP) composite. It has good mechanical performance and better temperature stability, and it also makes it possible to better meet the criteria of sustainable development. TRCs are highly anisotropic composite materials with nonlinear hardening behavior; their macroscopic behavior depends on multi-scale mechanisms. The characterization of these materials through numerical simulation has been the subject of many studies. Since TRCs are multiscale materials by definition, numerical multi-scale approaches have emerged as one of the most suitable methods for the simulation of TRCs. They aim to incorporate information pertaining to microscale constituent behavior, mesoscale behavior, and macro-scale structural response within a unified model that enables rapid simulation of structures. The computational costs are hence significantly reduced compared to standard simulation at a fine scale. The fine-scale information can be implicitly introduced in the macro-scale model: approaches of this type are called non-classical. A representative volume element is defined, and the fine-scale information is homogenized over it. Analytical and computational homogenization and nested mesh methods belong to these approaches. On the other hand, in classical approaches, the fine-scale information is explicitly introduced in the macro-scale model. Such approaches include adaptive mesh refinement strategies, sub-modelling, domain decomposition, and multigrid methods. This research presents the main principles of numerical multiscale approaches. Advantages and limitations are identified according to several criteria: the assumptions made (fidelity), the number of input parameters required, the calculation costs (efficiency), etc. A bibliographic study of recent results and advances and of the scientific obstacles to be overcome in order to achieve an effective simulation of textile reinforced concrete in civil engineering is presented. A comparative study is further carried out between several methods for the simulation of TRCs used for the structural reinforcement of reinforced concrete structures.
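
For reference, the first-order homogenization alluded to above replaces the fine-scale fields by their averages over a representative volume element V; in its standard form (not specific to any single TRC study) the macroscopic stress and strain and the Hill-Mandel scale-transition condition read

$$ \bar{\boldsymbol{\sigma}} = \frac{1}{|V|}\int_{V}\boldsymbol{\sigma}\,dV, \qquad \bar{\boldsymbol{\varepsilon}} = \frac{1}{|V|}\int_{V}\boldsymbol{\varepsilon}\,dV, \qquad \bar{\boldsymbol{\sigma}}:\bar{\boldsymbol{\varepsilon}} = \frac{1}{|V|}\int_{V}\boldsymbol{\sigma}:\boldsymbol{\varepsilon}\,dV, $$

the last relation being the macro-homogeneity condition that any scale-transition scheme (analytical or computational) is expected to satisfy.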

Keywords: composites structures, multiscale methods, numerical modeling, textile reinforced concrete

Procedia PDF Downloads 101
972 The Implementation of a Nurse-Driven Palliative Care Trigger Tool

Authors: Sawyer Spurry

Abstract:

Problem: Palliative care providers at an academic medical center in Maryland stated that medical intensive care unit (MICU) patients are often referred late in their hospital stay. The MICU has performed well below the hospital quality performance metric requiring that 80% of patients who expire with expected outcomes receive a palliative care consult within 48 hours of admission. Purpose: The purpose of this quality improvement (QI) project is to increase palliative care utilization in the MICU through the implementation of a Nurse-Driven Palliative Care Trigger Tool to prompt the need for specialty palliative care consultation. Methods: MICU nursing staff and providers received education concerning the implications of underused palliative care services and the literature supporting the use of nurse-driven palliative care tools as a means of increasing utilization of palliative care. A MICU population-specific set of palliative trigger criteria (the Palliative Care Trigger Tool) was formulated by the QI implementation team, the palliative care team, and the patient care services department. Nursing staff were asked to assess patients daily for the presence of palliative triggers using the Palliative Care Trigger Tool and to present findings during bedside rounds. MICU providers were asked to consult palliative medicine, given the presence of palliative triggers, following interdisciplinary rounds. Rates of palliative consult, given the presence of triggers, were collected via an electronic medical record data pull, de-identified, and recorded in the data collection tool. Preliminary Results: Over 140 MICU registered nurses were educated on the palliative trigger initiative, along with 8 nurse practitioners, 4 intensivists, 2 pulmonary critical care fellows, and 2 palliative medicine physicians. Over 200 patients were admitted to the MICU and screened for palliative triggers during the 15-week implementation period. Primary outcomes showed an increase in palliative care consult rates for patients presenting with triggers, a decreased mean time from admission to palliative consult, and increased recognition of unmet palliative care needs by MICU nurses and providers. Conclusions: The anticipated findings of this QI project suggest a positive correlation between utilizing palliative care trigger criteria and decreased time to palliative care consult. The direct outcomes of effective palliative care include decreased length of stay, healthcare costs, and moral distress, as well as improved symptom management and quality of life (QOL).

Keywords: palliative care, nursing, quality improvement, trigger tool

Procedia PDF Downloads 181
971 Incentive-Based Motivation to Network with Coworkers: Strengthening Professional Networks via Online Social Networks

Authors: Jung Lee

Abstract:

The last decade has witnessed more people than ever before using social media and broadening their social circles. Social media users connect not only with their friends but also with professional acquaintances, primarily coworkers and clients; personal and professional social circles are mixed within the same social media platform. Considering the positive role of social media in facilitating communication and mutual understanding between individuals, we infer that social media interactions with coworkers could indeed benefit one’s professional life. However, given privacy issues, sharing all personal details with one’s coworkers is not necessarily the best practice. Should one connect with coworkers via social media? Will social media connections with coworkers eventually benefit one’s long-term career? Will the benefit differ across cultures? To answer these questions, this study examines how social media can contribute to organizational communication by tracing the foundation of user motivation based on social capital theory, leader-member exchange (LMX) theory, and expectancy theory of motivation. Although social media was originally designed for personal communication, users have shown intentions to extend social media use to professional communication, especially when a proper incentive is expected. To articulate the user motivation and the mechanism of the incentive expectation scheme, this study applies those three theories and identifies six antecedents and three moderators of social media use motivation, including social network flaunt, shared interest, and perceived social inclusion. It also hypothesizes that the moderating effects of those constructs would differ significantly based on the relationship hierarchy among the workers. To validate the model, this study conducted a survey of 329 active social media users with acceptable levels of job experience. The analysis results confirm the specific roles of the three moderators in social media adoption for organizational communication. The present study contributes to the literature by developing a theoretical model of ambivalent employee perceptions about establishing social media connections with coworkers. This framework shows not only how both positive and negative expectations of social media connections with coworkers are formed based on expectancy theory of motivation, but also how such expectations lead to behavioral intentions through the career success model. It also enhances understanding of how various relationships among employees can be influenced through social media use and how such usage can potentially affect both performance and careers. Finally, it shows how cultural factors induced by social media use can influence relations among coworkers.

Keywords: the social network, workplace, social capital, motivation

Procedia PDF Downloads 115
970 Administrative Supervision of Local Authorities’ Activities in Selected European Countries

Authors: Alina Murtishcheva

Abstract:

The development of an effective system of administrative supervision is a prerequisite for the functioning of local self-government on the basis of the rule of law. Administrative supervision of local self-government is of particular importance in the EU countries due to the influence of integration processes. The central authorities act on the international level; however, subnational authorities also have to implement European legislation in order to strengthen integration. Therefore, the central authority, being the connecting link between supranational and subnational authorities, should bear responsibility, including financial responsibility, for possible mistakes of subnational authorities. Consequently, the state should have sufficient mechanisms of control over local and regional authorities in order to correct their mistakes. At the same time, these control mechanisms do not negate the autonomy of local self-government. The paper analyses models of administrative supervision of local self-government in Ukraine, Poland, Lithuania, Belgium, Great Britain, Italy, and France. The research methods used in this paper are theoretical methods of analysis of scientific literature, constitutions, legal acts, reports of the Congress of Local and Regional Authorities of the Council of Europe, and constitutional court decisions, as well as comparative and logical analysis. The legislative basis of administrative supervision was scrutinized, and the models of administrative supervision were classified, including a priori control, ex-post control, and their combination. The advantages and disadvantages of these models of administrative supervision are analysed. Compliance with Article 8 of the European Charter of Local Self-Government is of great importance for countries achieving common goals and sharing common values. However, the countries under study have problems and, in some cases, demonstrate non-compliance with the provisions of Article 8. Instances of non-conformity, such as the endorsement of a mayor by the Flemish Government in Belgium, supervision with a view to expediency in Great Britain, and the tendency to overuse supervisory power in Poland, are analysed. On the basis of this research, the tendencies of administrative supervision of local authorities’ activities in the selected European countries are described. Several recommendations are formulated for Ukraine as a country that has been granted EU candidate status. Having emphasised its willingness to become a member of the European community, Ukraine should not only follow the best European practices but also avoid the mistakes of countries that have long-term experience in developing the local self-government institution. This project has received funding from the Research Council of Lithuania (LMTLT), agreement № P-PD-22-194.

Keywords: administrative supervision, decentralisation, legality, local authorities, local self-government

Procedia PDF Downloads 56
969 Structural Equation Modelling Based Approach to Integrate Customers and Suppliers with Internal Practices for Lean Manufacturing Implementation in the Indian Context

Authors: Protik Basu, Indranil Ghosh, Pranab K. Dan

Abstract:

Lean management is an integrated socio-technical system for bringing about a competitive state in an organization. The purpose of this paper is to explore and integrate the role of customers and suppliers with the internal practices of the Indian manufacturing industries towards successful implementation of lean manufacturing (LM). An extensive literature survey is carried out. An attempt is made to build an exhaustive list of all the input manifests related to customers, suppliers and internal practices necessary for LM implementation, coupled with a similar exhaustive list of the benefits accrued from its successful implementation. A structural model is thus conceptualized, which is empirically validated based on data from the Indian manufacturing sector. With the current impetus on developing the industrial sector, the Government of India recently introduced the Lean Manufacturing Competitiveness Scheme, which aims to increase competitiveness with the help of lean concepts. There is huge scope to enrich the Indian industries with the lean benefits, the implementation status being quite low. Hardly any survey-based empirical study in India has been found to integrate customers and suppliers with the internal processes towards successful LM implementation. This empirical research is thus carried out in the Indian manufacturing industries. The basic steps of the research methodology are the identification of input and output manifest variables and latent constructs, model proposition and hypotheses development, development of the survey instrument, sampling and data collection, and model validation (exploratory factor analysis, confirmatory factor analysis, and structural equation modeling). The analysis reveals six key input constructs and three output constructs, indicating that these constructs should act in unison to maximize the benefits of implementing lean. The structural model presented in this paper may be treated as a guide to integrating customers and suppliers with internal practices to successfully implement lean. Integrating customers and suppliers with internal practices into a unified, coherent manufacturing system will lead to an optimum utilization of resources. This work is one of the first survey-based empirical analyses of the role of customers, suppliers and internal practices in the Indian manufacturing sector towards an effective lean implementation.
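
As a minimal illustration of the final validation step (structural equation modelling of latent input and output constructs), the sketch below assumes the semopy package and uses hypothetical construct and item names; it is not the authors' model specification.

```python
# Minimal SEM sketch (assumes the `semopy` package; construct and item
# names below are hypothetical, not the study's actual manifests).
import pandas as pd
import semopy

# Measurement model: latent constructs measured by survey items.
# Structural model: input constructs acting on the output construct.
model_desc = """
CustomerMgmt =~ c1 + c2 + c3
SupplierMgmt =~ s1 + s2 + s3
LeanBenefits =~ b1 + b2 + b3
LeanBenefits ~ CustomerMgmt + SupplierMgmt
"""

data = pd.read_csv("lean_survey_responses.csv")  # hypothetical survey data file

model = semopy.Model(model_desc)
model.fit(data)          # estimate factor loadings and path coefficients
print(model.inspect())   # parameter estimates, standard errors, p-values
```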

Keywords: customer management, internal manufacturing practices, lean benefits, lean implementation, lean manufacturing, structural model, supplier management

Procedia PDF Downloads 171
968 Phytoremediation Alternative for Landfill Leachate Sludges Doña Juana Bogotá D.C. Colombia Treatment

Authors: Pinzón Uribe Luis Felipe, Chávez Porras Álvaro, Ruge Castellanos Liliana Constanza

Abstract:

According to global data, solid waste management receives little economic investment in underdeveloped countries, the main limiting factors being insufficient knowledge of the advanced technologies required for proper operation and, at the same time, limited technical development. It has been evidenced that communities have a distorted perception of the role of legalized final destinations for waste, or landfills, and of their specific management; this perception is influenced primarily by the physical characteristics of these sites, by the information that the media provide about them, and by their mistaken association with open dumps. One of the major problems at these landfills is the management of leachate sludge from treatment plants, as this sludge has a highly contaminating composition (physical, chemical and biological) for the natural environment when handled and disposed of improperly. This is the case of the Doña Juana Landfill (RSDJ), Bogotá, Colombia, considered among the largest in South America, where management problems have persisted for decades since its creation, shaping the concept that society has acquired of this form of waste disposal and of improper leachate handling. Within this research, phytoremediation alternatives for treatment were determined using plants that are able to degrade the heavy metals contained in the sludge, allowing the resulting sludge to be used as a seal in the final landfill cover within a restoration process, and providing an option to solve the landscape contamination problem as well as to improve the communities' perception of, and reduce the conflicts generated by, the landfill. For the project, chemical assays were performed on the leachate sludge that allowed the characterization of metals such as chromium (Cr), lead (Pb), arsenic (As) and mercury (Hg), in order to compare the amounts in the biosolids with the provisions of USEPA 40 CFR 503. The evaluations showed concentrations of 102.2 mg/kg of Cr, 0.49 mg/kg of Pb, 0.390 mg/kg of As and 0.104 mg/kg of Hg, all lower than the standards. A literature review on native plant species suitable for an alternative phytoremediation process capable of degrading these metals was developed. It is concluded that, among them, Vetiveria zizanioides, Eichhornia crassipes and Limnobium laevigatum, owing to their hyperaccumulating characteristics in leaves, stems and roots, may allow the reduction of these toxic elements in the environment, improving the outlook for disposal.
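
The comparison against regulatory thresholds reduces to a simple per-metal check, sketched below; the measured values are those quoted above, while the limit values in the dictionary are placeholders and should be replaced by the ceiling concentrations actually listed in USEPA 40 CFR 503, Table 1.

```python
# Illustrative sketch: compare measured sludge concentrations (mg/kg, dry weight)
# against regulatory ceilings. Measured values are from the abstract; the limits
# below are PLACEHOLDERS -- substitute the actual 40 CFR 503 Table 1 values.

measured_mg_per_kg = {"Cr": 102.2, "Pb": 0.49, "As": 0.390, "Hg": 0.104}
ceiling_mg_per_kg = {"Cr": 1000.0, "Pb": 1000.0, "As": 100.0, "Hg": 50.0}  # placeholders

for metal, value in measured_mg_per_kg.items():
    limit = ceiling_mg_per_kg[metal]
    status = "below" if value < limit else "ABOVE"
    print(f"{metal}: {value} mg/kg is {status} the assumed ceiling of {limit} mg/kg")
```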

Keywords: health, landfill leachate sludge, heavy metals, phytoremediation

Procedia PDF Downloads 319
967 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation

Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim

Abstract:

In this article, a portfolio optimization problem is addressed in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate for an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure stability of the SCR. Some optimizations have already been performed in the literature, simplifying the standard formula into a quadratic function. But to our knowledge, it is the first time that the standard formula of the market SCR is used directly in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm, to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement, compared to a classical Markowitz approach based on historical volatility. A comparative analysis of different optimization models (equal-risk-contribution portfolio, minimum-volatility portfolio and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, demonstrating the interest of having a portfolio construction approach that can incorporate such features. The present results are further explained by the market SCR modelling.
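
The aggregation underlying the market SCR can be written as a correlation-weighted combination of the sub-module charges; the sketch below illustrates that calculation with placeholder sub-module values and a placeholder correlation matrix (in practice, the standard-formula matrix prescribed by the regulation should be used).

```python
import numpy as np

# Illustrative market SCR aggregation: SCR_mkt = sqrt(s^T C s), where s holds the
# sub-module charges and C the prescribed correlation matrix. Values below are
# placeholders, not the Solvency II standard-formula parameters.

submodules = ["interest", "equity", "property", "spread", "fx", "concentration"]
scr = np.array([12.0, 35.0, 8.0, 15.0, 6.0, 4.0])   # sub-module SCRs (e.g. EUR m)

corr = np.array([                                    # placeholder correlation matrix
    [1.00, 0.50, 0.50, 0.50, 0.25, 0.00],
    [0.50, 1.00, 0.75, 0.75, 0.25, 0.00],
    [0.50, 0.75, 1.00, 0.50, 0.25, 0.00],
    [0.50, 0.75, 0.50, 1.00, 0.25, 0.00],
    [0.25, 0.25, 0.25, 0.25, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])

market_scr = float(np.sqrt(scr @ corr @ scr))
print(f"Aggregated market SCR: {market_scr:.2f}")
print(f"Sum of sub-modules (no diversification): {scr.sum():.2f}")
```

The gap between the two printed figures is the diversification benefit that the correlation structure grants; the lack of diversification reported above for equities corresponds to the equity sub-module dominating this aggregation.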

Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement

Procedia PDF Downloads 110
966 Evaluating Textbooks for Brazilian Air Traffic Controllers’ English Language Training: A Checklist Proposal

Authors: Elida M. R. Bonifacio

Abstract:

English language proficiency has become an essential issue in aviation communication after aviation incidents and accidents happened. Lack of proficiency or inappropriate use of the English language has been found to be one of the factors that caused most of those incidents or accidents. Therefore, the International Civil Aviation Organization (ICAO) established requirements for the minimum English language proficiency of aviation personnel, especially pilots and air traffic controllers, in the 192 member states. In Brazil, discussions about this topic became prominent after an accident that occurred in 2006, a mid-air collision that cost the lives of 154 passengers and crew members. Thus, the number of schools and private practitioners willing to teach English for aviation purposes started to increase. Although the number of teaching materials internationally used for general purposes is relatively large, it would be inappropriate to adopt the same materials in classes that focus on communication in aviation contexts. In contrast, the options for aviation English materials are scarce; moreover, they are used internationally and may not fulfill the linguistic needs of all their users around the world. In order to diminish the problems that Brazilian practitioners may encounter in the adoption of materials that demand a great deal of adaptation to meet their students’ needs, a checklist was devised to evaluate textbooks. The aim of this paper is to propose a checklist that evaluates textbooks used in the English language training of Brazilian air traffic controllers. The criteria used to compose the checklist are based on the materials development literature, on the linguistic requirements established by ICAO in its publications, on English for Specific Purposes (ESP) principles, and on the format of the Brazilian aviation English language proficiency test. The checklist has as its main indicators the language learning tenets under which the book was written; graphical features; the lexical, grammatical and functional competencies required for minimum proficiency; similarities to the official testing format; and support materials, totaling 117 items marked as YES, NO or PARTIALLY. In order to verify whether the use of the checklist is effective, an aviation English textbook was evaluated. From this evaluation, it is possible to measure quantitatively how far the material meets the students’ needs and to offer a tool that helps professionals engaged in aviation English teaching around the world to choose the most appropriate textbook for their audience. From the results, practitioners are able to verify which items the material does not fulfill and to make proper adaptations, since the perfect material will be difficult to find.
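
One simple way to turn the 117 YES/PARTIALLY/NO judgements into the quantitative measure mentioned above is a weighted score; the weighting below (YES = 1, PARTIALLY = 0.5, NO = 0) is an illustrative assumption, not part of the proposed checklist.

```python
# Illustrative scoring of checklist responses (the weights are an assumption).
WEIGHTS = {"YES": 1.0, "PARTIALLY": 0.5, "NO": 0.0}

def checklist_score(responses):
    """Return the percentage of the maximum possible score."""
    return 100.0 * sum(WEIGHTS[r] for r in responses) / len(responses)

# Hypothetical evaluation of a textbook against a few checklist items:
responses = ["YES", "PARTIALLY", "NO", "YES", "YES", "PARTIALLY"]
print(f"Textbook meets {checklist_score(responses):.1f}% of the evaluated criteria")
```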

Keywords: aviation English, ICAO, materials development, English language proficiency

Procedia PDF Downloads 124
965 Simultaneous Detection of Cd⁺², Fe⁺², Co⁺², and Pb⁺² Heavy Metal Ions by Stripping Voltammetry Using Polyvinyl Chloride Modified Glassy Carbon Electrode

Authors: Sai Snehitha Yadavalli, K. Sruthi, Swati Ghosh Acharyya

Abstract:

Heavy metal ions are toxic to humans and all living species when exposure occurs in large quantities or over long durations. Though Fe acts as a nutrient, it becomes toxic when intake is in large quantities. These toxic heavy metal ions, when consumed through water, cause many disorders and are harmful to all flora and fauna through biomagnification. Specifically, humans are prone to innumerable diseases ranging from skin and gastrointestinal to neurological disorders. In higher quantities, they can even cause cancer in humans. Detection of these toxic heavy metal ions in water is thus important. Traditionally, the detection of heavy metal ions in water has been done by techniques like Inductively Coupled Plasma Mass Spectrometry (ICP-MS) and Atomic Absorption Spectroscopy (AAS). Though these methods offer accurate quantitative analysis, they require expensive equipment and cannot be used for on-site measurements. Anodic Stripping Voltammetry is a good alternative, as the equipment is affordable and measurements can be made at river basins or lakes. In the current study, Square Wave Anodic Stripping Voltammetry (SWASV) was used to detect the heavy metal ions in water. The literature reports various electrodes on which deposition of heavy metal ions has been carried out, such as bismuth- and polymer-modified electrodes. The working electrode used in this study is a polyvinyl chloride (PVC) modified glassy carbon electrode (GCE). An Ag/AgCl reference electrode and a platinum counter electrode were used. A Biologic SP 300 potentiostat was used for conducting the experiments. Through this work, the four heavy metal ions were successfully detected simultaneously. The influence of modifying the GCE with PVC was studied in comparison with the unmodified GCE. The simultaneous detection of Cd⁺², Fe⁺², Co⁺², and Pb⁺² heavy metal ions was done using the PVC-modified GCE prepared by drop-casting 1 wt.% of PVC dissolved in tetrahydrofuran (THF) solvent onto the GCE. The concentration of each heavy metal ion was 0.2 mg/L. The scan rate was 0.1 V/s. Detection parameters like pH, scan rate, temperature, and time of deposition were optimized. It was clearly observed that PVC helped in increasing the sensitivity and selectivity of detection, as the current values are higher for the PVC-modified GCE compared to the unmodified GCE. The peaks were well defined when the PVC-modified GCE was used.

Keywords: cadmium, cobalt, electrochemical sensing, glassy carbon electrodes, heavy metal ions, iron, lead, polyvinyl chloride, potentiostat, square wave anodic stripping voltammetry

Procedia PDF Downloads 94
964 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model

Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge

Abstract:

Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scouring, which involves turbulent flow, soil mechanics and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles as well as the effect of flow-driven forces on particles will facilitate accurate modeling of the complex nature of scour. In this study, a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) has been developed to simulate the scour process by directly modeling the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles, and the frictional and collisional forces between particles are calculated, whilst the turbulent fluid flow is modeled using a Reynolds-Averaged Navier-Stokes (RANS) approach. The CFD-DEM model is validated against experimental data in order to assess its reliability. The modeling results reveal the criticality of particle impact in the assessment of scour depth, which, to the authors’ best knowledge, has not been considered in previous studies. The results of this study open new perspectives on scour depth and time assessment, which is the key to managing the failure risk of bridge infrastructure.
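
In a generic CFD-DEM formulation of this kind (written here in its standard textbook form, not as the authors' exact implementation), each sediment particle i obeys Newton's second law with contact and fluid forces:

$$ m_i\frac{d\mathbf{v}_i}{dt} = \sum_{j}\left(\mathbf{F}^{n}_{ij} + \mathbf{F}^{t}_{ij}\right) + \mathbf{F}^{f}_{i} + m_i\mathbf{g}, $$

where F^n_ij and F^t_ij are the normal (collisional) and tangential (frictional) contact forces with neighbouring particles, and F^f_i collects the fluid-particle interaction forces (drag, pressure gradient, buoyancy) obtained from the RANS flow solution, which in turn receives the particles' reaction forces as momentum source terms.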

Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model

Procedia PDF Downloads 122
963 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization

Authors: Younis Elhaddad, Alfonso Ortega

Abstract:

Oil production by means of gas lift is a standard technique in the oil production industry. How to optimize the total amount of oil production in terms of the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology. Many of them apply well-known numerical methods. Some of them have taken into account the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic searching engine into which they can introduce their knowledge in a format close to the one used in their domain and get solutions comprehensible in the same terms. These proposals introduced into the genetic engine the most expressive formal models to represent the solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use due to the computational resources involved in the formal models. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) in which applying genetic approaches seems promising. After analyzing the state of the art of this topic, we decided to choose a previous work from the literature that faces the problem by means of numerical methods. This contribution includes enough details to be reproduced and complete data to be carefully analyzed. We have designed a classical, simple genetic algorithm just to try to get the same results and to understand the problem in depth. We could easily incorporate the well mathematical model and the well data used by the authors and translate their mathematical model, to be numerically optimized, into a proper fitness function. We analyzed the 100 curves they use in their experiment and observed similar results; in addition, our system automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected in each well reported by them. We have identified several constraints that could be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could be interesting to automatically propose other mathematical models to fit both the individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
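
A minimal sketch of the kind of simple genetic algorithm described above is given below; the quadratic well performance curves, the total-gas constraint, and all parameter values are hypothetical stand-ins for the well models and field data taken from the reference work.

```python
import random

# Minimal GA sketch for gas-lift allocation. The well performance curves below
# are hypothetical quadratics q(g) = a*g - b*g^2 (oil rate vs. injected gas),
# not the curves used in the referenced study.
WELLS = [(8.0, 0.010), (6.5, 0.008), (9.2, 0.014), (7.1, 0.009)]
TOTAL_GAS = 900.0                      # available lift gas (hypothetical units)

def production(alloc):
    """Fitness: total field production for a per-well gas allocation."""
    return sum(a * g - b * g * g for (a, b), g in zip(WELLS, alloc))

def repair(alloc):
    """Scale an allocation so it respects the total-gas constraint."""
    total = sum(alloc)
    return [g * TOTAL_GAS / total for g in alloc] if total > TOTAL_GAS else alloc

def ga(pop_size=60, generations=200, mut_sigma=20.0):
    rng = random.Random(0)
    pop = [repair([rng.uniform(0, TOTAL_GAS) for _ in WELLS]) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=production, reverse=True)
        parents = pop[: pop_size // 2]                 # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(p1, p2)]                   # crossover
            child = [max(0.0, g + rng.gauss(0, mut_sigma)) for g in child]  # mutation
            children.append(repair(child))
        pop = parents + children
    best = max(pop, key=production)
    return best, production(best)

best_alloc, best_prod = ga()
print("Gas per well:", [round(g, 1) for g in best_alloc], "-> production:", round(best_prod, 1))
```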

Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production

Procedia PDF Downloads 157
962 Prioritizing Temporary Shelter Areas for Disaster Affected People Using Hybrid Decision Support Model

Authors: Ashish Trivedi, Amol Singh

Abstract:

In recent years, the magnitude and frequency of disasters have increased at an alarming rate. Every year, more than 400 natural disasters affect the global population. A large-scale disaster leads to the destruction of or damage to houses, thereby rendering a notable number of residents homeless. Since the humanitarian response and recovery process takes considerable time, temporary establishments are arranged in order to provide shelter to the affected population. These shelter areas are vital for effective humanitarian relief; therefore, they must be strategically planned. Choosing the locations of temporary shelter areas for accommodating homeless people is critical to the quality of humanitarian assistance provided after a large-scale emergency. There has been extensive research on the facility location problem, both in theory and in application. In order to deliver sufficient relief aid within a relatively short timeframe, humanitarian relief organisations pre-position warehouses at strategic locations. However, such approaches have received limited attention from the perspective of providing shelters to disaster-affected people. In the present research work, this aspect of humanitarian logistics is considered. The present work proposes a hybrid decision support model to determine the relative preference of potential shelter locations by assessing them against key subjective criteria. Initially, the factors that are kept in mind while locating potential areas for establishing temporary shelters are identified by reviewing the extant literature and through consultation with a panel of disaster management experts. In order to determine the relative importance of individual criteria while taking into account the subjectivity of judgements, a hybrid approach of fuzzy sets and the Analytic Hierarchy Process (AHP) was adopted. Further, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) was applied to an illustrative data set to evaluate potential locations for establishing temporary shelter areas for homeless people in a disaster scenario. The contribution of this work is to propose a range of possible shelter locations for a humanitarian relief organization, using a robust multi-criteria decision support framework.
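
The TOPSIS ranking step can be sketched as follows; the decision matrix, criteria weights, and location names are hypothetical, with the weights standing in for those that would come out of the fuzzy-AHP stage.

```python
import numpy as np

# Minimal TOPSIS sketch. Rows = candidate shelter locations, columns = criteria
# (all treated as benefit criteria here). Matrix, weights and names are hypothetical.
names = ["Site A", "Site B", "Site C", "Site D"]
X = np.array([
    [7.0, 6.0, 8.0, 5.0],
    [8.0, 5.0, 6.0, 7.0],
    [6.0, 8.0, 7.0, 6.0],
    [5.0, 7.0, 9.0, 8.0],
])
w = np.array([0.35, 0.25, 0.25, 0.15])          # criteria weights (sum to 1)

R = X / np.linalg.norm(X, axis=0)               # vector-normalize each criterion
V = R * w                                       # weighted normalized matrix
ideal, anti_ideal = V.max(axis=0), V.min(axis=0)

d_plus = np.linalg.norm(V - ideal, axis=1)      # distance to the ideal solution
d_minus = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)        # relative closeness coefficient

for name, c in sorted(zip(names, closeness), key=lambda t: -t[1]):
    print(f"{name}: closeness = {c:.3f}")
```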

Keywords: AHP, disaster preparedness, fuzzy set theory, humanitarian logistics, TOPSIS, temporary shelters

Procedia PDF Downloads 190
961 A Computational Model of the Thermal Grill Illusion: Simulating the Perceived Pain Using Neuronal Activity in Pain-Sensitive Nerve Fibers

Authors: Subhankar Karmakar, Madhan Kumar Vasudevan, Manivannan Muniyandi

Abstract:

The thermal grill illusion (TGI) elicits a strong and often painful burning sensation when interlaced warm and cold stimuli that are individually non-painful excite thermoreceptors beneath the skin. Among several theories of the TGI, the “disinhibition” theory is the most widely accepted in the literature. According to this theory, the TGI is the result of the disinhibition or unmasking of the pain-sensitive HPC (Heat-Pinch-Cold) nerve fibers due to the inhibition of the cold-sensitive nerve fibers that normally mask the HPC nerve fibers. Although researchers have focused on understanding the TGI through experiments and models, none of them has investigated the prediction of TGI pain intensity through a computational model. Furthermore, the comparison of psychophysically perceived TGI intensity with neurophysiological models has not yet been studied. The prediction of pain intensity through a computational model of the TGI can help in optimizing thermal displays and in understanding pathological conditions related to temperature perception. The current study focuses on developing a computational model to predict the intensity of TGI pain and on experimentally observing the perceived TGI pain. The computational model is developed based on the disinhibition theory and by utilizing the existing popular models of warm and cold receptors in the skin. The model aims to predict the neuronal activity of the HPC nerve fibers. With a temperature-controlled thermal grill setup, fifteen participants (ten males and five females) were presented with five temperature differences between the warm and cold grills (each repeated three times). All the participants rated the perceived TGI pain sensation on a scale of one to ten. For the range of temperature differences, the experimentally observed perceived intensity of the TGI is compared with the neuronal activity of the pain-sensitive HPC nerve fibers. The simulation results show a monotonically increasing relationship between the temperature differences and the neuronal activity of the HPC nerve fibers. Moreover, a similar monotonically increasing relationship is experimentally observed between the temperature differences and the perceived TGI intensity. This shows the potential for comparing the TGI pain intensity observed through the experimental study with the neuronal activity predicted by the model. The proposed model intends to bridge the theoretical understanding of the TGI and the experimental results obtained through psychophysics. Further studies in pain perception are needed to develop a more accurate version of the current model.
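
To make the modelling idea concrete, the sketch below implements a deliberately simplified disinhibition rule in which HPC activity is driven by warm-fiber input released from cold-fiber inhibition; the response functions and the gain parameter are hypothetical illustrations, not the established receptor models used in the study.

```python
import numpy as np

# Toy disinhibition model of the thermal grill illusion. The warm/cold response
# curves and the inhibition gain are hypothetical placeholders.

def warm_response(t_warm_c):
    """Hypothetical warm-fiber firing rate, rising with the warm-bar temperature."""
    return np.clip((t_warm_c - 30.0) / 15.0, 0.0, 1.0)

def cold_response(t_cold_c):
    """Hypothetical cold-fiber firing rate, rising as the cold bar gets colder."""
    return np.clip((30.0 - t_cold_c) / 15.0, 0.0, 1.0)

def hpc_activity(t_warm_c, t_cold_c, inhibition_gain=0.6):
    """Disinhibition rule: warm drive minus the residual cold-fiber inhibition."""
    return max(0.0, warm_response(t_warm_c) - inhibition_gain * cold_response(t_cold_c))

# Predicted HPC activity for increasing warm/cold temperature differences:
for dT in (5, 10, 15, 20, 25):
    t_warm, t_cold = 32.0 + dT / 2.0, 32.0 - dT / 2.0
    print(f"dT = {dT:2d} K -> HPC activity = {hpc_activity(t_warm, t_cold):.3f}")
```

Even this toy rule reproduces the qualitative finding reported above: predicted HPC activity rises monotonically with the warm/cold temperature difference.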

Keywords: thermal grill Illusion, computational modelling, simulation, psychophysics, haptics

Procedia PDF Downloads 160
960 The Impact of Childhood Cancer on Young Adult Survivors: A Life Course Perspective

Authors: Bridgette Merriman, Wen Fan

Abstract:

Background: The existing cancer survivorship literature explores the varying physical, psychosocial, and psychological late effects experienced by survivors of childhood cancer. However, adolescent and young adult (AYA) survivors of childhood cancer are understudied compared to their adult and pediatric cancer counterparts. Furthermore, existing quality of life (QoL) research fails to account for how cancer survivorship affects survivors across the lifespan. Given that prior research suggests positive cognitive appraisals of adverse events, such as cancer, mitigate detrimental psychosocial symptomologies later in life, it is crucial to understand cancer’s impacts on AYA survivors of childhood malignancies across the life course in order to best support these individuals and prevent maladaptive psychosocial outcomes. Methods: This qualitative study adopted the life-course perspective to investigate the experiences of AYA survivors of childhood malignancies. Eligible participants included AYAs 21-30 years old who were diagnosed with cancer before 18 years of age and had been off active treatment for more than 2 years. Participants were recruited through social media posts. Study fulfillment included taking part in one semi-structured video interview to explore areas of survivorship previously identified as being specific to AYA survivors. Interviews were transcribed, coded, and analyzed in accordance with narrative analysis and life-course theory. This study was approved by the Boston College Institutional Review Board. Results: Of 28 individuals who met the inclusion criteria and expressed interest in the study, nineteen participants (12 women, 7 men, mean age 25.4 years) completed the study. Life-course theory analysis revealed that events relating to childhood cancer are interconnected throughout the life course rather than being isolated events. This “trail of survivorship” includes age at diagnosis, the transition to life after cancer, and relationships with other childhood survivors. Despite variability in the objective characteristics surrounding these events, participants recalled positive experiences regarding at least one checkpoint, ultimately finding positive meaning in their cancer experience. Conclusions: These findings suggest that favorable subjective experiences at these checkpoints are critical in fostering positive conceptions of childhood malignancy for AYA survivors of childhood cancer. Ultimately, healthcare professionals and communities may use these findings to guide support resources and interventions for childhood cancer patients and AYA survivors, thereby minimizing detrimental psychosocial effects and maximizing resiliency.

Keywords: medical sociology, pediatric oncology, survivorship, qualitative, life course perspective

Procedia PDF Downloads 57
959 Leadership Lessons from Female Executives in the South African Oil Industry

Authors: Anthea Carol Nefdt

Abstract:

In this article, observations are drawn from a number of interviews conducted with female executives in the South African Oil Industry in 2017. Globally, the oil industry represents one of the most male-dominated organisational structures and cultures in the business world. Some of the remarkable women who hold upper management positions have not only emerged from the science and finance spheres (equally gendered organisations) but also navigated their way through an aggressive, patriarchal atmosphere of rivalry and competition. We examine various mythologies associated with the industry, such as the cowboy myth, the frontier ideology and the queen bee syndrome directed at female executives. One of the themes to emerge from the interviews was the almost unanimous rejection of the ‘glass ceiling’ metaphor favoured by some Feminists. The women of the oil industry instead affirmed a picture of their rise to leadership positions through a strategic labyrinth of challenges and obstacles in terms of both gender and race. This article aims to share the insights of women leaders in a complex industry through both their reflections and a theoretical Feminist lens. The study is located within the South African context and, given the country's historical legacy, it was optimal to use an intersectional approach which would allow issues of race, gender, ethnicity and language to emerge. A qualitative research methodological approach was employed, together with a thematic interpretative analysis to analyse and interpret the data. This methodology was used precisely because it encourages and acknowledges the experiences women have and places these experiences at the centre of the research. Multiple methods of recruitment of the research participants were utilised: the initial method was snowball sampling, and the second was purposive sampling. In addition, semi-structured interviews gave the participants an opportunity to ask questions, add information and have discussions on issues or aspects of the research area that were of interest to them. One of the key objectives of the study was to investigate whether there is a difference in the leadership styles of men and women. Findings show that, despite the wealth of literature on the topic, some women do not perceive a significant difference between men's and women's leadership styles. However, other respondents felt that there were some important differences in the experiences of men and women superiors, although they hesitated to generalise from these experiences. Further findings suggest that although the oil industry provides unique challenges to women as a gendered organization, it also incorporates various progressive initiatives for their advancement.

Keywords: petroleum industry, gender, feminism, leadership

Procedia PDF Downloads 148
958 Teachers' Experience for Improving Fine Motor Skills of Children with Down Syndrome in the Context of Special Education in Southern Province of Sri Lanka

Authors: Sajee A. Gamage, Champa J. Wijesinghe, Patricia Burtner, Ananda R. Wickremasinghe

Abstract:

Background: Teachers working in the context of special education have an enormous responsibility for enhancing the performance skills of children in their classroom settings. Fine Motor Skills (FMS) are essential functional skills for children to gain independence in Activities of Daily Living. Children with Down Syndrome (DS) are predisposed to specific challenges due to deficits in FMS. This study aimed to determine teachers’ experience of improving FMS of children with DS in the context of special education in the Southern Province of Sri Lanka. Methodology: A cross-sectional study was conducted among all consenting eligible teachers (n=147) working in the context of special education in government schools of the Southern Province of Sri Lanka. A self-administered questionnaire was developed based on the literature and expert opinion to assess teachers’ experience regarding deficits of FMS, limitations of classroom activity performance and barriers to improving FMS of children with DS. Results: Approximately 93% of the teachers were female, with a mean age (±SD) of 43.1 (±10.1) years. Thirty percent of the teachers had training in special education, and 83% had children with DS in their classrooms. Major deficits of FMS reported were deficits in grasping (n=116; 79%), in-hand manipulation (n=103; 70%) and bilateral hand use (n=99; 67.3%). Paperwork (n=70; 47.6%), painting (n=58; 39.5%), scissor work (n=50; 34.0%), pencil use for writing (n=45; 30.6%) and use of tools in the classroom (n=41; 27.9%) were identified as major classroom performance limitations of children with DS. Parental factors (n=67; 45.6%), disease-specific characteristics (n=58; 39.5%) and classroom factors (n=36; 24.5%) were identified as major barriers to improving FMS in the classroom setting. Lack of resources and standard tools, social stigma and late school admission were also identified as barriers to FMS training. Eighty-nine percent of the teachers reported that training fine motor activities in a special education classroom was more successful than working in a normal classroom setting. Conclusion: Major areas of FMS deficits were grasping, in-hand manipulation and bilateral hand use; classroom performance limitations of children with DS included paperwork, painting and scissor work. Teachers recommended regular practice of fine motor activities according to individual need. Further research is required to design a culturally specific FMS assessment tool and intervention methods to improve FMS of children with DS in Sri Lanka.

Keywords: classroom activities, Down syndrome, experience, fine motor skills, special education, teachers

Procedia PDF Downloads 148
957 Dual-Phase High Entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅) BxCy Ceramics Produced by Spark Plasma Sintering

Authors: Ana-Carolina Feltrin, Daniel Hedman, Farid Akhtar

Abstract:

High entropy ceramic (HEC) materials are characterized by their compositional disorder, with different metallic element atoms occupying the cation position and non-metal elements occupying the anion position. Several studies have focused on the processing and characterization of high entropy carbides and high entropy borides, as these HECs present interesting mechanical and chemical properties. Only a few studies have been published on HECs containing two non-metallic elements in the composition. Dual-phase high entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BxCy ceramics with different amounts of x and y, (0.25 HfC + 0.25 ZrC + 0.25 VC + 0.25 TiB₂), (0.25 HfC + 0.25 ZrC + 0.25 VB₂ + 0.25 TiB₂) and (0.25 HfC + 0.25 ZrB₂ + 0.25 VB₂ + 0.25 TiB₂), were sintered from boride and carbide precursor powders using SPS at 2000°C with a holding time of 10 min, a uniaxial pressure of 50 MPa and under an Ar atmosphere. The sintered specimens formed two HEC phases: a Zr-Hf-rich FCC phase and a Ti-V-rich HCP phase, and both phases contained all the metallic elements at 5-50 at%. Phase quantification analysis of XRD data revealed that the molar amount of the hexagonal phase increased with an increased mole fraction of borides in the starting powders, whereas the cubic FCC phase increased with increased carbide content in the starting powders. SPS-consolidated (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BC₀.₅ and (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)B₁.₅C₀.₂₅ had relative densities of 94.74% and 88.56%, respectively. (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)B₀.₅C₀.₇₅ presented the highest relative density of 95.99%, with a Vickers hardness of 26.58±1.2 GPa for the boride phase and 18.29±0.8 GPa for the carbide phase, which exceeded the hardness values reported in the literature for high entropy ceramics. The SPS-sintered specimens containing less boron and more carbon presented superior properties, even though the metallic composition of each phase was similar to that of the other compositions investigated. Dual-phase high entropy (Ti₀.₂₅V₀.₂₅Zr₀.₂₅Hf₀.₂₅)BxCy ceramics were successfully fabricated as a boride-carbide solid solution, and the amounts of boron and carbon were shown to influence the phase fraction, the hardness of the phases, and the density of the consolidated HECs. The microstructure and phase formation were highly dependent on the amount of non-metallic elements in the composition, and not only on the molar ratio between metals, when producing high entropy ceramics with more than one anion in the sublattice. These findings show the importance of further studies on the optimization of the ratio between C and B for further improvements in the properties of dual-phase high entropy ceramics.

Keywords: high-entropy ceramics, borides, carbides, dual-phase

Procedia PDF Downloads 166
956 Effects of Starvation, Glucose Treatment and Metformin on Resistance in Chronic Myeloid Leukemia Cells

Authors: Nehir Nebioglu

Abstract:

Chemotherapy is widely used for the treatment of cancer. Doxorubicin (DOX) is an anti-cancer chemotherapy drug that is classified as an anthracycline antibiotic. Antitumor antibiotics consist of natural products produced by species of the soil fungus Streptomyces. These drugs act in multiple phases of the cell cycle and are known to be cell-cycle specific. Although DOX is a valuable clinical antineoplastic agent, resistance, alongside cardiotoxicity, is a problem that limits its utility. The drug resistance of cancer cells results from multiple factors, including individual variation, genetic heterogeneity within a tumor, and cellular evolution. The mechanism of resistance is thought to involve, in particular, ABCB1 (MDR1, Pgp) and ABCC1 (MRP1) as well as other transporters. Several studies on DOX-resistant cell lines have shown that resistance can be overcome by an inhibition of ABCB1, ABCC1, and ABCC2. This study attempts to understand the effects of different concentration levels of glucose treatment and of starvation on the proliferation of Doxorubicin-resistant cancer cell lines. To understand the effect of starvation, K562/Dox and K562 cell lines were treated with 0, 5 nM, 50 nM, 500 nM, 5 uM and 50 uM Dox concentrations in both starvation and normal medium conditions. In addition, to interpret the effect of glucose treatment, different concentrations (0, 1 mM, 5 mM, 25 mM) of glucose were applied to Dox-treated (with 0, 5 nM, 50 nM, 500 nM, 5 uM and 50 uM) K562/Dox and K562 cell lines. All results show a significant decrease in the cell count of K562/Dox cells when they were starved. However, while the proliferation of K562/Dox lines decreases with increasing Dox concentration, the starved K562/Dox cells remain at the same proliferation level. Thus, the results imply that a fraction of the K562/Dox cells gain starvation resistance and remain resistant. Furthermore, for K562/Dox, there is no clear effect of glucose treatment in terms of cell proliferation. In the presence of a moderate level of glucose (5 mM), proliferation increases compared to the other glucose concentrations for each Dox application. On the other hand, a significant increase in cell proliferation at the moderate glucose level is only observed at the 5 uM Dox concentration; this moderate Dox concentration can be examined in further studies. For the high amount of glucose (25 mM), cell proliferation levels are lower than with the moderate glucose application. The reason could be that the high amount of glucose may not be fully absorbed by the cells. Also, in the presence of a low amount of glucose, proliferation decreases steadily with increasing Dox concentration. This situation can be explained by glucose depletion (the Warburg effect) described in the literature.

Keywords: drug resistance, cancer cells, chemotherapy, doxorubicin

Procedia PDF Downloads 169
955 Exploring the Psychosocial Brain: A Retrospective Analysis of Personality, Social Networks, and Dementia Outcomes

Authors: Felicia N. Obialo, Aliza Wingo, Thomas Wingo

Abstract:

Psychosocial factors such as personality traits and social networks influence cognitive aging and dementia outcomes both positively and negatively. The inherent complexity of these factors makes defining the underlying mechanisms of their influence difficult; however, exploring their interactions affords promise in the field of cognitive aging. The objective of this study was to elucidate some of these interactions by determining the relationship between social network size and dementia outcomes and by determining whether personality traits mediate this relationship. The longitudinal Alzheimer’s Disease (AD) database provided by Rush University’s Religious Orders Study/Memory and Aging Project was utilized to perform retrospective regression and mediation analyses on 3,591 participants. Participants who were cognitively impaired at baseline were excluded, and analyses were adjusted for age, sex, common chronic diseases, and vascular risk factors. Dementia outcome measures included cognitive trajectory, clinical dementia diagnosis, and postmortem beta-amyloid plaque (AB), and neurofibrillary tangle (NT) accumulation. Personality traits included agreeableness (A), conscientiousness (C), extraversion (E), neuroticism (N), and openness (O). The results show a positive correlation between social network size and cognitive trajectory (p-value = 0.004) and a negative relationship between social network size and odds of dementia diagnosis (p = 0.024/ Odds Ratio (OR) = 0.974). Only neuroticism mediates the positive relationship between social network size and cognitive trajectory (p < 2e-16). Agreeableness, extraversion, and neuroticism all mediate the negative relationship between social network size and dementia diagnosis (p=0.098, p=0.054, and p < 2e-16, respectively). All personality traits are independently associated with dementia diagnosis (A: p = 0.016/ OR = 0.959; C: p = 0.000007/ OR = 0.945; E: p = 0.028/ OR = 0.961; N: p = 0.000019/ OR = 1.036; O: p = 0.027/ OR = 0.972). Only conscientiousness and neuroticism are associated with postmortem AD pathologies; specifically, conscientiousness is negatively associated (AB: p = 0.001, NT: p = 0.025) and neuroticism is positively associated with pathologies (AB: p = 0.002, NT: p = 0.002). These results support the study’s objectives, demonstrating that social network size and personality traits are strongly associated with dementia outcomes, particularly the odds of receiving a clinical diagnosis of dementia. Personality traits interact significantly and beneficially with social network size to influence the cognitive trajectory and future dementia diagnosis. These results reinforce previous literature linking social network size to dementia risk and provide novel insight into the differential roles of individual personality traits in cognitive protection.
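
The abstract does not specify the software or model syntax used; the sketch below shows one conventional, Baron-Kenny-style way such a mediation question could be set up with statsmodels. The file name and column names (rosmap_subset.csv, dementia_dx, network_size, neuroticism, age, sex, vascular_risk) are hypothetical placeholders, not the actual ROS/MAP variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset and column names (not the actual ROS/MAP variable names).
df = pd.read_csv("rosmap_subset.csv")
covars = "age + sex + vascular_risk"

# Step 1: total effect of social network size on the odds of a dementia diagnosis.
total = smf.logit(f"dementia_dx ~ network_size + {covars}", data=df).fit()

# Step 2: effect of the exposure (network size) on the candidate mediator (neuroticism).
to_mediator = smf.ols(f"neuroticism ~ network_size + {covars}", data=df).fit()

# Step 3: outcome model with exposure and mediator together; attenuation of the
# network_size coefficient relative to step 1 is consistent with partial mediation.
direct = smf.logit(f"dementia_dx ~ network_size + neuroticism + {covars}", data=df).fit()

print("OR per unit of network size (total): ", np.exp(total.params["network_size"]))
print("OR per unit of network size (direct):", np.exp(direct.params["network_size"]))
print("network size -> neuroticism (beta):  ", to_mediator.params["network_size"])
```

A formal mediation analysis would additionally estimate the indirect effect with bootstrapped confidence intervals; the sketch only illustrates the regression structure implied by the abstract.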

Keywords: Alzheimer’s disease, cognitive trajectory, personality traits, social network size

Procedia PDF Downloads 121
954 Exploring the Intersection Between the General Data Protection Regulation and the Artificial Intelligence Act

Authors: Maria Jędrzejczak, Patryk Pieniążek

Abstract:

The European legal reality is on the eve of significant change. In European Union law, there is talk of a “fourth industrial revolution”, driven by massive data resources linked to powerful algorithms and computing capacity. This is closely linked to technological developments in the area of artificial intelligence, which have prompted analysis covering the legal environment as well as the economic and social impact, also from an ethical perspective. The discussion on the regulation of artificial intelligence is one of the most serious and widely held debates at both European Union and Member State level. The literature expects legal solutions to guarantee security for fundamental rights, including privacy, in artificial intelligence systems. There is no doubt that personal data have been increasingly processed in recent years. It would be impossible for artificial intelligence to function without processing large amounts of data (both personal and non-personal). The main driving force behind the current development of artificial intelligence is advances in computing, but also the increasing availability of data. High-quality data are crucial to the effectiveness of many artificial intelligence systems, particularly when using techniques involving model training. The use of computers and artificial intelligence technology allows for an increase in the speed and efficiency of the actions taken, but also creates security risks of an unprecedented magnitude for the data processed. The proposed regulation in the field of artificial intelligence requires analysis in terms of its impact on the regulation of personal data protection. It is necessary to determine the mutual relationship between these regulations and which areas of the personal data protection regulation are particularly important for the processing of personal data in artificial intelligence systems. The adopted axis of consideration is a preliminary assessment of two issues: 1) which data protection principles should apply, in particular, when personal data are processed in artificial intelligence systems, and 2) how liability for personal data breaches in such systems should be regulated. The need to change the regulations regarding the rights and obligations of data subjects and of entities processing personal data cannot be excluded. It is possible that changes will be required in the provisions regarding the assignment of liability for a breach of the protection of personal data processed in artificial intelligence systems. The research process in this case concerns the identification of areas in the field of personal data protection that are particularly important (and may require re-regulation) due to the introduction of the proposed legal regulation regarding artificial intelligence. The main question that the authors want to answer is how European Union regulation against data protection breaches in artificial intelligence systems is taking shape. The answer to this question will include examples to illustrate the practical implications of these legal regulations.

Keywords: data protection law, personal data, AI law, personal data breach

Procedia PDF Downloads 50
953 Revisiting Historical Illustrations in the Age of Digital Anatomy Education

Authors: Julia Wimmers-Klick

Abstract:

In the contemporary study of anatomy, medical students utilize a diverse array of resources, including lab handouts, lectures, and, increasingly, digital media such as interactive anatomy apps and digital images. Notably, a significant shift has occurred, with fewer students possessing traditional anatomy atlases or books, reflecting a broader trend towards digital approaches like Virtual Reality, Augmented Reality, and web-based programs. This paper seeks to explore the evolution of anatomy education by contrasting current digital tools with historical resources, such as classical anatomical illustrations and atlases, to assess their relevance and potential benefits in modern medical education. Through a comprehensive literature review, the development of anatomical illustrations is traced from the textual descriptions of Galen to the detailed and artistic representations of Da Vinci, Vesalius, and later anatomists. The examination includes how the printing press facilitated the dissemination of anatomical knowledge, transforming covert dissections into public spectacles and formalized teaching practices. Historical illustrations, often influenced by societal, religious, and aesthetic contexts, not only served educational purposes but also reflected the prevailing medical knowledge and ethical standards of their times. Critical questions are raised about the place of historical illustrations in today's anatomy curriculum. Specifically, their potential to teach critical thinking, highlight the history of medicine, and offer unique insights into past societal conditions is explored. These resources are viewed in their historical context, including their lack of diversity and the ethical concerns they raise, such as the use of illustrations from unethical sources like Pernkopf’s atlas. In conclusion, while digital tools offer innovative ways to visualize and interact with anatomical structures, historical illustrations provide irreplaceable value in understanding the evolution of medical knowledge and practice. The study advocates for a balanced approach that integrates traditional and modern resources to enrich medical education, promote critical thinking, and provide a comprehensive understanding of anatomy. Future research should investigate the optimal combination of these resources to meet the evolving needs of medical learners and the implications of the digital shift in anatomy education.

Keywords: human anatomy, historical illustrations, historical context, medical education

Procedia PDF Downloads 8
952 Possibilities and Prospects for the Development of the Agricultural Insurance Market (The Example of Georgia)

Authors: Nino Damenia

Abstract:

The agricultural sector plays an important role in the development of Georgia's economy, contributing to employment and food security. However, it faces various types of risks that may lead to heavy financial losses. Agricultural insurance is one of the means of combating agricultural risks. The paper discusses the agricultural insurance experience of countries (European countries and the USA) that have successfully implemented agricultural insurance programs. Analysis of international cases shows that a well-designed and implemented agri-insurance system can bring significant benefits to farmers, insurance companies and the economy as a whole. Against this background, the Government of Georgia recognized the importance of agro-insurance and took important steps towards its development. In 2014, in cooperation with insurance companies, an agro-insurance program was introduced, the purpose of which is to increase the availability of insurance for farmers and stimulate the agro-insurance market. Despite such a step forward, challenges remain, such as limited farmer awareness, insufficient infrastructure for data collection and risk assessment, limited involvement of insurance companies and other important factors. With the support of the government and stakeholders, it is possible to overcome the existing challenges and establish a strong and effective agro-insurance system. Objectives. The purpose of the research is to analyze the development trends of the agricultural insurance market, to identify the main factors affecting its growth, and to develop recommendations on development prospects for Georgia. Methodologies. The research uses mixed methods, which combine qualitative and quantitative research techniques. The qualitative method includes the study of the literature by Georgian and foreign economists, which provides insight into the challenges, opportunities, and legislative and regulatory frameworks of agricultural insurance. Quantitative analysis involves collecting data from stakeholders and then analyzing them. The paper also uses the methods of synthesis, comparison and statistical analysis of the agricultural insurance market in Georgia, Europe and the USA. Conclusions. The main results of the research are as follows: the insurance market has been analyzed and its main functions identified; the essence, features and functions of agricultural insurance have been analyzed; the European and US agricultural insurance markets have been researched; the stages of formation and development of the agricultural insurance market of Georgia have been studied and its importance for the agricultural sector of Georgia determined; and the role of the state in the development of agro-insurance has been analyzed, with development prospects established based on the study of current trends in the agro-insurance market of Georgia.

Keywords: agricultural insurance, agriculture, agricultural insurance program, risk

Procedia PDF Downloads 48
951 A Construction Management Tool: Determining a Project Schedule Typical Behaviors Using Cluster Analysis

Authors: Natalia Rudeli, Elisabeth Viles, Adrian Santilli

Abstract:

Delays in the construction industry are a global phenomenon. Many construction projects experience extensive delays exceeding the initially estimated completion time. The main purpose of this study is to identify construction projects' typical behaviors in order to develop a prognosis and management tool. Knowing a construction project's schedule tendency will enable evidence-based decision-making, allowing resolutions to be made before delays occur. This study presents an innovative approach that uses the Cluster Analysis Method to support predictions during Earned Value Analyses. A cluster analysis was used to predict the future behavior of scheduling and of the principal Earned Value Management (EVM) and Earned Schedule (ES) indexes in construction projects. The analysis was made using a database of 90 different construction projects and was validated with additional data extracted from the literature and with another 15 contrasting projects. For all projects, planned and executed schedules were collected and the principal EVM and ES indexes were calculated. A complete linkage classification method was used; that is, the cluster analysis considers that the distance (or similarity) between two clusters must be measured by their most disparate elements, i.e., the distance is given by the maximum span among their components. Finally, through the use of the EVM and ES indexes and Tukey and Fisher pairwise comparisons, the statistical dissimilarity was verified and four clusters were obtained. It can be said that construction projects show an average delay of 35% of their planned completion time. Furthermore, four typical behaviors were found, and for each of the obtained clusters, the interim milestones and the necessary rhythms of construction were identified. In general, the detected typical behaviors are: (1) projects that perform 5% of the work in the first two tenths and maintain a constant rhythm until completion (greater than 10% for each remaining tenth), being able to finish within the initially estimated time; (2) projects that start with an adequate construction rate but suffer minor delays, culminating in a total delay of almost 27% of the planned time; (3) projects that start with a performance below the planned rate and end up with an average delay of 64%; and (4) projects that begin with a poor performance, suffer great delays and end up with an average delay of 120% of the planned completion time. The obtained clusters compose a tool for identifying the behavior of new construction projects by comparing their current work performance to the validated database, thus allowing the correction of initial estimations towards more accurate completion schedules.
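
As a rough illustration of the clustering step described above, the following sketch applies complete-linkage hierarchical clustering to per-project progress curves and cuts the tree into four groups. The file name, the use of ten progress values per project and the Euclidean metric are assumptions for the sketch, not details taken from the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical input: one row per project, columns = cumulative fraction of the
# planned work actually executed at each tenth of the planned duration.
progress = np.loadtxt("project_progress_curves.csv", delimiter=",")  # shape (n_projects, 10)

# Complete linkage: the distance between two clusters is the distance between
# their most disparate members, as described in the abstract.
Z = linkage(progress, method="complete", metric="euclidean")

# Cut the dendrogram into four groups, mirroring the four typical behaviors found.
labels = fcluster(Z, t=4, criterion="maxclust")

for k in range(1, 5):
    members = progress[labels == k]
    print(f"cluster {k}: {len(members)} projects, "
          f"mean progress curve = {np.round(members.mean(axis=0), 2)}")
```

In the study itself the grouping was further checked with Tukey and Fisher pairwise comparisons of the EVM and ES indexes; the sketch stops at the clustering step.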

Keywords: cluster analysis, construction management, earned value, schedule

Procedia PDF Downloads 251
950 Design and Validation of the 'Teachers' Resilience Scale' for Assessing Protective Factors

Authors: Athena Daniilidou, Maria Platsidou

Abstract:

Resilience is considered to greatly affect the personal and occupational wellbeing and efficacy of individuals; therefore, it has been widely studied in the social and behavioral sciences. Given its significance, several scales have been created to assess the resilience of children and adults. However, most of these scales focus on examining only the internal protective or risk factors that affect the levels of resilience. The aim of the present study is to create a reliable scale that assesses both the internal and the external protective factors that affect Greek teachers’ levels of resilience. Participants were 136 secondary school teachers (89 females, 47 males) from urban areas of Greece. The Connor-Davidson Resilience Scale (CD-Risc) and the Resilience Scale for Adults (RSA) were used to collect the data. First, exploratory factor analysis was employed to investigate the inner structure of each scale. For both scales, the analyses revealed a factor solution that differed from the ones proposed by the creators. That prompted us to create a scale combining the best-fitting subscales of the CD-Risc and the RSA. To this end, the items of the four factors with the best fit and highest reliability were used to create the ‘Teachers' resilience scale’. Exploratory factor analysis revealed that the scale assesses the following protective/risk factors: Personal Competence and Strength (9 items, α=.83), Family Cohesion Spiritual Influences (7 items, α=.80), Social Competence and Peers Support (7 items, α=.78) and Spiritual Influence (3 items, α=.58). This four-factor model explained 49.50% of the total variance. In the next step, a confirmatory factor analysis was performed on the 26 items of the derived scale to test the above factor solution. The fit of the model to the data was good (χ²/df = 1.245 with df = 292, CFI = .921, GFI = .829, SRMR = .074, RMSEA = .043, 90% CI = .026-.056), indicating that the proposed scale can validly measure the aforementioned four aspects of teachers' resilience, thus confirming its factorial validity. Finally, analyses of variance were performed to check for individual differences in the levels of teachers' resilience in relation to their gender, age, marital status, level of studies, and teaching specialty. Results were consistent with previous findings, thus providing an indication of discriminant validity for the instrument. This scale has the advantage of assessing both the internal and the external protective factors of resilience in a brief yet comprehensive way, since it consists of 26 items instead of the 58 items of the CD-Risc and RSA scales combined. Its factorial inner structure is supported by the relevant literature on resilience, as it captures the major protective factors of resilience identified in previous studies.
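
By way of illustration only, the sketch below shows how a four-factor exploratory solution could be extracted from the 26 retained items. The file name and item labels are hypothetical, and the choice of varimax rotation is an assumption, since the abstract does not state which rotation or software was used.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical file with one numeric column per retained questionnaire item (26 items);
# the column labels are placeholders, not the actual CD-Risc/RSA item names.
items = pd.read_csv("teacher_resilience_items.csv")

# Four-factor exploratory solution; varimax rotation is an assumption
# (rotation support requires scikit-learn >= 0.24).
efa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
efa.fit(items)

loadings = pd.DataFrame(
    efa.components_.T,
    index=items.columns,
    columns=["Factor1", "Factor2", "Factor3", "Factor4"],
)
print(loadings.round(2))

# Count items loading above a conventional cutoff (|loading| > .40) on each factor.
print((loadings.abs() > 0.40).sum(axis=0))
```

The confirmatory step reported in the abstract (fit indices such as CFI, SRMR and RMSEA) would typically be run in dedicated SEM software rather than with this exploratory sketch.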

Keywords: protective factors, resilience, scale development, teachers

Procedia PDF Downloads 290