Search results for: Flow-3D numerical model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18655

775 Pickering Dry Emulsion System for Dissolution Enhancement of Poorly Water Soluble Drug (Fenofibrate)

Authors: Nitin Jadhav, Pradeep R. Vavia

Abstract:

Poorly water-soluble drugs are difficult to promote for oral drug delivery, as they demonstrate poor and variable bioavailability because of their poor solubility and dissolution in GIT fluid. Nowadays, lipid-based formulations, especially the self-microemulsifying drug delivery system (SMEDDS), are found to be among the most effective techniques. For all its impressive advantages, the need for a high amount of surfactant (50%-80%) is the major drawback of SMEDDS. High concentrations of synthetic surfactant are known to cause GIT irritation, and interference with the function of intestinal transporters alters drug absorption. Surfactant may also reduce drug activity, and subsequently bioavailability, due to enhanced entrapment of the drug in micelles. In chronic treatment, these issues are especially conspicuous because of the long exposure. In addition, liquid self-microemulsifying systems suffer from stability issues. Recently, a novel approach, the solid-stabilized micro- and nanoemulsion (Pickering emulsion), has shown admirable properties such as high stability, the absence (or a very low concentration) of surfactant, and easy conversion into a dry form. Here we explore a Pickering dry emulsion system for dissolution enhancement of an anti-lipemic, extremely poorly water-soluble drug (fenofibrate). The oil for emulsion preparation was selected mainly on the basis of drug solubility: Captex 300 showed the highest solubility for fenofibrate and was therefore selected as the oil phase. Span 20 was used alongside silica (the solid stabilizer) to improve the latter's wetting properties. The emulsion formed with silica and Span 20 at a 2.5:1 ratio (silica:Span 20) was very stable, with a particle size of 410 nm. The prepared emulsion was then spray-dried, and the resulting microcapsules were evaluated in an in-vitro dissolution study and an in-vivo pharmacodynamic study and characterized by DSC, XRD, FTIR, SEM, and optical microscopy.
The in-vitro study showed significant dissolution enhancement for the formulation (85% in 45 minutes) compared to the plain drug (14% in 45 minutes). The in-vivo study (Triton-induced hyperlipidaemia model) showed a significant reduction in triglycerides and cholesterol with the formulation compared to the plain drug, indicating increased fenofibrate bioavailability. DSC and XRD studies showed loss of drug crystallinity in the microcapsule form, and FTIR confirmed the chemical stability of fenofibrate. SEM and optical microscopy showed spherical globules coated with solid particles.

Keywords: Captex 300, fenofibrate, Pickering dry emulsion, silica, Span 20, stability, surfactant

Procedia PDF Downloads 499
774 Strategies for Incorporating Intercultural Intelligence into Higher Education

Authors: Hyoshin Kim

Abstract:

Most post-secondary educational institutions offer a wide variety of professional development programs and resources in order to advance the quality of education. Such programs are designed to support faculty members by focusing on topics such as course design, behavioral learning objectives, class discussion, and evaluation methods. These are based on good intentions and might help both new and experienced educators. However, the fundamental flaw is that these ‘effective methods’ are assumed to work regardless of what we teach and whom we teach. This paper focuses on intercultural intelligence and its application to education. It presents a comprehensive literature review on context and cultural diversity in terms of beliefs, values, and worldviews. What has worked well with a group of homogeneous local students may not work well with more diverse and international students, because students hold different notions of what it means to learn or know something. It is necessary for educators to move away from generic sets of teaching skills that are based on a limited, particular view of teaching and learning. The main objective of the research is to expand our teaching strategies by incorporating what students bring to the course. There has been a growing number of resources and texts on teaching international students. Unfortunately, they tend to be based on the deficiency model, which treats diversity not as a strength but as a problem to be solved. This view is evidenced by the heavy emphasis on assimilationist approaches: cultural difference is negatively evaluated, either implicitly or explicitly, and so the pressure falls on culturally diverse students. The following questions reflect the underlying assumption of deficiency: How can we make them learn better? How can we bring them into the mainstream academic culture? How can they adapt to Western educational systems?
Even though these questions may be well-intended, there seems to be something fundamentally wrong as the assumption of cultural superiority is embedded in this kind of thinking. This paper examines how educators can incorporate intercultural intelligence into the course design by utilizing a variety of tools such as pre-course activities, peer learning and reflective learning journals. The main goal is to explore ways to engage diverse learners in all aspects of learning. This can be achieved by activities designed to understand their prior knowledge, life experiences, and relevant cultural identities. It is crucial to link course material to students’ diverse interests thereby enhancing the relevance of course content and making learning more inclusive. Internationalization of higher education can be successful only when cultural differences are respected and celebrated as essential and positive aspects of teaching and learning.

Keywords: intercultural competence, intercultural intelligence, teaching and learning, post-secondary education

Procedia PDF Downloads 212
773 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times is studied; this is known as the big bang theory. According to this theory, moments after creation the universe was an extremely hot and dense environment. Its rapid expansion then led to a reduction in its temperature and density, as evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics and at which the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion; yet highly accurate measurements reveal an almost equal temperature mapping across the universe, which contradicts this expectation of the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the contents of the early universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature can be achieved across the early universe. The quantum fluctuations of this stage also provide a means of studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed.
This research series aims at addressing the singularity issue by introducing a state of energy called the "neutral state," possessing an energy level referred to as the "base energy." The governing principles of the base energy are discussed in detail in the second paper of the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism that led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. The proposed concept thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of the base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 102
772 Prediction of Coronary Artery Stenosis Severity Based on Machine Learning Algorithms

Authors: Yu-Jia Jian, Emily Chia-Yu Su, Hui-Ling Hsu, Jian-Jhih Chen

Abstract:

The coronary arteries are the major suppliers of myocardial blood flow. When fat and cholesterol are deposited in the coronary arterial wall, narrowing and stenosis of the artery occur, which may lead to myocardial ischemia and eventually infarction. According to the World Health Organization (WHO), an estimated 7.4 million people died of coronary heart disease in 2015. According to statistics from the Ministry of Health and Welfare in Taiwan, heart disease (excluding hypertensive diseases) ranked second among the top 10 causes of death from 2013 to 2016, and it still shows a growing trend. According to the American Heart Association (AHA), the risk factors for coronary heart disease include age (> 65 years), sex (male-to-female ratio of 2:1), obesity, diabetes, hypertension, hyperlipidemia, smoking, family history, lack of exercise, and more. We collected a dataset of 421 patients who received coronary computed tomography (CT) angiography at a hospital in northern Taiwan. There were 300 males (71.26%) and 121 females (28.74%), with ages ranging from 24 to 92 years and a mean age of 56.3 years. Prior to coronary CT angiography, basic data of the patients, including age, gender, body mass index (BMI), diastolic blood pressure, systolic blood pressure, diabetes, hypertension, hyperlipidemia, smoking, family history of coronary heart disease, and exercise habits, were collected and used as input variables. The output variable of the prediction module is the degree of coronary artery stenosis. In this study, the dataset was randomly divided into a training set (80%) and a test set (20%). Four machine learning algorithms, including logistic regression, stepwise logistic regression, neural network, and decision tree, were used to generate prediction results. We used area under the curve (AUC) and accuracy (Acc.) to compare the four models. The best model was the neural network (0.68 / 79%), followed by stepwise logistic regression (0.68 / 74%), decision tree (0.65 / 78%), and logistic regression (0.65 / 74%). The sensitivity of the neural network was 27.3% and its specificity 90.8%; stepwise logistic regression, 18.2% and 92.3%; decision tree, 13.6% and 100%; logistic regression, 27.3% and 89.2%. Building on these results, we hope to improve accuracy in the future by refining the model parameters or using other methods, and to address the low sensitivity by adjusting the imbalanced proportion of positive and negative data.
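The sensitivity and specificity figures above come straight from each model's confusion matrix on the test set. As a minimal, self-contained sketch (with invented labels, not the study's data), the metrics can be computed as follows; note how class imbalance lets accuracy stay high while sensitivity collapses, which is exactly the problem the authors flag:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy from binary labels (1 = stenosis)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

# Toy imbalanced labels (4 positives, 16 negatives), mimicking the
# positive/negative imbalance the authors identify as the cause of
# low sensitivity. These numbers are illustrative only.
y_true = [1, 1, 1, 1] + [0] * 16
y_pred = [1, 0, 0, 0] + [0] * 15 + [1]
sens, spec, acc = binary_metrics(y_true, y_pred)
```

With these made-up labels, accuracy is 80% even though sensitivity is only 25%, reproducing the pattern reported above.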

Keywords: decision support, computed tomography, coronary artery, machine learning

Procedia PDF Downloads 229
771 Rehabilitation Team after Brain Damages as Complex System Integrating Consciousness

Authors: Olga Maksakova

Abstract:

Work with unconscious patients after acute brain damage requires, besides the special knowledge and practical skills of all the participants, a very specific organization. Much has been said about the team approach in neurorehabilitation, usually for the outpatient mode, where rehabilitologists deal with fixed patient problems or deficits (motion, speech, cognitive, or emotional disorders). There, team-building follows the superficial paradigm of management psychology, and a linear mode of teamwork fits the causal relationships involved. Cases with deeply altered states of consciousness (vegetative states, coma, and confusion) require a non-linear mode of teamwork: recovery of consciousness might not be the goal, due to the uncertainty of the phenomenon. The rehabilitation team, as a semi-open complex system, includes the patient as a part. The patient's response pattern is formed not only by brain deficits but also by the questions-stimuli, the context, and the inquiring person. Teamwork becomes a source of phenomenological knowledge of the patient's processes as the third-person approach is replaced with second- and then first-person approaches. Here lies a chance for real-time change. The patient's contacts with his own body and with outward things create a foundation for the restoration of consciousness. The most important condition is systematic feedback to any minimal movement or vegetative signal of the patient. Up to now, recovery work with the most severely affected contingent has been carried out in the mode of passive physical interventions, whereas an effective rehabilitation team should include specially trained psychologists and psychotherapists. It is they who are able to create a network of feedbacks with the patient, and the inter-professional feedbacks that build up the team. The characteristics of the ‘Team-Patient’ system (TPS) are energy, entropy, and complexity.
Impairment of consciousness, as the absence of linear contact, appears together with a loss of essential functions (low energy), vegetative-visceral fits (excessive energy and low order), motor agitation (excessive energy and excessive order), etc. Teamwork techniques differ across these cases so as to optimize the condition of the system. Directed regulation of the system's complexity is one of the recovery tools, and different signs of awareness appear as a result of the system's self-organization. Joint meetings are an important part of teamwork: regular or event-related discussions form the language of inter-professional communication, as well as a shared mental model of the patient. Analysis of the complex communication process in the TPS may be useful for the creation of a general theory of consciousness.

Keywords: rehabilitation team, urgent rehabilitation, severe brain damage, consciousness disorders, complex system theory

Procedia PDF Downloads 147
770 The Current Home Hemodialysis Practices and Patients’ Safety Related Factors: A Case Study from Germany

Authors: Ilyas Khan, Liliane Pintelon, Harry Martin, Michael Shömig

Abstract:

The increasing costs of healthcare on the one hand, and the rise in the aging population and associated chronic diseases on the other, are putting an increasing burden on the current healthcare systems of many Western countries. For instance, chronic kidney disease (CKD) is a common disease, and in Europe the cost of renal replacement therapy (RRT) is a very significant share of the total healthcare cost. However, recent advancements in healthcare technology provide the opportunity to treat patients at home, in their own comfort. Home healthcare evidently offers numerous advantages, notably low costs and high patient quality of life. Despite these advantages, the uptake of home hemodialysis (HHD) therapy is still low, in particular in Germany. Many factors account for the low uptake of HHD; this paper, however, focuses on the patient-safety-related factors of current HHD practices in Germany. The aim of this paper is to analyze the current HHD practices in Germany and to identify risk-related factors, if any exist. A case study has been conducted in a dialysis organization that consists of four dialysis centers in the south of Germany. In total, these dialysis centers have 350 chronic dialysis patients, of whom four are on HHD. The centers have 126 staff, including six nephrologists and 120 other staff, i.e., nurses and administration. The results of the study revealed several risk-related factors. Most importantly, these centers do not offer allied health services at the pre-dialysis stage, and the HHD training did not have an established curriculum (the first version has only recently been developed). Only a soft copy of the machine manual is offered to patients. Surprisingly, the management was not aware of any available standard for home assessment and installation. The home assessment is done by a third party (the machine and equipment provider), which may not consider the hygienic quality of the patient's home.
The type of machine provided to patients at home is the same as the one in the center. That model may not be suitable for home use because of its size and complexity, even though portable hemodialysis machines specially designed for home use, such as the NxStage series, are available on the market. Besides the type of machine, no assistance is offered for space management at home, in particular for placing the machine. Moreover, the centers do not offer remote assistance to patients and their carers at home, although telephonic assistance is available. Furthermore, no alternative is offered if a carer is not available. In addition, the centers are short of medical staff, including nephrologists and renal nurses.

Keywords: home hemodialysis, home hemodialysis practices, patients’ related risks in the current home hemodialysis practices, patient safety in home hemodialysis

Procedia PDF Downloads 119
769 Advanced Palliative Aquatics Care Multi-Device AuBento for Symptom and Pain Management by Sensorial Integration and Electromagnetic Fields: A Preliminary Design Study

Authors: J. F. Pollo Gaspary, F. Peron Gaspary, E. M. Simão, R. Concatto Beltrame, G. Orengo de Oliveira, M. S. Ristow Ferreira, J.C. Mairesse Siluk, I. F. Minello, F. dos Santos de Oliveira

Abstract:

Background: Although palliative care policies and services have been developed, research in this area continues to lag. An integrated model of palliative care has been suggested, which includes complementary and alternative services aimed at improving the well-being of patients and their families. The palliative aquatics care multi-device (AuBento) uses several electromagnetic techniques to decrease pain and promote well-being through relaxation and interaction among patients, specialists, and family members. Aim: The scope of this paper is to present a preliminary design study of a device capable of exploring the various existing theories on the biomedical application of magnetic fields. This is achieved by standardizing clinical data collection with sensory integration and adding new therapeutic options, developing an advanced palliative aquatics care that innovates in symptom and pain management. Methods: The research methodology was based on the Work Package methodology for project development, separating the activities into seven different Work Packages. The theoretical basis was established through an integrative literature review according to the specific objectives of each Work Package; this provided a broad analysis which, together with the multiplicity of proposals and the interdisciplinarity of the research team, rendered the complex concepts of the biomedical application of magnetic fields for palliative care consistent and understandable. Results: The AuBento ambience was conceived with restricted electromagnetic exposure (avoiding data collection bias) and sensory integration (allowing relaxation associated with hydrotherapy, music therapy, and chromotherapy, or floating-tank-like use). The device has a multipurpose configuration enabling classic or exploratory options for the biomedical application of magnetic fields at the researcher's discretion.
Conclusions: Several patients in diverse therapeutic contexts may benefit from the use of magnetic fields or fluids, thus justifying the stimulus for clinical research in this area. A device operating in controlled and multipurpose environments may help standardize research and explore new theories. Future research may demonstrate the possible benefits of the aquatics care multi-device AuBento in improving well-being and symptom control in palliative care patients and their families.

Keywords: advanced palliative aquatics care, magnetic field therapy, medical device, research design

Procedia PDF Downloads 128
768 Role of Artificial Intelligence in Nano Proteomics

Authors: Mehrnaz Mostafavi

Abstract:

Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach under real experimental conditions, resolving functionally similar proteins. The theoretical analysis, protein labeler program, finite-difference time-domain calculation of plasmonic fields, and simulation of nanopore-based optical sensing are detailed in the methods section.
The study anticipates further exploration of temporal distributions of protein translocation dwell-times and the impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.
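As an illustration of how identification from partial reads can work in principle, the sketch below matches a tri-color tag read against a toy reference proteome. The color assignment, the sequences, and the treatment of a missed tag as a single-position wildcard are all assumptions made for illustration; the paper's actual labeling chemistry and CNN-based readout are far more involved:

```python
import re

# Hypothetical three-color label assignment (illustrative only, not the
# paper's actual chemistry): one fluorophore class each for C, K and Y.
COLOR = {'C': 'R', 'K': 'G', 'Y': 'B'}

def color_signature(seq):
    """Reduce an amino-acid sequence to its ordered string of tag colors."""
    return ''.join(COLOR[aa] for aa in seq if aa in COLOR)

# Toy reference "proteome" of three short made-up sequences.
PROTEOME = {
    'protA': 'MKCYAKC',  # signature GRBGR
    'protB': 'MACYC',    # signature RBR
    'protC': 'MKKYA',    # signature GGB
}

def identify(read):
    """Match an observed color read against the reference signatures.

    A '?' marks a residue whose tag was missed (partial labeling),
    treated here as a single-position wildcard -- a simplification of
    the real readout, where a missed tag shortens the observed trace.
    """
    pattern = re.compile(read.replace('?', '.'))
    return sorted(name for name, seq in PROTEOME.items()
                  if pattern.fullmatch(color_signature(seq)))
```

Even with a missed middle tag, `identify('G?B')` still narrows the read to a single candidate in this toy proteome, which is the intuition behind ID from partial reads.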

Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence

Procedia PDF Downloads 103
767 The Two Question Challenge: Embedding the Serious Illness Conversation in Acute Care Workflows

Authors: D. M. Lewis, L. Frisby, U. Stead

Abstract:

Objective: Many patients receive invasive treatments in acute care, or die in hospital, without having had comprehensive goals-of-care conversations. Some of these treatments may not align with the patient’s wishes, may be futile, and may cause unnecessary suffering. While many staff recognize the benefits of engaging patients and families in Serious Illness Conversations (a goals-of-care framework developed by Ariadne Labs in Boston), few staff feel confident and/or competent in having these conversations in acute care. Another barrier to having these conversations may be their lack of incorporation into the current workflow. An educational exercise, titled the Two Question Challenge, was initiated on four medical units across two Vancouver Coastal Health (VCH) hospitals in an attempt to engage the entire interdisciplinary team in asking patients and families questions around goals of care, and to improve the documentation of these expressed wishes and preferences. Methods: Four acute care units across two separate hospitals participated in the Two Question Challenge. On each unit, over the course of two eight-hour shifts, all members of the interdisciplinary team were asked to select at least two questions from a selection of nine goals-of-care questions. They were asked to pose these questions to a patient or family member during their shift and then to document the conversations in a centralized Advance Care Planning/Goals of Care discussion record in the patient’s chart. A visual representation of conversation outcomes was created to demonstrate to staff and patients the breadth of conversations that took place throughout the challenge. Staff and patients were interviewed about their experiences. Two palliative approach leads remained present on the units throughout the challenge to support, guide, or role-model these conversations.
Results: Across four acute care medical units, 47 interdisciplinary staff participated in the Two Question Challenge, including nursing, allied health, and a physician. A total of 88 goals-of-care questions were asked of patients or their families, and 50 newly documented goals-of-care conversations were charted. Two code statuses were changed as a result of the conversations. Patients voiced an appreciation for these conversations, and staff were able to successfully incorporate the questions into their daily care. Conclusion: The Two Question Challenge proved to be an effective way of having teams explore the goals of care of patients and families in an acute care setting. Staff felt that they gained confidence and competence. Both staff and patients found these conversations meaningful and impactful, and felt they were notably different from their usual interactions. Documentation of these conversations in a centralized location, easily accessible to all care providers, increased significantly. Application of the Two Question Challenge in non-medical units or other care settings, such as long-term care facilities or community health units, should be explored in the future.

Keywords: advance care planning, goals of care, interdisciplinary, palliative approach, serious illness conversations

Procedia PDF Downloads 102
766 Through Additive Manufacturing: A New Perspective for the Mass Production of Made in Italy Products

Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola

Abstract:

Recent evolutions in innovation processes and in the intrinsic tendencies of the product development process lead to new considerations on the design flow. The instability and complexity that describe contemporary life define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of additive manufacturing, together with IoT and AI technologies, continuously confronts us with new paradigms regarding design as a social activity. From the point of view of application, these technologies raise a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of some provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. The contribution investigates this relationship starting from the state of the art of the Made in Italy model: the best-known fields of application are described, followed by a focus on specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could absorb many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values.
The trajectory described thus becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact carries a signature that defines it in all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The result envisaged indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as valid integrated tools in close relationship with the design culture.

Keywords: decision making, design heuristics, product design, product design process, design paradigms

Procedia PDF Downloads 119
765 Childhood Adversity and Delinquency in Youth: Self-Esteem and Depression as Mediators

Authors: Yuhui Liu, Lydia Speyer, Jasmin Wertz, Ingrid Obsuth

Abstract:

Childhood adversities refer to situations in which a child's basic needs for safety and support are compromised, leading to substantial disruptions in their emotional, cognitive, social, or neurobiological development. Given the prevalence of adversities (8%-39%), their impact on developmental outcomes is difficult to avoid completely. Delinquency is an important consequence of childhood adversities, given its potential to cause violence and other forms of victimisation, affecting victims, delinquents, their families, and society as a whole. Studying mediators helps explain the link between childhood adversity and delinquency, which in turn aids the design of effective intervention programs that target explanatory variables to disrupt the path and mitigate the effects of childhood adversities on delinquency. The Dimensional Model of Adversity and Psychopathology suggests that threat-based adversities influence outcomes through emotion processing, while deprivation-based adversities do so through cognitive mechanisms. Thus, it is essential to consider a wide range of threat-based and deprivation-based adversities and their co-occurrence, and their associations with delinquency through cognitive and emotional mechanisms. This study employs the Millennium Cohort Study, a nationally representative sample tracking the development of approximately 19,000 individuals born across England, Scotland, Wales, and Northern Ireland. Parallel mediation models compare the mediating roles of self-esteem (cognitive) and depression (affective) in the associations between childhood adversities and delinquency. Eleven types of childhood adversities were assessed both individually and through latent class analysis, considering adversity experiences from birth to early adolescence. This approach aimed to capture how threat-based, deprivation-based, or combined threat- and deprivation-based adversities are associated with delinquency.
Eight latent classes were identified: three classes (low adversity, especially direct and indirect violence; low childhood and moderate adolescent adversities; and persistent poverty with declining bullying victimisation) were negatively associated with delinquency. In contrast, three classes (high parental alcohol misuse; overall high adversities, especially household instability; and high adversity) were positively associated with delinquency. When mediators were included, all classes showed a significant association with delinquency through depression, but not through self-esteem. Among the eleven single adversities, seven were positively associated with delinquency, with five linked through depression and none through self-esteem. The results underscore the importance of affective variables, not just for threat-based but also for deprivation-based adversities. Academically, this suggests exploring other mechanisms linking adversities and delinquency, since some adversities are linked through neither depression nor self-esteem. Clinically, intervention programs should focus on affective variables such as depression to mitigate the effects of childhood adversities on delinquency.
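The parallel mediation logic described in this abstract can be sketched numerically. The following toy example (hypothetical coefficients, variable names, and synthetic data, not the study's dataset) estimates the indirect effect a·b via two ordinary least-squares regressions:

```python
import numpy as np

# Sketch of a single-mediator path model on synthetic data.
# True paths (assumed for illustration): a = 0.5, b = 0.7, direct c' = 0.2.
rng = np.random.default_rng(0)
n = 10_000
adversity = rng.integers(0, 2, n).astype(float)          # X: exposure
depression = 0.5 * adversity + rng.normal(0, 1, n)       # M: mediator
delinquency = (0.7 * depression + 0.2 * adversity
               + rng.normal(0, 1, n))                    # Y: outcome

def ols(y, *xs):
    """Least-squares fit with intercept; returns slope coefficients only."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols(depression, adversity)[0]                        # X -> M path
b, c_prime = ols(delinquency, depression, adversity)     # M -> Y and direct
indirect = a * b                                         # mediated effect
print(f"a={a:.2f}, b={b:.2f}, direct={c_prime:.2f}, indirect={indirect:.2f}")
```

With a large sample the recovered indirect effect lands near the assumed 0.5 × 0.7 = 0.35; a parallel mediation model simply adds a second mediator equation (e.g. self-esteem) alongside depression.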

Keywords: childhood adversity, delinquency, depression, self-esteem

Procedia PDF Downloads 34
764 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach

Authors: Jared Beard, Ali Baheri

Abstract:

As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated with autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, having high dimensional state and action spaces. This gives rise to two problems. One is that analytic solutions may not be possible. The other is that in simulation-based approaches, searching the entirety of the problem space could be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system. The premise is that a learned model can be used to help find new failure scenarios, making better use of simulations. Despite these strengths, AST struggles to find particularly sparse failures and can be inclined to find solutions similar to those found previously. Multi-fidelity learning can help alleviate this overuse of information: information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using “knows what it knows” (KWIK) reinforcement learners to minimize the number of samples drawn in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes.
Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, demonstrating the utility of KWIK learners in an AST framework. The next step is implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, the distinct failure modes found, and the relative effect of learning over a number of trials.

Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification

Procedia PDF Downloads 158
763 No-Par Shares Working in European LLCs

Authors: Agnieszka P. Regiec

Abstract:

Capital companies are based on monetary capital. In the traditional model, the capital is the sum of the nominal values of all shares issued. For several years, European limited liability company (LLC) regulations have been leaning towards liberalization of the capital structure in order to provide a higher degree of autonomy in intra-corporate governance. Reforms were based primarily on the legal system of the USA, where the tradition of no-par shares is well established; the American legal system is therefore chosen as the point of reference. The regulations of Germany, Great Britain, France, the Netherlands, Finland, Poland and the USA are taken into consideration. The analysis of share capital is important for the development of legal science, not only because the capital structure of a corporation has a significant impact on shareholders’ rights, but also because it shapes the relationship between the company and its creditors. A multi-level comparative approach to the problem allows a wide range of possible outcomes of the reforms to be presented. The dogmatic method was applied; the analysis was based on statutes, secondary sources and judicial decisions. Both the substantive and the procedural aspects of the capital structure were considered. In Germany, as a result of the regulatory competition typical for the EU, the structure of LLCs was reshaped. A new LLC, the Unternehmergesellschaft, which does not require a minimum share capital, was introduced, and the minimum share capital for the Gesellschaft mit beschränkter Haftung was lowered from 25,000 to 10,000 euro. In France the capital structure of corporations was also altered. In 2003, the minimum share capital of the société à responsabilité limitée (S.A.R.L.) was repealed. In 2009, the minimum share capital of the société par actions simplifiée – the “simple” version of the S.A.R.L. –
was also changed: there is no minimum share capital required by statute. The company must, however, indicate a share capital, without the legislator imposing a minimum value for it. In the Netherlands, the reform of the Besloten Vennootschap met beperkte aansprakelijkheid (B.V.) was planned with the following change: repeal of the minimum share capital as the answer to the need for a higher degree of shareholder autonomy. It preserved, however, shares with nominal value. In Finland the novelization of the yksityinen osakeyhtiö took place in 2006, and as a result no-par shares were introduced. Although the statute allows shares without face value, it still requires a minimum share capital of 2,500 euro. In Poland, a proposal for the restructuring of the capital structure of the LLC has been introduced. The proposal provides, among other things, for a reduction of the minimum capital to 1 PLN or its complete abolition, allowing no-par shares to be issued. In conclusion: American solutions, in particular the balance sheet test and the solvency test, provide better protection for creditors; European no-par shares are not the same as American ones; and the existence of share capital in Poland remains crucial.

Keywords: balance sheet test, limited liability company, nominal value of shares, no-par shares, share capital, solvency test

Procedia PDF Downloads 185
762 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution

Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino

Abstract:

This paper presents a design methodology in which stakeholders are assisted in the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder’s perceived utility. The outcome is fewer design iterations needed for convergence while obtaining higher solution effectiveness. During the early stage of a space project, not only knowledge about the system but also the outcomes of decisions are often uncertain. The scenario is exacerbated by the fact that decisions taken at this stage carry delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in its initial definition. This can be obtained through a robust generation and exploration of design alternatives. The process must consider that design usually involves various individuals whose decisions affect one another, so effective coordination among these decision-makers is critical: finding a mutually agreed solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the process of raising the mission’s concept maturity level. This acceleration is obtained through a guided exploration of the negotiation space, which involves autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, informed by game theory and multi-attribute utility theory. In particular, game theory models the negotiation process to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process to search efficiently and rapidly for the Pareto equilibria among stakeholders.
Finally, the concept of utility constitutes the mechanism that bridges the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders’ needs and guarantees the effectiveness of the selected mission concept thanks to its robustness to change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
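The utility-based comparison of alternatives can be illustrated with a minimal sketch (the utility scores and the sum-of-utilities welfare function are illustrative assumptions, not the authors' framework):

```python
import numpy as np

# Each stakeholder scores each design alternative with a normalized
# utility in [0, 1]. Group social welfare is taken here as the sum of
# utilities; Pareto-dominated alternatives are filtered out.
utilities = np.array([          # rows: alternatives, cols: stakeholders
    [0.9, 0.2, 0.4],
    [0.6, 0.7, 0.6],
    [0.5, 0.6, 0.5],            # dominated by alternative index 1
])

welfare = utilities.sum(axis=1)

def pareto_front(u):
    """Indices of alternatives not dominated on every stakeholder utility."""
    keep = []
    for i, row in enumerate(u):
        dominated = any(np.all(other >= row) and np.any(other > row)
                        for j, other in enumerate(u) if j != i)
        if not dominated:
            keep.append(i)
    return keep

best = int(np.argmax(welfare))   # welfare-maximizing alternative
front = pareto_front(utilities)  # candidates for negotiation
```

In the toy numbers above, alternative 1 maximizes group welfare while alternatives 0 and 1 form the Pareto set; a game-theoretic negotiation, as the abstract describes, would select among such non-dominated candidates.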

Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization

Procedia PDF Downloads 137
761 Preschoolers’ Selective Trust in Moral Promises

Authors: Yuanxia Zheng, Min Zhong, Cong Xin, Guoxiong Liu, Liqi Zhu

Abstract:

Trust is a critical foundation of social interaction and development, playing a significant role in the physical and mental well-being of children, as well as their social participation. Previous research has demonstrated that young children do not trust others blindly but make selective trust judgments based on available information. According to Mayer et al.’s model of trust, characteristics of speakers, including ability, benevolence, and integrity, can influence children’s trust judgments. While previous research has focused primarily on the effects of ability and benevolence, relatively little attention has been paid to integrity, which refers to individuals’ adherence to promises, fairness, and justice. This study focuses specifically on how keeping or breaking promises affects young children’s trust judgments. The paradigm of selective trust was employed in two experiments. A sample size of 100 children was required for an effect size of w = 0.30, α = 0.05, 1-β = 0.85, using G*Power 3.1. The study employed a 2×2 within-subjects design to investigate the effects of the moral valence of promises (moral vs. immoral) and the fulfilment of promises (kept vs. broken) on children’s trust judgments (divided into declarative and promising contexts). Experiment 1 adapted binary-choice paradigms, presenting 118 preschoolers (62 girls; mean age = 4.99 years, SD = 0.78) with four conflict scenarios involving the keeping or breaking of moral/immoral promises, in order to investigate children’s trust judgments. Experiment 2 utilized single-choice paradigms, in which 112 preschoolers (57 girls; mean age = 4.94 years, SD = 0.80) were presented with four stories to examine their level of trust.
The results of Experiment 1 showed that preschoolers selectively trusted both promisors who kept moral promises and those who broke immoral promises, as well as their assertions and new promises. Additionally, 5.5-6.5-year-old children were more likely than 3.5-4.5-year-old children to trust promisors who kept moral promises and those who broke immoral promises. Moreover, preschoolers were more likely to make accurate trust judgments towards promisors who kept moral promises than towards those who broke immoral promises. The results of Experiment 2 showed significant differences in preschoolers’ degree of trust: kept moral promise > broke immoral promise > broke moral promise ≈ kept immoral promise. This study is the first to investigate the development of trust judgment in moral promises among preschoolers aged 3.5-6.5. The results show that preschoolers can consider both the valence and the fulfilment of promises when making trust judgments. Furthermore, as preschoolers mature, they become more inclined to trust promisors who keep moral promises and those who break immoral promises. Additionally, the study reveals that preschoolers have the highest level of trust in promisors who kept moral promises, followed by those who broke immoral promises; promisors who broke moral promises and those who kept immoral promises are trusted the least. These findings contribute valuable insights to our understanding of moral promises and trust judgment.
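The reported sample-size calculation can be reproduced approximately with statsmodels in place of G*Power (a sketch; `n_bins=2`, i.e. one degree of freedom for the chi-square test, is an assumption not stated in the abstract):

```python
# Solve for the required N in a chi-square power analysis with
# effect size w = 0.30, alpha = 0.05, power (1 - beta) = 0.85.
from statsmodels.stats.power import GofChisquarePower

n = GofChisquarePower().solve_power(effect_size=0.30, alpha=0.05,
                                    power=0.85, n_bins=2)
print(round(n))  # close to the 100 participants reported above
```

The result rounds to roughly 100, matching the abstract's stated requirement.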

Keywords: promise, trust, moral judgement, preschoolers

Procedia PDF Downloads 55
760 The Beneficial Effects of Inhibition of Hepatic Adaptor Protein Phosphotyrosine Interacting with PH Domain and Leucine Zipper 2 on Glucose and Cholesterol Homeostasis

Authors: Xi Chen, King-Yip Cheng

Abstract:

Hypercholesterolemia, characterized by high low-density lipoprotein cholesterol (LDL-C), raises cardiovascular events in patients with type 2 diabetes (T2D). Although several drugs, such as statins and PCSK9 inhibitors, are available for the treatment of hypercholesterolemia, they exert detrimental effects on glucose metabolism and hence increase the risk of T2D. On the other hand, the drugs used to treat T2D have minimal effect on improving the lipid profile. Therefore, there is an urgent need to develop treatments that can simultaneously improve glucose and lipid homeostasis. Adaptor protein phosphotyrosine interacting with PH domain and leucine zipper 2 (APPL2) causes insulin resistance in the liver and skeletal muscle by inhibiting insulin and adiponectin actions in animal models. Single-nucleotide polymorphisms in the APPL2 gene have been associated with LDL-C, non-alcoholic fatty liver disease, and coronary artery disease in humans. The aim of this project is to investigate whether an APPL2 antisense oligonucleotide (ASO) can alleviate diet-induced T2D and hypercholesterolemia. A high-fat diet (HFD) was used to induce obesity and insulin resistance in mice. GalNAc-conjugated APPL2 ASO (GalNAc-APPL2-ASO) was used to silence hepatic APPL2 expression selectively in C57BL/6J mice. Glucose, lipid, and energy metabolism were monitored. Immunoblotting and quantitative PCR analysis showed that GalNAc-APPL2-ASO treatment reduced APPL2 expression selectively in the liver but not in other tissues, such as adipose tissue, kidney, muscle, and heart. Glucose tolerance and insulin sensitivity tests revealed that GalNAc-APPL2-ASO progressively improved glucose tolerance and insulin sensitivity. Blood chemistry analysis revealed that mice treated with GalNAc-APPL2-ASO had significantly lower circulating levels of total cholesterol and LDL cholesterol.
However, there was no difference in circulating levels of high-density lipoprotein (HDL) cholesterol, triglyceride, or free fatty acid between mice treated with GalNAc-APPL2-ASO and GalNAc-Control-ASO. No obvious effect on food intake, body weight, or liver injury markers was found after GalNAc-APPL2-ASO treatment, supporting its tolerability and safety. We showed that selectively silencing hepatic APPL2 alleviated insulin resistance and hypercholesterolemia and improved energy metabolism in a diet-induced obese mouse model, indicating APPL2 as a therapeutic target for metabolic diseases.

Keywords: APPL2, antisense oligonucleotide, hypercholesterolemia, type 2 diabetes

Procedia PDF Downloads 68
759 Bivariate Analyses of Factors That May Influence HIV Testing among Women Living in the Democratic Republic of the Congo

Authors: Danielle A. Walker, Kyle L. Johnson, Patrick J. Fox, Jacen S. Moore

Abstract:

The HIV Continuum of Care has become a universal model providing context for the process of HIV testing, linkage to care, treatment, and viral suppression. HIV testing is the first step towards community viral suppression. Countries with a lower socioeconomic status experience the lowest rates of testing and access to care. The Democratic Republic of the Congo is located in the heart of sub-Saharan Africa, where testing and access to care are low and women experience higher HIV prevalence than men. In the Democratic Republic of the Congo there is only a 21.6% HIV testing rate among women. Because a critical gap exists between a woman’s risk of contracting HIV and the decision to be tested, this study was conducted to better understand the relationships between factors that could influence HIV testing among women. The datasets analyzed were from the 2013-14 Democratic Republic of the Congo Demographic and Health Survey Program. The data were subset to women aged 18-49 years, all missing cases were removed, and one variable was recoded, yielding a total analyzed sample of 14,982 women. The results showed no apparent difference in HIV testing by mean age. Of the 11 religious categories (Catholic, Protestant, Armee de salut, Kimbanguiste, Other Christians, Muslim, Bundu dia kongo, Vuvamu, Animist, no religion, and other), those who identified as Other Christians had the highest testing rate, 25.9%, while those who identified as Vuvamu had a 0% testing rate (p<0.001); there was thus a significant difference in testing by religion. Only 0.7% of women surveyed identified as having no religious affiliation. This suggests that partnerships with key community and religious leaders could be a tool to increase testing. Over 60% of women who had never been tested for HIV did not know where to be tested, highlighting the need to educate communities on where testing facilities are located.
Almost 80% of women who believed HIV could be transmitted by supernatural means and/or witchcraft had never been tested (p=0.08). Cultural beliefs could influence risk perception and testing decisions; consequently, misconceptions need to be considered when implementing HIV testing and prevention programs. Location by province, years of education, and wealth index were also analyzed to control for socioeconomic status. Kinshasa had the highest testing rate, at 54.2% of women living there, while both Equateur and Kasai-Occidental had testing rates below 10% (p<0.001). As education increased up to 12 years, testing increased (p<0.001). Women in the highest quintile of the wealth index had a 56.1% testing rate, and women in the lowest quintile had a 6.5% testing rate (p<0.001). The study concludes that further research is needed to identify culturally competent methods to increase HIV education programs, build partnerships with key community leaders, and improve knowledge of access to care.
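The bivariate tests reported above are of the kind sketched below: a chi-square test of independence on a contingency table of tested vs. never-tested women across groups. The counts are hypothetical, chosen only to mirror the wealth-quintile rates (56.1% vs. 6.5%) in groups of 1,000:

```python
# Chi-square test of independence on hypothetical 2x2 counts.
from scipy.stats import chi2_contingency

table = [[561, 439],    # highest-quintile group: tested, never tested
         [ 65, 935]]    # lowest-quintile group (counts are illustrative)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")
```

With a gap this large the test is highly significant (p far below 0.001), consistent with the pattern the abstract reports.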

Keywords: Democratic Republic of the Congo, cultural beliefs, education, HIV testing

Procedia PDF Downloads 287
758 Personality Based Tailored Learning Paths Using Cluster Analysis Methods: Increasing Students' Satisfaction in Online Courses

Authors: Orit Baruth, Anat Cohen

Abstract:

Online courses have become common in many learning programs and learning environments, particularly in higher education, and the social distancing enforced in response to the COVID-19 pandemic has increased demand for them. Yet, despite their frequent use, online learning is not free of limitations and may not suit all learners. The growth of online learning alongside learners' diversity therefore raises the question: does online learning, as currently offered, meet the needs of each learner? Fortunately, today's technology makes it possible to produce tailored learning platforms, namely personalization. Personality influences a learner's satisfaction and therefore has a significant impact on learning effectiveness. A better understanding of personality can lead to a greater appreciation of learning needs, as well as assist educators in ensuring that an optimal learning environment is provided. In the context of online learning and personality, research on learning design according to personality traits is lacking. This study explores the relations between personality traits (using the 'Big Five' model) and students' satisfaction with five techno-pedagogical learning solutions (TPLS): discussion groups, digital books, online assignments, surveys/polls, and media, in order to tailor the online learning process to students' satisfaction. Satisfaction level and personality were measured for 108 students who participated in a fully online course at a large, accredited university. Cluster analysis methods (k-means) were applied to identify learner clusters according to their personality traits, and correlation analysis was performed to examine the relations between the obtained clusters and satisfaction with the offered TPLS. Findings suggest that learners associated with the 'Neurotic' cluster showed low satisfaction with all TPLS compared to learners associated with the 'Non-neurotics' cluster.
Learners associated with the 'Consciences' cluster were satisfied with all TPLS except discussion groups, and those in the 'Open-Extroverts' cluster were satisfied with assignments and media. All clusters except 'Neurotic' were highly satisfied with the online course in general. According to the findings, dividing learners into four clusters based on personality traits may help define tailored learning paths for them, combining various TPLS to increase their satisfaction. As personality comprises a set of traits, several TPLS may be offered in each learning path. For the neurotics, however, an extended selection may be more suitable; alternatively, they can be offered the TPLS they dislike least. The findings clearly indicate that personality plays a significant role in a learner's satisfaction level; consequently, personality traits should be considered when designing personalized learning activities. The current research seeks to bridge the theoretical gap in this specific research area. Establishing the assumption that different personalities need different learning solutions may contribute towards a better design of online courses, leaving no learner behind, whether they like online learning or not.
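The clustering step described above can be sketched as follows (synthetic Big-Five scores and k = 4 clusters are assumptions for illustration; the study clustered 108 students on their measured trait scores):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic standardized Big-Five scores for 108 students.
rng = np.random.default_rng(42)
big5 = rng.normal(0, 1, size=(108, 5))   # columns: O, C, E, A, N

# k-means with 4 clusters, as in the four personality clusters reported.
km = KMeans(n_clusters=4, n_init=10, random_state=42).fit(big5)
labels = km.labels_                      # cluster assignment per student
print(np.bincount(labels))               # cluster sizes
```

In the study, each resulting cluster's mean trait profile (e.g. high Neuroticism) would then be correlated with satisfaction scores for each TPLS.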

Keywords: online learning, personality traits, personalization, techno-pedagogical learning solutions

Procedia PDF Downloads 105
757 Working From Home: On the Relationship Between Place Attachment to Work Place, Extraversion and Segmentation Preference to Burnout

Authors: Diamant Irene, Shklarnik Batya

Abstract:

In addition to its widespread effects on health and economic issues, Covid-19 shook the world of work and employment. Among the prominent changes during the pandemic is the trend of working from home, completely or partially, as part of social distancing. In fact, these changes accelerated an existing tendency towards work flexibility already underway before the pandemic. Technology and advanced means of communication have led to a re-assessment of the “place of work” as a physical space in which work takes place. Today workers can remotely carry out meetings, manage projects, and work in groups, and various studies indicate that this type of work has no adverse effect on productivity. However, from the worker’s perspective, despite the numerous advantages associated with working from home, such as convenience, flexibility, and autonomy, various drawbacks have been identified, such as loneliness, reduced commitment, and erosion of the home-work boundary, all of which are risk factors for reduced quality of life and burnout. Thus, a real need has arisen to explore differences in work-from-home experiences and to understand the relationship between psychological characteristics and the prevalence of burnout. This understanding may be of significant value to organizations considering a future hybrid work model combining in-office and remote working. Based on Hobfoll’s Conservation of Resources theory, we hypothesized that burnout would mainly be found among workers whose physical remoteness from the workplace threatens or hinders their ability to retain significant individual resources. In the present study, we compared fully remote and partially remote (hybrid) workers, and we examined psychological characteristics and their connection to the formation of burnout.
Based on the conceptualization of Place Attachment as the cognitive-emotional bond of an individual to a meaningful place and the need to maintain closeness to it, we assumed that individuals characterized by Place Attachment to the workplace would suffer more from burnout when working from home. We also assumed that extraverted individuals, characterized by the need for social interaction at the workplace, and individuals with a segmentation preference – a need for separation between different life domains – would suffer more from burnout, especially among fully remote workers relative to partially remote workers. 194 workers, of whom 111 worked fully from home and 83 worked partially from home, aged 19-53 and drawn from different sectors, were tested using an online questionnaire distributed through social media. The results of the study supported our assumptions. The repercussions of these findings for the future occupational experience are discussed, with an emphasis on suitable occupational adjustment according to the psychological characteristics and needs of workers.

Keywords: working from home, burnout, place attachment, extraversion, segmentation preference, Covid-19

Procedia PDF Downloads 191
756 Analytical Study of the Structural Response to Near-Field Earthquakes

Authors: Isidro Perez, Maryam Nazari

Abstract:

Numerous earthquakes across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe, and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures, to further reduce damage and costs and ultimately to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. Earthquakes that occur near fault lines are categorized as near-field earthquakes; in contrast, a far-field earthquake occurs when the region is further away from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response than a far-field earthquake ground motion. These larger responses may cause serious structural damage, posing a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in current building design specifications. For example, in ASCE 7-10 the design response spectrum is based mostly on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect that near-field earthquakes have on the response of structures. To examine this topic fully, a structure was designed following current seismic building design specifications, e.g. ASCE 7-10 and ACI 318-14, and analytically modeled using the SAP2000 software. Next, utilizing the FEMA P695 report, several near-field and far-field earthquake records were selected, and the near-field records were scaled to represent design-level ground motions. The prototype structural model created in SAP2000 was then subjected to the scaled ground motions.
A linear time-history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to aid in properly defining the hinge formation in the structure for the nonlinear time-history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions; therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to near-field ground motions.
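A linear time-history analysis of the kind mentioned above can be illustrated on a single-degree-of-freedom oscillator using the Newmark average-acceleration method (a toy sketch with illustrative parameters, not the SAP2000 model used in the study). For a suddenly applied constant load on an undamped oscillator, the dynamic peak should approach twice the static deflection:

```python
import numpy as np

# SDOF oscillator: unit mass, undamped, natural period T = 1 s.
m, c, k = 1.0, 0.0, 4 * np.pi**2
p0 = 1.0                         # suddenly applied constant force
beta, gamma = 0.25, 0.5          # Newmark average-acceleration constants
dt, nsteps = 0.005, 400          # 2 s of response

u, v = 0.0, 0.0
a = (p0 - c * v - k * u) / m     # initial acceleration from equilibrium
peak = 0.0
for _ in range(nsteps):
    # Predictor terms, then solve the discrete equation of motion for a.
    u_pred = u + dt * v + dt**2 * (0.5 - beta) * a
    v_pred = v + dt * (1.0 - gamma) * a
    a = (p0 - c * v_pred - k * u_pred) / (m + gamma * dt * c + beta * dt**2 * k)
    u = u_pred + beta * dt**2 * a
    v = v_pred + gamma * dt * a
    peak = max(peak, abs(u))

static = p0 / k
print(f"dynamic amplification = {peak / static:.2f}")  # theory: 2.0
```

The same integrator, driven by a recorded (or pulse-extracted) ground-motion history instead of a step force, yields the story-drift and acceleration response quantities compared in the study.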

Keywords: near-field, pulse, pushover, time-history

Procedia PDF Downloads 147
755 Keeping Education Non-Confessional While Teaching Children about Religion

Authors: Tünde Puskás, Anita Andersson

Abstract:

This study is part of a research project on whether religion is considered part of Swedish cultural heritage in Swedish preschools. Our aim in this paper is to explore how teachers in a Swedish preschool with a religious profile balance between keeping the education non-confessional and at the same time teaching children about a particular tradition with religious roots, Easter. The point of departure for the theoretical frame of our study is that practical considerations in pedagogical situations are inherently dilemmatic. The dilemmas of interest for our study evolve around formalized, intellectual ideologies, such as multiculturalism and secularism, that have an impact on everyday practice. Educational dilemmas may also arise at the intersections of the formalized ideology of non-confessionalism, prescribed in policy documents, and common-sense understandings of what is included in Swedish cultural heritage. In this paper, religion is treated as a human worldview that, similarly to secular ideologies, can be understood as a system of thought. We make use of Ninian Smart's theoretical framework, according to which, in the modern Western world, religious and secular ideologies, as human worldviews, can be studied within the same analytical framework. In order to study the distinctive character of human worldviews, Smart introduced a multi-dimensional model within which the different dimensions interact with each other in various ways and to different degrees. The data for this paper are drawn from fieldwork carried out in 2015-2016 in the form of video ethnography. The empirical material chosen consists of a video recording of a specific activity during which the preschool group took part in an Easter play performed in the local church.
The analysis shows that the policy of non-confessionalism, together with the idea that teaching covering religious issues must be purely informational, leads in everyday practice to dilemmas about what is considered religious. At the same time, what the adults actually do with religion fulfills six of the seven dimensions common to religious traditions as outlined by Smart. We can also conclude from the analysis that whether it is religion or a cultural tradition that is taught through the performance the children watched in the church depends on how the concept of religion is defined. The analysis shows that the characters of the performance themselves understood religion as the doctrine of Jesus' resurrection from the dead. This narrow understanding of religion enabled them indirectly to teach about the traditions and narratives surrounding Easter while avoiding teaching religion as a belief system.

Keywords: non-confessional education, preschool, religion, tradition

Procedia PDF Downloads 159
754 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images

Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod

Abstract:

The major factors in radiotherapy for head and neck (HN) cancers include the patient’s anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause the treatment plan to deteriorate. Comparing measured transit EPID images to predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to some critical organ changes, as the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN IMRT (intensity-modulated radiation therapy) using a novel comparison method: organ-of-interest gamma analysis. This method is more sensitive to changes in specific organs. Five randomly selected replanned HN IMRT patients, whose tumour shrinkage and weight loss critically affected parotid size, were evaluated by transit dosimetry. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotids, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replan CT was quantified using gamma analysis with 3%, 3 mm criteria. Moreover, the gamma pass-rate was calculated only within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (± 17.2%) and 54.7% (± 21.5%), respectively. The gamma pass-rates for the other projected organs were greater than 80%. 
Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and with the rationale for replanning given by radiation oncologists. This showed that registration of the 3D-CBCT to the original CT alone does not capture the dosimetric impact of anatomical changes. Using transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing the suitability of a treatment plan.
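As a rough illustration of the underlying comparison, a global gamma analysis combines a dose-difference tolerance (3% of the maximum dose) and a distance-to-agreement tolerance (3 mm) into a single pass/fail metric per point. The 1D sketch below is a didactic simplification, not the clinical implementation used in the study; all function and variable names are our own.

```python
import math

def gamma_pass_rate(measured, predicted, positions, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1D global gamma analysis (3%/3mm by default).

    measured/predicted: dose values sampled at the same `positions` (mm).
    For each measured point, gamma is the minimum combined dose/distance
    metric over all predicted points; gamma <= 1 counts as a pass.
    """
    d_max = max(predicted)  # global normalization dose
    passes = 0
    for i, dm in enumerate(measured):
        gammas = []
        for j, dp in enumerate(predicted):
            dose_term = ((dm - dp) / (dose_tol * d_max)) ** 2
            dist_term = ((positions[i] - positions[j]) / dist_tol) ** 2
            gammas.append(math.sqrt(dose_term + dist_term))
        if min(gammas) <= 1.0:
            passes += 1
    return 100.0 * passes / len(measured)
```

An organ-of-interest variant would simply restrict `positions` to the pixels inside one projected structure before computing the pass-rate.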

Keywords: re-plan, anatomical change, transit electronic portal imaging device, EPID, head and neck

Procedia PDF Downloads 217
753 First-Trimester Screening of Preeclampsia in a Routine Care

Authors: Tamar Grdzelishvili, Zaza Sinauridze

Abstract:

Introduction: Preeclampsia is a complication of the second half of pregnancy characterized by high morbidity and multiorgan damage, and many complex pathogenic mechanisms are implicated in the disease (1). Preeclampsia is one of the leading causes of maternal mortality worldwide: about 100,000 women die of preeclampsia every year. It occurs in 3-14% of pregnant women (varying significantly with racial origin, ethnicity, and geographical region), in a mild form in 75% of cases and a severe form in 25%. In severe preeclampsia-eclampsia, perinatal mortality increases fivefold and stillbirth 9.6-fold. Considering that the only way to treat the disease is to end the pregnancy, timely diagnosis and prevention are essential. Identifying pregnant women at high risk of PE and providing prophylaxis would reduce the incidence of preterm PE. The first-trimester screening model developed by the Fetal Medicine Foundation (FMF), which uses Bayes' theorem to combine maternal characteristics and medical history with measurements of mean arterial pressure, uterine artery pulsatility index, and serum placental growth factor, has been proven effective and has superior screening performance to the traditional risk-factor-based approach for the prediction of PE (2). Methods: Retrospective single-center screening study. The study population consisted of women from the Tbilisi maternity hospital “Pineo medical ecosystem” who met the following criteria: they spoke Georgian, English, or Russian and agreed to participate in the study after discussing informed consent and answering questions. Prior to the study, informed consent forms approved by the Institutional Review Board were obtained from the study subjects. Early assessment of preeclampsia was performed between 11 and 13 weeks of pregnancy. 
The following were evaluated: anamnesis, Doppler ultrasound of the uterine arteries, mean arterial blood pressure, and a biochemical parameter, pregnancy-associated plasma protein A (PAPP-A). Individual risk assessment was performed with the Fast Screen 3.0 software (Thermo Fisher Scientific). Results: A total of 513 women were recruited; during the study, 51 women were diagnosed with preeclampsia (34.5% among pregnant women at high risk, 6.5% among those at low risk; P < 0.0001). Conclusions: First-trimester screening combining maternal factors with uterine artery Doppler, blood pressure, and pregnancy-associated plasma protein A is useful for predicting PE in a routine care setting. Larger patient studies are needed before final conclusions can be drawn. The research is still ongoing.
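The FMF model itself is a competing-risks model; as a simplified illustration of how Bayes' theorem combines a background risk with marker evidence, one can update the prior odds with per-marker likelihood ratios (assumed independent here). This sketch and its function names are illustrative only, not the Fast Screen 3.0 algorithm.

```python
def posterior_risk(prior_risk, likelihood_ratios):
    """Update a background (a-priori) risk with marker likelihood ratios
    via Bayes' theorem in odds form, assuming independent markers.

    A didactic sketch, not the FMF competing-risks model.
    """
    odds = prior_risk / (1.0 - prior_risk)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Example: a 5% background risk with two markers whose values are
# twice and three times as likely under PE as under no PE.
risk = posterior_risk(0.05, [2.0, 3.0])
```

With neutral markers (all likelihood ratios equal to 1) the posterior equals the prior, which is a useful sanity check for any such implementation.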

Keywords: first-trimester, preeclampsia, screening, pregnancy-associated plasma protein

Procedia PDF Downloads 77
752 Teaching of Entrepreneurship and Innovation in Brazilian Universities

Authors: Marcelo T. Okano, Oduvaldo Vendrametto, Osmildo S. Santos, Marcelo E. Fernandes, Heide Landi

Abstract:

Teaching of entrepreneurship and innovation in Brazilian universities has increased in recent years due to several factors, such as the emergence of disciplines like biotechnology, increased globalization, reduced basic funding, and new perspectives on the role of the university in the system of knowledge production. Innovation is increasingly seen as an evolutionary process that involves different institutional spheres or sectors of society. Entrepreneurship is a milestone on the road towards economic progress and makes a huge contribution to the quality and future hopes of a sector, an economy, or even a country. Entrepreneurship is as important in small and medium-sized enterprises (SMEs) and local markets as in large companies and national and international markets, and it is just as key a consideration for public as for private organizations. Entrepreneurship helps to encourage competition in an environment shaped by globalization. There is an increasing tendency for government policy to promote entrepreneurship for its apparent economic benefit. Accordingly, governments seek to employ entrepreneurship education as a means to stimulate increased levels of economic activity. Entrepreneurship education and training (EET) is growing rapidly in universities and colleges throughout the world, and governments are supporting it both directly and by funding major investments in advice provision to would-be entrepreneurs and existing small businesses. The Triple Helix of university-industry-government relations is compared with alternative models for explaining the current research system in its social contexts. Communications and negotiations between institutional partners generate an overlay that increasingly reorganizes the underlying arrangements. To achieve the objective of this research, a survey of the literature on entrepreneurship and innovation was conducted, followed by field research with 100 students of Fatec. 
To collect the data needed for analysis, we used exploratory research of a qualitative nature. We asked respondents to rate their degree of knowledge of ten topics related to entrepreneurship and innovation; responses were recorded on a 4-level Likert scale (none, small, medium, large). We can conclude that terms such as entrepreneurship and innovation are known by most students because the university propagates them across disciplines, lectures, and innovation institutes. More specific items, such as the Canvas and Design Thinking models, are unknown to most respondents. This highlights the importance of the university in teaching innovation and entrepreneurship and in transmitting this knowledge to students in order to equalize knowledge levels. As a future project, these items will be re-evaluated to create indicators for measuring knowledge level.

Keywords: Brazilian universities, entrepreneurship, innovation, globalization

Procedia PDF Downloads 508
751 Developing of Ecological Internal Insulation Composite Boards for Innovative Retrofitting of Heritage Buildings

Authors: J. N. Nackler, K. Saleh Pascha, W. Winter

Abstract:

WHISCERS™ (Whole House In-Situ Carbon and Energy Reduction Solution) is an innovative process for internal wall insulation (IWI) for the energy-efficient retrofitting of heritage buildings, which uses laser measuring to determine the dimensions of a room, off-site insulation board cutting, and rapid installation to complete the process. As part of a multinational investigation consortium, the Austrian partner adapted the WHISCERS system to the local conditions of Vienna, where most historical buildings have valuable stucco facades that preclude the application of external insulation. The Austrian project contribution addresses the replacement of commonly used extruded polystyrene foam (XPS) with renewable materials such as wood and wood products to develop a more sustainable IWI system. As the timber industry is a major industry in Austria, a new, more sustainable IWI solution could also open up new markets. The first step of the investigation was a Life Cycle Assessment (LCA) to compare the performance of wood fibre board as an insulation material with that of the commonly used XPS boards. One result was that the global warming potential (GWP) of wood fibre board, in carbon dioxide equivalents, is 15 times lower, whereas that of XPS is 72 times higher. The hygrothermal simulation program WUFI was used to evaluate and simulate heat and moisture transport in the multi-layer building components of the developed IWI solution. The simulations show that, under the examined boundary conditions for selected representative brickwork constructions, the proposed IWI is functional and usable without risk regarding vapour diffusion and liquid transport. In a further stage, three different solutions were developed and tested (1: glued/mortared; 2: with soft board, connected to the wall, with gypsum board as the top layer; 3: with soft board and clay board as the top layer). 
All three solutions present a flexible insulation layer of wood fibre towards the existing wall, thus compensating for irregularities of the wall surface. Starting from first considerations at the beginning of the development phase, the three systems were developed, optimized with respect to assembly technology, and tested as small specimens under real-object conditions. The built prototypes are monitored to detect performance and building physics problems and to validate the results of the computer simulation model. This paper illustrates the development and application of the internal wall insulation system.
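The GWP comparison above reduces, per square metre of wall, to multiplying the mass of material in the insulation layer by an emission factor per kilogram. The sketch below shows that calculation; the thicknesses, densities, and factors are placeholders for illustration, not the LCA data from the study (wood fibre board can even come out net negative due to stored biogenic carbon).

```python
def embodied_gwp_per_m2(thickness_m, density_kg_m3, gwp_factor_kgco2e_per_kg):
    """Embodied global warming potential of an insulation layer,
    in kg CO2-eq per m^2 of wall area."""
    return thickness_m * density_kg_m3 * gwp_factor_kgco2e_per_kg

# Placeholder inputs for an 8 cm layer (illustrative, not measured LCA data):
xps = embodied_gwp_per_m2(0.08, 35.0, 3.0)
wood_fibre = embodied_gwp_per_m2(0.08, 160.0, 0.3)
ratio = xps / wood_fibre  # how many times higher the XPS layer's GWP is
```

The same routine can be run per functional unit (e.g. per m² at equal thermal resistance) by first adjusting the thickness to match a target U-value.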

Keywords: internal insulation, wood fibre, hygrothermal simulations, monitoring, clay, condensate

Procedia PDF Downloads 219
750 Spare Part Carbon Footprint Reduction with Reman Applications

Authors: Enes Huylu, Sude Erkin, Nur A. Özdemir, Hatice K. Güney, Cemre S. Atılgan, Hüseyin Y. Altıntaş, Aysemin Top, Muammer Yılman, Özak Durmuş

Abstract:

Remanufacturing (reman) allows manufacturers to contribute to the circular economy and to introduce products of almost the same quality at lower cost and with less environmental impact. The objective of this study is to show that the carbon footprint of automotive spare parts used in vehicles can be reduced by reman applications, based on Life Cycle Assessment (LCA) framed by ISO 14040 principles. The study aims to investigate reman applications for 21 parts in total. So far, research and calculations have been completed for the alternator, turbocharger, starter motor, compressor, manual transmission, automatic transmission, and diesel particulate filter (DPF). Since Ford Motor Company and Ford OTOSAN aim to achieve net zero based on Science-Based Targets (SBT) and the European Green Deal, which sets out to make the European Union climate neutral by 2050, the effects of reman applications were researched. First, remanufacturing articles available in the literature were surveyed, prioritizing spare parts with high yearly sales volumes. Based on the reviewed data on material composition and on the emissions released during the original production and remanufacturing phases, a base part was selected as a reference. The data for this base part were then used to make an approximate estimate of the carbon footprint reduction for the corresponding part used at Ford OTOSAN. The estimation model is based on the weight and material composition of the part in the referenced remanufacturing study. As a result of this study, remanufacturing was found to be technically and environmentally feasible, since it significantly reduces the emissions released during the production phase of vehicle components. For this reason, the research and calculations for the full set of targeted products in yearly volume have been largely completed. 
Thus, based on the targeted parts whose research has been completed, and in line with the net zero targets of Ford Motor Company and Ford OTOSAN by 2050, if remanufacturing is preferred over current production methods, a significant share of the greenhouse gas (GHG) emissions associated with spare parts used in vehicles can be avoided. Remanufacturing also helps to reduce the waste stream and, by reusing automotive components, causes less pollution than making products from raw materials.
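The estimation approach described above, scaling emissions from a literature reference part by weight and then aggregating over yearly volume, can be sketched as follows. All numbers and names here are illustrative assumptions, not Ford OTOSAN data.

```python
def scale_emissions(ref_mass_kg, ref_emissions_kgco2e, part_mass_kg):
    """Approximate a part's production emissions by linear mass scaling
    from a literature reference part (assumes a similar material mix)."""
    return ref_emissions_kgco2e * part_mass_kg / ref_mass_kg

def annual_reman_savings(new_part_co2e, reman_part_co2e, yearly_volume):
    """GHG emissions avoided per year if reman parts replace new ones."""
    return (new_part_co2e - reman_part_co2e) * yearly_volume

# Illustrative example: a 5 kg part scaled from a 10 kg reference
# alternator whose production was reported at 50 kg CO2-eq.
new_co2e = scale_emissions(10.0, 50.0, 5.0)
reman_co2e = scale_emissions(10.0, 12.0, 5.0)  # reman phase from the same paper
saved = annual_reman_savings(new_co2e, reman_co2e, 1000)
```

The linear-scaling assumption is the main source of uncertainty; parts whose material composition differs markedly from the reference would need their own factors.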

Keywords: greenhouse gas emissions, net zero targets, remanufacturing, spare parts, sustainability

Procedia PDF Downloads 82
749 Outcome-Based Education as Mediator of the Effect of Blended Learning on the Student Performance in Statistics

Authors: Restituto I. Rodelas

Abstract:

Higher education has adopted outcomes-based education from K-12. In this approach, the teacher uses any teaching and learning strategies that enable the students to achieve the learning outcomes. The students may be required to exert more effort and figure things out on their own. Hence, outcomes-based students are assumed to be more responsible and more capable of applying the knowledge learned. Another approach that higher education in the Philippines is starting to adopt from other countries is blended learning. This combination of classroom and fully online instruction and learning is expected to be more effective. Participating in the online sessions, however, is entirely up to the students. Thus, the effect of blended learning on the performance of students in Statistics may be mediated by outcomes-based education. If there is a significant positive mediating effect, then blended learning can be optimized by integrating outcomes-based education. In this study, the sample will consist of four blended learning Statistics classes at Jose Rizal University in the second semester of AY 2015-2016. Two of these classes will be assigned randomly to the experimental group, which will be handled using outcomes-based education. The two classes in the control group will be handled using the traditional lecture approach. Prior to the discussion of the first topic, a pretest will be administered. The same test will be given as a posttest after the last topic is covered. In order to establish the equality of the groups’ initial knowledge, a single-factor ANOVA of the pretest scores will be performed. A single-factor ANOVA of the posttest-pretest score differences will also be conducted to compare the performance of the experimental and control groups. When a significant difference is obtained in either of these ANOVAs, post hoc analysis will be done using Tukey's honestly significant difference (HSD) test. 
The mediating effect will be evaluated using correlation and regression analyses. The groups’ initial knowledge is considered equal when the ANOVA of the pretest scores is not significant. If the ANOVA of the score differences is significant and the post hoc test indicates that the classes in the experimental group have significantly different scores from those in the control group, then outcomes-based education has a positive effect. Let blended learning be the independent variable (IV), outcomes-based education the mediating variable (MV), and the score difference the dependent variable (DV). There is a mediating effect when the following requirements are satisfied: a significant correlation of IV with DV, a significant correlation of IV with MV, a significant relationship of MV to DV when both IV and MV are predictors in a regression model, and an absolute value of the coefficient of IV as sole predictor larger than when both IV and MV are predictors. Given a positive mediating effect of outcomes-based education on the effect of blended learning on student performance, it will be recommended to integrate outcomes-based education into blended learning, as this should yield the best learning results.
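The regression-based mediation test described above can be sketched with ordinary least squares: fit the DV on the IV alone, then on the IV and MV together, and compare the IV coefficients. A minimal pure-Python version, using illustrative data rather than the study's:

```python
def ols(y, X):
    """Least-squares coefficients for y ~ intercept + X, via the normal
    equations solved by Gaussian elimination with partial pivoting."""
    rows = [[1.0] + list(r) for r in X]   # prepend the intercept column
    n, k = len(rows), len(rows[0])
    A = [[sum(rows[i][a] * rows[i][b] for i in range(n)) for b in range(k)]
         for a in range(k)]
    v = [sum(rows[i][a] * y[i] for i in range(n)) for a in range(k)]
    for col in range(k):                  # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    beta = [0.0] * k                      # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (v[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

# Mediation check in the Baron & Kenny spirit: the IV coefficient should
# shrink in absolute value once the MV enters the model (toy data only).
iv = [0, 0, 1, 1, 0, 1, 0, 1]                       # blended class with OBE?
mv = [0.1, 0.2, 0.9, 1.1, 0.0, 1.0, 0.3, 0.8]       # mediator proxy
dv = [1.0, 1.3, 3.1, 3.4, 0.9, 3.0, 1.4, 2.9]       # score difference
b_total = ols(dv, [[x] for x in iv])[1]
b_direct = ols(dv, [[x, m] for x, m in zip(iv, mv)])[1]
```

Significance of each coefficient would still need standard errors and t-tests; the sketch only covers the point estimates being compared.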

Keywords: outcome-based teaching, blended learning, face-to-face, student-centered

Procedia PDF Downloads 291
748 Estimation of Soil Nutrient Content Using Google Earth and Pleiades Satellite Imagery for Small Farms

Authors: Lucas Barbosa Da Silva, Jun Okamoto Jr.

Abstract:

Precision agriculture has long benefited from aerial imagery of crop fields. This important tool makes it possible to identify patterns in crop fields, generating useful information for production management. Reflectance intensity in different ranges of the electromagnetic spectrum may indicate the presence or absence of nutrients in the soil of an area, and relations between the different light bands may yield even more detailed information. Knowledge of the nutrient content in the soil, or in the crop during its growth, is a valuable asset to the farmer who seeks to optimize yield. However, small farmers in Brazil often lack the resources to access this kind of information, and even when they do, it is not presented in a comprehensive and objective way. The challenges of implementing this technology therefore range from sampling the imagery using aerial platforms, building a mosaic of images to cover the entire crop field, and extracting the reflectance information and analyzing its relationship with the parameters of interest, to displaying the results in a manner that lets the farmer take the necessary decisions more objectively. In this work, we propose an analysis of soil nutrient content based on image processing of satellite imagery, comparing its results with a commercial laboratory’s chemical analysis. Sources of satellite imagery are also compared, to assess the feasibility of using Google Earth data in this application and the impact of doing so versus imagery from satellites such as Landsat-8 and Pleiades. Furthermore, an algorithm for building mosaics is implemented using Google Earth imagery, and the possibility of using unmanned aerial vehicles is analyzed. From the data obtained, some soil parameters are estimated, namely the content of potassium, phosphorus, boron, and manganese, among others. 
The suitability of Google Earth imagery for this application is verified within a reasonable margin when compared to Pleiades satellite imagery and to the current commercial model. It is also verified that the mosaic construction method has little or no influence on the estimation results. Variability maps are created over the covered area, and the impacts of image resolution and sampling time frame are discussed, allowing easy assessment of the results. The final results show that easier and cheaper remote sensing and analysis methods are possible and feasible alternatives for the small farmer, who has little access to technological and financial resources, to make more accurate decisions about soil nutrient management.
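One common way to relate reflectance bands to soil or crop parameters, in line with the band-relation idea above, is a normalized difference index calibrated against laboratory assays by linear regression. The sketch below is a generic illustration with our own function names, not the study's actual model:

```python
def band_index(nir, red):
    """Normalized difference index (NDVI-style) for one pixel."""
    return (nir - red) / (nir + red)

def calibrate(index_vals, lab_vals):
    """Fit lab nutrient content ~ a + b * index by least squares,
    returning the intercept a and slope b."""
    n = len(index_vals)
    mx = sum(index_vals) / n
    my = sum(lab_vals) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(index_vals, lab_vals))
         / sum((x - mx) ** 2 for x in index_vals))
    return my - b * mx, b

# Calibrate on sampled plots with lab assays, then predict elsewhere.
a, b = calibrate([0.2, 0.4, 0.6], [10.0, 14.0, 18.0])
predicted_k = a + b * band_index(0.55, 0.20)  # e.g. potassium, mg/dm^3
```

A per-nutrient calibration like this is only as good as the sampled plots; in practice the fit would be validated against held-out laboratory measurements, as the study does with its commercial analysis.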

Keywords: remote sensing, precision agriculture, mosaic, soil, nutrient content, satellite imagery, aerial imagery

Procedia PDF Downloads 176
747 Reducing Falls in Memory Care through Implementation of the Stopping Elderly Accidents, Deaths, and Injuries Program

Authors: Cory B. Lord

Abstract:

Falls among the elderly population have become an area of concern in healthcare today. The negative impacts of falls include increased morbidity, mortality, and financial burdens for both patients and healthcare systems. Falls in the United States are reported at an annual rate of 36 million among those aged 65 and older. Each year, one out of four people in this age group will suffer a fall, with 20% of these falls causing injury. The setting for this Doctor of Nursing Practice (DNP) project was a memory care unit in an assisted living community, as these facilities house cognitively impaired older adults. These communities lack fall prevention programs; therefore, the need exists to add to the body of knowledge to positively impact this population. The objective of this project was to reduce fall rates through the implementation of the Centers for Disease Control and Prevention (CDC) STEADI (Stopping Elderly Accidents, Deaths, and Injuries) program. The DNP project was a quality improvement pilot study with a pre- and post-test design. The program was implemented in the memory care setting over 12 weeks and included an educational session for staff and a fall risk assessment with appropriate resident referrals. The three aims of the DNP project were to reduce fall rates among adults aged 65 and older who reside in the memory care unit, to increase staff knowledge of STEADI fall prevention measures after an educational session, and to assess the willingness of memory care unit staff to adopt an evidence-based fall prevention program. The Donabedian model was used as the guiding conceptual framework for this quality improvement pilot study. Fall rate data for the 12 months before the intervention were evaluated and compared to post-intervention fall rates. The educational session comprised a pre- and post-test to assess staff knowledge of the fall prevention program and staff willingness to adopt it. 
The overarching goal was to reduce falls in the elderly population living in memory care units. The results showed that the average fall rate during the implementation period of STEADI (μ = 6.79) was significantly lower than in the prior 12 months (μ = 9.50) (p = 0.02, α = 0.05). Mean staff knowledge scores improved from pretest (μ = 77.74%) to post-test (μ = 87.42%) (p = 0.00, α = 0.05) after the education session. Willingness to adopt a fall prevention program was scored at 100%. In summary, implementing the STEADI fall prevention program can help reduce fall rates for residents aged 65 and older who reside in a memory care setting.
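The abstract does not state how its fall rates were computed; a common incidence metric in this setting is falls per 1,000 resident-days, and the pre/post comparison then reduces to comparing mean periodic rates. A minimal sketch with illustrative numbers (not the study's data):

```python
def fall_rate_per_1000_days(n_falls, resident_days):
    """Standard incidence metric: falls per 1,000 resident-days."""
    return 1000.0 * n_falls / resident_days

def mean(xs):
    return sum(xs) / len(xs)

# Illustrative monthly rates before and during the intervention.
pre = [9.5, 10.0, 9.0]
post = [6.8, 7.0, 6.6]
reduction = mean(pre) - mean(post)
```

Whether such a reduction is statistically significant would then be assessed with a two-sample test on the monthly rates, as the study's reported p-value suggests.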

Keywords: dementia, elderly, falls, STEADI

Procedia PDF Downloads 135
746 Generation-Based Travel Decision Analysis in the Post-Pandemic Era

Authors: Hsuan Yu Lai, Hsuan Hsuan Chang

Abstract:

The consumer decision process steps through problems by weighing evidence, examining alternatives, and choosing a decision path. COVID-19 confronted the tourism industry with a huge challenge and caused it enormous economic losses. It is therefore important to re-examine the decision-making process model after the pandemic and to consider differences among generations. The tourism industry was significantly impacted by the global outbreak of COVID-19, but as the pandemic subsides, the sector is recovering. This study addresses the scarcity of research on travel decision-making patterns across generations in Taiwan. Specifically targeting individuals who frequently traveled abroad before the pandemic, it explores differences in decision-making at different stages after the outbreak. The study thus investigates differences in travel decision-making among individuals from different generations during and after the COVID-19 pandemic and examines the moderating effects of social media usage and individuals' perception of health risks. The study hypotheses are that there are significant differences in the decision-making process, including travel motivation, information-searching preferences, and criteria for decision-making, and that social media usage and health risk perception moderate these differences. Generations X, Y, and Z are defined and categorized based on a literature review. The survey collected data on socio-economic background, travel behaviors, motivations, considerations for destinations, travel information-searching preferences, and decision-making criteria before and after the pandemic, based on reviews of previous studies. Data from 656 online questionnaires were collected between January and May 2023 from Taiwanese travel consumers who had traveled abroad at least once before COVID-19. 
SPSS was used to analyze the data with one-way and two-way ANOVA. The analysis covers demand perception, information gathering, alternative comparison, purchase behavior, and post-travel experience sharing, with social media influence and perception of health risks examined as moderating factors. The findings show that before the pandemic, Generation Y preferred natural environments, while Generation X favored historical and cultural sites, compared to Generation Z. After the outbreak, however, Generation Z displayed a significant preference for entertainment activities. This study contributes to understanding changes in travel decision-making patterns following COVID-19 and the influence of social media and health risks. The findings have practical implications for the tourism industry.
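The one-way ANOVA used to compare generations reduces to an F ratio of between-group to within-group variance. A minimal sketch of the F statistic (a hand-rolled helper for illustration, not the SPSS procedure; a two-way ANOVA with a moderator would add an interaction term):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample groups."""
    k = len(groups)                       # number of groups (generations)
    n = sum(len(g) for g in groups)       # total sample size
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Example: preference scores for three generational groups.
f_stat = one_way_anova_f([[3.0, 3.5, 4.0], [2.0, 2.5, 2.2], [4.5, 4.8, 4.2]])
```

The F statistic would then be compared against the F distribution with (k-1, n-k) degrees of freedom to obtain the p-value that SPSS reports.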

Keywords: consumer decision-making, generation study, health risk perception, post-pandemic era, social media

Procedia PDF Downloads 61