Search results for: finite difference modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8270

1010 A Study of the Frequency of Individual Support for the Pupils With Developmental Disabilities or Suspected Developmental Disabilities in Regular Japanese School Classes - From a Questionnaire Survey of Teachers

Authors: Maho Komura

Abstract:

The purpose of this study was to determine, from a questionnaire survey of teachers, the status of implementation of individualized support for pupils with suspected developmental disabilities in regular elementary school classes in Japan. In inclusive education, the goal is for all pupils to learn in the same place as much as possible while receiving the individualized support they need. However, in Japanese school culture, a strong sense of "homogeneity" sometimes surfaces, and it has been pointed out that providing individualized support is difficult from the viewpoint of formal equality. We therefore conducted this study to examine whether the frequency of implementation differs depending on the content of the individualized support and to consider the direction of future individualized support. The subjects of the survey were 196 public elementary school teachers who had been in charge of regular classes within the past five years. In the survey, individualized support was defined as individualized consideration, including reasonable accommodation, and did not include support directed at the entire class or at all pupils enrolled in the class (e.g., changing classroom rules); examples of individualized support include reducing the amount of homework for pupils with learning difficulties and allowing pupils with behavioral concerns to use the library or infirmary when they are unstable. The respondents were asked to choose one answer from four options, ranging from "very much" to "not at all," regarding the degree to which they implemented the nine individual support items that were set up with reference to previous studies. The results made clear that the majority of teachers had pupils with developmental disabilities, or pupils who require consideration in terms of learning and behavior, in their classes, and that the majority of teachers had experience providing individualized support to these pupils. 
Investigating the content of the individualized support that had been implemented, it became clear that the frequency with which it was implemented varied depending on the individualized support. Individualized support that allowed pupils to perform the same learning tasks was implemented more frequently, but individualized support that allowed different learning tasks or use of places other than the classroom was implemented less frequently. It was suggested that flexible support methods tailored to each pupil may not have been considered.

Keywords: inclusive education, individualized support, regular class, elementary school

Procedia PDF Downloads 130
1009 Early Transcriptome Responses to Piscine orthoreovirus-1 in Atlantic salmon Erythrocytes Compared to Salmonid Kidney Cell Lines

Authors: Thomais Tsoulia, Arvind Y. M. Sundaram, Stine Braaen, Øyvind Haugland, Espen Rimstad, Øystein Wessel, Maria K. Dahle

Abstract:

Fish red blood cells (RBC) are nucleated and, in addition to their function in gas exchange, have been characterized as mediators of immune responses. Salmonid RBC are the major target cells of Piscine orthoreovirus (PRV), a virus associated with heart and skeletal muscle inflammation (HSMI) in farmed Atlantic salmon. The activation of antiviral response genes in RBC has previously been described in ex vivo and in vivo PRV-infection models, but not explored in the initial virus encounter phase. In the present study, mRNA transcriptome responses were explored in erythrocytes from individual fish, kept ex vivo and exposed to purified PRV for 24 hours. The responses were compared to those in macrophage-like salmon head kidney (SHK-1) and endothelial-like Atlantic salmon kidney (ASK) cells, neither of which supports PRV replication. The comparative analysis showed that the antiviral response to PRV was strongest in the SHK-1 cells, with a set of 80 significantly induced genes (≥ 2-fold upregulation). In RBC, 46 genes were significantly upregulated, while ASK cells were not significantly responsive. In particular, the transcriptome analysis of RBC revealed that PRV significantly induced interferon regulatory factor 1 (IRF1) and interferon-induced protein with tetratricopeptide repeats 5-like (IFIT9). However, several interferon-regulated antiviral genes which have previously been reported upregulated in PRV-infected RBC in vivo (myxovirus resistance (Mx), interferon-stimulated gene 15 (ISG15), toll-like receptor 3 (TLR3)) were not significantly induced after 24 h of virus stimulation. In contrast to RBC, these antiviral response genes were significantly upregulated in SHK-1. These results confirm that RBC are involved in the innate immune response to viruses, but with a delayed antiviral response compared to SHK-1. 
A notable difference is that interferon regulatory factor 1 (IRF1) is the most strongly induced gene in RBC, but not among the significantly induced genes in SHK-1. Putative differences in the binding, recognition, and response to PRV, and any link to effects on the ability of PRV to replicate, remain to be explored.

Keywords: antiviral responses, atlantic salmon, piscine orthoreovirus-1, red blood cells, RNA-seq

Procedia PDF Downloads 189
1008 Ergonomic Adaptations in Visually Impaired Workers - A Literature Review

Authors: Kamila Troper, Pedro Mestre, Maria Lurdes Menano, Joana Mendonça, Maria João Costa, Sandra Demel

Abstract:

Introduction: Visual impairment is a problem that influences hundreds of thousands of people all over the world. Although it is possible for a visually impaired person to do most jobs, the right training, technological assistance, and emotional support are essential. Ergonomics can solve many of these problems with relative ease through the positioning, lighting, and design of the workplace. A little forethought can make a tremendous difference to the ease with which a person with an impairment functions. Objectives: To review the main ergonomic adaptation measures reported in the literature in order to promote better working conditions and safety measures for the visually impaired. Methodology: This was an exploratory-descriptive, qualitative systematic literature review. The main databases used were PubMed, BIREME, and LILACS, covering articles and studies published between 2000 and 2021. Results: Based on the theoretical principles of ergonomic analysis of work, the main adaptations of the physical space of workstations were: accessibility facilities and assistive technologies; screen readers that capture information from a computer and send it in real time to a speech synthesizer or Braille terminal; installation of voice recognition software; monitors with enlarged screens; magnification software; adequate lighting; and magnifying lenses, in addition to recommendations regarding signage and keeping clear the places where the visually impaired pass through. Conclusions: Employability rates for people with visual impairments (both those who are blind and those who have low vision) are low and continue to be a concern worldwide and a topic of international research interest. Although numerous authors have identified barriers to employment and proposed strategies to remediate or circumvent those barriers, people with visual impairments continue to experience high rates of unemployment.

Keywords: ergonomic adaptations, visual impairments, ergonomic analysis of work, systematic review

Procedia PDF Downloads 182
1007 Fire Resilient Cities: The Impact of Fire Regulations, Technological and Community Resilience

Authors: Fanny Guay

Abstract:

Building resilience, sustainable buildings, urbanization, climate change, and resilient cities are just a few examples of where the focus of research has been in the last few years. There is clearly a need to rethink how we build our cities and how we renovate our existing buildings. However, the question remains: how can we ensure that we are building sustainable yet resilient cities? There are many aspects one can touch upon when discussing resilience in cities, but after the Grenfell Tower fire in June 2017, it has become clear that fire resilience must be a priority. We define resilience as a holistic approach including communities, society, and systems, focusing not only on resisting the effects of a disaster but also on how to cope with and recover from it. Cities are an example of such a system, in which components such as buildings have an important role to play. A building on fire affects the community, the economy, the environment, and thus the entire system. Therefore, we believe that fire and resilience go hand in hand when we discuss building resilient cities. This article discusses the current state of the concept of fire resilience and suggests actions to support the construction of more fire-resilient buildings. Using the case of Grenfell and the fire safety regulations in the UK, we briefly compare the fire regulations of other European countries, more precisely France, Germany, and Denmark, to underline the differences and make suggestions for increasing fire resilience via regulation. For this research, we also include other types of resilience: technological resilience, concerning the structure of buildings itself, and community resilience, considering the role of communities in building resilience. 
Our findings demonstrate that increasing fire resilience may require amending existing regulations, for example, in how reaction-to-fire tests are performed and how building products are classified. However, as we look at national regulations, we can only make general suggestions for improvement. Another finding of this research is that the capacity of the community to recover and adapt after a fire is also an essential factor. Fundamentally, fire resilience, technological resilience, and community resilience are closely connected. Building resilient cities is not only about sustainable buildings or energy efficiency; it is about ensuring that all aspects of resilience are included when building or renovating. We must ask ourselves questions such as: Who are the users of this building? Where is the building located? What are the components of the building, how was it designed, and which construction products were used? If we want resilient cities, we must answer these basic questions and ensure that basic factors such as fire resilience are included in our assessment.

Keywords: buildings, cities, fire, resilience

Procedia PDF Downloads 170
1006 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire

Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan

Abstract:

Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being flammable, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced, non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, i.e. the residual cross-section of uncharred timber reduced additionally by a so-called zero strength layer. For standard fire exposure, Eurocode 5 gives a fixed value of the zero strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations are given. Thus, designers often adopt the 7 mm rule for parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side even for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in case of parametric fire. Parametric studies are carried out on a simple solid timber beam exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined based on numerical simulations performed with a recently developed advanced two-step computational model. 
The first step comprises a hygro-thermal model which predicts the temperature, moisture, and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner’s kinematically exact beam model and accounts for the membrane, shear, and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, according to Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved charring rates and a new thickness of the zero strength layer in case of parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
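The fixed 7 mm rule discussed in this abstract can be illustrated with a minimal sketch of the Eurocode 5 reduced cross-section method for standard fire exposure; the charring rate and exposure time below are illustrative values, not results from the study.

```python
# Sketch of the Eurocode 5 (EN 1995-1-2) reduced cross-section method
# for STANDARD fire exposure. The study above questions whether the
# fixed 7 mm zero strength layer (d0) also holds for parametric fires.

def effective_char_depth(beta_n, t, d0=7.0, k0=1.0):
    """Depth removed from each fire-exposed face, in mm.

    beta_n: notional charring rate (mm/min)
    t:      fire exposure time (min)
    d0:     zero strength layer thickness (fixed at 7 mm in Eurocode 5)
    k0:     coefficient, equal to 1.0 after 20 min of standard exposure
    """
    return beta_n * t + k0 * d0

# Solid softwood (notional charring rate 0.8 mm/min), 30 min of exposure:
depth = effective_char_depth(0.8, 30)
print(depth)  # 24 mm of char plus the 7 mm zero strength layer = 31 mm
```

The residual load-bearing section is then the original section reduced by this depth on every exposed face; the study's point is that for parametric (non-standard) fire curves neither `beta_n` nor `d0` is validated.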

Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer

Procedia PDF Downloads 168
1005 In vitro Method to Evaluate the Effect of Steam-Flaking on the Quality of Common Cereal Grains

Authors: Wanbao Chen, Qianqian Yao, Zhenming Zhou

Abstract:

Whole grains with intact pericarp are largely resistant to digestion by ruminants because entire kernels are not conducive to bacterial attachment. Processing methods, however, make the starch more accessible to microbes and increase the rate and extent of starch degradation in the rumen. To estimate the feasibility of applying steam-flaking as a processing technique of grains for ruminants, cereal grains (maize, wheat, barley, and sorghum) were processed by steam-flaking (steam temperature 105°C, heating time 45 min), and chemical analysis, in vitro gas production, volatile fatty acid concentrations, and energetic values were used to evaluate the effects. In vitro cultivation was conducted for 48 h with rumen fluid collected from steers fed a total mixed ration consisting of 40% hay and 60% concentrates. The results showed that steam-flaking had a significant effect on the contents of neutral detergent fiber and acid detergent fiber (P < 0.01). The degree of starch gelatinization was also greatly improved in steam-flaked grains, as steam-flaking disintegrates the crystal structure of cereal starch, which may subsequently facilitate absorption of moisture and swelling. Theoretical maximum gas production after steam-flaking showed no great difference. However, compared with intact grains, total gas production at 48 h and the rate of gas production were significantly (P < 0.01) increased in all types of grain. Furthermore, there was no effect of steam-flaking on total volatile fatty acids, but a decrease in the ratio between acetate and propionate was observed in the current in vitro fermentation. The present study also found that steam-flaking increased (P < 0.05) the organic matter digestibility and energy concentration of the grains. 
The collective findings of the present study suggest that steam-flaking processing of grains could improve their rumen fermentation and energy utilization by ruminants. In conclusion, the utilization of steam-flaking would be practical to improve the quality of common cereal grains.
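Cumulative in vitro gas production curves of the kind measured in this study are commonly summarized with a simple first-order exponential model, GP(t) = A(1 − e^(−c·t)); the sketch below uses that generic model with made-up parameters, not the study's data, to show how "rate" and "theoretical maximum" are separated.

```python
import math

def gas_production(t, A, c, lag=0.0):
    """First-order gas production model: GP(t) = A * (1 - exp(-c*(t - lag))).

    A:   theoretical maximum (asymptotic) gas production, e.g. mL/g DM
    c:   fractional rate of gas production per hour
    lag: optional lag time before fermentation starts (h)
    """
    if t <= lag:
        return 0.0
    return A * (1.0 - math.exp(-c * (t - lag)))

# Hypothetical parameters: asymptote 65 mL/g DM, rate 0.12 /h.
# Steam-flaking would be expected to raise c (rate) more than A (maximum),
# matching the abstract's finding of unchanged theoretical maximum.
for t in (6, 24, 48):
    print(t, gas_production(t, A=65.0, c=0.12))
```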

Keywords: cereal grains, gas production, in vitro rumen fermentation, steam-flaking processing

Procedia PDF Downloads 270
1004 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches

Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys

Abstract:

Reliability of electronic devices has always been of the highest interest for Aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), providing interconnection between components, is key to reliability. During the last decades, PCB technologies have evolved to sustain and/or fulfill increased original equipment manufacturer (OEM) requirements and specifications: higher densities and better performance, faster time to market and longer lifetime, newer materials and mixed buildups. From the very beginning of the PCB industry until recently, qualification, experiments, and trial and error were the most popular methods to assess system (PCB) reliability. Nowadays, OEMs, PCB manufacturers, and scientists work together in a close relationship in order to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize the base materials (laminates, electrolytic copper, …) precisely, in order to understand failure mechanisms and simulate PCB aging under environmental constraints, for example by means of the finite element method. The laminates are woven composites and thus have an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing and digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated this way due to the thickness of the laminate (a few hundred microns). It has to be noted that knowledge of the out-of-plane properties is fundamental to investigating the lifetime of high-density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them. 
The methodology has been applied to a laminate used in high-frequency space applications in order to obtain its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through hole in a double-sided PCB are performed. The results show the major importance of the out-of-plane properties, and of their temperature dependency, for the lifetime of a printed circuit board. Acknowledgements: The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01, and the support of CNES, Thales Alenia Space, and Cimulec, is acknowledged.
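As a first-order sanity check on fibre/resin homogenization, the classical Voigt and Reuss rule-of-mixtures bounds can be computed; this is far simpler than the combined analytical/numerical woven-composite homogenization described above, and the moduli and fibre fraction below are assumed illustrative values, not the paper's data.

```python
# Voigt (iso-strain) and Reuss (iso-stress) rule-of-mixtures bounds for
# the elastic modulus of a unidirectional fibre/resin ply. A sanity check
# only: a woven laminate's orthotropic constants need full homogenization.

def voigt(E_fibre, E_matrix, v_fibre):
    """Upper bound (longitudinal direction): volume-weighted average."""
    return v_fibre * E_fibre + (1.0 - v_fibre) * E_matrix

def reuss(E_fibre, E_matrix, v_fibre):
    """Lower bound (transverse direction): harmonic (series) average."""
    return 1.0 / (v_fibre / E_fibre + (1.0 - v_fibre) / E_matrix)

# Hypothetical moduli in GPa: glass fibre ~73, epoxy resin ~3.5, 55% fibre.
Ef, Em, vf = 73.0, 3.5, 0.55
print(voigt(Ef, Em, vf), reuss(Ef, Em, vf))
```

The wide gap between the two bounds is exactly why the out-of-plane (Reuss-dominated) properties matter and are hard to pin down without the inverse method the abstract describes.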

Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites

Procedia PDF Downloads 204
1003 The Study of the Correlation of Future-Oriented Thinking and Retirement Planning: The Analysis of Two Professions

Authors: Ya-Hui Lee, Ching-Yi Lu, Chien-Hung Hsieh

Abstract:

The purpose of this study is to explore the differences between state-owned-enterprise employees and civil servants regarding their future-oriented thinking and retirement planning. The researchers surveyed 687 middle-aged and older adults (345 state-owned-enterprise employees and 342 civil servants) to understand the relationship between, and the predictive power of, their future-oriented thinking and retirement planning. The findings of this study are: 1. There are significant differences between the two professions in future-oriented thinking but not in retirement planning; civil servants score higher overall on future-oriented thinking than state-owned-enterprise employees. 2. There are significant differences in both future-oriented thinking and retirement planning among civil servants of different ages: both are higher at ages 55 and above than at ages 45 or under. For state-owned-enterprise employees, however, no significant age difference is found in future-oriented thinking, but there is one in retirement planning, which is higher at ages 55 or above than at other ages. 3. With regard to education, there is no correlation with future-oriented thinking or retirement planning for civil servants. For state-owned-enterprise employees, however, level of education affects future-oriented thinking: those with a master's degree or above show greater future-oriented thinking than those with other degrees. For retirement planning, there is no correlation. 4. Self-assessed economic status significantly affects the future-oriented thinking and retirement planning of both civil servants and state-owned-enterprise employees: those who assess themselves as more affluent are more inclined toward future-oriented thinking and retirement planning. 5. 
For civil servants, there are significant differences between monthly income groups in retirement planning, but none in future-oriented thinking. For state-owned-enterprise employees, there are significant differences between monthly income groups in both retirement planning and future-oriented thinking: employees with higher monthly incomes (1,960 euros and above) show significantly greater future-oriented thinking and retirement planning than those with lower monthly incomes (1,469 euros and below). 6. For middle-aged and older adults of both professions, future-oriented thinking and retirement planning are positively correlated; stepwise multiple regression analysis indicates that future-oriented thinking positively predicts retirement planning. The authors present the findings of this study as references for state-owned enterprises, public authorities, and the design of older adult educational programs in Taiwan.

Keywords: state-owned-enterprise employees, civil servants, future-oriented thinking, retirement planning

Procedia PDF Downloads 366
1002 A Meta-Analysis of the Association Between Greenspace and Mental Health After COVID-19

Authors: Jae-Hyuk Hyun, Dong-Sung Bae, Jea-Sun Lee

Abstract:

The COVID-19 pandemic emphasized the benefits of natural green space for mental health in pandemic situations. Individual studies have detected effects of greenspace in reducing mental health disorders, but their limitations make it difficult to establish the overall effectiveness of greenspace for mental health as valid and significant. Therefore, this study aims to comprehensively and quantitatively analyze the effectiveness and significance of greenspace in reducing mental disorders after the COVID-19 outbreak. A systematic review was adopted to select adequate studies reporting significant associations between greenspace and mental health after COVID-19. A meta-analysis was then performed on the selected studies to calculate and analyze the combined effect size of greenspace in reducing mental disorders, differences in effect size across various factors of greenspace or mental health, and the effects of moderating variables. In addition, a correlation test using MQRS and effect size was performed to determine significant correlations between factors of greenspace and mental health. The analysis confirmed a medium combined effect size (0.565) for the association between greenspace and mental health. Various factors of greenspace and mental health had considerable effect sizes, with heterogeneity existing between studies of different greenspace and mental health aspects (subgroups). A significant correlation between factors of greenspace and mental health was identified, and correlations satisfying both reliability and effectiveness were used to suggest greenspace policies with mental health benefits during pandemic situations. Variables such as study period, female proportion, and mean age significantly affected certain factors of greenspace or mental health, and the effect of greenspace on mental health was found to increase as the COVID-19 period continued. 
Also, the regional heterogeneity of effects on the association between greenspace and mental health is recognized in all factors consisting of greenspace and mental health except for the visitation of greenspace. In conclusion, valid and significant effects were detected in various associations between greenspace and mental health. Based on the results of this study, conducting elaborate research and establishing adequate and necessary greenspace policies and strategies are recommended to effectively benefit the mental health of citizens in future pandemic situations.
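The combined effect size reported in this abstract (0.565) is the product of standard meta-analytic pooling; a minimal fixed-effect, inverse-variance sketch is shown below with hypothetical per-study values (the paper's actual study data are not reproduced here).

```python
# Fixed-effect, inverse-variance pooling of standardized effect sizes,
# the basic building block of a meta-analysis. Each study is weighted by
# the reciprocal of its sampling variance, so precise studies count more.
# The effect sizes and variances below are hypothetical illustrations.

def pooled_effect(effects, variances):
    """Return the inverse-variance weighted mean effect size."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, effects)) / total

effects = [0.40, 0.65, 0.55, 0.70]    # hypothetical per-study effect sizes
variances = [0.02, 0.05, 0.03, 0.04]  # hypothetical sampling variances
print(round(pooled_effect(effects, variances), 3))  # → 0.536
```

A random-effects model (as is usual when subgroup heterogeneity is present, as reported here) adds a between-study variance term to each weight, but the weighted-average structure is the same.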

Keywords: greenspace, natural environment, mental health, mental disorder, COVID-19, pandemic, systematic review, meta-analysis

Procedia PDF Downloads 68
1001 Longitudinal Profile of Antibody Response to SARS-CoV-2 in Patients with COVID-19 in a Setting from Sub-Saharan Africa: A Prospective Longitudinal Study

Authors: Teklay Gebrecherkos

Abstract:

Background: Serological testing for SARS-CoV-2 plays an important role in epidemiological studies, in aiding the diagnosis of COVID-19, and in assessing vaccine responses. Little is known about the dynamics of SARS-CoV-2 serology in African settings. Here, we aimed to characterize the longitudinal antibody response profile to SARS-CoV-2 in Ethiopia. Methods: In this prospective study, a total of 102 PCR-confirmed COVID-19 patients were enrolled, from whom we obtained 802 serially collected plasma samples. SARS-CoV-2 antibodies were determined using four lateral flow immunoassays (LFIAs) and an electrochemiluminescent immunoassay. We determined the longitudinal antibody response to SARS-CoV-2 as well as seroconversion dynamics. Results: The serological positivity rate ranged between 12% and 91%, depending on the timing after symptom onset. There was no difference in the positivity rate between severe and non-severe COVID-19 cases. Specificity ranged between 90% and 97%, and agreement between the different assays between 84% and 92%. The estimated positive predictive value (PPV) for IgM or IgG in a scenario with 5% seroprevalence varies from 33% to 58%; as population seroprevalence increases to 25% and 50%, the estimated PPVs increase correspondingly. The estimated negative predictive value (NPV) in a low seroprevalence scenario (5%) is high (>99%); in a high seroprevalence scenario (50%), however, the estimated NPV for IgM or IgG is reduced significantly, to 80% to 85%. Overall, 28/102 (27.5%) seroconverted by one or more of the assays tested, within a median time of 11 (IQR: 9–15) days post symptom onset. The median seroconversion time among symptomatic cases tended to be shorter than among asymptomatic patients [9 (IQR: 6–11) vs. 15 (IQR: 13–21) days; p = 0.002]. Overall, seroconversion reached 100% 5.5 weeks after the onset of symptoms. 
Notably, of the remaining 74 COVID-19 patients included in the cohort, 64 (62.8%) were positive for antibodies at the time of enrollment, and 10 (9.8%) patients failed to mount a detectable antibody response by any of the assays tested during follow-up. Conclusions: Longitudinal assessment of antibody response in African COVID-19 patients revealed heterogeneous responses. This underscores the need for a comprehensive evaluation of serum assays before implementation. Factors associated with failure to seroconvert need further research.
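The PPV/NPV scenarios in this abstract follow directly from Bayes' rule given a test's sensitivity, specificity, and the population seroprevalence; the sketch below uses assumed sensitivity and specificity values in the range reported, not the exact assay figures.

```python
# Predictive values of a diagnostic test via Bayes' rule.
# Sensitivity and specificity here are assumed illustrative values;
# the abstract reports specificities of 90-97% and prevalence
# scenarios of 5%, 25%, and 50%.

def ppv(sens, spec, prev):
    """P(disease | positive test)."""
    true_pos = sens * prev
    false_pos = (1.0 - spec) * (1.0 - prev)
    return true_pos / (true_pos + false_pos)

def npv(sens, spec, prev):
    """P(no disease | negative test)."""
    true_neg = spec * (1.0 - prev)
    false_neg = (1.0 - sens) * prev
    return true_neg / (true_neg + false_neg)

# At 5% seroprevalence even a fairly specific test has modest PPV,
# while at 50% seroprevalence the NPV drops, mirroring the abstract.
print(round(ppv(0.90, 0.95, 0.05), 2))  # → 0.49
print(round(ppv(0.90, 0.95, 0.50), 2))  # → 0.95
print(round(npv(0.90, 0.95, 0.50), 2))  # → 0.9
```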

Keywords: COVID-19, antibody, rapid diagnostic tests, ethiopia

Procedia PDF Downloads 82
1000 Role of Autophagic Lysosome Reformation for Cell Viability in an in vitro Infection Model

Authors: Muhammad Awais Afzal, Lorena Tuchscherr De Hauschopp, Christian Hübner

Abstract:

Introduction: Autophagy is an evolutionarily conserved, lysosome-dependent degradation pathway that can be induced by extrinsic and intrinsic stressors in living systems to adapt to fluctuating environmental conditions. In the context of inflammatory stress, autophagy contributes to the elimination of invading pathogens, the regulation of innate and adaptive immune mechanisms and of inflammasome activity, as well as tissue damage repair. Lysosomes can be recycled from autolysosomes by the process of autophagic lysosome reformation (ALR), which depends on the presence of several proteins, including Spatacsin. ALR thus contributes to replenishing the lysosomes available for fusion with autophagosomes in situations of increased autophagic turnover, e.g., during bacterial infections, inflammatory stress, or sepsis. Objectives: We aimed to assess whether ALR plays a role in cell survival in an in-vitro bacterial infection model. Methods: Mouse embryonic fibroblasts (MEFs) were isolated from wild-type mice and Spatacsin knockout (Spg11-/-) mice. Wild-type and Spg11-/- MEFs were infected with Staphylococcus aureus at a multiplicity of infection (MOI) of 10. After 8 and 16 hours of infection, cell viability was assessed on a BD flow cytometer through propidium iodide uptake. Bacterial uptake by the cells was also quantified by plating cell lysates on blood agar plates. Results: In-vitro infection of MEFs with Staphylococcus aureus showed a marked decrease in cell viability in ALR-deficient Spatacsin knockout (Spg11-/-) MEFs after 16 hours of infection compared to wild-type MEFs (n = 3 independent experiments; p < 0.0001), although no difference in bacterial uptake was observed between the genotypes. Conclusion: We observed a marked increase in cell death in an in-vitro infection model in cells with compromised ALR, suggesting that ALR is important for the defense against invading pathogens such as S. aureus.

Keywords: autophagy, autophagic lysosome reformation, bacterial infections, Staphylococcus aureus

Procedia PDF Downloads 144
999 Understanding the Factors Influencing Urban Ethiopian Consumers’ Consumption Intention of Spirulina-Supplemented Bread

Authors: Adino Andaregie, Isao Takagi, Hirohisa Shimura, Mitsuko Chikasada, Shinjiro Sato, Solomon Addisu

Abstract:

Context: The prevalence of undernutrition in developing countries like Ethiopia has become a significant issue. In this regard, finding alternative nutritional supplements seems to be a practical solution. Spirulina, a highly nutritious microalga, offers a valuable option as it is a rich source of various essential nutrients. The study aimed to establish the factors affecting urban Ethiopian consumers' consumption intention of Spirulina-fortified bread. Research Aim: The primary purpose of this research is to identify the behavioral and socioeconomic factors impacting the intention of urban Ethiopian consumers to eat Spirulina-fortified bread. Methodology: The research utilized a quantitative approach wherein a structured questionnaire was created and distributed among 361 urban consumers via an online platform. The theory of planned behavior (TPB) was used as a conceptual framework, and confirmatory factor analysis (CFA) and structural equation modelling (SEM) were employed for data analysis. Findings: The study results revealed that attitude towards the supplement, subjective norms, and perceived behavioral control were the critical factors influencing the consumption intention of Spirulina-fortified bread. Moreover, age, physical exercise, and prior knowledge of Spirulina as a food ingredient were also found to have a significant influence. Theoretical Importance: The study contributes to the understanding of consumer behavior and the factors affecting purchase intentions of Spirulina-fortified bread in urban Ethiopia. The use of TPB as a theoretical framework adds a vital aspect to the study, as it provides helpful insights into the factors affecting intentions towards this functional food. Data Collection and Analysis Procedures: The data collection process involved the creation of a structured questionnaire, which was distributed online to urban Ethiopian consumers. 
Once the data were collected, CFA and SEM were utilized to analyze the data and identify the factors impacting consumer behavior. Questions Addressed: The study aimed to address the following questions: (1) What are the behavioral and socioeconomic factors impacting urban Ethiopian consumers' consumption intention of Spirulina-fortified bread? (2) To what extent do attitude towards the supplement, subjective norms, and perceived behavioral control affect the purchase intention of Spirulina-fortified bread? (3) What role do age, education, income, physical exercise, and prior knowledge of Spirulina as a food ingredient play in the purchase intention of Spirulina-fortified bread among urban Ethiopian consumers? Conclusion: The study concludes that attitude towards the supplement, subjective norms, and perceived behavioral control are significant factors influencing urban Ethiopian consumers’ consumption intention of Spirulina-fortified bread. Moreover, age, education, income, physical exercise, and prior knowledge of Spirulina as a food ingredient also play a significant role in determining purchase intentions. The findings provide valuable insights for developing effective marketing strategies for Spirulina-fortified functional foods targeted at different consumer segments.
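The abstract reports SEM path estimation of intention on the three TPB predictors. As a simple illustrative stand-in (not the authors' CFA/SEM pipeline, and using synthetic data with hypothetical coefficients rather than the survey data), an ordinary least-squares fit of the same path structure can be sketched as:

```python
import numpy as np

# Synthetic stand-in for the TPB path model: intention regressed on attitude,
# subjective norm, and perceived behavioral control (PBC). The coefficients
# and data below are illustrative assumptions, not study results.
rng = np.random.default_rng(0)
n = 361  # sample size reported in the abstract
attitude = rng.normal(size=n)
subjective_norm = rng.normal(size=n)
pbc = rng.normal(size=n)

true_betas = np.array([0.5, 0.3, 0.2])  # hypothetical path coefficients
X = np.column_stack([attitude, subjective_norm, pbc])
intention = X @ true_betas + rng.normal(scale=0.1, size=n)

# Least-squares recovery of the path coefficients
betas, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(betas)  # estimates close to the assumed true coefficients
```

A full SEM treatment would additionally model the latent constructs and measurement error, which this regression sketch omits.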

Keywords: spirulina, consumption, factors, intention, consumers, behavior

Procedia PDF Downloads 83
998 Modeling Discrimination against Gay People: Predictors of Homophobic Behavior against Gay Men among High School Students in Switzerland

Authors: Patrick Weber, Daniel Gredig

Abstract:

Background and Purpose: Research has well documented the impact of discrimination and micro-aggressions on the wellbeing of gay men and, especially, adolescents. For the prevention of homophobic behavior against gay adolescents, however, the focus has to shift to those who discriminate: for the design and tailoring of prevention and intervention, it is important to understand the factors responsible for homophobic behavior such as, for example, verbal abuse. Against this background, the present study aimed to assess homophobic behavior, in terms of verbal abuse, against gay people among high school students. Furthermore, it aimed to establish the predictors of the reported behavior by testing an explanatory model. This model posits that homophobic behavior is determined by negative attitudes and knowledge. These variables are supposed to be predicted by the acceptance of traditional gender roles, religiosity, orientation toward social dominance, contact with gay men, and by the perceived expectations of parents, friends and teachers. These social-cognitive variables in turn are assumed to be determined by students’ gender, age, immigration background, formal school level, and the discussion of gay issues in class. Method: From August to October 2016, we visited 58 high school classes in 22 public schools in a county in Switzerland and asked the 8th- and 9th-year students on three formal school levels to participate in a survey about gender and gay issues. For data collection, we used an anonymous self-administered questionnaire filled in during class. Data were analyzed using descriptive statistics and structural equation modelling (generalized least squares estimation). The sample included 897 students, 334 in the 8th and 563 in the 9th year, aged 12–17, 51.2% female, 48.8% male, and 50.3% with an immigration background.
Results: A proportion of 85.4% of participants reported having made homophobic statements in the 12 months before the survey, 4.7% often or very often. Analysis showed that respondents’ homophobic behavior was predicted directly by negative attitudes (β=0.20), as well as by the acceptance of traditional gender roles (β=0.06), religiosity (β=–0.07), contact with gay people (β=0.10), expectations of parents (β=–0.14) and friends (β=–0.19), gender (β=–0.22) and having a South-East-European or Western- and Middle-Asian immigration background (β=0.09). These variables were predicted, in turn, by gender, age, immigration background, formal school level, and discussion of gay issues in class (GFI=0.995, AGFI=0.979, SRMR=0.0169, CMIN/df=1.199, p>0.213, adj. R2=0.384). Conclusion: Findings evidence a high prevalence of homophobic behavior in the responding high school students. The tested explanatory model explained 38.4% of the assessed homophobic behavior. However, the data did not fully support the model. Knowledge did not turn out to be a predictor of behavior. Except for the perceived expectations of teachers and orientation toward social dominance, the social-cognitive variables were not fully mediated by attitudes. Equally, gender and immigration background predicted homophobic behavior directly. These findings demonstrate the importance of prevention and also provide leverage points for interventions against anti-gay bias in adolescents, including in social work settings such as school social work, open youth work or foster care.

Keywords: discrimination, high school students, gay men, predictors, Switzerland

Procedia PDF Downloads 329
997 Imposing Personal Liability on Shareholders/Partners in a Corporate Entity; Implementation of the UK’s Personal Liability Institutions in Georgian Corporate Law: Content and Outcomes

Authors: Gvantsa Magradze

Abstract:

The paper examines the grounds for the imposition of personal liability on a shareholder/partner, mainly through a comparative analysis of Georgian and UK law. The general emphasis is on how the grounds for personal liability are applied in practice, presented through an analysis of court decisions. On this basis, the reader will be able to see the difference between the dogmatic and the practical grounds for imposing personal liability. The first chapter presents general information about the issue and the notion of personal liability. The second chapter is devoted to explaining the concept of ‘the head of the corporation’, to make clear who the subject of responsibility is and not to leave out individuals who do not hold the position of director but participate in governing activities and therefore bear fiduciary duties. After a short comparative analysis of personal liability, the reality of Georgian corporate law is discussed. Here, determining personal liability is a problematic issue, so a separate chapter is devoted to it, explaining the grounds for imposing personal liability in detail. The paper discusses the content and purpose of personal liability institutions under UK corporate law and an attempt to implement them, especially the ‘alter ego’ doctrine, in Georgian corporate law, along with the outcomes of that experiment. For research purposes, national case law on the imposition of personal liability is examined, as well as the UK’s experience in that regard. The comparative analysis makes clear where the gaps in the Georgian statute lie and how to fill them. The article's major finding is that Georgian corporate law does not provide any legally consolidated grounds for imposing personal liability, which, in fact, leads to unfaithful and unlawful actions on partners’/shareholders’ behalf.
In order to make the business market fair, advancement of the national statute is inevitable, and for that, sharing the experience of developed countries is invaluable. Overall, the article analyses how the discussed amendments might influence case law and, had such amendments been made years ago, how the judgments would have looked (before and after the amendments).

Keywords: alter ego doctrine, case law, corporate law, good faith, personal liability

Procedia PDF Downloads 149
996 Systematic Review of Quantitative Risk Assessment Tools and Their Effect on Racial Disproportionality in Child Welfare Systems

Authors: Bronwen Wade

Abstract:

Over the last half-century, child welfare systems have increasingly relied on quantitative risk assessment tools, such as actuarial or predictive risk tools. These tools are developed by performing statistical analysis of how attributes captured in administrative data are related to future child maltreatment. Some scholars argue that attributes in administrative data can serve as proxies for race and that quantitative risk assessment tools reify racial bias in decision-making. Others argue that these tools provide more “objective” and “scientific” guides for decision-making instead of subjective social worker judgment. This study performs a systematic review of the literature on the impact of quantitative risk assessment tools on racial disproportionality; it examines methodological biases in work on this topic, summarizes key findings, and provides suggestions for further work. A search of CINAHL, PsycINFO, the ProQuest Social Science Premium Collection, and the ProQuest Dissertations and Theses Collection was performed. Academic and grey literature were included. The review includes studies that use quasi-experimental methods and development, validation, or re-validation studies of quantitative risk assessment tools. PROBAST (Prediction model Risk of Bias Assessment Tool) and CHARMS (CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies) were used to assess the risk of bias and guide data extraction for risk development, validation, or re-validation studies. ROBINS-I (Risk of Bias in Non-Randomized Studies of Interventions) was used to assess for bias and guide data extraction for the quasi-experimental studies identified. Due to heterogeneity among papers, a meta-analysis was not feasible, and a narrative synthesis was conducted. Eleven papers met the eligibility criteria, and each has an overall high risk of bias based on the PROBAST and ROBINS-I assessments.
This is deeply concerning, as major policy decisions have been made based on a limited number of studies with a high risk of bias. The findings on racial disproportionality have been mixed and depend on the tool and approach used. Authors use various definitions for racial equity, fairness, or disproportionality. These concepts of statistical fairness are connected to theories about the reason for racial disproportionality in child welfare or social definitions of fairness that are usually not stated explicitly. Most findings from these studies are unreliable, given the high degree of bias. However, some of the less biased measures within studies suggest that quantitative risk assessment tools may worsen racial disproportionality, depending on how disproportionality is mathematically defined. Authors vary widely in their approach to defining and addressing racial disproportionality within studies, making it difficult to generalize findings or approaches across studies. This review demonstrates the power of authors to shape policy or discourse around racial justice based on their choice of statistical methods; it also demonstrates the need for improved rigor and transparency in studies of quantitative risk assessment tools. Finally, this review raises concerns about the impact that these tools have on child welfare systems and racial disproportionality.

Keywords: actuarial risk, child welfare, predictive risk, racial disproportionality

Procedia PDF Downloads 54
995 Regulation of Differentiating Intramuscular Stromal Vascular Cells Isolated from Hanwoo Beef Cattle by Retinoic Acid and Calcium

Authors: Seong Gu Hwang, Young Kyoon Oh, Joseph F. dela Cruz

Abstract:

Marbling, or intramuscular fat, has been consistently identified as one of the most important determinants of beef quality. Intramuscular adipocytes distribute throughout the perimysial connective tissue of skeletal muscle and are the major site for the deposition of intramuscular fat, which is essential for the eating quality of meat. The stromal vascular fraction of the skeletal muscle contains progenitor cells that can be induced to differentiate into adipocytes and increase intramuscular fat. Primary cultures of bovine intramuscular stromal vascular cells were used in this study to elucidate the effects of extracellular calcium and retinoic acid concentration on adipocyte differentiation. Cell viability assays revealed no significant difference in cell viability, even at different concentrations of calcium and retinoic acid. Monitoring of adipocyte differentiation showed that bovine intramuscular stromal vascular cells cultured in a low concentration of extracellular calcium and retinoic acid had a greater degree of fat accumulation. The mRNA and protein expressions of PPARγ, C/EBPα, SREBP-1c and aP2 were analyzed and showed significant upregulation upon reduction in the level of extracellular calcium and retinoic acid. The upregulation of these adipogenesis-related genes means that decreasing the concentration of calcium and retinoic acid is able to stimulate the adipogenic differentiation of bovine intramuscular stromal vascular cells. To further elucidate the effect of calcium, the expression level of calreticulin was measured. Calreticulin, which is known to be an inhibitor of PPARγ, was downregulated by the decreased level of calcium and retinoic acid in the culture media. The same tendency was observed for the retinoic acid receptors RARα and CRABP-II. These receptors are recognized as adipogenic inhibitors, and the downregulation of their expression allowed a better level of differentiation in bovine intramuscular stromal vascular cells.
In conclusion, data show that decreasing the level of extracellular calcium and retinoic acid can significantly promote adipogenesis in intramuscular stromal vascular cells of Hanwoo beef cattle. These findings may provide new insights in enhancing intramuscular adipogenesis and marbling in beef cattle.

Keywords: calcium, calreticulin, hanwoo beef, retinoic acid

Procedia PDF Downloads 305
994 Optimizing Fermented Paper Production Using Spirogyra sp. Interpolating with Banana Pulp

Authors: Hadiatullah, T. S. D. Desak Ketut, A. A. Ayu, A. N. Isna, D. P. Ririn

Abstract:

Spirogyra sp. is a genus of microalgae with a high carbohydrate content, which makes it an excellent medium for bacterial fermentation to produce cellulose. This study aimed to determine the effect of banana pulp in the fermented paper production process using Spirogyra sp. and to characterize the paper product. The method includes the production of bacterial cellulose, assaying the effect of banana pulp interpolation on fermented paper produced using Spirogyra sp., and assaying the paper characteristics, including grammage, water absorption, thickness, tensile resistance, tear resistance, density, and organoleptic properties. Experiments were carried out in a completely randomized design with varying concentrations of the sewage treatment in the fermented paper production with banana pulp interpolation using Spirogyra sp. Data for each parameter were analyzed by analysis of variance (ANOVA), followed by a significant difference test at a 5% error rate, using SPSS. The nata production results indicate that the different carbon sources (glucose and sugar) did not show significant differences in the cellulose parameters assayed; significantly different results were indicated only for the control treatment. Although not significantly different from the other carbon source, sugar showed a higher potential to produce cellulose. The characteristic assays of the fermented paper showed that the grammage of the control treatment, without interpolation of a carbon source and banana pulp, was better than that with banana pulp interpolation. The control grammage was 260 gsm, suitable for cardboard, while the grammage of the paper produced with banana pulp interpolation was about 120-200 gsm, suitable for magazine paper and art paper.
Based on the density, weight, water absorption, and organoleptic assays, the highest results were obtained for the banana pulp interpolation with sugar as the carbon source: 14.28 g/m², 0.02 g, and 0.041 g/cm²·min, respectively. We conclude that paper made from nata material interpolated with sugar and banana pulp is a potential formulation for producing high-quality paper.

Keywords: cellulose, fermentation, grammage, paper, Spirogyra sp.

Procedia PDF Downloads 333
993 FMCW Doppler Radar Measurements with Microstrip Tx-Rx Antennas

Authors: Yusuf Ulaş Kabukçu, Sinan Çelik, Onur Salan, Maide Altuntaş, Mert Can Dalkiran, Göksenin Bozdağ, Metehan Bulut, Fatih Yaman

Abstract:

This study presents a more compact implementation of the 2.4 GHz MIT Coffee Can Doppler Radar at a 2.6 GHz operating frequency. The main difference of our prototype lies in the use of microstrip antennas, which makes it possible to transport the system with a small robotic vehicle. We designed our radar system with two separate channels, Tx and Rx. The system mainly consists of a voltage controlled oscillator (VCO) source, low noise amplifiers, microstrip antennas, a splitter, a mixer, a low pass filter, and the necessary RF connectors and cables. The two microstrip antennas, a single element for the transmitter and an array for the receiver channel, were designed, fabricated and verified by experiments. The system has two operation modes: speed detection and range detection. If the operation-mode switch is ‘off’, only a CW signal is transmitted for speed measurement. When the switch is ‘on’, the CW signal is frequency-modulated and range detection is possible. In speed detection mode, the high-frequency signal (2.6 GHz) is generated by the VCO and then amplified to reach a reasonable transmit power. Before the amplified signal is transmitted through a microstrip patch antenna, a splitter is used so that the frequencies of the transmitted and received signals can be compared. Half of the amplified signal (LO) is forwarded to a mixer, which compares the frequencies of the transmitted and received (RF) signals and produces the IF output, in other words the Doppler frequency. The IF output is then filtered and amplified so that the signal can be processed digitally. The filtered and amplified signal carrying the Doppler frequency is fed into the audio input of a computer, and the Doppler frequency is displayed as a speed change on a figure via a Matlab script. According to experimental field measurements, the accuracy of the speed measurement is approximately 90%. In range detection mode, a chirp signal is used to form an FM chirp.
This FM chirp makes it possible to determine the range of the target, since the Doppler frequency measured with CW alone is not sufficient for range detection. Such an FMCW Doppler radar may be used in border security, since it is capable of both speed and range detection.
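In speed-detection mode, the measured IF (Doppler) frequency maps to radial target speed via the standard CW Doppler relation f_d = 2·v·f0/c. A minimal sketch, with an illustrative Doppler value rather than one measured by the prototype:

```python
# CW Doppler relation used in speed-detection mode: the mixer's IF output is
# the Doppler shift f_d, and the radial speed follows from f_d = 2*v*f0/c.
C = 299_792_458.0  # speed of light, m/s

def doppler_speed(f_doppler_hz: float, f_carrier_hz: float = 2.6e9) -> float:
    """Radial target speed (m/s) inferred from the measured Doppler shift."""
    return f_doppler_hz * C / (2.0 * f_carrier_hz)

# A 173 Hz Doppler shift at the 2.6 GHz carrier corresponds to roughly
# 10 m/s (about 36 km/h).
print(doppler_speed(173.0))
```

Since audible-range Doppler shifts (tens to hundreds of Hz) correspond to vehicle-scale speeds at 2.6 GHz, feeding the IF output into a computer's audio input, as described above, is sufficient for digitizing the signal.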

Keywords: doppler radar, FMCW, range detection, speed detection

Procedia PDF Downloads 398
992 Sexual Dimorphism in the Sensorial Structures of the Antenna of Thygater aethiops (Hymenoptera: Apidae) and Its Relation with Some Corporal Parameters

Authors: Wendy Carolina Gomez Ramirez, Rodulfo Ospina Torres

Abstract:

Thygater aethiops is a species of solitary bee with a neotropical distribution that has adapted to live in urban environments. This species presents a marked sexual dimorphism, since the males have antennae almost as long as their body, unlike the females, whose antennae are smaller. In this work, placoid sensilla were studied; these are structures on the antenna involved in the detection of substances both for reproduction and for the search for food. The aim of this study was to evaluate the differences between these sensory structures in the two sexes, for which males and females were captured. Body measures were then taken, such as fresh weight with and without the abdomen, since weight could be modified by stomach content; other measures were the total antenna length and the lengths of the flagellum and flagellomeres. Negative imprints of the antennae were made using nail polish; each imprint was then cut with a microblade and mounted onto a microscope slide. The placoid sensilla were visible on the imprint, so they were counted manually under the 100x objective lens of an optical microscope. The males presented a specific distribution pattern of two types of sensilla, trichoid and placoid: the trichoid were aligned on the dorsal face of the antenna and the placoid were distributed along the entire antenna. This differed from the females, which presented no distribution pattern; their sensilla were randomly organized. We found that the males, because they have longer antennae, have a greater number of sensilla than the females. Additionally, there was no relationship between weight and the number of sensilla, but there was a positive relationship between the length of the antenna, the length of the flagellum and the number of sensilla.
The number of sensilla per unit area in each sex was also calculated, which showed that, on average, males have 4.2 ± 0.38 sensilla per unit area and females 2.2 ± 0.20, a significant difference between the sexes. The dimorphism found may be related to the sexual behavior of the species, since it has been demonstrated that males are more adapted to the perception of substances related to reproduction than to the search for food.

Keywords: antenna, olfactory organ, sensilla, sexual dimorphism, solitary bees

Procedia PDF Downloads 164
991 Surviral: An Agent-Based Simulation Framework for SARS-CoV-2 Outcome Prediction

Authors: Sabrina Neururer, Marco Schweitzer, Werner Hackl, Bernhard Tilg, Patrick Raudaschl, Andreas Huber, Bernhard Pfeifer

Abstract:

History and the current outbreak of COVID-19 have shown the deadly potential of infectious diseases. However, infectious diseases also have a serious impact on areas other than health and healthcare, such as the economy or social life, and these areas are strongly codependent. Therefore, disease control measures, such as social distancing, quarantines, curfews, or lockdowns, have to be adopted in a very considerate manner. Infectious disease modeling can support policy and decision-makers with adequate information regarding the dynamics of the pandemic and therefore assist in planning and enforcing appropriate measures that will prevent the healthcare system from collapsing. In this work, an agent-based simulation package named “Surviral” for simulating infectious diseases is presented, with a special focus on SARS-CoV-2. The presented simulation package was used in Austria to model the SARS-CoV-2 outbreak from the beginning of 2020. Agent-based modeling is a relatively recent modeling approach; as the world, and with it the underlying systems, grows more complex, the development of tools and frameworks and increasing computational power advance the application of agent-based models. For parametrizing the presented model, different data sources, such as known infections, wastewater virus load, blood donor antibodies, circulating virus variants and the used capacity for hospitalization, as well as the availability of medical materials like ventilators, were integrated with a database system and used. The simulation results were used to predict the dynamics and possible outcomes of the pandemic and supported the health authorities in deciding on the measures to be taken to control the situation. The Surviral package was implemented in the programming language Java, and the analytics were performed with RStudio.
During the first run in March 2020, the simulation showed that, without measures other than individual personal behavior and appropriate medication, the death toll would have been about 27 million people worldwide within the first year. The model predicted the hospitalization rates (standard and intensive care) for Tyrol and South Tyrol with an average error of about 1.5%. The predictions were calculated as 10-day forecasts, and the state government and the hospitals were provided with these forecasts to support their decision-making. This ensured that standard care was maintained for as long as possible without restrictions. Furthermore, various measures were estimated and thereafter enforced: among other things, communities were quarantined based on the calculations, while, in accordance with the calculations, the curfews for the entire population were reduced. With this framework, which is used in the national crisis team of the Austrian province of Tyrol, a very accurate model could be created on the federal state level as well as on the district and municipal level, providing decision-makers with a solid information basis. The framework can be transferred to various infectious diseases and thus can be used as a basis for future monitoring.
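The agent-based approach described above tracks discrete events (contacts, recoveries) per individual. The following is a minimal illustrative sketch of that modelling style, not the authors' Java framework; population size, contact rate, and probabilities are arbitrary assumptions:

```python
import random

# Minimal agent-based infection sketch: each agent carries a state
# (S=susceptible, I=infected, R=recovered) and stochastic events fire
# per time step. All parameters below are illustrative placeholders.
random.seed(42)

N, I0 = 1000, 5                    # population size, initial infections
CONTACTS, P_INF, P_REC = 8, 0.05, 0.1
states = ["S"] * (N - I0) + ["I"] * I0

for day in range(120):
    infected = [i for i, s in enumerate(states) if s == "I"]
    for i in infected:
        # each infected agent meets a few random agents and may infect them
        for j in random.sample(range(N), CONTACTS):
            if states[j] == "S" and random.random() < P_INF:
                states[j] = "I"
        # recovery event
        if random.random() < P_REC:
            states[i] = "R"

print({s: states.count(s) for s in "SIR"})
```

A production framework like the one described would additionally attach demographics, hospitalization pathways, and data-driven parameters to each agent, but the event-driven per-agent loop is the common core.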

Keywords: modelling, simulation, agent-based, SARS-CoV-2, COVID-19

Procedia PDF Downloads 174
990 The Effects of Billboard Content and Visible Distance on Driver Behavior

Authors: Arsalan Hassan Pour, Mansoureh Jeihani, Samira Ahangari

Abstract:

Distracted driving has been one of the most persistent concerns surrounding our daily use of vehicles since the invention of the automobile. While much attention has recently been given to cell-phone-related distraction, commercial billboards along roads are also candidates for drivers' visual and cognitive distraction, as they may take drivers’ eyes from the road and their minds off the driving task to see, perceive and think about the billboard’s content. Using a driving simulator and a head-mounted eye-tracking system, speed change, acceleration, deceleration, throttle response, collision, lane changing, and offset-from-lane-center data, along with gaze fixation duration and frequency data, were collected in this study. Some 92 participants from a fairly diverse sociodemographic background drove on a simulated freeway in the Baltimore, Maryland area and were exposed to three different billboards to investigate the effects of billboards on drivers’ behavior. Participants glanced at the billboards several times with different frequencies, the maximum of which occurred on the billboard with the highest cognitive load. About 74% of the participants did not look at billboards for more than two seconds per glance, except for the billboard with a short visible area. Analysis of variance (ANOVA) was performed to find the variations in driving behavior across the areas where the billboard was invisible, readable, and already passed. The results show a slight difference in speed, throttle, brake, steering velocity, and lane changing among the different areas. Brake force and deviation from the center of the lane increased in the readable area in comparison with the visible area, and speed increased right after each billboard. The results indicated that billboards have a significant effect on driving performance and visual attention depending on their content and visibility status.
Generalized linear model (GLM) analysis showed no association of participants’ age or driving experience with gaze duration. However, the visible distance of the billboard, gender, and billboard content had a significant effect on gaze duration.
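The one-way ANOVA described above compares a driving measure across the three billboard zones. A hedged sketch of that comparison, with synthetic placeholder speeds rather than the simulator data:

```python
import numpy as np
from scipy.stats import f_oneway

# Illustrative one-way ANOVA comparing mean speed (km/h) across three
# billboard zones: invisible, readable, and post-billboard. The samples are
# synthetic placeholders with assumed means, not data from the study.
rng = np.random.default_rng(1)
speed_invisible = rng.normal(100, 5, size=30)
speed_readable = rng.normal(97, 5, size=30)   # assumed slowdown while reading
speed_post = rng.normal(102, 5, size=30)      # assumed speed-up afterwards

f_stat, p_value = f_oneway(speed_invisible, speed_readable, speed_post)
print(f_stat, p_value)  # a small p-value suggests the zone means differ
```

The same per-zone grouping applies to the other measures the study lists (throttle, brake force, lane offset), one ANOVA per measure.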

Keywords: ANOVA, billboards, distracted driving, drivers' behavior, driving simulator, eye-tracking system, GLM

Procedia PDF Downloads 128
989 Towards a Measuring Tool to Encourage Knowledge Sharing in Emerging Knowledge Organizations: The Who, the What and the How

Authors: Rachel Barker

Abstract:

The accelerating pace of today's truly knowledge-intensive world increasingly confronts organizations with formidable challenges. Organizations are thus introduced to a new lexicon belonging to a new paradigm of who, what and how knowledge at individual and organizational levels should be managed. Although organizational knowledge has been recognized as a valuable intangible resource that holds the key to competitive advantage, little progress has been made in understanding how knowledge sharing at the individual level could benefit knowledge use at the collective level to ensure added value. The research problem is that little research exists on measuring knowledge sharing through a multi-layered structure of ideas with, at its foundation, philosophical assumptions to support presuppositions and commitment, which requires actual findings from measured variables to confirm observed and expected events. The purpose of this paper is to address this problem by presenting a theoretical approach to measuring knowledge sharing in emerging knowledge organizations. The research question arises because, despite the competitive necessity of becoming a knowledge-based organization, leaders have found it difficult to transform their organizations due to a lack of knowledge about who should do it, what should be done and how. The main premise of this research is the challenge for knowledge leaders to develop an organizational culture conducive to the sharing of knowledge, in which learning becomes the norm. The theoretical constructs were derived from the three components of knowledge management theory, namely the technical, communication and human components, where it is suggested that this knowledge infrastructure could ensure effective management.
While it is realised that it might be problematic to implement and measure all relevant concepts, this paper presents the effect of eight critical success factors (CSFs), namely: organizational strategy, organizational culture, systems and infrastructure, intellectual capital, knowledge integration, organizational learning, motivation/performance measures, and innovation. These CSFs were identified based on a comprehensive literature review of existing research and tested in a new framework adapted from the four perspectives of the balanced scorecard (BSC). Based on these CSFs and their items, an instrument was designed and tested among managers and employees of a purposefully selected engineering company in South Africa that relies on knowledge sharing to ensure its competitive advantage. Rigorous pretesting through personal interviews with executives and a number of academics took place to validate the instrument, improve the quality of items and correct the wording of issues. Through analysis of the surveys collected, this research empirically models and uncovers key aspects of these dimensions based on the CSFs. Reliability of the instrument was calculated using Cronbach’s α for the two sections of the instrument, at the organizational and individual levels. Construct validity was confirmed using factor analysis. The impact of the results was tested using structural equation modelling and proved to be a basis for implementing and understanding the competitive predisposition of the organization as it enters the process of knowledge management. In addition, the organization realised the importance of consolidating its knowledge assets to create value that is sustainable over time.
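The reliability statistic reported above, Cronbach's alpha, is computed from the item variances and the variance of the total score: α = k/(k−1) · (1 − Σσ²_item / σ²_total). A minimal sketch with hypothetical questionnaire responses:

```python
import numpy as np

# Cronbach's alpha for a questionnaire section:
# rows = respondents, columns = items.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses: two perfectly correlated items yield the
# maximum alpha of 1.0.
responses = np.array([[1, 1], [2, 2], [3, 3], [4, 4]], dtype=float)
print(cronbach_alpha(responses))
```

In practice, alpha is computed separately for each instrument section, as the study does for its organizational-level and individual-level sections.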

Keywords: innovation, intellectual capital, knowledge sharing, performance measures

Procedia PDF Downloads 195
988 An Experimental Study on the Thermal Properties of Concrete Aggregates in Relation to Their Mineral Composition

Authors: Kyung Suk Cho, Heung Youl Kim

Abstract:

The analysis of the petrologic characteristics and thermal properties of crushed aggregates for concrete, such as granite, gneiss, dolomite, shale and andesite, found that the rock-forming minerals determined the thermal properties of the aggregates. The thermal expansion coefficients of aggregates containing a large amount of quartz increased rapidly at 573°C due to the quartz transition. The mass of aggregates containing carbonate minerals decreased rapidly at 750°C due to decarbonation, while their specific heat capacity increased relatively. The mass of aggregates containing hydrated silicate minerals decreased more significantly, and their specific heat capacities were greater compared with aggregates containing feldspar or quartz. It is deduced that the hydroxyl group (OH) in the hydrated silicates was released as its bonds loosened at high temperatures. Aggregates containing mafic minerals turned red at high temperatures due to oxidation. Moreover, the comparison of cooling methods showed that rapid cooling using water resulted in a greater reduction in aggregate mass than slow cooling at room temperature. In order to observe the fire resistance performance of concrete mixes identical except for the coarse aggregate, mass loss and the compressive strength reduction factor at 200, 400, 600 and 800°C were measured. It was found from the analysis of granite and gneiss that the difference in thermal expansion coefficients between cement paste and aggregate caused by the quartz transition at 573°C resulted in thermal stress inside the concrete and thus triggered concrete cracking. The ferromagnesian hydrated silicates in andesite and shale caused a greater reduction in both initial stiffness and mass compared with other aggregates. However, the thermal expansion coefficients of andesite and shale were similar to that of cement paste, and since they were low in thermal conductivity and high in specific heat capacity, concrete cracking was relatively less severe.
Being slow in heat transfer, they were judged to be materials of high heat capacity.
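The cracking mechanism invoked for granite and gneiss (thermal stress from the expansion mismatch between cement paste and quartz-bearing aggregate near the 573-degree transition) can be illustrated with a back-of-the-envelope elastic estimate; every material value below is an assumed round number for illustration, not a measurement from the study.

```python
def mismatch_stress(e_modulus_pa, alpha_aggregate, alpha_paste, delta_t):
    """Elastic thermal stress from a strain mismatch: sigma = E * (a1 - a2) * dT."""
    return e_modulus_pa * (alpha_aggregate - alpha_paste) * delta_t

# Assumed values: 30 GPa composite modulus, aggregate CTE of 20e-6 /K near the
# quartz transition vs. 12e-6 /K for cement paste, over a 50 K interval.
sigma = mismatch_stress(30e9, 20e-6, 12e-6, 50.0)
print(f"mismatch stress ~ {sigma / 1e6:.1f} MPa")
```

Even this crude estimate, about 12 MPa, exceeds the few-MPa tensile strength typical of concrete, which is consistent with the cracking the authors report.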

Keywords: crush-aggregates, fire resistance, thermal expansion, heat transfer

Procedia PDF Downloads 228
987 Application of Discrete-Event Simulation in Health Technology Assessment: A Cost-Effectiveness Analysis of Alzheimer’s Disease Treatment Using Real-World Evidence in Thailand

Authors: Khachen Kongpakwattana, Nathorn Chaiyakunapruk

Abstract:

Background: Decision-analytic models for Alzheimer’s disease (AD) have been advanced to discrete-event simulation (DES), in which individual-level modelling of disease progression across continuous severity spectra and incorporation of key parameters such as treatment persistence into the model become feasible. This study aimed to apply DES to perform a cost-effectiveness analysis of treatment for AD in Thailand. Methods: A dataset of Thai patients with AD, representing unique demographic and clinical characteristics, was bootstrapped to generate a baseline cohort of patients. Each patient was cloned and assigned to donepezil, galantamine, rivastigmine, memantine or no treatment. Throughout the simulation period, the model randomly assigned each patient to discrete events including hospital visits, treatment discontinuation and death. Correlated changes in cognitive and behavioral status over time were modelled using patient-level data. Treatment effects were obtained from the most recent network meta-analysis. Treatment persistence, mortality and predictive equations for functional status, costs (Thai baht (THB) in 2017) and quality-adjusted life years (QALYs) were derived from country-specific real-world data. The time horizon was 10 years, with a discount rate of 3% per annum. Cost-effectiveness was evaluated against the willingness-to-pay (WTP) threshold of 160,000 THB/QALY gained (4,994 US$/QALY gained) in Thailand. Results: Under a societal perspective, only the prescription of donepezil to AD patients of all disease-severity levels was found to be cost-effective. Compared to untreated patients, although patients receiving donepezil incurred discounted additional costs of 2,161 THB, they experienced a discounted gain of 0.021 QALY, resulting in an incremental cost-effectiveness ratio (ICER) of 138,524 THB/QALY (4,062 US$/QALY). 
Moreover, providing early treatment with donepezil to mild AD patients further reduced the ICER to 61,652 THB/QALY (1,808 US$/QALY). However, the cost-effectiveness of donepezil waned when delayed treatment was given to a subgroup of moderate and severe AD patients [ICER: 284,388 THB/QALY (8,340 US$/QALY)]. Introducing a treatment stopping rule for the mild AD cohort when the Mini-Mental State Exam (MMSE) score fell below 10 did not deteriorate the cost-effectiveness of donepezil at the current treatment persistence level. On the other hand, none of the AD medications was cost-effective when considered under a healthcare perspective. Conclusions: The DES greatly enhances the real-world representativeness of decision-analytic models for AD. Under a societal perspective, treatment with donepezil improves patients’ quality of life and is considered cost-effective for AD patients of all disease-severity levels in Thailand. The optimal treatment benefits are observed when donepezil is prescribed from the early course of AD. Given healthcare budget constraints in Thailand, coverage of donepezil is most feasible if it starts with mild AD patients and is combined with the stopping rule.
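The event mechanics described in the Methods (randomly timed hospital visits, treatment discontinuation and death, with costs and QALYs discounted at 3% per annum) can be sketched as a minimal discrete-event simulation; every rate, cost and utility below is an invented placeholder, not one of the study's Thai real-world parameters.

```python
import heapq
import random

def simulate_patient(horizon=10.0, rate=0.03, seed=42):
    """Simulate one patient's discounted cost and QALYs over the time horizon."""
    rng = random.Random(seed)
    # Initial event schedule: (time in years, event name). All rates invented.
    queue = [(rng.expovariate(0.5), "hospital_visit"),
             (rng.expovariate(0.15), "discontinuation"),
             (rng.expovariate(0.08), "death")]
    heapq.heapify(queue)
    t = cost = qaly = 0.0
    on_treatment = True
    utility = 0.6          # placeholder utility weight for the health state
    drug_cost = 1000.0     # placeholder annual treatment cost
    visit_cost = 500.0     # placeholder cost per hospital visit
    while queue:
        t_event, name = heapq.heappop(queue)
        t_event = min(t_event, horizon)
        disc = (1.0 + rate) ** (-t)               # discount factor at interval start
        qaly += utility * (t_event - t) * disc    # accrue QALYs over the interval
        if on_treatment:
            cost += drug_cost * (t_event - t) * disc
        t = t_event
        if t >= horizon or name == "death":
            break
        if name == "discontinuation":
            on_treatment = False
        else:  # hospital visit: pay the visit cost, schedule the next one
            cost += visit_cost * disc
            heapq.heappush(queue, (t + rng.expovariate(0.5), "hospital_visit"))
    return cost, qaly
```

Running the same cloned patient with and without treatment and dividing the discounted cost difference by the discounted QALY difference gives the ICER that the study compares against the 160,000 THB/QALY threshold.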

Keywords: Alzheimer's disease, cost-effectiveness analysis, discrete event simulation, health technology assessment

Procedia PDF Downloads 129
986 Potential Opportunity and Challenge of Developing Organic Rankine Cycle Geothermal Power Plant in China Based on an Energy-Economic Model

Authors: Jiachen Wang, Dongxu Ji

Abstract:

Geothermal power generation is a mature technology with zero carbon emissions and stable power output, which could play a vital role as an optimal base-load substitute in China’s future decarbonized society. However, the development of geothermal power plants in China has stagnated for a decade due to the underestimation of geothermal energy and insufficient favorable policy. A lack of understanding of the potential value of base-load technology and its environmental benefits is the critical reason for the disappointing policy support. This paper proposes an energy-economic model to uncover the potential benefits of developing a geothermal power plant in Puer, including the value of base-load power generation and the environmental and economic benefits. Optimization of the organic Rankine cycle (ORC) for maximum power output and minimum levelized cost of electricity was first conducted. This process aimed at finding the optimal working fluid, turbine inlet pressure, pinch point temperature difference and superheat degree. The optimal ORC model was then fed into the energy-economic model to simulate the potential economic and environmental benefits. The impact of the geothermal power plant was tested under three scenarios: a carbon trading market, a direct subsidy per unit of electricity generated, and no policy support. In addition, the reservoir requirements, including geothermal temperature and mass flow rate, for the plant to be competitive with other renewable power generation technologies are listed. The results indicate that the ORC power plant has significant economic and environmental benefits over other renewable power generation technologies under the carbon trading and subsidy scenarios. At the same time, developers must locate geothermal reservoirs with a minimum temperature of 130 degrees and a minimum mass flow rate of 50 kg/s to guarantee a profitable project under the no-support scenario.
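The "minimum levelized cost of electricity" objective used in such energy-economic models can be written down compactly. A minimal sketch follows, with the discount rate, lifetime and all plant figures assumed for illustration rather than taken from the Puer case.

```python
def crf(rate, years):
    """Capital recovery factor: annualizes an upfront investment over its lifetime."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex, annual_opex, net_kw, capacity_factor, rate=0.08, years=25):
    """Levelized cost of electricity, in currency units per kWh."""
    annual_kwh = net_kw * capacity_factor * 8760  # 8760 hours per year
    return (capex * crf(rate, years) + annual_opex) / annual_kwh

# Assumed illustrative plant: 5 MW net, 90% capacity factor, 20M capex, 0.5M/yr opex.
print(f"LCOE ~ {lcoe(20e6, 0.5e6, 5000, 0.9):.3f} per kWh")
```

The high capacity factor is the crux of the base-load argument: geothermal plants can run near 90% of the hours in a year, which drives the LCOE down relative to variable renewables of similar capital cost.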

Keywords: geothermal power generation, optimization, energy model, thermodynamics

Procedia PDF Downloads 68
985 Influence of Glass Plates Different Boundary Conditions on Human Impact Resistance

Authors: Alberto Sanchidrián, José A. Parra, Jesús Alonso, Julián Pecharromán, Antonia Pacios, Consuelo Huerta

Abstract:

Glass is a commonly used material in building; there is not a unique design solution, as plates with different numbers of layers and interlayers may be used. In most façades, security glazing has to be used according to its performance in the pendulum impact test. The European Standard EN 12600 establishes an impact test procedure for the classification, from the point of view of human safety, of flat plates with different thicknesses, using a pendulum of two tires and a 50 kg mass that impacts the plate from different heights. However, this test does not replicate the actual dimensions and boundary conditions used in building configurations, and so the real stress distribution is not determined by this test. The influence of different boundary conditions, such as the ones employed on construction sites, is not well taken into account when testing the behaviour of safety glazing, and there is no detailed procedure and criteria to determine the glass resistance against human impact. To reproduce the actual boundary conditions on site, when needed, the pendulum test is arranged to be used "in situ", with no account for load control or stiffness, and without a standard procedure. Fracture stresses of small and large glass plates fit a Weibull distribution with quite a large dispersion, so conservative values are adopted for the admissible fracture stress under static loads. In fact, tests performed for human impact give a fracture strength two or three times higher, many times without a total fracture of the glass plate. Newer standards, for example DIN 18008-4, allow an admissible fracture stress 2.5 times higher than the one used for static and wind loads. Now two working areas are open: a) to define a standard for the ‘in situ’ test; b) to prepare a laboratory procedure that allows testing with a more realistic stress distribution. 
To work on both research lines, a laboratory setup that allows testing medium-size specimens with different border conditions has been developed. A special steel frame allows reproducing the stiffness of the glass support substructure, including a rigid condition used as reference. The dynamic behaviour of the glass plate and its support substructure has been characterized with finite element models updated with modal test results. In addition, a new portable impact machine is being used to obtain sufficient force and direction control during the impact test. An impact energy of 100 J is used. To avoid problems with broken glass plates, the tests have been done using an aluminium plate of 1000 mm x 700 mm size and 10 mm thickness supported on four sides; three different substructure stiffness conditions are used. A detailed control of the dynamic stiffness and the behaviour of the plate is done with modal tests. Repeatability of the test and reproducibility of results prove that a procedure to control both the stiffness of the plate and the impact level is necessary.
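As a sanity check before finite element model updating, the fundamental frequency of the rigidly (simply) supported reference case can be estimated by hand from classical thin-plate theory. The sketch below uses textbook aluminium properties assumed for the 1000 mm x 700 mm x 10 mm plate; these are illustrative values, not data from the test campaign.

```python
import math

def plate_frequency_hz(m, n, a, b, h, E, nu, rho):
    """Natural frequency f_mn (Hz) of a simply supported thin rectangular plate
    (classical Kirchhoff plate theory)."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))  # flexural rigidity, N*m
    return (math.pi / 2.0) * ((m / a) ** 2 + (n / b) ** 2) * math.sqrt(D / (rho * h))

# Assumed aluminium: E = 70 GPa, nu = 0.33, rho = 2700 kg/m^3.
f11 = plate_frequency_hz(1, 1, 1.0, 0.7, 0.01, 70e9, 0.33, 2700.0)
print(f"fundamental frequency ~ {f11:.1f} Hz")
```

A measured fundamental frequency well below this hand value would indicate a support substructure much softer than the rigid reference, which is exactly what the adjustable steel frame is meant to reproduce.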

Keywords: glass plates, human impact test, modal test, plate boundary conditions

Procedia PDF Downloads 307
984 Topology Optimization Design of Transmission Structure in Flapping-Wing Micro Aerial Vehicle via 3D Printing

Authors: Zuyong Chen, Jianghao Wu, Yanlai Zhang

Abstract:

Flapping-wing micro aerial vehicle (FMAV) is a new type of aircraft that mimics the flying behavior of small birds or insects. Compared to traditional fixed-wing or rotor-type aircraft, an FMAV only needs to control the motion of its flapping wings, changing the size and direction of lift to control the flight attitude. Therefore, its transmission system should be designed to be very compact. Lightweight design can effectively extend its endurance time, while engineering experience alone can hardly meet the requirements of an FMAV for structural strength and mass simultaneously. Current research still lacks guidance on considering the nonlinear factors of 3D printing materials when carrying out topology optimization, especially for the tiny FMAV transmission system. The coupling of non-linear material properties and non-linear contact behaviors of the FMAV transmission system is a great challenge to the reliability of the topology optimization result. In this paper, topology optimization design based on the FEA solver package Altair OptiStruct was carried out for the transmission system of an FMAV manufactured by 3D printing. Firstly, the isotropic constitutive behavior of the ultraviolet (UV) curable resin used to fabricate the structure of the FMAV was evaluated and confirmed through tensile tests. Secondly, a numerical computation model describing the mechanical behavior of the FMAV transmission structure was established and verified by experiments. Then a topology optimization modeling method considering non-linear factors was presented, and the optimization results were verified by dynamic simulation and experiments. Finally, detailed discussions of different load states and constraints were carried out to explore the leading factors affecting the optimization results. The contributions drawn from this article, helpful for guiding the lightweight design of FMAVs, are summarized as follows: first, a dynamic simulation modeling method used to obtain the load states is presented. 
Second, a verification method for optimized results considering non-linear factors is introduced. Third, optimizing based on a single representative load state can achieve a better weight reduction effect and improve the computational efficiency compared with taking multiple load states into account. Fourth, the chosen constraint formulation helps improve the ability to resist bending deformation. Fifth, a constraint on displacement helps to improve the structural stiffness of the optimized result. Results and engineering guidance in this paper may shed light on the structural optimization and lightweight design of future advanced FMAVs.
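Density-based topology optimization of the kind run in OptiStruct typically rests on a SIMP-style material interpolation; the sketch below shows only the generic penalization idea and is not OptiStruct's internal formulation.

```python
def simp_modulus(rho, e0, e_min=1e-9, p=3.0):
    """SIMP interpolation: element stiffness for a design density rho in [0, 1].

    The penalization exponent p > 1 makes intermediate densities structurally
    inefficient, pushing the optimizer toward clear solid/void designs that
    can actually be 3D printed."""
    return e_min + rho ** p * (e0 - e_min)

# A half-dense element retains only 1/8 of the full stiffness with p = 3:
print(simp_modulus(0.5, 1.0))
```

The small positive floor `e_min` keeps the stiffness matrix non-singular when elements are driven to (near-)zero density.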

Keywords: flapping-wing micro aerial vehicle, 3d printing, topology optimization, finite element analysis, experiment

Procedia PDF Downloads 169
983 The Use of Orthodontic Pacifiers to Prevent Pacifier Induced Malocclusion - A Literature Review

Authors: Maliha Ahmed Suleman, Sidra Ahmed Suleman

Abstract:

Introduction: The use of pacifiers is common amongst infants and young children as a comforting behavior. These non-nutritive sucking habits can be detrimental to the developing occlusion should they persist while the permanent dentition is established. Orthodontic pacifiers have been recommended as an alternative to conventional pacifiers, as they are considered to interfere less with orofacial development. However, there is a lack of consensus on whether this is true. Aim and objectives: To review the prevalence of malocclusion associated with the use of orthodontic pacifiers. Methodology: Literature was identified through a rigorous search of the Embase, PubMed, CINAHL, and Cochrane Library databases. Articles published from 2000 onwards were included. In total, 5 suitable papers were identified. Results: One study showed that the use of orthodontic pacifiers increased the risk of malocclusion, as seen through a greater prevalence of accentuated overjet, posterior crossbites, and anterior open bites in comparison to individuals who did not use pacifiers. However, this study found a clinically significant reduction in the prevalence of anterior open bites amongst orthodontic pacifier users in comparison to conventional pacifier users. Another study found that both types of pacifiers led to malocclusion; however, it found no difference in the mean overjet and prevalence of anterior open bites between conventional and orthodontic pacifier users. In contrast, one study suggested that orthodontic pacifiers do not seem to be related to the development of malocclusions in the primary dentition, and that using them between the ages of 0-3 months was actually beneficial, as it prevents thumb-sucking habits. One of the systematic reviews concluded that orthodontic pacifiers do not seem to reduce the occurrence of posterior crossbites; however, they could reduce the development of open bites by virtue of their thin neck design. 
Another systematic review, however, concluded that there were no differences in the effects on the stomatognathic system between conventional and orthodontic pacifiers. Conclusion: There is limited and conflicting evidence to support the notion that orthodontic pacifiers can reduce the prevalence of malocclusion when compared to conventional pacifiers. Well-designed randomized controlled trials are required in the future in order to thoroughly assess the effects of orthodontic pacifiers on the developing occlusion and orofacial structures.

Keywords: orthodontics, pacifier, malocclusion, review

Procedia PDF Downloads 85
982 The Effects of Cooling during Baseball Games on Perceived Exertion and Core Temperature

Authors: Chih-Yang Liao

Abstract:

Baseball is usually played outdoors in the warmest months of the year; baseball players are therefore susceptible to the influence of hot environments. It has been shown that in Major League Baseball, hitting performance is higher in games played in warm weather than in cold weather. Intermittent cooling during sporting events can reduce the risk of hyperthermia and increase endurance performance. However, the effects of cooling during baseball games played in a hot environment are unclear. This study adopted a cross-over design. Ten Division I collegiate male baseball players in Taiwan volunteered to participate. Each player played two simulated baseball games, with one day in between. Five of the players received intermittent cooling during the first simulated game, while the other five received it during the second. The participants' neck and forehead regions were covered for 6 min, 3 to 4 times during the games, with towels that had been soaked in icy salt water. The participants received the cooling treatment in the dugout when they were not on the field for defense or hitting. During the two simulated games, the temperature was 31.1-34.1°C and the humidity was 58.2-61.8%, with no difference between the two games. Ratings of perceived exertion, thermal sensation, and tympanic and forehead skin temperatures immediately after each defensive half-inning and after cooling treatments were recorded. Ratings of perceived exertion were measured using the Borg 10-point scale. Thermal sensation was measured with a 6-point scale. The tympanic and skin temperatures were measured with infrared thermometers. The data were analyzed with a two-way repeated-measures analysis of variance. The results showed that intermittent cooling significantly reduced ratings of perceived exertion and thermal sensation. Forehead skin temperature was also significantly decreased after cooling treatments. 
However, the tympanic temperature was not significantly different between the two trials. In conclusion, intermittent cooling in the neck and forehead regions was effective in alleviating the perceived exertion and heat sensation. However, this cooling intervention did not affect the core temperature. Whether intermittent cooling has any impact on hitting or pitching performance in baseball players warrants further investigation.

Keywords: baseball, cooling, ratings of perceived exertion, thermal sensation

Procedia PDF Downloads 143
981 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT

Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar

Abstract:

X-ray attenuation coefficient [µ(E)] of any substance, for energy (E), is a sum of the contributions from Compton scattering [µCom(E)] and the photoelectric effect [µPh(E)]. In terms of the electron density (ρe) and the effective atomic number (Zeff), µCom(E) is proportional to [(ρe)fKN(E)] while µPh(E) is proportional to [(ρeZeffx)/Ey], with fKN(E) being the Klein-Nishina formula and x and y being the exponents for the photoelectric effect. By taking the sample's HU at two different excitation voltages (V=V1, V2) of the CT machine, we can solve for X=ρe and Y=ρeZeffx from these two independent equations, as is attempted in DECT inversion. Since µCom(E) and µPh(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <µ(V)>=<µw(V)>[1+HU(V)/1000], where the subscript 'w' refers to water and the averaging process <….> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of µ(E) with respect to X and Y implies that (a) <µ(V)> is a linear combination of X and Y and (b) for inversion, X and Y can be written as linear combinations of two independent observations <µ(V1)> and <µ(V2)> with V1≠V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100 and 140 kVp, as are used for cardiological investigations. The S(E,V) are generated by using the Boone-Seibert source spectrum, filtered by aluminium filters of different thickness lAl with 7 mm ≤ lAl ≤ 12 mm, and D(E) is taken to be that of a typical Si[Li] solid-state detector and of a GdOS scintillator detector. 
In the values of X and Y found by using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose and glucose. For low Zeff materials like propionic acid, Zeffx is overestimated by 20%, with X being within 1%. For high Zeff materials like KOH, the value of Zeffx is underestimated by 22%, while the error in X is +15%. These imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference in the values of the inversion coefficients for the two types of detectors is negligible: the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor in calculating the coefficients of inversion.
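Once the spectrum- and detector-averaged coefficients are fixed, the DECT inversion described above reduces to a 2x2 linear solve. The sketch below demonstrates the algebra only; the coefficient values are invented round numbers, not the tabulated ones from the paper.

```python
def invert_dect(mu1, mu2, a1, b1, a2, b2):
    """Solve   mu1 = a1*X + b1*Y,   mu2 = a2*X + b2*Y
    for X = rho_e and Y = rho_e * Zeff**x by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("spectra too similar: inversion is ill-conditioned")
    x = (mu1 * b2 - mu2 * b1) / det
    y = (a1 * mu2 - a2 * mu1) / det
    return x, y

# Round-trip check with invented coefficients and a known (X, Y) = (1.0, 2.0):
mu1 = 0.9 * 1.0 + 0.4 * 2.0   # forward model at V1
mu2 = 0.95 * 1.0 + 0.2 * 2.0  # forward model at V2
print(invert_dect(mu1, mu2, 0.9, 0.4, 0.95, 0.2))
```

Given the exponent x, Zeff then follows as (Y/X)**(1/x); the accuracy of the recovered Zeff hinges on how well the coefficients reflect the actual S(E,V) and D(E), which is precisely the paper's point.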

Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum

Procedia PDF Downloads 400