Search results for: particle-tracking model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16828

778 A Vaccination Program to Control an Outbreak of Acute Hepatitis A among MSM in Taiwan, 2016

Authors: Ying-Jung Hsieh, Angela S. Huang, Chu-Ming Chiu, Yu-Min Chou, Chin-Hui Yang

Abstract:

Background and Objectives: Hepatitis A is primarily acquired by the fecal-oral route through person-to-person contact or ingestion of contaminated food or water. During 2010 to 2014, an average of 83 cases of locally acquired disease was reported to Taiwan’s notifiable disease system. Taiwan Centers for Disease Control (TCDC) identified an outbreak of acute hepatitis A that began in June 2015. Of the 126 cases reported in 2015, 103 (82%) were reported during June–December, and 95 (92%) of them were male. The average age of all male cases was 31 years (median, 29 years; range, 15–76 years). Among the 95 male cases, 49 (52%) were also infected with HIV, and all reported having had sex with other men. To control this outbreak, TCDC launched a free hepatitis A vaccination program in January 2016 for close contacts of confirmed hepatitis A cases, including family members, sexual partners, and household contacts. The effect of the vaccination program was evaluated. Methods: All cases of hepatitis A reported to the National Notifiable Disease Surveillance System were included. A case of hepatitis A was defined as locally acquired disease in a person who had acute clinical symptoms, including fever, malaise, loss of appetite, nausea, or abdominal discomfort compatible with hepatitis, and who tested positive for anti-HAV IgM during June 2015 to June 2016 in Taiwan. The rate of case accumulation was calculated using a simple regression model. Results: During January–June 2016, there were 466 cases of hepatitis A reported; of the 243 (52%) who were also infected with HIV, 232 (95%) had a history of having sex with men. Of the 346 cases that were followed up, 259 (75%) provided information on contacts, but only 14 (5%) of them provided the names of their sexual partners. Among the 602 contacts reported, 349 (58%) were family members, 14 (2%) were sexual partners, and 239 (40%) were other household contacts.
Among the 602 contacts eligible for free hepatitis A vaccination, 440 (73%) received the vaccine. There were 87 (25%) cases who refused to disclose their close contacts. The average case accumulation rate during January–June 2016 was 21.7 cases per month, 6.8 times the average rate of 3.2 cases per month during June–December 2015. Conclusions: Although the vaccination program provided free hepatitis A vaccine to close contacts of hepatitis A patients, the outbreak continued and even gained momentum. Refusal by hepatitis A patients to name their close contacts, and refusal by contacts to receive the hepatitis A vaccine, may have contributed to the poor effect of the program. Targeted vaccination of all MSM may be needed to control the outbreak among this population in the short term. In the long term, a universal vaccination program is needed to prevent hepatitis A infection.
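The "simple regression model" used to compute the case accumulation rate is not specified beyond its name. One plausible reading, sketched below with hypothetical monthly cumulative counts (the actual surveillance figures are not given in the abstract), is a straight-line fit whose slope is the accumulation rate in cases per month:

```python
import numpy as np

# Hypothetical monthly cumulative case counts for January-June 2016;
# illustrative values only, not the published surveillance data.
months = np.arange(1, 7)
cumulative_cases = np.array([25, 48, 70, 95, 118, 140])

# Least-squares fit of cumulative cases against time; the slope is the
# average case accumulation rate in cases per month.
slope, intercept = np.polyfit(months, cumulative_cases, 1)
print(f"estimated accumulation rate: {slope:.1f} cases/month")
```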

Keywords: hepatitis A, HIV, men who have sex with men, vaccination

Procedia PDF Downloads 256
777 On-Ice Force-Velocity Modeling Technical Considerations

Authors: Dan Geneau, Mary Claire Geneau, Seth Lenetsky, Ming-Chang Tsai, Marc Klimstra

Abstract:

Introduction: Horizontal force-velocity profiling (HFVP) involves modeling an athlete’s linear sprint kinematics to estimate valuable maximum force and velocity metrics. This approach to performance modeling has been used in field-based team sports and has recently been introduced to ice hockey as a forward-skating performance assessment. While preliminary data have been collected on ice, the distance constraints of the on-ice test restrict the ability of athletes to reach their maximal velocity, which limits the model’s ability to estimate athlete performance effectively. This is especially true of more elite athletes. This report explores whether athletes are able to reach a velocity plateau on ice similar to what has been seen in overground trials. Fourteen male Major Junior ice-hockey players (BW = 83.87 ± 7.30 kg, height = 188 ± 3.4 cm, age = 18 ± 1.2 years, n = 14) were recruited. For on-ice sprints, participants completed a standardized warm-up consisting of skating and dynamic stretching and a progression of three skating efforts from 50% to 95%. Following the warm-up, participants completed three on-ice 45 m sprints, with three minutes of rest between each trial. For overground sprints, participants completed a dynamic warm-up similar to that of the on-ice trials, followed by three 40 m overground sprint trials. For each trial (on-ice and overground), a radar device (Stalker ATS II, Texas, USA), aimed at the participant’s waist, was used to collect instantaneous velocity. Sprint velocities were modelled with a custom Python (version 3.2) script using a mono-exponential function, similar to previous work. To determine whether on-ice trials were achieving a maximum velocity (plateau), minimum acceleration values of the modeled data at the end of the sprint were compared (using a paired t-test) between on-ice and overground trials. Significant differences (P < 0.001) between overground and on-ice minimum accelerations were observed.
On-ice trials consistently showed higher final acceleration values, indicating that a maximal maintained velocity (plateau) had not been reached. Based on these preliminary findings, it is suggested that reliable HFVP metrics cannot yet be collected from all ice-hockey populations using current methods. Elite male players were not able to achieve a velocity plateau similar to that seen in overground trials, indicating the absence of a maximum velocity measure. Because current velocity and acceleration modeling techniques depend on a velocity plateau, these results indicate the potential for error in on-ice HFVP measures. These findings therefore suggest that a greater on-ice sprint distance may be required, or that other velocity modeling techniques are needed in which maximal velocity is not required for a complete profile.
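The mono-exponential velocity model referenced above is commonly written as v(t) = v_max(1 − e^(−t/τ)). A sketch of fitting it to a radar trace and reading off the final modeled acceleration follows; the data are synthetic (an assumed v_max of 8 m/s and τ of 1.2 s plus noise), not the study's radar output:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp_velocity(t, v_max, tau):
    """Mono-exponential sprint velocity model: v(t) = v_max * (1 - exp(-t/tau))."""
    return v_max * (1.0 - np.exp(-t / tau))

# Synthetic radar trace: approaches v_max = 8 m/s with tau = 1.2 s, plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.1, 6.0, 120)
v_obs = mono_exp_velocity(t, 8.0, 1.2) + rng.normal(0, 0.05, t.size)

(v_max_hat, tau_hat), _ = curve_fit(mono_exp_velocity, t, v_obs, p0=(7.0, 1.0))

# Modeled acceleration is the analytic derivative; its value at the end of
# the sprint indicates whether a velocity plateau was reached (~0 m/s^2).
a_final = (v_max_hat / tau_hat) * np.exp(-t[-1] / tau_hat)
print(f"v_max = {v_max_hat:.2f} m/s, tau = {tau_hat:.2f} s, "
      f"final acceleration = {a_final:.3f} m/s^2")
```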

Keywords: ice-hockey, sprint, skating, power

Procedia PDF Downloads 101
776 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality

Authors: Qian Yi Ooi

Abstract:

At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, with some slight deviations arising from scale differences. However, insufficient parameters or poor surface mesh quality are likely to occur if these small deviations are embedded in a future civil aircraft whose size differs greatly from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, this study examines the effect of geometric similarity of airfoil parameters and surface mesh quality on CFD calculations, in order to determine how different parameterization methods perform at different airfoil scales. The research objects are three airfoil scales, namely the wing root and wingtip of a conventional civil aircraft and the wing root of the giant hybrid wing, each described by three parameterization methods, so that the calculation differences between airfoils of different sizes can be compared. In this study, the constants are NACA 0012, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The experimental variables are three airfoil parameterization methods: the point cloud method, the B-spline curve method, and the class function/shape function transformation (CST) method. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study uses different numbers of edge mesh divisions with the same bias factor in the CFD simulation. The results show that, as the airfoil scale changes, the parameterization method, the number of control points, and the number of mesh divisions should be adjusted to maintain the accuracy of the wing's aerodynamic performance.
When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data to maintain the accuracy of the airfoil’s aerodynamic performance, which runs up against the severe limit of insufficient computer capacity. When using the B-spline curve method, the number of control points and the number of mesh divisions must be set appropriately to obtain higher accuracy; however, this balance cannot be defined directly and must instead be found iteratively by adding and subtracting points. Lastly, when using the CST method, a limited number of control points is enough to accurately parameterize the larger-sized wing, so a higher degree of accuracy and stability can be obtained even on a lower-performance computer.
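The CST method mentioned above multiplies a class function by a Bernstein-polynomial shape function. A minimal sketch, with illustrative weights (not values fitted to NACA 0012), could look like this:

```python
import numpy as np
from math import comb

def cst_surface(x, weights, n1=0.5, n2=1.0):
    """Class/Shape function Transformation (CST): y(x) = C(x) * S(x), where
    C(x) = x**n1 * (1-x)**n2 is the class function (round nose, sharp tail
    for n1=0.5, n2=1.0) and S(x) is a Bernstein-polynomial sum weighted by
    the design variables."""
    n = len(weights) - 1
    C = x**n1 * (1.0 - x)**n2
    S = sum(w * comb(n, i) * x**i * (1.0 - x)**(n - i)
            for i, w in enumerate(weights))
    return C * S

# Illustrative weights; fitting them to a target airfoil would be a small
# least-squares problem over sampled surface coordinates.
x = np.linspace(0.0, 1.0, 101)
y_upper = cst_surface(x, [0.17, 0.16, 0.14, 0.13])
print(f"max half-thickness ~ {y_upper.max():.3f} chord")
```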

Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality

Procedia PDF Downloads 222
775 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs

Authors: Michela Quadrini

Abstract:

Chord diagrams occur throughout mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology an important motivation for studying chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one Watson-Crick base-pair interaction between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which a number of chords with distinct endpoints are attached. There is a natural fattening of any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus, that of its associated surface. To each chord diagram and linear chord diagram, it is possible to associate an intersection graph: a graph whose vertices correspond to the chords of the diagram and whose edges represent chord intersections. This intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. To study these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permit modeling an LCD in terms of the relations among its chords. This set comprises three operators: crossing, nesting, and concatenation.
The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows a unique algebraic term to be associated with each linear chord diagram, while the remaining operators allow the term to be rewritten through a set of appropriate rewriting rules. These rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem by leading to an algorithm that calculates the free energy of the molecule more accurately than existing ones. It could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus, and for studying the relations among other invariants.
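The intersection graph construction described above is easy to make concrete. In the sketch below (which assumes chords are encoded as pairs of backbone endpoint positions), two chords are adjacent exactly when their endpoints interleave along the backbone:

```python
from itertools import combinations

def intersection_graph(chords):
    """Edges of the intersection graph of a linear chord diagram.
    Each chord is a pair (a, b) of distinct backbone positions with a < b;
    two chords cross iff their endpoints interleave."""
    edges = set()
    for (i, (a1, b1)), (j, (a2, b2)) in combinations(enumerate(chords), 2):
        if a1 < a2 < b1 < b2 or a2 < a1 < b2 < b1:
            edges.add((i, j))
    return edges

# Chord 0 crosses chords 1 and 2; chord 2 is nested inside chord 1.
chords = [(1, 4), (2, 6), (3, 5)]
print(sorted(intersection_graph(chords)))  # [(0, 1), (0, 2)]
```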

Keywords: chord diagrams, linear chord diagram, equivalence class, topological language

Procedia PDF Downloads 203
774 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach

Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier

Abstract:

Emotion plays a key role in many applications, such as healthcare, where it helps capture patients’ emotional behavior. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance because of their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data, and one of the problems in affective computing is the limited amount of annotated data. The existing labelled emotion datasets are highly subjective to the perception of the annotator. We address the first issue, feature selection, by exploiting traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a convolutional neural network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem, the subjectivity of stress labels, we use Lovheim’s cube, a 3-dimensional projection of emotions.
Monoamine neurotransmitters are chemical messengers in the brain that transmit signals involved in perceiving emotions. The cube aims to explain the relationship between these neurotransmitters and the positions of emotions in 3D space. The emotion representations learnt by Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim’s cube. We believe that this work is a first step towards creating a connection between artificial intelligence and the chemistry of human emotions.
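The mapping of learnt representations into the cube via three-component PCA can be sketched as follows; the embeddings here are random stand-ins (the real inputs would be Emo-CNN penultimate-layer activations, and the feature dimension of 64 is assumed):

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for learnt Emo-CNN embeddings: 200 utterances x 64 features.
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(200, 64))

# Three-component PCA projects each utterance into a 3-D space that can be
# aligned with the axes of Lovheim's cube (the three monoamine levels).
pca = PCA(n_components=3)
coords = pca.fit_transform(embeddings)
print(coords.shape)
```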

Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube

Procedia PDF Downloads 156
773 Using Fractal Architectures for Enhancing the Thermal-Fluid Transport

Authors: Surupa Shaw, Debjyoti Banerjee

Abstract:

Enhancing heat transfer in compact volumes is a challenge when constrained by cost, especially the cost associated with minimizing pumping power consumption. This is particularly acute for electronic chip cooling applications. Technological advancements in microelectronics have led to chip architectures with increased power consumption. As a consequence, packaging technologies are saddled with the need for higher rates of power dissipation in smaller form factors. The increasing circuit density, higher heat flux values for dissipation, and the significant decrease in the size of electronic devices pose thermal management challenges that need to be addressed with a better design of the cooling system. Maximizing the surface area of heat-exchanging surfaces (e.g., extended surfaces or “fins”) can enable dissipation of higher heat fluxes. Fractal structures have been shown to maximize surface area in compact volumes. Self-replicating structures at multiple length scales are called “fractals” (i.e., objects with fractional dimensions, unlike regular geometric objects such as spheres or cubes, whose volume and surface area scale as integer powers of the length scale). Fractal structures are expected to provide an appropriate technology solution to these challenges for enhanced heat transfer in microelectronic devices, by maximizing the surface area available to heat-exchanging fluids within compact volumes. In this study, the effect of different fractal micro-channel architectures and flow structures on the enhancement of transport phenomena in heat exchangers is explored by parametric variation of the fractal dimension. The study proposes a model that would enable cost-effective solutions for thermal-fluid transport in energy applications.
The objective of this study is to ascertain the sensitivity of various parameters (such as heat flux and pressure gradient as well as pumping power) to variation in fractal dimension. The role of the fractal parameters will be instrumental in establishing the most effective design for the optimum cooling of microelectronic devices. This can help establish the requirement of minimal pumping power for enhancement of heat transfer during cooling. Results obtained in this study show that the proposed models for fractal architectures of microchannels significantly enhanced heat transfer due to augmentation of surface area in the branching networks of varying length-scales.
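One way to see why branching networks of varying length scales augment surface area is a quick sum over generations. The geometry below (bifurcating channels with Murray-law diameter scaling) is an assumed illustration, not the study's actual microchannel design:

```python
import math

def network_surface_area(d0, L0, n_branches, diam_ratio, len_ratio, generations):
    """Total wetted area pi*d*L summed over all channels in each generation
    of a self-similar branching network."""
    total = 0.0
    d, L, count = d0, L0, 1
    for _ in range(generations + 1):
        total += count * math.pi * d * L
        d *= diam_ratio
        L *= len_ratio
        count *= n_branches
    return total

# Murray's law for a bifurcation gives a diameter ratio of 2**(-1/3) ~ 0.794;
# the root diameter, root length, and length ratio are assumed values.
area = network_surface_area(d0=1e-3, L0=1e-2, n_branches=2,
                            diam_ratio=2**(-1/3), len_ratio=0.7, generations=4)
print(f"wetted area = {area:.2e} m^2")
```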

Keywords: fractals, microelectronics, constructal theory, heat transfer enhancement, pumping power enhancement

Procedia PDF Downloads 319
772 Bi-Directional Impulse Turbine for Thermo-Acoustic Generator

Authors: A. I. Dovgjallo, A. B. Tsapkova, A. A. Shimanov

Abstract:

The paper is devoted to one type of externally heated engine: the thermoacoustic engine. In a thermoacoustic engine, heat energy is converted to acoustic energy. This acoustic energy of the oscillating gas flow must then be converted to mechanical energy, which in turn must be converted to electric energy. The most widely used way of transforming acoustic energy into electric energy is a linear generator or a conventional generator with a crank mechanism; in both cases, a piston is used. The main disadvantages of using a piston are friction losses, lubrication problems, and working fluid pollution, which decrease engine power and ecological efficiency. The use of a bidirectional impulse turbine as an energy converter is suggested instead. The distinctive feature of this kind of turbine is that the shock wave of the oscillating gas flow passing through the turbine is reflected and passes through the turbine again in the opposite direction, while the direction of turbine rotation does not change. Different types of bidirectional impulse turbines for thermoacoustic engines are analyzed. The Wells turbine is the simplest and least efficient of them. A radial impulse turbine has a more complicated design and is more efficient than the Wells turbine. The most appropriate type proved to be the axial impulse turbine, which has a simpler design than the radial turbine and similar efficiency. The peculiarities of the method for calculating an impulse turbine are discussed, including the changes in gas pressure and velocity as functions of time during the generation of shock waves of the oscillating gas flow in a thermoacoustic system. In a thermoacoustic system, the pressure constantly changes according to a certain law due to acoustic wave generation; the peak pressure values define the amplitude, which determines the acoustic power.
The gas flowing in a thermoacoustic system periodically changes its direction, and its mean velocity is equal to zero, but its peak values can be used to rotate a bidirectional turbine. In contrast with a conventional steady-flow turbine, the described turbine operates on unsteady oscillating flows with direction changes, which significantly influences the algorithm of its calculation. The calculated power output is 150 W at a rotational speed of 12,000 r/min and a pressure amplitude of 1.7 kPa. Then, 3D modeling and a numerical study of the impulse turbine were carried out. As a result of the numerical modeling, the main parameters of the working fluid in the turbine were obtained. On the basis of the theoretical and numerical data, a model of the impulse turbine was made on a 3D printer, and an experimental unit was designed to verify the numerical modeling results. An acoustic speaker was used as the acoustic wave generator. Analysis of the acquired data shows that use of the bidirectional impulse turbine is advisable. By its characteristics as a converter, it is comparable with linear electric generators, but its service life will be longer and the engine itself smaller, owing to the turbine's rotary motion.
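For orientation, the acoustic power available at a given pressure amplitude can be estimated with the standard traveling plane-wave relation W = p_a² A / (2 ρ c). Everything below except the 1.7 kPa amplitude is an assumed illustrative value (the abstract does not report the duct geometry or working gas):

```python
import math

p_a = 1.7e3   # pressure amplitude, Pa (from the abstract)
rho = 1.2     # gas density, kg/m^3 (assumed: air at ambient conditions)
c = 343.0     # speed of sound, m/s (assumed)
d = 0.1       # duct diameter, m (assumed)
A = math.pi * d**2 / 4

# Time-averaged acoustic power of a traveling plane wave through area A.
W = p_a**2 * A / (2 * rho * c)
print(f"available acoustic power ~ {W:.0f} W")
```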

Keywords: acoustic power, bi-directional pulse turbine, linear alternator, thermoacoustic generator

Procedia PDF Downloads 378
771 Possibilities and Limits for the Development of Care in Primary Health Care in Brazil

Authors: Ivonete Teresinha Schulter Buss Heidemann, Michelle Kuntz Durand, Aline Megumi Arakawa-Belaunde, Sandra Mara Corrêa, Leandro Martins Costa Do Araujo, Kamila Soares Maciel

Abstract:

Primary Health Care is defined as the level of a system of services that enables the achievement of answers to health needs. This level of care produces services and actions of attention to the person across the life cycle and in their health conditions or diseases. Primary Health Care refers to a conception of the care model and organization of the health system that, in Brazil, seeks to reorganize the principles of the Unified Health System. This system is based on the principle of health as a citizen's right and a duty of the State. Primary health care has family health as a priority strategy for its organization, according to the precepts of the Unified Health System, structured in the logic of new sectoral practices that associate clinical work and health promotion. Thus, this study seeks to know the possibilities and limits of the care developed by professionals working in Primary Health Care. It was conducted through a qualitative, participatory action approach based on Paulo Freire's Research Itinerary, which comprises three moments: Thematic Investigation; Encoding and Decoding; and Critical Unveiling. The themes were investigated in a health unit through a culture circle with 20 professionals from a municipality in southern Brazil, in the first half of 2021. The participants revealed as possibilities the involvement, bonding, and strengthening of the interpersonal relationships of the professionals who work in the context of primary care. Promoting welcoming in primary care has favoured care and teamwork, as well as improved access. They also highlighted that care planning, the use of communication technologies, and the orientation of the population enhance problem-solving capacity and the organization of services. As limits, the lack of professional recognition and the scarcity of material and human resources were revealed, conditions that generate tensions in health care.
The reduction in the number of professionals and the low salaries were pointed out as elements that undermine the health team's motivation for the work. The participants revealed that, due to COVID-19, the flow of care prioritized the pandemic situation, which affected health care in primary care, and prevention and health promotion actions were canceled. The study demonstrated that empowerment and professional involvement are fundamental to promoting comprehensive and problem-solving care. However, the teams face limits in carrying out their activities, related to the lack of human and material resources, and the expansion of public health policies is urgent.

Keywords: health promotion, primary health care, health professionals, welcoming

Procedia PDF Downloads 101
770 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of economics, finance, and actuarial science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherently non-normal behavior is considered. This distribution has tails fatter than a normal distribution, and it also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferable to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates, which are known to be biased and inefficient in such cases.
Furthermore, in conventional regression analysis it is assumed that the error terms are distributed normally, and hence the well-known least square method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent, and even transforming and/or filtering techniques may not produce normally distributed residuals. Here, multiple linear regression models with random errors following a non-normal pattern are studied. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared with the widely used least square estimates. Relevant tests of hypotheses are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of economics and finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
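The modified maximum likelihood estimator itself is beyond a short sketch, but the motivating fact above, that least squares loses efficiency under heavy-tailed errors, is easy to demonstrate by simulation. Theil-Sen is used here as a stand-in robust estimator, not the authors' MML method:

```python
import numpy as np
from scipy import stats

# Simulate regression with heavy-tailed t(3) errors and compare the
# squared error of the least-squares slope against a robust alternative.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
ls_err, ts_err = [], []
for _ in range(300):
    y = 2.0 + 0.5 * x + stats.t.rvs(df=3, size=x.size, random_state=rng)
    ls_slope = np.polyfit(x, y, 1)[0]            # least squares
    ts_slope = stats.theilslopes(y, x)[0]        # robust Theil-Sen
    ls_err.append((ls_slope - 0.5) ** 2)
    ts_err.append((ts_slope - 0.5) ** 2)
print(f"MSE(LS) = {np.mean(ls_err):.4f}, MSE(Theil-Sen) = {np.mean(ts_err):.4f}")
```

With heavy-tailed errors the robust slope typically shows the smaller mean squared error, mirroring the inefficiency of least squares discussed in the abstract.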

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 397
769 DIF-JACKET: A Thermal Protective Jacket for Firefighters

Authors: Gilda Santos, Rita Marques, Francisca Marques, João Ribeiro, André Fonseca, João M. Miranda, João B. L. M. Campos, Soraia F. Neves

Abstract:

Every year, an unacceptable number of firefighters are seriously burned during firefighting operations, some of them eventually losing their lives. Although research and development in thermal protective clothing has been searching for solutions to minimize firefighters' heat load and skin burns, currently available commercial solutions focus on solving isolated problems, for example, radiant heat or water-vapor resistance. Therefore, episodes of severe burns and heat strokes are still frequent. Taking this into account, a consortium composed of Portuguese entities has joined synergies to develop an innovative protective clothing system, following a procedure based on the application of numerical models to optimize the design and using a combination of protective clothing components disposed in different layers. Recently, it has been shown that phase change materials (PCMs) can contribute to the reduction of potential heat hazards in fire extinguishing operations, and consequently their incorporation into firefighting protective clothing has advantages. The greatest challenge is to integrate these materials without compromising garment ergonomics while still fulfilling the international standard for protective clothing for firefighters: laboratory test methods and performance requirements for wildland firefighting clothing. The incorporation of PCMs into the firefighter's protective jacket will result in the absorption of heat from the fire and consequently increase the time the firefighter can be exposed to it. According to the project's studies and developments, to favor greater use of the PCM storage capacity and to take advantage of its high thermal inertia more efficiently, the PCM layer should be closer to the external heat source. Therefore, at this stage, to integrate PCMs into firefighting clothing, a mock-up of a vest was envisaged, specially designed to protect the torso (back, chest, and abdomen) and to be worn over a fire-resistant jacket.
Different configurations of PCMs, as well as multilayer approaches, were studied using suitable joining technologies such as bonding, ultrasound, and radiofrequency. In firefighters' protective clothing, it is important to balance heat protection and flame resistance with comfort parameters, namely thermal and water-vapor resistance. The impact of the most promising solutions on thermal comfort was evaluated to refine the performance of the global solutions. Results obtained with an experimental bench-scale model and numerical simulation of the integration of PCMs in a vest designed as protective clothing for firefighters will be presented.
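As a rough back-of-envelope for why a PCM layer extends exposure time: the extra time is the heat the layer can absorb (latent plus sensible) divided by the incident heat flux it intercepts. Every number below is an assumed illustrative value, none comes from the project:

```python
mass_pcm = 0.8        # kg of PCM in the vest (assumed)
latent_heat = 200e3   # J/kg, typical paraffin PCM (assumed)
cp = 2.0e3            # J/(kg*K) specific heat of the PCM (assumed)
delta_T = 20.0        # K of sensible heating around the melt point (assumed)
q_flux = 2.5e3        # W/m^2 incident flux on the torso (assumed)
area = 0.4            # m^2 of protected torso area (assumed)

# Absorbable heat = latent heat of melting + sensible heat over delta_T.
absorbed = mass_pcm * (latent_heat + cp * delta_T)   # J
extra_time = absorbed / (q_flux * area)              # s
print(f"additional protection time ~ {extra_time:.0f} s")
```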

Keywords: firefighters, multilayer system, phase change material, thermal protective clothing

Procedia PDF Downloads 165
768 Factors Affecting Air Surface Temperature Variations in the Philippines

Authors: John Christian Lequiron, Gerry Bagtasa, Olivia Cabrera, Leoncio Amadore, Tolentino Moya

Abstract:

Changes in air surface temperature play an important role in the Philippines' economy, industry, health, and food production. While the increase in global mean temperature over recent decades has prompted a number of climate change and variability studies in the Philippines, most studies still focus on rainfall and tropical cyclones. This study aims to investigate the trend and variability of observed air surface temperature and determine its major influencing factor(s) in the Philippines. A non-parametric Mann-Kendall trend test was applied to the monthly mean temperature of 17 synoptic stations covering the 56 years from 1960 to 2015, and a mean change of 0.58 °C, or a positive trend of 0.0105 °C/year (p < 0.05), was found. In addition, wavelet decomposition was used to determine the frequency of temperature variability, revealing 12-month, 30-80-month, and more-than-120-month cycles. This indicates strong annual variations, interannual variations that coincide with ENSO events, and interdecadal variations that are attributed to the PDO and CO2 concentrations. Air surface temperature was also correlated with the smoothed sunspot number and galactic cosmic rays; the results show little to no effect. The influence of the ENSO teleconnection on temperature, wind pattern, cloud cover, and outgoing longwave radiation in different ENSO phases had significant effects on regional temperature variability. In particular, an anomalous anticyclonic (cyclonic) flow east of the Philippines during the peak and decay phases of El Niño (La Niña) events leads to the advection of warm southeasterly (cold northeasterly) air masses over the country. Furthermore, an apparent increasing cloud cover trend is observed over the West Philippine Sea, including portions of the Philippines, and this is believed to lessen the effect of the increasing air surface temperature.
However, relative humidity was also found to be increasing, especially over the central part of the country, which results in a high positive trend of the heat index, exacerbating human discomfort. Finally, an assessment of gridded temperature datasets was done to look at the viability of using three high-resolution datasets in future climate analysis and model calibration and verification. Several error statistics (i.e., Pearson correlation, bias, MAE, and RMSE) were used for this validation. Results show that the gridded temperature datasets generally follow the observed surface temperature changes and anomalies. In addition, they are more representative of regional temperature than a substitute for station-observed air temperature.
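The Mann-Kendall trend statistic and the validation statistics named above can be sketched in a few lines; the following is a minimal illustration on synthetic data (the temperature series, trend, and noise level below are assumed for demonstration), not the authors' code.

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: sum of signs of all pairwise differences.
    S > 0 suggests an increasing trend, S < 0 a decreasing one."""
    x = np.asarray(x, dtype=float)
    s = 0
    for i in range(len(x) - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    return int(s)

def error_stats(obs, grid):
    """Pearson correlation, bias, MAE, and RMSE between station
    observations and a gridded dataset, as used in the validation."""
    obs, grid = np.asarray(obs, float), np.asarray(grid, float)
    diff = grid - obs
    return {
        "pearson": float(np.corrcoef(obs, grid)[0, 1]),
        "bias": float(diff.mean()),
        "mae": float(np.abs(diff).mean()),
        "rmse": float(np.sqrt((diff ** 2).mean())),
    }

# Synthetic example: a 0.0105 °C/year warming trend plus noise, 1960-2015.
rng = np.random.default_rng(0)
years = np.arange(1960, 2016)
temp = 26.5 + 0.0105 * (years - 1960) + rng.normal(0.0, 0.1, len(years))
print(mann_kendall_s(temp) > 0)  # an increasing trend is expected here
```

In practice the S statistic would be converted to a Z score and p-value before declaring significance; the sketch stops at the sign of the trend.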

Keywords: air surface temperature, carbon dioxide, ENSO, galactic cosmic rays, smoothed sunspot number

Procedia PDF Downloads 324
767 Public Participation for an Effective Flood Risk Management: Building Social Capacities in Ribera Alta Del Ebro, Spain

Authors: Alba Ballester Ciuró, Marc Pares Franzi

Abstract:

While coming decades are likely to see higher flood risk in Europe and greater socio-economic damages, traditional flood risk management has become inefficient. In response, new approaches such as capacity building and public participation have recently been incorporated into natural hazards mitigation policy (e.g., the Sendai Framework for Action, Intergovernmental Panel on Climate Change reports, and the EU Floods Directive). By integrating capacity building and public participation, we present research on the promotion of participatory social capacity-building actions for flood risk mitigation at the local level. Social capacities have been defined as the resources and abilities available at the individual and collective levels that can be used to anticipate, respond to, cope with, recover from, and adapt to external stressors. Social capacity building is understood as a process of identifying communities’ social capacities and applying collaborative strategies to improve them. This paper presents a proposal for systematizing a participatory social capacity-building process for flood risk mitigation, and its implementation in an area of the Ebro river basin at high risk of flooding: Ribera Alta del Ebro. To develop this process, we designed and tested a tool that allows measuring and building five types of social capacities: knowledge, motivation, networks, participation, and finance. The tool's implementation has allowed us to assess social capacities in the area. Based on the results of the assessment, we developed a co-decision process with stakeholders and flood risk management authorities on which participatory activities could be employed to improve social capacities for flood risk mitigation. 
Based on the results of this process, and focused on the weaker social capacities, we developed a set of participatory actions in the area oriented to the general public and stakeholders: informative sessions on the flood risk management plan and flood insurance, interpretative river descents on flood risk management (with journalists, teachers, and the general public), an interpretative visit to the floodplain, a workshop on agricultural insurance, a deliberative workshop on project funding, and deliberative workshops in schools on flood risk management (playing with a flood risk model). The combination of obtaining data through a mixed-methods approach of qualitative inquiry and quantitative surveys, as well as action research through co-decision processes and pilot participatory activities, shows the significant impact of public participation on social capacity building for flood risk mitigation and contributes to the understanding of which main factors intervene in this process.

Keywords: flood risk management, public participation, risk reduction, social capacities, vulnerability assessment

Procedia PDF Downloads 212
766 Evidence-Triggers for Care of Patients with Cleft Lip and Palate in Srinagarind Hospital: The Tawanchai Center and Out-Patients Surgical Room

Authors: Suteera Pradubwong, Pattama Surit, Sumalee Pongpagatip, Tharinee Pethchara, Bowornsilp Chowchuen

Abstract:

Background: Cleft lip and palate (CLP) is a congenital anomaly of the lip and palate that is caused by several factors. It is found in approximately one per 500 to 550 live births, depending on nationality and socioeconomic status. The Tawanchai Center and the out-patient surgical room of Srinagarind Hospital are responsible for providing care to patients with CLP (from birth to adolescence) and their caregivers. In observations and interviews, nurses working in these units reported that both patients and their caregivers confronted many problems which affected their physical and mental health. Based on Soukup’s model (2000), the researchers used evidence triggers from clinical practice (practice triggers) and related literature (knowledge triggers) to investigate the problems. Objective: The purpose of this study was to investigate the problems of care for patients with CLP in the Tawanchai Center and out-patient surgical room of Srinagarind Hospital. Material and Method: A descriptive method was used in this study. For practice triggers, the researchers obtained data from the medical records of ten patients with CLP and from interviews with two patients with CLP, eight caregivers, two nurses, and two assistant workers. Instruments for the interviews consisted of a demographic data form and a semi-structured questionnaire. For knowledge triggers, the researchers used a literature search. The data from both practice and knowledge triggers were collected between February and May 2016. The quantitative data were analyzed through frequency and percentage distributions, and the qualitative data were analyzed through content analysis. 
Results: The problems of care identified from practice and knowledge triggers were consistent and holistic, including 1) insufficient feeding; 2) risks of respiratory tract infections and physical disorders; 3) psychological problems, such as anxiety, stress, and distress; 4) socioeconomic problems, such as stigmatization, isolation, and loss of income; 5) spiritual problems, such as low self-esteem and low quality of life; 6) school absence and learning limitations; 7) lack of knowledge about CLP and its treatments; 8) misunderstanding of roles among the multidisciplinary team; 9) unavailability of services; and 10) shortage of healthcare professionals, especially speech-language pathologists (SLPs). Conclusion: From the evidence triggers, the problems of care affect the patients and their caregivers holistically. Integrated long-term care by a multidisciplinary team is needed for children with CLP from birth to adolescence. Nurses should provide effective care to these patients and their caregivers by using a holistic approach and working collaboratively with other healthcare providers in the multidisciplinary team.

Keywords: evidence-triggers, cleft lip, cleft palate, problems of care

Procedia PDF Downloads 219
765 Integrating Computational Modeling and Analysis with in Vivo Observations for Enhanced Hemodynamics Diagnostics and Prognosis

Authors: Shreyas S. Hegde, Anindya Deb, Suresh Nagesh

Abstract:

Computational biomechanics is developing rapidly as a non-invasive tool to assist the medical fraternity in both the diagnosis and prognosis of human body related issues such as injuries, cardiovascular dysfunction, atherosclerotic plaque, etc. Any system that would help properly diagnose such problems or assist prognosis would be a boon to doctors and the medical community in general. Recently, a lot of work has been focused in this direction, including but not limited to various finite element analyses related to dental implants, skull injuries, and orthopedic problems involving bones and joints. Such numerical solutions are helping medical practitioners to come up with alternate solutions for such problems and in most cases have also reduced the trauma on the patients. Some work has also been done on the use of computational fluid mechanics to understand the flow of blood through the human body, the area of hemodynamics. Since cardiovascular diseases are one of the main causes of loss of human life, understanding blood flow with and without constraints (such as blockages), and providing alternate methods of prognosis and further solutions for issues related to blood flow, would help save the valuable lives of such patients. This project is an attempt to use computational fluid dynamics (CFD) to solve specific problems related to hemodynamics. The hemodynamics simulation is used to gain a better understanding of functional, diagnostic, and theoretical aspects of blood flow. Because many fundamental issues of blood flow, like phenomena associated with pressure and viscous force fields, are still not fully understood or entirely described through mathematical formulations, the characterization of blood flow is still a challenging task. 
The computational modeling of blood flow and of the mechanical interactions that strongly affect blood flow patterns, based on medical data and imaging, represents the most accurate analysis of the complex behavior of blood flow. In this project, the mathematical modeling of blood flow in arteries in the presence of successive blockages has been analyzed using the CFD technique. Different cases of blockages, in terms of percentages, were modeled using the commercial software CATIA V5R20 and simulated using the commercial software ANSYS 15.0 to study the effect of varying wall shear stress (WSS) values and other parameters, such as the effect of an increase in Reynolds number. The concept of fluid-structure interaction (FSI) was used to solve such problems. The model simulation results were validated using in vivo measurement data from the existing literature.
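Two of the quantities mentioned above, the Reynolds number and the wall shear stress, have simple closed forms for idealized Poiseuille flow in a rigid circular tube. The sketch below uses assumed (illustrative) blood properties, flow rate, and vessel radius rather than values from the study, and ignores pulsatility and wall compliance.

```python
import math

def reynolds_number(rho, v, d, mu):
    """Re = rho * v * d / mu for mean velocity v in a vessel of diameter d (SI units)."""
    return rho * v * d / mu

def poiseuille_wss(mu, q, r):
    """Wall shear stress for fully developed Poiseuille flow in a rigid
    circular tube of radius r: tau_w = 4 * mu * Q / (pi * r^3)."""
    return 4.0 * mu * q / (math.pi * r ** 3)

# Illustrative (assumed) values for a healthy vs. 50%-area-stenosed artery:
mu, rho = 3.5e-3, 1060.0                 # blood viscosity (Pa.s), density (kg/m^3)
q = 5.0e-6                               # volumetric flow rate (m^3/s), assumed constant
r_healthy = 2.0e-3                       # radius 2 mm
r_stenosed = r_healthy * math.sqrt(0.5)  # 50% cross-sectional area reduction
ratio = poiseuille_wss(mu, q, r_stenosed) / poiseuille_wss(mu, q, r_healthy)
print(ratio)  # WSS scales as r^-3, so a 50% area blockage raises it by 2^1.5 ~ 2.83x
```

This r^-3 scaling is why WSS rises so sharply across a stenosis in the CFD results, even before turbulence or FSI effects are considered.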

Keywords: computational fluid dynamics, hemodynamics, blood flow, results validation, arteries

Procedia PDF Downloads 408
764 Strategic Entrepreneurship: Model Proposal for Post-Troika Sustainable Cultural Organizations

Authors: Maria Inês Pinho

Abstract:

Recent literature on Cultural Management (also called strategic management for cultural organizations) systematically seeks models that allow such organizations to adapt to the constant change that occurs in contemporary societies. In the last decade, the world, and in particular Europe, has experienced a serious financial crisis that triggered defensive mechanisms, both in the direction of balancing public accounts and in the sense of the gradual loss of the democratic and cultural values of each nation. In the first case, the Troika emerged and led to strong cuts in funding for culture, deeply affecting cultural organizations; in the second, ordinary citizens are seen fighting against the closure of cultural facilities. Despite this, the cultural manager argues that there is no single formula capable of addressing the need to adapt to change. Rather, it is up to this agent to know the existing scientific models and to adapt them in the best way to the reality of the institution he or she coordinates. These actions, as a rule, are concerned with the best performance vis-à-vis external audiences or with the financial sustainability of cultural organizations. They forget, therefore, that all these mechanics cannot function without the internal public, without Human Resources. The employees of the cultural organization must then have an entrepreneurial posture - they must be intrapreneurial. This paper intends to break with this form of action and lead the cultural manager to understand that his or her role should be to create value for society through good organizational performance. This is only possible with a posture of strategic entrepreneurship - in other words, with a link between Cultural Management, Cultural Entrepreneurship, and Cultural Intrapreneurship. 
In order to test this assumption, a case study methodology was used, with a symbol of the European Capital of Culture (Casa da Música) as the case, along with qualitative and quantitative techniques. The qualitative techniques included in-depth interviews with managers, founders, and patrons, and focus groups with members of the public with and without experience in managing cultural facilities. The quantitative techniques involved the application of a questionnaire to middle management and employees of Casa da Música. After triangulation of the data, it was shown that the contemporary management of cultural organizations must implement, among its practices, the concept of Strategic Entrepreneurship and its variables. Also, the topics that characterize the notion of Cultural Intrapreneurship (job satisfaction, quality of organizational performance, leadership, and employee engagement and autonomy) emerged. The findings show that, to be sustainable, a cultural organization should meet the concerns of both its external and internal publics. In other words, it should have an attitude of citizenship towards its communities, visible in social responsibility and participatory management, which is only possible with the implementation of the concept of Strategic Entrepreneurship and its variable of Cultural Intrapreneurship.

Keywords: cultural entrepreneurship, cultural intrapreneurship, cultural organizations, strategic management

Procedia PDF Downloads 183
763 Capacity of Cold-Formed Steel Warping-Restrained Members Subjected to Combined Axial Compressive Load and Bending

Authors: Maryam Hasanali, Syed Mohammad Mojtabaei, Iman Hajirasouliha, G. Charles Clifton, James B. P. Lim

Abstract:

Cold-formed steel (CFS) elements are increasingly being used as main load-bearing components in the modern construction industry, including low- to mid-rise buildings. In typical multi-storey buildings, CFS structural members act as beam-column elements since they are exposed to combined axial compression and bending actions, both in moment-resisting frames and stud wall systems. Current design specifications, including the American Iron and Steel Institute (AISI S100) and the Australian/New Zealand Standard (AS/NZS 4600), neglect the beneficial effects of warping-restrained boundary conditions in the design of beam-column elements. Furthermore, while a non-linear relationship governs the interaction of axial compression and bending, the combined effect of these actions is taken into account through a simplified linear expression combining pure axial and flexural strengths. This paper aims to evaluate the reliability of the well-known Direct Strength Method (DSM) as well as design proposals found in the literature to provide a better understanding of the efficiency of the code-prescribed linear interaction equation in the strength predictions of CFS beam-columns and the effects of warping-restrained boundary conditions on their behavior. To this end, experimentally validated finite element (FE) models of CFS elements under compression and bending were developed in ABAQUS software, accounting for both non-linear material properties and geometric imperfections. The validated models were then used for a comprehensive parametric study containing 270 FE models, covering a wide range of key design parameters, such as length (i.e., 0.5, 1.5, and 3 m), thickness (i.e., 1, 2, and 4 mm), and cross-sectional dimensions, under ten different load eccentricity levels. The results of this parametric study demonstrated that using the DSM led to strength predictions for beam-column members that were conservative by up to 55%, depending on the element’s length and thickness. 
This can be attributed to errors associated with (i) the absence of warping-restrained boundary condition effects, (ii) the equations for the calculation of buckling loads, and (iii) the linear interaction equation. While the influence of warping restraint is generally less than 6%, the code-suggested interaction equation led to average errors of 4% to 22%, depending on the element length. This paper highlights the need to provide more reliable design solutions for CFS beam-column elements for practical design purposes.
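The linear interaction check that the paper questions has the general form P/Pn + M/Mn <= 1.0. The sketch below illustrates it with hypothetical member strengths: the section capacities and eccentricities are assumed for demonstration only, and resistance/safety factors are omitted.

```python
def linear_interaction_ok(p, m, p_n, m_n, limit=1.0):
    """Simplified linear beam-column interaction check of the form used in
    AISI S100 / AS/NZS 4600 (resistance factors omitted):
        P/Pn + M/Mn <= 1.0
    """
    return p / p_n + m / m_n <= limit

def utilisation(p, e, p_n, m_n):
    """Demand ratio for an axial load P applied at eccentricity e, so M = P * e."""
    return p / p_n + (p * e) / m_n

# Assumed capacities for an illustrative CFS channel section (hypothetical):
p_n, m_n = 120.0, 4.5            # kN, kN.m
for e in (0.0, 0.01, 0.025):     # load eccentricity in metres
    # Load at which the linear interaction ratio is exactly 1.0:
    p_max = 1.0 / (1.0 / p_n + e / m_n)
    print(f"e = {e * 1000:4.0f} mm -> predicted capacity {p_max:6.1f} kN")
```

The study's point is that this straight line between pure axial and pure flexural strength can be far from the true (non-linear, warping-dependent) interaction surface, which is where the 4% to 22% average errors arise.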

Keywords: beam-columns, cold-formed steel, finite element model, interaction equation, warping-restrained boundary conditions

Procedia PDF Downloads 105
762 The French Ekang Ethnographic Dictionary. The Quantum Approach

Authors: Henda Gnakate Biba, Ndassa Mouafon Issa

Abstract:

Dictionaries modeled on the Western model [designed for languages with a tonic accent] are not suitable for and do not account for tonal languages phonologically, which is why this [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker of the language to pronounce words as if they were a native speaker. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that "to say and to sing were once the same thing". Each word in the French dictionary finds its corresponding word in the Ekang language (ekaη), and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When this theory is applied to the text of any folk song in a tonal language, one pieces together not only the exact melody, rhythm, and harmonies of that song, as if one knew it in advance, but also the exact speech of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as has one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. 
The experimentation confirming the theorization led to the design of a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect data extracted from his mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a structured song (chorus-verse) on your computer and ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
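The core idea of writing each word on a staff can be illustrated as a mapping from lexical tone symbols (one per syllable) to pitches. The mapping below is a hypothetical toy scheme invented for illustration; it is not the tone inventory or pitch assignment actually used for Ekang in the dictionary.

```python
# Hypothetical tone-to-pitch scheme (illustrative only, not the Ekang system):
TONE_TO_PITCH = {
    "H": "G4",       # high level tone
    "L": "C4",       # low level tone
    "HL": "G4-C4",   # falling contour, realised as two joined notes
    "LH": "C4-G4",   # rising contour
}

def word_to_staff(tone_pattern):
    """Translate a word's tone pattern (one symbol per syllable) into a
    sequence of pitches, i.e. the 'melody' of the spoken word."""
    return [TONE_TO_PITCH[t] for t in tone_pattern]

print(word_to_staff(["H", "L", "LH"]))
```

A real implementation would also encode syllable duration (rhythm), which is why the dictionary uses full staff notation rather than bare pitch labels.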

Keywords: music, language, entanglement, science, research

Procedia PDF Downloads 70
761 Collaborative Governance to Foster Public Good: The Case of the Etorkizuna Eraikiz Initiative

Authors: Igone Guerra, Xabier Barandiaran

Abstract:

The deep crisis (economic, social, and cultural) in which Europe and Gipuzkoa, in the Basque Country (Spain), have been immersed since 2008 forces governments to face a necessary transformation. These challenges demand different solutions and answers to meet the needs of citizens. Adapting to continuous and sometimes abrupt changes in the social and political landscape requires an undeniable will to reinvent the way in which governments practice politics. This reinvention of government should help us build different organizations that, first, develop challenging public services; second, respond effectively to the needs of citizens; and third, manage scarce resources, ultimately offering a contemporary concept of public value. In this context, the Etorkizuna Eraikiz initiative was designed to face the future challenges of the territory in a collaborative way. The aim of the initiative is to promote an alternative form of governance to generate common good and greater public value. In Etorkizuna Eraikiz, democratic values such as collaboration, participation, and accountability are prominent. This governance approach is based on several features, such as the creation of relational spaces to design and deliberate on public policies, and the promotion of a team-working approach that breaks down silos between and within organizations, as an exercise in defining a shared vision for the future of the territory. It is a future in which citizens become actors in the problem-solving process and in the construction of a culture of participation and collective learning. In this paper, the Etorkizuna Eraikiz initiative is presented (vision and methodology) as a model of a local approach to public policy innovation, resulting in a form of governance that is more open and collaborative. Based on this case study, this paper explores the way in which collaborative governance leads to better decisions, better leadership, and better citizenry. 
Finally, the paper also describes some preliminary findings of this local approach, such as the level of knowledge among the citizenry of the projects promoted within Etorkizuna Eraikiz, as well as the link between the challenges of the territory, as identified by the citizenry, and the political agenda promoted by the provincial government. Regarding the former, the Survey on the socio-political situation of Gipuzkoa showed that 27.9% of respondents knew about the projects promoted within the initiative and gave them an average mark of 5.71. In connection with the latter, over the last three years, 65 million euros have been allocated to a total of 73 projects covering socio-economic and political challenges such as aging, climate change, mobility, and participation in democratic life. This governance approach of Etorkizuna Eraikiz has allowed the local government to match the needs of citizens to the political agenda, fostering in this way a shared vision of public value.

Keywords: collaborative governance, citizen participation, public good, social listening, public innovation

Procedia PDF Downloads 141
760 Forging A Distinct Understanding of Implicit Bias

Authors: Benjamin D Reese Jr

Abstract:

Implicit bias is understood as unconscious attitudes, stereotypes, or associations that can influence the cognitions, actions, decisions, and interactions of an individual without intentional control. These unconscious attitudes or stereotypes are often targeted toward specific groups of people based on their gender, race, age, perceived sexual orientation, or other social categories. Since the late 1980s, there has been a proliferation of research hypothesizing that the operation of implicit bias is the result of the brain needing to process millions of bits of information every second. Hence, one’s prior individual learning history provides ‘shortcuts’. As soon as one sees someone of a certain race, one has immediate associations based on past learning, and one might make assumptions about their competence, skill, or dangerousness. These assumptions are outside of conscious awareness. In recent years, an alternative conceptualization has been proposed. The ‘bias of crowds’ theory hypothesizes that a given context or situation influences the degree of accessibility of particular biases. For example, in certain geographic communities in the United States, there is a long-standing and deeply ingrained history of structures, policies, and practices that contribute to racial inequities and bias toward African Americans. Hence, negative biases toward African Americans are more accessible in such contexts or communities. This theory does not focus on individual brain functioning or cognitive ‘shortcuts’. Therefore, attempts to modify individual perceptions or learning might have a negligible impact on the embedded environmental systems or policies within certain contexts or communities. 
From the ‘bias of crowds’ perspective, high levels of racial bias in a community can be reduced by making fundamental changes in structures, policies, and practices to create a more equitable context or community rather than focusing on training or education aimed at reducing an individual’s biases. The current paper acknowledges and supports the foundational role of long-standing structures, policies, and practices that maintain racial inequities, as well as inequities related to other social categories, and highlights the critical need to continue organizational, community, and national efforts to eliminate those inequities. It also makes a case for providing individual leaders with a deep understanding of the dynamics of how implicit biases impact cognitions, actions, decisions, and interactions so that those leaders might more effectively develop structural changes in the processes and systems under their purview. This approach incorporates both the importance of an individual’s learning history as well as the important variables within the ‘bias of crowds’ theory. The paper also offers a model for leadership education, as well as examples of structural changes leaders might consider.

Keywords: implicit bias, unconscious bias, bias, inequities

Procedia PDF Downloads 12
759 Optimization Principles of Eddy Current Separator for Mixtures with Different Particle Sizes

Authors: Cao Bin, Yuan Yi, Wang Qiang, Amor Abdelkader, Ali Reza Kamali, Diogo Montalvão

Abstract:

The study of the electrodynamic behavior of non-ferrous particles in time-varying magnetic fields is a promising area of research with wide applications, including the recycling of non-ferrous metals, mechanical transmission, and space debris removal. The key technology for recovering non-ferrous metals is eddy current separation (ECS), which utilizes the eddy current force and torque to separate non-ferrous metals. ECS has several advantages, such as low energy consumption, large processing capacity, and no secondary pollution, making it suitable for processing various mixtures such as electronic scrap, auto shredder residue, aluminum scrap, and incineration bottom ash. Improving the separation efficiency of mixtures with different particle sizes in ECS can create significant social and economic benefits. Our previous study investigated the influence of particle size on separation efficiency by combining numerical simulations and separation experiments. Pearson correlation analysis found a strong correlation between the eddy current force in the simulations and the repulsion distance in the experiments, which confirmed the effectiveness of our simulation model. The interaction effects between particle size and material type, rotational speed, and magnetic pole arrangement were also examined, offering valuable insights for the design and optimization of eddy current separators. The underlying mechanism behind the effect of particle size on separation efficiency was discovered by analyzing the eddy current and the field gradient. The results showed that the magnitude and distribution heterogeneity of the eddy current and the magnetic field gradient increased with particle size in eddy current separation. Based on this, we further found that increasing the curvature of the magnetic field lines within particles could also increase the eddy current force, providing an optimized method for improving the separation efficiency of fine particles. 
By combining the results of the studies, a more systematic and comprehensive set of optimization guidelines can be proposed for mixtures with different particle size ranges. The separation efficiency of fine particles could be improved by increasing the rotational speed, curvature of magnetic field lines, and electrical conductivity/density of materials, as well as utilizing the eddy current torque. When designing an ECS, the particle size range of the target mixture should be investigated in advance, and the suitable parameters for separating the mixture can be fixed accordingly. In summary, these results can guide the design and optimization of ECS, and also expand the application areas for ECS.
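The guideline above about the "electrical conductivity/density of materials" can be illustrated with the classical ECS selection criterion sigma/rho (conductivity over density): the eddy current force per unit mass scales roughly with this ratio, so materials with higher sigma/rho are deflected further from the drum. The property values below are approximate textbook figures used for illustration.

```python
# Approximate room-temperature properties: sigma in MS/m, rho in g/cm^3.
materials = {
    "aluminium": (35.0, 2.70),
    "copper":    (58.0, 8.96),
    "zinc":      (16.9, 7.14),
    "brass":     (15.0, 8.50),
}

def separability(sigma, rho):
    """sigma/rho criterion: a rough proxy for eddy current force per unit
    mass, so higher values separate more easily in an ECS drum."""
    return sigma / rho

ranking = sorted(materials,
                 key=lambda m: separability(*materials[m]),
                 reverse=True)
print(ranking)  # aluminium ranks first, consistent with ECS practice
```

This single-number criterion ignores the particle-size and field-curvature effects that the study focuses on, which is precisely why size-dependent optimization guidelines are needed on top of it.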

Keywords: eddy current separation, particle size, numerical simulation, metal recovery

Procedia PDF Downloads 90
758 Study of the Possibility of Adsorption of Heavy Metal Ions on the Surface of Engineered Nanoparticles

Authors: Antonina A. Shumakova, Sergey A. Khotimchenko

Abstract:

The relevance of this research is associated, on the one hand, with the ever-increasing volume of production and the expanding scope of application of engineered nanomaterials (ENMs), and on the other hand, with the lack of sufficient scientific information on the nature of the interactions of nanoparticles (NPs) with components of biogenic and abiogenic origin. In particular, studying the effect of ENMs (TiO2 NPs, SiO2 NPs, Al2O3 NPs, fullerenol) on the toxicometric characteristics of common contaminants such as lead and cadmium is an important hygienic task, given the high probability of their joint presence in food products. Data were obtained characterizing a multidirectional change in the toxicity of model toxicants when they are co-administered with various types of ENMs. One explanation for this fact is the difference in the adsorption capacities of ENMs, which was further studied in vitro. For this, a method was proposed based on in vitro modeling of conditions simulating the environment of the small intestine. It should be noted that the obtained data are in good agreement with the results of in vivo experiments: - with the combined administration of lead and TiO2 NPs, there were no significant changes in the accumulation of lead in the rat liver; in other organs (kidneys, spleen, testes, and brain), the lead content was lower than in animals of the control group; - when studying the combined effect of lead and Al2O3 NPs, a multiple and significant increase in the accumulation of lead in the rat liver was observed with an increase in the dose of Al2O3 NPs; for other organs, the introduction of various doses of Al2O3 NPs did not significantly affect the bioaccumulation of lead; - with the combined administration of lead and SiO2 NPs at different doses, there was no increase in lead accumulation in any of the studied organs. 
Based on the data obtained, it can be assumed that there are at least three scenarios for the combined effects of ENMs and chemical contaminants on the body: - ENMs bind contaminants quite firmly in the gastrointestinal tract, and such a complex becomes inaccessible (or poorly accessible) for absorption; in this case, it can be expected that the toxicity of both the ENMs and the contaminants will decrease; - the complex formed in the gastrointestinal tract is partially soluble and can penetrate biological membranes and/or the physiological barriers of the body; in this case, ENMs can play the role of a kind of conductor for contaminants, and thus their penetration into the internal environment of the body increases, thereby increasing the toxicity of the contaminants; - ENMs and contaminants do not interact with each other in any way, so the toxicity of each is determined only by its quantity and does not depend on the quantity of the other component. The authors hypothesized that the degree of adsorption of various elements on the surface of ENMs may be a unique characteristic of their action, allowing a more accurate understanding of the processes occurring in a living organism.
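One generic way to summarize the in vitro adsorption-capacity comparison is to fit a Langmuir isotherm to each ENM and compare the fitted capacities. This is a standard adsorption model offered only as an illustrative sketch on synthetic data; it is not the authors' protocol, and the parameter values below are invented.

```python
import numpy as np

def langmuir(c, q_max, k):
    """Langmuir isotherm: adsorbed amount q at equilibrium concentration c,
    with maximum capacity q_max and affinity constant k."""
    return q_max * k * c / (1.0 + k * c)

def fit_langmuir(c, q):
    """Estimate (q_max, k) from the linearised form:
       c/q = c/q_max + 1/(k * q_max)."""
    c, q = np.asarray(c, float), np.asarray(q, float)
    slope, intercept = np.polyfit(c, c / q, 1)
    q_max = 1.0 / slope
    k = slope / intercept
    return q_max, k

# Synthetic data generated from a known isotherm (q_max = 50, k = 0.2):
c = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
q = langmuir(c, 50.0, 0.2)
print(fit_langmuir(c, q))  # recovers approximately (50.0, 0.2)
```

Comparing the fitted q_max across TiO2, SiO2, and Al2O3 NPs (under identical simulated small-intestine conditions) would quantify the binding differences hypothesized to drive the three scenarios above.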

Keywords: absorption, cadmium, engineered nanomaterials, lead

Procedia PDF Downloads 87
757 Reframing Physical Activity for Health

Authors: M. Roberts

Abstract:

We Are Undefeatable is a mass marketing behaviour change campaign that aims to support the least active people living with long term health conditions to be more active. This is an important issue to address because people with long term conditions are an historically underserved community for the sport and physical activity sector, and the least active of those with long term conditions have the most to gain in health and wellbeing benefits. The campaign has generated a significant change in the way physical activity is communicated and the way people with long term conditions are represented in media and marketing. The goal is to create a social norm around being active. The campaign is led by a unique partnership of organisations: the Richmond Group of Charities (made up of Age UK, Alzheimer’s Society, Asthma + Lung UK, Breast Cancer Now, British Heart Foundation, British Red Cross, Diabetes UK, Macmillan Cancer Support, Rethink Mental Illness, Royal Voluntary Service, Stroke Association, Versus Arthritis) along with Mind, MS Society, Parkinson’s UK and Sport England, with National Lottery funding. It is underpinned by the COM-B model of behaviour change. It draws on the lived experience of people with multiple long term conditions to shape the look and feel of the campaign and all the resources available. People with long term conditions are the campaign’s messengers, central to its ethos, telling their individual stories of overcoming barriers to being active with their health conditions. The central message is about finding a way to be active that works for the individual. We Are Undefeatable is evaluated through a multi-modal approach, including regular qualitative focus groups and a quantitative evaluation tracker undertaken three times a year. The campaign has highlighted the significant barriers to physical activity for people with long term conditions. 
This has changed the way our partnership talks about physical activity, but it has also had an impact on the wider sport and physical activity sector, prompting an increasing departure from traditional messaging and marketing approaches for this audience. The campaign has reached millions of people since its launch in 2019, through multiple marketing and partnership channels, including primetime TV advertising and promotion through health professionals and in health settings. Its diverse storytellers make it relatable to its target audience, and the achievable activities highlighted and the inclusive messaging inspire the audience to take action after seeing the campaign. The We Are Undefeatable campaign is a blueprint for physical activity campaigns; it not only addresses individual behaviour change but also plays a role in addressing systemic barriers to physical activity by sharing lived-experience insight to shape policy and professional practice.

Keywords: behaviour change, long term conditions, partnership, relatable

Procedia PDF Downloads 66
756 Sustainable Mining Fulfilling Constitutional Responsibilities: A Case Study of NMDC Limited Bacheli in India

Authors: Bagam Venkateswarlu

Abstract:

NMDC Limited, an Indian multinational mining company, operates under the administrative control of the Ministry of Steel, Government of India. This study is undertaken to evaluate how the sustainable mining practiced by the company fulfils the provisions of the Indian Constitution to secure to its citizens justice, equality of status and opportunity, and social, economic, political, and religious wellbeing. The Constitution of India lays down a road map for how the goal of being a “Welfare State” shall be achieved. The vision of sustainable mining being practiced is oriented along the constitutional responsibilities of Indian citizens and the corporate world. This qualitative study is backed by quantitative studies of National Mineral Development Corporation performance in various domains of sustainable mining and ESG, that is, environment, social, and governance parameters. For example, the Five Star Rating of mines, a comprehensive evaluation system introduced by the Ministry of Mines, Government of India, is one of the methodologies used. Corporate Social Responsibility is one of the thrust areas for securing social well-being. Green energy initiatives in and around the mines have earned NMDC Limited the title of “Eco-Friendly Miner”. While operating a fully mechanized, large-scale iron ore mine (18.8 million tonnes per annum capacity) in Bacheli, Chhattisgarh, M/s NMDC Limited caters to the mineral security needs of the State of Chhattisgarh and the Indian Union. It preserves the forest, wildlife, and environmental heritage of the richly endowed State of Chhattisgarh. In the remote and far-flung interiors of Chhattisgarh, NMDC empowers the local population by providing world-class educational and medical facilities, a transportation network, drinking water facilities, irrigation and agricultural support, and employment opportunities, and by fostering religious harmony. All this ultimately results in an empowered, educated population with improved awareness. 
Thus, the basic tenets of the Constitution of India (secularism, democracy, welfare for all, socialism, humanism, decentralization, liberalism, a mixed economy, and non-violence) are fulfilled. The Constitution declares India a welfare state: for the people, of the people, and by the people. The sustainable mining practices of NMDC are in line with this objective; thus, the purpose of the study is fully met. The potential benefit of the study includes replicating this model in existing or new establishments in various parts of the country, especially in the under-privileged interiors and far-flung areas that are yet to see the light of development.

Keywords: ESG values, Indian constitution, NMDC limited, sustainable mining, CSR, green energy

Procedia PDF Downloads 77
755 Evaluation of Suspended Particles Impact on Condensation in Expanding Flow with Aerodynamics Waves

Authors: Piotr Wisniewski, Sławomir Dykas

Abstract:

Condensation has a negative impact on turbomachinery efficiency in many energy processes. In technical applications, it is often impossible to dry the working fluid at the nozzle inlet. One of the most popular working fluids is atmospheric air, which always contains water in the form of steam, liquid, or ice crystals. Moreover, it always contains some amount of suspended particles, which influence the phase change process. It is known that the phenomena of evaporation and condensation are connected with the release or absorption of latent heat, which influences the fluid's physical properties and may affect machinery efficiency; therefore, the phase transition has to be taken into account. This research presents an attempt to evaluate the impact of solid and liquid particles suspended in the air on the expansion of moist air at a low expansion rate, i.e., with expansion rate P ≈ 1000 s⁻¹. A numerical study supported by analytical and experimental research is presented in this work. The experimental study was carried out using an in-house test rig, in which a nozzle was examined for inlet air relative humidity values in the range of 25 to 51%. The nozzle was tested for supersonic flow as well as for flow with shock waves induced by elevated back pressure. The Schlieren photography technique and measurement of static pressure on the nozzle wall were used for qualitative identification of both condensation and shock waves. A numerical model validated against experimental data available in the literature was used for analysis of the occurring flow phenomena. The analysis of the number, diameter, and character (solid or liquid) of the suspended particles revealed their connection with the importance of heterogeneous condensation. If the expansion of a fluid without suspended particles is considered, condensation triggers a so-called condensation wave that appears downstream of the nozzle throat. 
If solid particles are considered, condensation onset moves upstream of the nozzle throat as their number increases, decreasing the strength of the condensation wave. Due to the release of latent heat during condensation, the fluid temperature and pressure increase, leading to a shift of the normal shock upstream in the flow. Owing to the relatively large diameters of the droplets created during heterogeneous condensation, they evaporate partially at the shock and continue to evaporate downstream of the nozzle. If liquid water particles are considered, due to their larger radius they do not affect the expanding flow significantly; however, they might be of major importance when considering compression phenomena, as they tend to evaporate at the shock wave. This research demonstrates the need for further study of phase change phenomena in supersonic flow, especially the interaction of droplets with the aerodynamic waves in the flow.
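The expansion rate quoted above (P ≈ 1000 s⁻¹) is commonly defined as the relative rate of pressure drop along a fluid path, P = -(1/p)(dp/dt). As a minimal sketch, assuming this standard definition and an invented pressure trace, it can be evaluated numerically:

```python
import numpy as np

# Hypothetical sketch: evaluate the expansion rate P = -(1/p) dp/dt for a
# pressure signal along a fluid path. The trace below is synthetic, chosen
# to decay at roughly the 1000 1/s quoted in the abstract.
t = np.linspace(0.0, 1e-3, 200)       # time along the path, s
p = 100e3 * np.exp(-1000.0 * t)       # synthetic static pressure, Pa

dpdt = np.gradient(p, t)              # numerical derivative dp/dt
expansion_rate = -dpdt / p            # P = -(1/p) dp/dt, units 1/s

print(round(float(expansion_rate.mean())))  # → 1000
```

In a real nozzle the rate would be computed from the measured or simulated pressure distribution rather than an analytic decay.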

Keywords: aerodynamics, computational fluid dynamics, condensation, moist air, multi-phase flows

Procedia PDF Downloads 119
754 Legal Considerations in Fashion Modeling: Protecting Models' Rights and Ensuring Ethical Practices

Authors: Fatemeh Noori

Abstract:

The fashion industry is a dynamic and ever-evolving realm that continuously shapes societal perceptions of beauty and style. Within this industry, fashion modeling plays a crucial role, acting as the visual representation of brands and designers. However, behind the glamorous façade lies a complex web of legal considerations that govern the rights, responsibilities, and ethical practices within the field. This paper aims to explore the legal landscape surrounding fashion modeling, shedding light on key issues such as contract law, intellectual property, labor rights, and the increasing importance of ethical considerations in the industry. Fashion modeling involves the collaboration of various stakeholders, including models, designers, agencies, and photographers. To ensure a fair and transparent working environment, it is imperative to establish a comprehensive legal framework that addresses the rights and obligations of each party involved. One of the primary legal considerations in fashion modeling is the contractual relationship between models and agencies. Contracts define the terms of engagement, including payment, working conditions, and the scope of services. This section will delve into the essential elements of modeling contracts, the negotiation process, and the importance of clarity to avoid disputes. Models are not just individuals showcasing clothing; they are integral to the creation and dissemination of artistic and commercial content. Intellectual property rights, including image rights and the use of a model's likeness, are critical aspects of the legal landscape. This section will explore the protection of models' image rights, the use of their likeness in advertising, and the potential for unauthorized use. Models, like any other professionals, are entitled to fair and ethical treatment. This section will address issues such as working conditions, hours, and the responsibility of agencies and designers to prioritize the well-being of models. 
Additionally, it will explore the global movement toward inclusivity, diversity, and the promotion of positive body image within the industry. The fashion industry has faced scrutiny for perpetuating harmful standards of beauty and fostering a culture of exploitation. This section will discuss the ethical responsibilities of all stakeholders, including the promotion of diversity, the prevention of exploitation, and the role of models as influencers for positive change. In conclusion, the legal considerations in fashion modeling are multifaceted, requiring a comprehensive approach to protect the rights of models and ensure ethical practices within the industry. By understanding and addressing these legal aspects, the fashion industry can create a more transparent, fair, and inclusive environment for all stakeholders involved in the art of modeling.

Keywords: fashion modeling contracts, image rights in modeling, labor rights for models, ethical practices in fashion, diversity and inclusivity in modeling

Procedia PDF Downloads 77
753 CO₂ Recovery from Biogas and Successful Upgrading to Food-Grade Quality: A Case Study

Authors: Elisa Esposito, Johannes C. Jansen, Loredana Dellamuzia, Ugo Moretti, Lidietta Giorno

Abstract:

The reduction of CO₂ emission into the atmosphere as a result of human activity is one of the most important environmental challenges of the coming decades. Emission of CO₂ related to the use of fossil fuels is believed to be one of the main causes of global warming and climate change. In this scenario, the production of biomethane from organic waste, as a renewable energy source, is one of the most promising strategies to reduce fossil fuel consumption and greenhouse gas emission. Unfortunately, biogas upgrading still produces the greenhouse gas CO₂ as a waste product. Therefore, this work presents a case study on biogas upgrading, aimed at the simultaneous purification of methane and CO₂ via different steps, including CO₂/methane separation by polymeric membranes. The original objective of the project was biogas upgrading to distribution-grid-quality methane, but the innovative aspect of this case study is the further purification of the captured CO₂, transforming it from a useless by-product into a pure gas of food-grade quality, suitable for commercial application in the food and beverage industry. The study was performed on a pilot plant constructed by Tecno Project Industriale Srl (TPI), Italy. This is a model of one of the largest biogas production and purification plants. The full-scale anaerobic digestion plant (Montello Spa, North Italy) has a digestive capacity of 400,000 tonnes of biomass per year and can treat 6,250 m³/hour of biogas from FORSU (the organic fraction of solid urban waste). The entire upgrading process consists of a number of purification steps: 1. Dehydration of the raw biogas by condensation. 2. Removal of trace impurities such as H₂S via absorption. 3. Separation of CO₂ and methane via a membrane separation process. 4. Removal of trace impurities from the CO₂. The gas separation with polymeric membranes guarantees complete simultaneous removal of microorganisms. 
The chemical purity of the different process streams was analysed by a certified laboratory and compared with the guidelines of the European Industrial Gases Association and the International Society of Beverage Technologists (EIGA/ISBT) for CO₂ used in the food industry. The microbiological purity was compared with the limit values defined in the European Collaborative Action. With a purity of 96-99 vol%, the purified methane meets the legal requirements for the household gas network. At the same time, the CO₂ reaches a purity of >98.1% before, and 99.9% after, the final distillation process. According to the EIGA/ISBT guidelines, the CO₂ proves to be chemically and microbiologically pure enough to be suitable for food-grade applications.
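The membrane step (step 3) can be illustrated with a textbook approximation: for a binary CO₂/CH₄ mixture at low stage cut, the permeate composition of a perfectly mixed membrane stage follows from the membrane selectivity α. This is a generic sketch, not TPI's design; the feed composition and selectivity below are invented.

```python
# Hypothetical sketch of binary membrane separation at low stage cut:
#     y / (1 - y) = alpha * x / (1 - x)
# where x and y are the CO2 mole fractions in feed and permeate, and
# alpha is the CO2/CH4 selectivity. All numbers are assumptions.

def permeate_co2_fraction(x_co2: float, alpha: float) -> float:
    """Approximate CO2 mole fraction in the permeate (low stage cut)."""
    ratio = alpha * x_co2 / (1.0 - x_co2)
    return ratio / (1.0 + ratio)

x_feed = 0.40   # assumed CO2 fraction in raw biogas after steps 1-2
alpha = 40.0    # assumed CO2/CH4 selectivity of the polymeric membrane
print(round(permeate_co2_fraction(x_feed, alpha), 3))  # → 0.964
```

A single stage therefore enriches CO₂ strongly but not to food grade, which is consistent with the further polishing steps (4 and distillation) described in the abstract.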

Keywords: biogas, CO₂ separation, CO2 utilization, CO₂ food grade

Procedia PDF Downloads 212
752 Insights into Child Malnutrition Dynamics with the Lens of Women’s Empowerment in India

Authors: Bharti Singh, Shri K. Singh

Abstract:

Child malnutrition is a multifaceted issue that transcends geographical boundaries. Malnutrition not only stunts physical growth but also leads to a spectrum of morbidities and child mortality. It is one of the leading causes of death (~50%) among children under age five. Despite economic progress and advancements in healthcare, child malnutrition remains a formidable challenge for India. The objective is to investigate the impact of women's empowerment on child nutrition outcomes in India from 2006 to 2021. A composite index of women's empowerment was constructed using Confirmatory Factor Analysis (CFA), a rigorous technique that validates the measurement model by assessing how well observed variables represent latent constructs. This approach ensures the reliability and validity of the empowerment index. Secondly, kernel density plots were utilised to visualise the distribution of key nutritional indicators, such as stunting, wasting, and overweight. These plots offer insights into the shape and spread of data distributions, aiding in understanding the prevalence and severity of malnutrition. Thirdly, linear polynomial graphs were employed to analyse how nutritional parameters evolve with the child's age. This technique enables the visualisation of trends and patterns over time, allowing for a deeper understanding of nutritional dynamics during different stages of childhood. Lastly, multilevel analysis was conducted to identify vulnerability at multiple levels, including State-level, PSU-level, and household-level factors impacting undernutrition. This approach accounts for hierarchical data structures and allows for the examination of factors at multiple levels, providing a comprehensive understanding of the determinants of child malnutrition. Overall, the utilisation of these statistical methodologies enhances the transparency and replicability of the study by providing clear and robust analytical frameworks for data analysis and interpretation. 
Our study reveals that NFHS-4 and NFHS-5 exhibit an equal density of severely stunted cases. NFHS-5 indicates a limited decline in wasting among children aged five, while the density of severely wasted children remains consistent across NFHS-3, 4, and 5. In 2019-21, women with higher empowerment had a lower risk of their children being undernourished (Regression coefficient= -0.10***; Confidence Interval [-0.18, -0.04]). Gender dynamics also play a significant role, with male children exhibiting a higher susceptibility to undernourishment. Multilevel analysis suggests household-level vulnerability (intra-class correlation=0.21), highlighting the need to address child undernutrition at the household level.
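The household-level intra-class correlation (ICC ≈ 0.21) quoted above measures the share of outcome variation attributable to the household. As a minimal sketch, assuming a simple random-intercept structure with simulated (not NFHS) data, the ICC can be recovered with the one-way ANOVA estimator:

```python
import numpy as np

# Hypothetical sketch: simulate children nested in households and estimate
# the intra-class correlation ICC = var_between / (var_between + var_within).
# All parameters are invented; this is not the NFHS data or model.
rng = np.random.default_rng(0)

n_households, n_children = 2000, 3
sigma_b, sigma_w = 0.5, 1.0                          # between/within SDs
true_icc = sigma_b**2 / (sigma_b**2 + sigma_w**2)    # = 0.2 by construction

household_effect = rng.normal(0, sigma_b, n_households)
y = household_effect[:, None] + rng.normal(0, sigma_w, (n_households, n_children))

# One-way random-effects ANOVA estimator of the ICC
group_means = y.mean(axis=1)
msb = n_children * group_means.var(ddof=1)           # between-group mean square
msw = ((y - group_means[:, None]) ** 2).sum() / (n_households * (n_children - 1))
icc = (msb - msw) / (msb + (n_children - 1) * msw)

print(round(true_icc, 2), round(float(icc), 2))
```

In practice a mixed-effects model with covariates would be fitted, but the variance-partitioning logic is the same.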

Keywords: child nutrition, India, NFHS, women’s empowerment

Procedia PDF Downloads 34
751 Documentary Project as an Active Learning Strategy in a Developmental Psychology Course

Authors: Ozge Gurcanli

Abstract:

Recent studies in active learning focus on how student experience varies based on the content (e.g., STEM versus Humanities) and the medium (e.g., in-class exercises versus off-campus activities) of experiential learning. However, little is known about whether variation in classroom time and space within the same active learning context affects student experience. This study manipulated the use of classroom time for the active learning component of a developmental psychology course offered at a four-year university in the South-West Region of the United States. The course uses a blended model: traditional and active learning. In the traditional learning component of the course, students do weekly readings, listen to lectures, and take midterms. In the active learning component, students make a documentary on a developmental topic as a final project. Students used the classroom time and space for the documentary in two ways: regular classroom time slots dedicated to the making of the documentary outside, without the supervision of the professor (Classroom-time Outside), and lectures that offered basic instructions on how to make a documentary (Documentary Lectures). The study used the public teaching evaluations administered by the Office of the Registrar. A total of two hundred and seven student evaluations were available across six semesters. Because the Office of the Registrar presented the data separately, without personal identifiers, a One-Way ANOVA with four groups (Traditional; Experiential-Heavy: 19% Classroom-time Outside, 12% Documentary Lectures; Experiential-Moderate: 5-7% Classroom-time Outside, 16-19% Documentary Lectures; Experiential-Light: 4-7% Classroom-time Outside, 7% Documentary Lectures) was conducted on five key features (Organization, Quality, Assignments Contribution, Intellectual Curiosity, Teaching Effectiveness). Each measure used a five-point reverse-coded scale (1 = Outstanding, 5 = Poor). 
For all experiential conditions, the documentary counted towards 30% of the final grade. Organization (‘The instructor’s preparation for class was’), Quality (‘Overall, I would rate the quality of this course as’), and Assignment Contribution (‘The contribution that the graded work made to the learning experience was’) did not yield any significant differences across the four course types (F(3, 202) = 1.72, p > .05; F(3, 200) = .32, p > .05; F(3, 203) = .43, p > .05, respectively). Intellectual Curiosity (‘The instructor’s ability to stimulate intellectual curiosity was’) yielded a marginal effect (F(3, 201) = 2.61, p = .053). Tukey’s HSD (p < .05) indicated that the Experiential-Heavy condition (M = 1.94, SD = .82) was significantly different from the other three conditions (M = 1.57, 1.51, 1.58; SD = .68, .66, .77, respectively), showing that heavily active class time did not elicit intellectual curiosity as much as the others. Finally, Teaching Effectiveness (‘Overall, I feel that the instructor’s effectiveness as a teacher was’) was significant (F(3, 198) = 3.32, p < .05). Tukey’s HSD (p < .05) showed that students found the courses with moderate (M = 1.49, SD = .62) to light (M = 1.52, SD = .70) active class time more effective than heavily active class time (M = 1.93, SD = .69). Overall, the findings of this study suggest that within the same active learning context, the time and space dedicated to active learning result in different outcomes in intellectual curiosity and teaching effectiveness.
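The analysis design above can be sketched as a one-way ANOVA on ratings from the four course types. The data below are simulated, with the Experiential-Heavy condition set slightly worse to mirror the direction of the reported effect; group sizes and values are assumptions, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical sketch: one-way ANOVA on a 5-point reverse-coded rating
# (1 = Outstanding, 5 = Poor) across four simulated course-type groups.
rng = np.random.default_rng(1)

traditional = rng.normal(1.55, 0.7, 200).clip(1, 5)
heavy       = rng.normal(1.95, 0.7, 200).clip(1, 5)  # simulated worse ratings
moderate    = rng.normal(1.50, 0.7, 200).clip(1, 5)
light       = rng.normal(1.55, 0.7, 200).clip(1, 5)

f_stat, p_value = stats.f_oneway(traditional, heavy, moderate, light)
print(p_value < 0.05)  # significant with this simulated effect size
```

A significant omnibus F would then be followed by a post-hoc procedure such as Tukey's HSD, as in the study, to locate which pairs of conditions differ.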

Keywords: active learning, learning outcomes, student experience, learning context

Procedia PDF Downloads 191
750 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack

Authors: Varun Agarwal

Abstract:

Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a full-scan, holistic evaluation of the image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and improve breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages: region of interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classification, probabilistic mapping of tumor localizations, and further processing for whole-WSI classification. Transfer learning is applied to the task, with the implementation of Inception-ResNetV2, an effective CNN classifier that uses residual connections to enhance feature representation, adding the convolved outputs of each inception unit to its input data. Moreover, in order to augment the performance of the transfer-learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder (primarily consisting of convolutional, leaky rectified linear unit, and batch normalization layers) and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise. 
Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer-learning Inception-ResNetV2 network, enhanced with the CDAE stack, yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation with the residual connections to inception units, synergized with the input denoising algorithm, enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
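The spatial pyramid pooling step mentioned in the methodology can be sketched in a few lines: a feature map is max-pooled over grids of several sizes and the results are concatenated into a fixed-length vector regardless of input resolution. This is a generic single-channel illustration, not the paper's implementation; the level sizes (1, 2, 4) are assumptions.

```python
import numpy as np

# Hypothetical sketch of spatial pyramid pooling (SPP) on one feature map.
# Each level splits the map into an n x n grid and max-pools each cell;
# all pooled values are concatenated into one fixed-length vector.

def spatial_pyramid_pool(feature_map: np.ndarray, levels=(1, 2, 4)) -> np.ndarray:
    """Max-pool a (H, W) feature map over each level's grid; concatenate."""
    h, w = feature_map.shape
    pooled = []
    for n in levels:
        rows = np.array_split(np.arange(h), n)   # roughly equal row bands
        cols = np.array_split(np.arange(w), n)   # roughly equal column bands
        for r in rows:
            for c in cols:
                pooled.append(feature_map[np.ix_(r, c)].max())
    return np.array(pooled)

fmap = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 feature map
vec = spatial_pyramid_pool(fmap)
print(vec.shape)  # → (21,), i.e. 1 + 4 + 16 cells
```

Because the output length depends only on the grid levels, tiles of varying size all map to a fixed-length feature vector for the downstream classifier.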

Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images

Procedia PDF Downloads 131
749 Exploring Instructional Designs on the Socio-Scientific Issues-Based Learning Method in Respect to STEM Education for Measuring Reasonable Ethics on Electromagnetic Wave through Science Attitudes toward Physics

Authors: Adisorn Banhan, Toansakul Santiboon, Prasong Saihong

Abstract:

This study used the Socio-Scientific Issues-Based Learning method to compare blended instruction with STEM education, with a sample of 84 students in 2 classes at the 11th grade level in Sarakham Pittayakhom School. The 2 instructional models were delivered through five instructional lesson plans in the context of the electromagnetic wave issue. The research procedures were designed around each instructional method with two groups: an experimental group of 40 students taught with the instructional STEM education (STEMe) method, and a control group of 40 students administered the Socio-Scientific Issues-Based Learning (SSIBL) method. Associations between students’ learning achievements under each instructional method and their science attitudes toward physics, as expressed in their exploring activities, were compared for the STEMe and SSIBL methods. The Measuring Reasonable Ethics Test (MRET) assessed students’ reasonable ethics under the STEMe and SSIBL instructional design methods in each group. A pretest and posttest technique was used to monitor and evaluate students’ performance in reasonable ethics on the electromagnetic wave issue in the STEMe and SSIBL instructional classes. Students were observed and gained experience with the phenomena being studied with the Socio-Scientific Issues-Based Learning method model. This supports the view that STEM is not just teaching about Science, Technology, Engineering, and Mathematics; it is a culture that needs to be cultivated to help create a problem-solving, creative, critical-thinking workforce for tomorrow in physics. Students’ attitudes were assessed with the Test Of Physics-Related Attitude (TOPRA), modified from the original Test Of Science-Related Attitude (TOSRA). Comparisons between students’ learning achievements under the different instructional methods (STEMe and SSIBL) were analysed. 
Associations between students’ performances under the STEMe and SSIBL instructional design methods, their reasonable ethics, and their science attitudes toward physics were examined. The findings show that the efficiency of the SSIBL and STEMe innovations met the criteria, with IOC values above the 80/80 standard level. Students’ learning achievement outcomes in the control and experimental groups under the SSIBL and STEMe methods differed significantly at the .05 level. Comparing students’ reasonable ethics under the SSIBL and STEMe methods, students’ responses to the instructional activities were higher in the STEMe condition than in the SSIBL condition. For associations between students’ later learning achievements under the SSIBL and STEMe methods, the predictive efficiency values of the R² indicate that 67% and 75% of the variance for the SSIBL method, and 74% and 81% for the STEMe method, were attributable to their developing reasonable ethics and science attitudes toward physics, respectively.

Keywords: socio-scientific issues-based learning method, STEM education, science attitudes, measurement, reasonable ethics, physics classes

Procedia PDF Downloads 292