Search results for: fredholm integral equations


183 Simulation of Focusing of Diamagnetic Particles in Ferrofluid Microflows with a Single Set of Overhead Permanent Magnets

Authors: Shuang Chen, Zongqian Shi, Jiajia Sun, Mingjia Li

Abstract:

Microfluidics is a technology in which small amounts of fluid are manipulated using channels with dimensions of tens to hundreds of micrometers. This technology is required for applications in several fields, including disease diagnostics, genetic engineering, and environmental monitoring. Among these, the manipulation of microparticles and cells in microfluidic devices, and especially their separation, has attracted broad attention. In a magnetic field, the separation methods include positive and negative magnetophoresis. By comparison, negative magnetophoresis is a label-free technology with many advantages, e.g., easy operation, low cost, and simple design. Before particles or cells are separated, focusing them into a single tight stream is usually a necessary upstream operation. In this work, the focusing of diamagnetic particles in ferrofluid microflows with a single set of overhead permanent magnets is investigated numerically. The geometric model of the simulation is based on the configuration of previous experiments. The straight microchannel is 24 mm long and has a rectangular cross-section 100 μm wide and 50 μm deep. Spherical diamagnetic particles 10 μm in diameter are suspended in the ferrofluid. The initial concentration of the ferrofluid c₀ is 0.096%, and the flow rate of the ferrofluid is 1.8 mL/h. The magnetic field is induced by five identical rectangular neodymium-iron-boron permanent magnets (1/8 × 1/8 × 1/8 in.) and is calculated by the equivalent charge source (ECS) method. The flow of the ferrofluid is governed by the Navier–Stokes equations. The trajectories of particles are solved by the discrete phase model (DPM) in the ANSYS FLUENT program, and the positions of diamagnetic particles are recorded in a transient simulation. Our simulation is consistent with the results of the mentioned experiments: diamagnetic particles are gradually focused in the ferrofluid under the magnetic field. The diamagnetic particle focusing is further studied by varying the flow rate of the ferrofluid; in agreement with the experiment, the focusing improves as the flow rate increases. Furthermore, the influence of other factors on the focusing is investigated, e.g., the width and depth of the microchannel, the concentration of the ferrofluid, and the diameter of the diamagnetic particles.
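To make the force balance concrete, the following minimal Python sketch integrates the lateral drift of one diamagnetic particle under negative magnetophoresis and Stokes drag. It is not the authors' FLUENT/DPM setup: the particle size, channel scale, and flow rate follow the abstract, while the ferrofluid viscosity, susceptibility mismatch, and field-gradient profile are illustrative assumptions.

```python
# Minimal sketch: lateral drift of a diamagnetic particle in ferrofluid
# under negative magnetophoresis (NOT the authors' FLUENT/DPM setup).
import numpy as np

mu0 = 4e-7 * np.pi             # vacuum permeability [H/m]
r = 5e-6                       # particle radius [m] (10 um diameter, abstract)
Vp = 4.0 / 3.0 * np.pi * r**3  # particle volume [m^3]
eta = 2e-3                     # ferrofluid viscosity [Pa s] (assumed)
dchi = -5e-4                   # chi_particle - chi_ferrofluid < 0 (assumed)

def grad_B2(y):
    """Assumed gradient of B^2 with distance y from the magnet side [T^2/m]."""
    return -100.0 * np.exp(-y / 1e-3)  # field energy decays away from magnets

# Residence time in the 24 mm channel at 1.8 mL/h:
# u = Q/A = (1.8e-6/3600) / (100e-6 * 50e-6) = 0.1 m/s  ->  ~0.24 s
y, dt, T = 20e-6, 1e-4, 0.24   # start 20 um from the magnet-side wall
for _ in range(int(T / dt)):
    F_mag = Vp * dchi * grad_B2(y) / (2.0 * mu0)  # negative magnetophoresis
    y += F_mag / (6.0 * np.pi * eta * r) * dt     # Stokes-drag-limited drift
print(f"distance from magnet side after one pass: {y * 1e6:.1f} um")
```

In the overdamped limit used here, the drift velocity is simply the magnetophoretic force divided by the Stokes drag coefficient, which is why the particle diameter (force grows with volume, drag only with radius) and residence time (set by the flow rate) control how tightly the stream focuses.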

Keywords: diamagnetic particle, focusing, microfluidics, permanent magnet

Procedia PDF Downloads 130
182 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth in Patients with Lymph Nodes Metastases

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

This paper is devoted to mathematical modelling of the progression and stages of breast cancer. We propose the Consolidated mathematical growth model of primary tumor and secondary distant metastases growth in patients with lymph node metastases (CoM-III) as a new research tool. We are interested in: 1) modelling the whole natural history of primary tumor and secondary distant metastases growth in patients with lymph node metastases; 2) developing an adequate and precise CoM-III that reflects the relations between primary tumor and secondary distant metastases; 3) analyzing the scope of application of the CoM-III; 4) implementing the model as a software tool. Firstly, the CoM-III includes an exponential tumor growth model as a system of determinate nonlinear and linear equations. Secondly, the mathematical model corresponds to the TNM classification. It allows calculating different growth periods of the primary tumor and of secondary distant metastases in patients with lymph node metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for secondary distant metastases; 3) the 'visible period' for secondary distant metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes a forecast using only current patient data, whereas the others are based on additional statistical data. Thus, the CoM-III model and predictive software: a) detect different growth periods of the primary tumor and of secondary distant metastases in patients with lymph node metastases; b) forecast the period of distant metastases appearance in patients with lymph node metastases; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by the CoM-III: the number of doublings for the 'non-visible' and 'visible' growth periods of secondary distant metastases, and the tumor volume doubling time (days) for the 'non-visible' and 'visible' growth periods of secondary distant metastases. The CoM-III enables, for the first time, prediction of the whole natural history of primary tumor and secondary distant metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) the CoM-III correctly describes primary tumor and secondary distant metastases growth of the IA, IIA, IIB, IIIB (T1-4N1-3M0) stages in patients with lymph node metastases (N1-3); b) it facilitates understanding of the appearance period and inception of secondary distant metastases.
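The abstract does not give the CoM-III equations themselves, but the exponential-growth bookkeeping it relies on (doubling counts and 'non-visible'/'visible' periods) can be sketched as follows; the cell volume, doubling time, and detection threshold below are common textbook assumptions, not the authors' parameters.

```python
# Minimal sketch of exponential-growth doubling arithmetic, assuming
# V(t) = V_cell * 2^(t / DT); not the CoM-III system itself.
import math

def volume_from_diameter(d_mm):
    """Spherical tumor volume [mm^3] from diameter [mm]."""
    return math.pi / 6.0 * d_mm**3

V_CELL = 1e-6          # volume of a single cell, ~1e-6 mm^3 (assumed)
DT_DAYS = 120.0        # tumor volume doubling time [days] (assumed)

def doublings(V):
    """Number of volume doublings from one cell to volume V."""
    return math.log2(V / V_CELL)

d_detect, d_current = 5.0, 20.0   # mm: detection threshold and current size
n_nonvisible = doublings(volume_from_diameter(d_detect))
n_total = doublings(volume_from_diameter(d_current))
print(f"'non-visible' period: {n_nonvisible:.1f} doublings "
      f"(~{n_nonvisible * DT_DAYS / 365:.1f} years)")
print(f"'visible' period: {n_total - n_nonvisible:.1f} further doublings")
```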

Keywords: breast cancer, exponential growth model, mathematical model, primary tumor, secondary metastases, survival

Procedia PDF Downloads 302
181 Analysis and Design of Exo-Skeleton System Based on Multibody Dynamics

Authors: Jatin Gupta, Bishakh Bhattacharya

Abstract:

With the aging process, many people start suffering from weak limbs, resulting in mobility disorders and loss of sensory and motor function of the limbs. Wearable robotic devices are viable solutions to help people suffering from these issues by augmenting their strength. These robotic devices, popularly known as exoskeletons, aid the user by providing external power and controlling the dynamics so as to achieve the desired motion. The present work studies a simplified dynamic model of the human gait. A four-link open-chain kinematic model is developed to describe the dynamics of the Single Support Phase (SSP) of the human gait cycle. The dynamic model is developed by integrating mathematical models of the motion of inverted and triple pendulums. The stance leg is modeled as an inverted pendulum with a single degree of freedom and the swing leg as a triple pendulum with three degrees of freedom, viz. the thigh, knee, and ankle joints. The kinematic model is formulated using the forward kinematics approach, and the Lagrangian approach is used to formulate the governing dynamic equations of the model. For the resulting system of nonlinear differential equations, a numerical method is employed to obtain the system response. The reference trajectory is generated using the human body simulator LifeMOD. For optimal mechanical design and controller design of an exoskeleton system, it is imperative to study the parameter sensitivity of the system. Six different parameters, viz. the thigh, shank, and foot masses and lengths, are varied from 85% to 115% of their original values in the present work. It is observed that the hip joint of the swing leg is the most sensitive and the ankle joint of the swing leg is the least sensitive. Changing link lengths causes more deviation in the system response than changing link masses; shank length and thigh mass are the most sensitive parameters. Finally, the present study gives insight into the different factors that should be considered while designing a lower extremity exoskeleton.
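A minimal sketch of the sensitivity-sweep idea, assuming a single-DOF inverted-pendulum stance leg (the abstract's full four-link model and LifeMOD reference trajectory are not reproduced): one parameter is varied from 85% to 115% of a nominal value and the change in response is observed. Note that for a point-mass pendulum the mass cancels out of the dynamics, which is at least consistent with the reported finding that link lengths matter more than link masses.

```python
# Minimal sketch: 85%-115% parameter sweep on a 1-DOF inverted-pendulum
# stance-leg model (illustrative nominal values, not the paper's model).
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81

def stance_leg(t, y, L):
    """Inverted pendulum about the ankle: theta'' = (g / L) * sin(theta).
    For a point mass the mass cancels, so only the length enters."""
    theta, omega = y
    return [omega, (g / L) * np.sin(theta)]

nominal_L = 0.9                          # effective leg length [m] (assumed)
y0, t_span = [0.05, 0.0], (0.0, 0.4)     # small initial lean; ~SSP duration

for scale in (0.85, 1.00, 1.15):         # 85%, 100%, 115% of nominal
    L = scale * nominal_L
    sol = solve_ivp(stance_leg, t_span, y0, args=(L,), max_step=1e-3)
    print(f"L = {L:.3f} m -> final lean {np.degrees(sol.y[0, -1]):.2f} deg")
```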

Keywords: lower limb exoskeleton, multibody dynamics, energy based formulation, optimal design

Procedia PDF Downloads 201
180 Participatory Monitoring Strategy to Address Stakeholder Engagement Impact in Co-creation of NBS Related Project: The OPERANDUM Case

Authors: Teresa Carlone, Matteo Mannocchi

Abstract:

In the last decade, a growing number of international organizations have been pushing toward green solutions for adaptation to climate change. This is particularly true in the fields of Disaster Risk Reduction (DRR) and land planning, where Nature-Based Solutions (NBS) have been sponsored through funding programs and planning tools. Stakeholder engagement and co-creation of NBS are growing as a practice and research field in environmental projects, fostering the consolidation of a multidisciplinary socio-ecological approach to addressing hydro-meteorological risk. Even though research and financial interest are spreading steadily, the NBS mainstreaming process is still at an early stage, as innovative concepts and practices are difficult for a multitude of different actors to fully accept and adopt, which is what is needed to produce wide-scale societal change. The monitoring and impact evaluation of stakeholders' participation in these processes represent a crucial aspect and should be seen as a continuous and integral element of the co-creation approach. However, setting up a fit-for-purpose monitoring strategy for different contexts is not an easy task, and multiple challenges emerge. In this scenario, the Horizon 2020 OPERANDUM project, designed to address the major hydro-meteorological risks that negatively affect European rural and natural territories through the co-design, co-deployment, and assessment of Nature-Based Solutions, represents a valid case study from which to test a monitoring strategy and derive a broader, general, and scalable monitoring framework. Applying a participative monitoring methodology based on a selected list of indicators that combines quantitative and qualitative data developed within the activities of the project, the paper proposes an experimental in-depth analysis of the stakeholder engagement impact in the co-creation process of NBS. The main focus is to spot and analyze which factors increase knowledge, social acceptance, and mainstreaming of NBS, also promoting an experience-based guideline that could be integrated with the stakeholder engagement strategy in current and future environmental projects based on a similarly strongly collaborative approach, such as OPERANDUM. Measurement is carried out through surveys submitted at different times to the same sample (stakeholders: policy makers, businesses, researchers, interest groups). Changes are recorded and analyzed through focus groups in order to highlight causal explanations and to assess the proposed list of indicators, so as to steer the conduct of similar activities in other projects and/or contexts. The idea of the paper is to contribute to the construction of a more structured and shared corpus of indicators that can support the evaluation of the activities of involvement and participation of various levels of stakeholders in the co-production, planning, and implementation of NBS to address climate change challenges.

Keywords: co-creation and collaborative planning, monitoring, nature-based solution, participation & inclusion, stakeholder engagement

Procedia PDF Downloads 115
179 From Indigeneity to Urbanity: A Performative Study of Indian Saang (Folk Play) Tradition

Authors: Shiv Kumar

Abstract:

In the shifting scenario of the postmodern age, which foregrounds the multiplicity of meanings and discourses, the present research article seeks to investigate various paradigm shifts of contemporary performance concerning Haryanvi Saangs, the folk plays widely performed in the regional territory of Haryana, a northern state of India. Folk arts cannot be studied efficiently using the tools of literary criticism because they differ from literature in many aspects. One of the most essential differences is that literary works invariably have an author; folk works, on the contrary, never have an author. The situation is quite clear: either we acknowledge the presence of folk art as a phenomenon in the social and cultural history of a people, or we do not acknowledge it and argue that it is a poetic art or fiction. This paper is an effort to understand the performative tradition of Saang (traditionally also written Swang or Svang), which became a popular source of instruction and entertainment in the region and neighbouring states. Scholars and critics have long debated the origin of the word swang/svang/saang and its relationship to the Sanskrit word Sangit, which means singing and music. In the cultural context of Haryana, however, the word Saang means 'to impersonate', 'to imitate', or 'to copy someone or something'. The stories these plays portray are derived for the most part from the same myths, tales, and epics, and from the lives of Indian religious and folk heroes. The use of poetic diction, prose style, and elaborate figurative technique all contribute to the productivity of a performance. All use music and song as an integral part of the performance, so it is also appropriate to call them folk operas. These folk plays are performed strictly by aboriginal people in the state. These people, sometimes denominated Saangi, possess a culture distinct from the rest of Indian folk performance. The form is also known by various other names, such as Manch, Khayal, Opera, and Nautanki. Such folk plays can be seen as a dynamic activity performed in the open space of the theatre. Nowadays, producers have contributed greatly to creating a rapidly growing musical outlet for a budding new style of folk presentation, giving rise to an electronically focused genre that employs many musicians and performers who became precursors of the folk tradition in the region. Moreover, the paper proposes to examine the available sources relevant to this article and aims to draw some new conclusions; for instance, being a spectator of ongoing performances provides enough guidance to move forward on this route. In this connection, the paper focuses critically upon the major performative aspects of Haryanvi Saang in relation to several inquiries: the study of these plays in the context of the Indian literary scene, gender visualization and its dramatic representation, the song-music tradition in folk creativity, and the development of Haryanvi dramatic art against the contemporary socio-political background.

Keywords: folk play, indigenous, performance, Saang, tradition

Procedia PDF Downloads 162
178 The Gender Criteria of Film Criticism: Creating the ‘Big’, Avoiding the Important

Authors: Eleni Karasavvidou

Abstract:

Social and anthropological research, parallel to Gender Studies, has highlighted the relationship between social structures and symbolic forms as an important field of interaction and a record of 'social trends', since the study of representations can contribute to the understanding of the social functions and power relations they encompass. This 'mirage', however, has to do not only with the representations themselves but also with the ways they are received and with the film or critical narratives that are established as dominant or alternative. Cinema and the criticism of its cultural products are no exception. Even in the rapidly changing media landscape of the 21st century, movies remain an integral and widespread part of popular culture, making films an extremely powerful means of 'legitimizing' or 'delegitimizing' visions of domination and commonsensical gender stereotypes throughout society. And yet it is film criticism, the 'language per se', that legitimizes, reinforces, rewards, and reproduces (or at least ignores) the stereotypical depictions of female roles that remain common in the realm of film images. Hence the need for academic research questioning the gender criteria of film reviews as part of the effort toward an inclusive art and society. Qualitative content analysis is used to examine female roles in selected Oscar-nominated films against their reviews from leading websites and newspapers. This method was chosen because of the complex nature of the depictions in the films and the narratives they evoke. The films were divided into basic scenes depicting social functions, such as love and work relationships and positions of power and their function, which were analyzed by content analysis, with borrowings from structuralism (Genette) and the local/universal images of intercultural philology (Wierlacher). In addition to measuring the overall 'representation time' by gender, other qualitative characteristics were analyzed, such as speaking time, sayings or key actions, and the overall quality of the character's action in relation to the development of the scenario and social representations in general, as well as quantitative ones (an insufficient number of female lead roles, fewer key supporting roles, relatively few female directors and people in the production chain) and how these might affect screen representations. The quantitative analysis in this study was used to complement the qualitative content analysis. The focus then shifted to the criteria of film criticism and to the rhetorical narratives that exclude or highlight in relation to gender identities and functions. In the criteria and language of film criticism, stereotypes are often reproduced or allegedly overturned within the framework of apolitical 'identity politics', which mainly addresses the surface of a self-referential cultural-consumer product without connecting it more deeply with material and cultural life. One prime example of this failure is the Bechdel Test, which tracks whether female characters speak in a film regardless of whether women's stories are actually represented in the films analyzed. If supposedly unbiased male filmmakers still fail to tell truly feminist stories, the same is the case with the criteria of criticism and the related interventions.

Keywords: representations, content analysis, reviews, sexist stereotypes

Procedia PDF Downloads 85
177 Impact of Climate Change on Crop Production: Climate Resilient Agriculture Is the Need of the Hour

Authors: Deepak Loura

Abstract:

Climate change is considered one of the major environmental problems of the 21st century: a lasting change in the statistical distribution of weather patterns over periods ranging from decades to millions of years. Agriculture and climate change are closely interrelated in various respects, and the threat of a varying global climate has strongly drawn the attention of scientists, as these variations negatively affect global crop production and compromise food security worldwide. The fast pace of development and industrialization and the indiscriminate destruction of the natural environment, more so in the last century, have altered the concentration of atmospheric gases, leading to global warming. Carbon dioxide (CO₂), methane (CH₄), and nitrous oxide (N₂O) are important biogenic greenhouse gases (GHGs) from the agricultural sector contributing to global warming, and their concentrations are increasing alarmingly. Agricultural productivity can be affected by climate change in two ways: first, directly, by affecting plant growth, development, and yield due to changes in rainfall/precipitation, temperature, and/or CO₂ levels; and second, indirectly, through considerable impacts on agricultural land use due to snow melt, availability of irrigation, frequency and intensity of inter- and intra-seasonal droughts and floods, soil organic matter transformations, soil erosion, distribution and frequency of infestation by insect pests, diseases, or weeds, the decline in arable area (due to submergence of coastal lands), and availability of energy. An increase in atmospheric CO₂ promotes the growth and productivity of C3 plants. On the other hand, an increase in temperature can reduce crop duration, increase crop respiration rates, affect the equilibrium between crops and pests, hasten nutrient mineralization in soils, decrease fertilizer-use efficiencies, and increase evapotranspiration, among other effects. All of these could considerably affect crop yield in the long run. Climate resilient agriculture, consisting of adaptation, mitigation, and other agricultural practices, can potentially enhance the capacity of the system to withstand climate-related disturbances by resisting damage and recovering quickly. Climate resilient agriculture turns the climate change threats that have to be tackled into new business opportunities for the sector in different regions and therefore provides a triple win: mitigation, adaptation, and economic growth. Improving the soil organic carbon stock is integral to any strategy for adapting to and mitigating abrupt climate change, advancing food security, and improving the environment. Soil carbon sequestration is one of the major mitigation strategies for achieving climate-resilient agriculture. Climate-smart agriculture is the only way to lessen the negative impact of climate variations on crop adaptation before they affect global crop production drastically. To cope with these extreme changes, future development needs to make adjustments in technology, management practices, and legislation. Adaptation and mitigation are twin approaches to bringing resilience to climate change in agriculture.

Keywords: climate change, global warming, crop production, climate resilient agriculture

Procedia PDF Downloads 74
176 An Odyssey to Sustainability: The Urban Archipelago of India

Authors: B. Sudhakara Reddy

Abstract:

This study provides a snapshot of the sustainability of selected Indian cities by employing 70 indicators in four dimensions to develop an overall city sustainability index. In recent years, the concept of 'urban sustainability' has become prominent due to its complexity. Urban areas propel growth and at the same time pose many ecological, social, and infrastructural problems and risks. In developing countries, high population density and continuous in-migration create the highest risk of natural and man-made disasters. These issues, combined with the inability of policy makers to provide basic services, make cities unsustainable. To assess whether any given policy is moving towards or against urban sustainability, it is necessary to consider the relationships among its various dimensions. Hence, in recent years, sustainability indices have been prepared with an integral approach involving indicators of different dimensions, such as 'economic', 'environmental', and 'social'. It is also important for urban planners, social analysts, and other related institutions to identify and understand the relationships in this complex system. The objective of the paper is to develop a city performance index (CPI) to measure and evaluate urban regions in terms of sustainable performance. The objectives include: i) objective assessment of a city's performance; ii) setting achievable goals; iii) prioritising relevant indicators for improvement; iv) learning from leaders; v) assessing the effectiveness of programmes that result in high indicator values; and vi) strengthening stakeholder participation. Using the benchmark approach, a conceptual framework is developed for evaluating 25 Indian cities. We develop a City Sustainability Index (CSI) in order to rank cities according to their level of sustainability. The CSI is composed of four dimensions: economic, environmental, social, and institutional. Each dimension is further composed of multiple indicators: (1) economic, which considers growth, access to electricity, and telephone availability; (2) environmental, which includes wastewater treatment and carbon emissions; (3) social, which includes equity and infant mortality; and (4) institutional, which includes the voting share of the population and urban regeneration policies. The CSI's four dimensions disaggregate into 12 categories and ultimately into 70 indicators. The data are obtained from public and non-governmental organizations as well as from city officials and experts. By ranking a sample of diverse cities on a set of specific dimensions, the study can serve as a baseline of current conditions and a marker for referencing future results. The benchmarks and indices presented in the study provide a unique resource for the government and city authorities to learn about the positive and negative attributes of a city and to prepare plans for sustainable urban development. As a result of our conceptual framework, the set of criteria we suggest differs somewhat from any already in the literature. The scope of our analysis is intended to be broad: although illustrated with specific examples, the principles identified are relevant to any monitoring used to inform decisions. These indicators are policy-relevant and hence a useful tool for decision-makers and researchers.
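A minimal sketch of how such a composite index can be assembled: min-max normalise each indicator, average indicators into dimension scores, and average dimensions into one CSI value. The actual 70 indicators, their weights, and the city data are not given in the abstract; the toy numbers below are illustrative only.

```python
# Minimal sketch of a composite city index (normalise -> aggregate);
# indicators, weights, and data are illustrative stand-ins.
import numpy as np

# toy data: rows = cities A, B, C; columns = indicators per dimension
dimensions = {
    "economic":      np.array([[3.2, 95.0], [1.1, 80.0], [2.4, 99.0]]),
    "environmental": np.array([[60.0, 4.1], [35.0, 6.3], [80.0, 2.2]]),
}
higher_is_better = {"economic": [True, True], "environmental": [True, False]}

def normalise(col, good):
    z = (col - col.min()) / (col.max() - col.min())
    return z if good else 1.0 - z      # invert 'bad' indicators (emissions)

scores = []
for name, data in dimensions.items():
    cols = [normalise(data[:, j], g)
            for j, g in enumerate(higher_is_better[name])]
    scores.append(np.mean(cols, axis=0))  # equal weights within a dimension
csi = np.mean(scores, axis=0)             # equal weights across dimensions
for city, s in zip("ABC", csi):
    print(f"city {city}: CSI = {s:.2f}")
```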

Keywords: benchmark, city, indicator, performance, sustainability

Procedia PDF Downloads 270
175 Understanding Hydrodynamic in Lake Victoria Basin in a Catchment Scale: A Literature Review

Authors: Seema Paul, John Mango Magero, Prosun Bhattacharya, Zahra Kalantari, Steve W. Lyon

Abstract:

The purpose of this review paper is to develop an understanding of lake hydrodynamics and the potential climate impact at the Lake Victoria (LV) catchment scale. The paper briefly discusses the main problems of lake hydrodynamics and their solutions related to quality assessment and climate effects. An empirical modeling and mapping methodology is considered for understanding lake hydrodynamics and for visualizing long-term observational daily, monthly, and yearly mean datasets using geographical information system (GIS) and Comsol techniques. Data were obtained for the whole lake and five different meteorological stations, and several geoprocessing tools with spatial analysis were used to produce the results. Linear regression analyses were developed to build climate scenarios and to fit a long-term linear trend to lake rainfall data. Potential evapotranspiration rates were described by MODIS data and the Thornthwaite method. The effect of rainfall on the lake water level is described by partial differential equations (PDE), and water quality is characterized by a few nutrient parameters. The study revealed that monthly and yearly rainfall vary with monthly and yearly maximum and minimum temperatures; rainfall is high during cool years, while high temperatures are associated with below-average rainfall patterns. Rising temperatures are likely to accelerate evapotranspiration rates, and more evapotranspiration is likely to lead to more rainfall; drought is more strongly correlated with temperature, and cloud cover is more strongly correlated with rainfall. There is a trend in lake rainfall, and long-term rainfall on the lake surface has affected the lake level. Onshore and offshore nutrient concentrations were compiled from initial literature data. The study recommends that further work consider full lake bathymetry development with flow analysis and water balance, hydro-meteorological processes, solute transport, wind hydrodynamics, pollution, and eutrophication, as these are crucial for lake water quality, climate impact assessment, and water sustainability.
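The long-term linear rainfall trend mentioned above amounts to an ordinary least-squares fit of yearly rainfall against time. A minimal sketch, using a synthetic series in place of the (unavailable) station data:

```python
# Minimal sketch of a long-term linear rainfall trend fit; the yearly
# series below is synthetic, standing in for the real station data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1960, 2020)
rain = 1200 + 1.5 * (years - 1960) + rng.normal(0, 80, years.size)  # mm/yr

res = stats.linregress(years, rain)
print(f"trend: {res.slope:.2f} mm/yr per year, p = {res.pvalue:.3g}")
# a small p-value would indicate a statistically significant trend
```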

Keywords: climograph, climate scenarios, evapotranspiration, linear trend flow, rainfall event on LV, concentration

Procedia PDF Downloads 99
174 Robust Numerical Method for Singularly Perturbed Semilinear Boundary Value Problem with Nonlocal Boundary Condition

Authors: Habtamu Garoma Debela, Gemechis File Duressa

Abstract:

In this work, our primary interest is to provide ε-uniformly convergent numerical techniques for solving singularly perturbed semilinear boundary value problems with a non-local boundary condition. These singular perturbation problems are described by differential equations in which the highest-order derivative is multiplied by an arbitrarily small parameter ε, known as the singular perturbation parameter. This leads to the existence of boundary layers, which are narrow regions in the neighborhood of the boundary of the domain where the gradient of the solution becomes steep as the perturbation parameter tends to zero. Due to this layer phenomenon, it is a challenging task to provide ε-uniform numerical methods. The term 'ε-uniform' identifies those numerical methods in which the approximate solution converges to the corresponding exact solution (measured in the supremum norm) independently of the perturbation parameter ε. Thus, the purpose of this work is to develop, analyze, and improve ε-uniform numerical methods for solving singularly perturbed problems. These methods are based on the nonstandard fitted finite difference method. The basic idea behind the fitted operator finite difference method is to replace the denominator functions of the classical derivatives with positive functions derived in such a way that they capture some notable properties of the governing differential equation. A uniformly convergent numerical method is constructed via a nonstandard fitted operator numerical method combined with numerical integration methods to solve the problem; the non-local boundary condition is treated using numerical integration techniques. Additionally, the Richardson extrapolation technique, which improves the first-order accuracy of the standard scheme to second-order convergence, is applied for singularly perturbed convection-diffusion problems using the proposed numerical method. Maximum absolute errors and rates of convergence for different values of the perturbation parameter and mesh sizes are tabulated for the numerical example considered. The method is shown to be ε-uniformly convergent. Finally, extensive numerical experiments are conducted that support all of our theoretical findings. A concise conclusion is provided at the end of this work.
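The Richardson extrapolation step can be illustrated on any first-order scheme: solve on meshes h and h/2 and combine as (2^p u_{h/2} - u_h)/(2^p - 1) with p = 1. The sketch below uses a simple backward-Euler model problem, not the authors' fitted-operator scheme, purely to show the accuracy lift from first to second order.

```python
# Minimal sketch of Richardson extrapolation on a first-order scheme
# (backward Euler for u' = -u, u(0) = 1), not the fitted-operator method.
import numpy as np

def solve(N):
    """First-order accurate approximation of u(1) on N steps."""
    h, u = 1.0 / N, 1.0
    for _ in range(N):
        u = u / (1.0 + h)
    return u

exact = np.exp(-1.0)
for N in (32, 64, 128):
    u_h, u_h2 = solve(N), solve(2 * N)
    u_rich = 2.0 * u_h2 - u_h     # (2^p u_{h/2} - u_h) / (2^p - 1), p = 1
    print(f"N={N:4d}  err(h)={abs(u_h - exact):.2e}  "
          f"err(Richardson)={abs(u_rich - exact):.2e}")
```

Running this shows the plain error halving with N (first order) while the extrapolated error drops roughly fourfold (second order), which is exactly the accuracy lift the abstract claims for the layer-resolving scheme.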

Keywords: nonlocal boundary condition, nonstandard fitted operator, semilinear problem, singular perturbation, uniformly convergent

Procedia PDF Downloads 143
173 Simulation of Optimum Sculling Angle for Adaptive Rowing

Authors: Pornthep Rachnavy

Abstract:

The purpose of this paper is twofold. First, we believe that there is a significant relationship between sculling angle and sculling style in adaptive rowing. Second, we introduce a methodology, namely simulation, to identify the effectiveness of adaptive rowing. For our study, we simulate the arms-only single scull of adaptive rowing. The fastest way to row the 1000 m was investigated by studying the sculling angle using simulation modeling. A simulation model of the rowing system was developed in the Matlab software package, based on equations of motion that include many of the variables governing boat motion, such as oar length, blade velocity, and sculling style. The boat speed, power, and energy consumption of the system were computed, and the simulation model can predict the force acting on the boat. The optimum sculling angle was determined by computer simulation. Inputs to the model are the sculling style of each rower and the sculling angle; the output is the boat velocity over 1000 m. The present study suggests that an optimum sculling angle exists and depends on the sculling style. The optimum angles for blade entry and release with respect to the perpendicular through the pin are -57.00 and 22.0 degrees for the first style, -57.00 and 22.0 degrees for the second style, -51.57 and 28.65 degrees for the third style, and -45.84 and 34.38 degrees for the fourth style. A theoretical simulation of rowing has thus been developed and presented, and the results suggest that it may be advantageous for rowers to select sculling angles proper to their sculling styles; the optimum sculling angles of a rower depend on the sculling style of that rower. The findings of this paper can be summarized in three points: 1) an optimum sculling angle exists in the arms-only single scull of adaptive rowing; 2) the optimum sculling angles depend on the sculling styles; 3) computer simulation of rowing can identify opportunities for improving rowing performance by utilizing the kinematic description of rowing. The freedom to explore alternatives in speed, thrust, and timing with the computer simulation provides the coach with a tool for systematic assessment of rowing technique. In addition, the ability to use the computer to examine the very complex movements during rowing helps both the rower and the coach to conceptualize components of movement that may previously have been unclear or even undefined.
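A minimal sketch of the angle-sweep idea: for a toy propulsion model, compute a steady boat speed as a function of catch and release angles and pick the best pair. The authors' Matlab equations of motion are not reproduced; the effective-impulse model and the lateral-waste penalty below are illustrative assumptions.

```python
# Minimal sketch of an optimum-sculling-angle sweep over a toy model;
# NOT the paper's Matlab equations of motion.
import numpy as np

def mean_speed(catch_deg, release_deg, k_prop=2.0, k_drag=0.9):
    """Toy model: forward gain ~ integral of cos(angle) over the drive arc,
    minus a penalty for laterally wasted effort (~ sin^2); steady speed
    then follows from thrust = drag ~ v^2. Purely illustrative."""
    a = np.radians(np.linspace(catch_deg, release_deg, 400))
    gain = np.trapz(np.cos(a) - 0.6 * np.sin(a) ** 2, a)
    return np.sqrt(max(k_prop * gain, 0.0) / k_drag)

candidates = [(-c, r) for c in range(40, 75, 5) for r in range(10, 40, 5)]
best = max(candidates, key=lambda ar: mean_speed(*ar))
print(f"best catch/release in sweep: {best[0]}/{best[1]} deg, "
      f"v = {mean_speed(*best):.2f} m/s")
```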

Keywords: simulation, sculling, adaptive, rowing

Procedia PDF Downloads 465
172 From Faces to Feelings: Exploring Emotional Contagion and Empathic Accuracy through the Enfacement Illusion

Authors: Ilenia Lanni, Claudia Del Gatto, Allegra Indraccolo, Riccardo Brunetti

Abstract:

Empathy represents a multifaceted construct encompassing affective and cognitive components. Among these, empathic accuracy—defined as the ability to accurately infer another person’s emotions or mental state—plays a pivotal role in fostering empathetic understanding. Emotional contagion, the automatic process through which individuals mimic and synchronize facial expressions, vocalizations, and postures, is considered a foundational mechanism for empathy. This embodied simulation enables shared emotional experiences and facilitates the recognition of others’ emotional states, forming the basis of empathic accuracy. Facial mimicry, an integral part of emotional contagion, creates a physical and emotional resonance with others, underscoring its potential role in enhancing empathic understanding. Building on these findings, the present study explores how manipulating emotional contagion through the enfacement illusion impacts empathic accuracy, particularly in the recognition of complex emotional expressions. The enfacement illusion was implemented as a visuo-tactile multisensory manipulation, during which participants experienced synchronous and spatially congruent tactile stimulation on their own face while observing the same stimulation being applied to another person’s face. This manipulation enhances facial mimicry, which is hypothesized to play a key role in improving empathic accuracy. Following the enfacement illusion, participants completed a modified version of the Diagnostic Analysis of Nonverbal Accuracy–Form 2 (DANVA2-AF). The task included 48 images of adult faces expressing happiness, sadness, or morphed emotions blending neutral with happiness or sadness to increase recognition difficulty. These images featured both familiar and unfamiliar faces, with familiar faces belonging to the actors involved in the prior visuo-tactile stimulation. Participants were required to identify the target’s emotional state as either "happy" or "sad," with response accuracy and reaction times recorded. Results from this study indicate that emotional contagion, as manipulated through the enfacement illusion, significantly enhances empathic accuracy, particularly for the recognition of happiness. Participants demonstrated greater accuracy and faster response times in identifying happiness when viewing familiar faces compared to unfamiliar ones. These findings suggest that the enfacement illusion strengthens emotional resonance and facilitates the processing of positive emotions, which are inherently more likely to be shared and mimicked. Conversely, for the recognition of sadness, an opposite but non-significant trend was observed. Specifically, participants were slightly faster at recognizing sadness in unfamiliar faces compared to familiar ones. This pattern suggests potential differences in how positive and negative emotions are processed within the context of facial mimicry and emotional contagion, warranting further investigation. These results provide insights into the role of facial mimicry in emotional contagion and its selective impact on empathic accuracy. This study highlights how the enfacement illusion can precisely modulate the recognition of specific emotions, offering a deeper understanding of the mechanisms underlying empathy.

Keywords: empathy, emotional contagion, enfacement illusion, emotion recognition

Procedia PDF Downloads 12
171 Satisfaction Among Preclinical Medical Students with Low-Fidelity Simulation-Based Learning

Authors: Shilpa Murthy, Hazlina Binti Abu Bakar, Juliet Mathew, Chandrashekhar Thummala Hlly Sreerama Reddy, Pathiyil Ravi Shankar

Abstract:

Simulation is defined as a technique that replaces or expands real experiences with guided experiences that interactively imitate real-world processes or systems. Simulation enables learners to train in a safe and non-threatening environment. For decades, simulation has been considered an integral part of clinical teaching and learning strategies in medical education. Several types of simulation are used in medical education and the clinical environment, applying models that include full-body mannequins, task trainers, standardized simulated patients, virtual or computer-generated simulation, and hybrid simulation, all of which can be used to facilitate learning. Simulation allows healthcare practitioners to acquire skills and experience while safeguarding patient safety. The recent COVID pandemic has also led to an increase in simulation use, as there were limitations on medical student placements in hospitals and clinics. Learning is tailored to the educational needs of students to make the learning experience more valuable. Simulation in the pre-clinical years faces challenges with resource constraints, effective curricular integration, student engagement and motivation, and evidence of educational impact, to mention a few. As instructors, we may rely heavily on simulation for pre-clinical students, while the students' confidence levels and perceived competence remain to be evaluated. Our research question was whether the implementation of simulation-based learning positively influences preclinical medical students' confidence levels and perceived competence. This study was done to align teaching activities with the students' learning experience, to introduce more low-fidelity simulation-based teaching sessions in the pre-clinical years, and to obtain students' input into curriculum development as part of inclusivity. The study was carried out at the International Medical University, involving pre-clinical year (medical) students who began with low-fidelity simulation-based medical education in their first semester and were gradually introduced to medium fidelity as well. The Student Satisfaction and Self-Confidence in Learning Scale questionnaire from the National League for Nursing was employed to collect responses. The internal consistency reliability of the survey items was tested with Cronbach's alpha using an Excel file, and IBM SPSS for Windows version 28.0 was used to analyze the data. Spearman's rank correlation was used to analyze the correlation between students' satisfaction and self-confidence in learning, with the significance level set at a p-value of less than 0.05. The results from this study have prompted the researchers to undertake a larger-scale evaluation, which is currently underway. The current results show that 70% of students agreed that the teaching methods used in the simulation were helpful and effective. The sessions depend on the learning materials provided and on how the facilitators engage the students and make the session more enjoyable. The feedback identified the following areas to focus on while designing simulations for pre-clinical students: quality learning materials, an interactive environment, motivating content, the skills and knowledge of the facilitator, and effective feedback.
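The two statistics named above, Cronbach's alpha and Spearman's rank correlation, can be sketched in a few lines; the survey data are not public, so a small synthetic Likert response matrix with one shared 'satisfaction' trait stands in for the real responses.

```python
# Minimal sketch of Cronbach's alpha + Spearman's rho on synthetic
# Likert data; the real survey responses are not public.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
latent = rng.normal(0.0, 1.0, (30, 1))             # shared trait, 30 students
items = np.clip(np.round(3.5 + latent + rng.normal(0.0, 0.7, (30, 13))), 1, 5)

def cronbach_alpha(X):
    """alpha = k/(k-1) * (1 - sum(item variances) / var(total score))."""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

satisfaction = items[:, :5].sum(axis=1)   # first 5 items (assumed split)
confidence = items[:, 5:].sum(axis=1)     # remaining 8 items
rho, p = stats.spearmanr(satisfaction, confidence)
print(f"Cronbach alpha = {cronbach_alpha(items):.2f}")
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```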

Keywords: low-fidelity simulation, pre-clinical simulation, students satisfaction, self-confidence

Procedia PDF Downloads 78
170 Ship Roll Reduction Using Water-Flow Induced Coriolis Effect

Authors: Mario P. Walker, Masaaki Okuma

Abstract:

Ships are subjected to motions that can disrupt on-board operations and damage equipment. Roll motion, in particular, is of great interest due to low damping conditions, which may lead to capsizing; therefore, finding ways to reduce this motion is important in ship design. Several techniques have been investigated to reduce rolling, including the commonly used anti-roll tanks, fin stabilizers, and bilge keels. However, these systems are not without their challenges. For example, water-flow in anti-roll tanks creates complications, and fin stabilizers and bilge keels require an extremely large size to produce any significant damping, creating operational challenges. Additionally, among the measures presented above, only anti-roll tanks are effective at zero forward speed. This paper proposes and investigates a method to reduce rolling by inducing the Coriolis effect using water-flow in the radial direction. Motion in the radial direction of a rolling structure induces a Coriolis force that, depending on the direction of flow, will either amplify or attenuate the roll. The system is modelled with two degrees of freedom: rotational motion for parametric rolling and radial motion of the water-flow. Equations of motion are derived and investigated, and numerical examples are analyzed in detail. To demonstrate applicability, parameters from a Ro-Ro vessel are used, as extensive research has been conducted on these vessels over the years. The vessel is investigated under free and forced roll conditions. Several models are created using various masses, heights, and velocities of water-flow at a given time. The proposed system was found to produce substantial roll reduction, which increases as any of the above parameters is increased, with velocity having the most significant effect. The proposed system provides a simple approach to reducing ship rolling: the water flows in only one direction with constant velocity, so only the time at which the system is turned on or off needs to be controlled. Furthermore, the proposed system is effective both in forward motion and at zero forward speed, and introduces no hydrodynamic drag. This is a starting point for designing an effective and practical system; for it to be a viable approach, further investigation is needed to address the challenges that present themselves.
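A minimal sketch of the mechanism, assuming a 1-DOF roll equation with an added Coriolis moment from radially flowing water (the paper's full two-DOF model and Ro-Ro parameters are not reproduced; all values are illustrative): the Coriolis moment 2*m_w*v_r*r*phi_dot opposes the roll rate, acting as extra damping that grows with the flow velocity.

```python
# Minimal sketch: free roll decay with a Coriolis moment from radial
# water-flow added as extra damping; all parameter values are assumed.
import numpy as np
from scipy.integrate import solve_ivp

I, c, k = 5e7, 1e6, 5e7   # roll inertia, damping, restoring (assumed, SI)
m_w, r = 2e4, 10.0        # mass of flowing water [kg], mean radius [m]

def roll(t, y, v_r):
    phi, phidot = y
    M_cor = -2.0 * m_w * v_r * r * phidot   # Coriolis moment opposes roll rate
    return [phidot, (-c * phidot - k * phi + M_cor) / I]

y0 = [np.radians(10.0), 0.0]                # 10 deg initial heel, free roll
for v_r in (0.0, 2.0, 5.0):                 # radial flow speed [m/s]
    sol = solve_ivp(roll, (0, 60), y0, args=(v_r,), max_step=0.05)
    peak = np.degrees(np.abs(sol.y[0][sol.t > 30]).max())
    print(f"v_r = {v_r:.0f} m/s -> peak roll after 30 s: {peak:.2f} deg")
```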

Keywords: Coriolis effect, damping, rolling, water-flow

Procedia PDF Downloads 450
169 Control Performance Simulation and Analysis for Microgravity Vibration Isolation System Onboard Chinese Space Station

Authors: Wei Liu, Shuquan Wang, Yang Gao

Abstract:

The Microgravity Science Experiment Rack (MSER) will be onboard the TianHe (TH) spacecraft, planned to be launched in 2018. TH is one module of the Chinese Space Station. The Microgravity Vibration Isolation System (MVIS), which is MSER's core part, is used to isolate disturbances from TH and provide a high level of microgravity for the science experiment payload. MVIS is a two-stage vibration isolation system consisting of a Follow Unit (FU) and an Experiment Support Unit (ESU). The FU is linked to MSER by umbilical cables, and the ESU is suspended within the FU without physical connection. The FU's position and attitude relative to TH are measured by a binocular vision measuring system, and its acceleration and angular velocity are measured by accelerometers and gyroscopes. Air-jet thrusters are used to generate forces and moments to control the FU's motion. The measurement module on the ESU contains a set of Position-Sense-Detectors (PSD) sensing the ESU's position and attitude relative to the FU, and accelerometers and gyroscopes sensing the ESU's acceleration and angular velocity. Electro-magnetic actuators are used to control the ESU's motion. Firstly, the linearized equations of the FU's motion relative to TH and the ESU's motion relative to the FU are derived, laying the foundation for control system design and simulation analysis. Subsequently, two control schemes are proposed. In one scheme, the ESU tracks the FU and the FU tracks TH, shortened as E-F-T; in the other, the FU tracks the ESU and the ESU tracks TH, shortened as F-E-T. In addition, motion spaces are constrained to within ±15 mm and ±2° between FU and ESU, and to within ±300 mm between FU and TH or between ESU and TH. A Proportional-Integral-Derivative (PID) controller is designed to control the FU's position and attitude. The ESU's controller includes an acceleration feedback loop and a relative position feedback loop: a Proportional-Integral (PI) controller in the acceleration feedback loop reduces the ESU's acceleration level, and a PID controller in the relative position feedback loop is used to avoid collision. Finally, simulations of E-F-T and F-E-T are performed considering various uncertainties, disturbances, and motion space constraints. The simulation results of E-F-T showed a control performance from 0 to -20 dB for vibration frequencies from 0.01 to 0.1 Hz, with vibration attenuated by 40 dB per decade above 0.1 Hz. The simulation results of F-E-T showed vibration attenuated by 20 dB per decade starting from 0.01 Hz.
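A minimal sketch of the relative-position PID loop described above: a discrete PID drives a double-integrator stage (a mass moved by thruster force) to hold a setpoint inside the ±15 mm motion space. The gains and the 1 kg mass are illustrative assumptions, not flight parameters.

```python
# Minimal sketch of a discrete PID on a double-integrator stage;
# gains and mass are assumed, not the MVIS flight values.
kp, ki, kd = 4.0, 1.0, 3.0       # PID gains (assumed)
m, dt = 1.0, 0.01                # stage mass [kg], control period [s]
x, v, integ, prev_err = 0.0, 0.0, 0.0, 0.0
setpoint = 0.005                 # hold 5 mm offset, inside the +/-15 mm space

for step in range(int(20.0 / dt)):
    err = setpoint - x
    integ += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integ + kd * deriv   # commanded thruster force [N]
    prev_err = err
    v += (u / m) * dt                        # double-integrator plant
    x += v * dt
print(f"position after 20 s: {x * 1000:.3f} mm (target 5 mm)")
```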

Keywords: microgravity science experiment rack, microgravity vibration isolation system, PID control, vibration isolation performance

Procedia PDF Downloads 161
168 Different Stages for the Creation of Electric Arc Plasma through Slow Rate Current Injection to Single Exploding Wire, by Simulation and Experiment

Authors: Ali Kadivar, Kaveh Niayesh

Abstract:

This work simulates the voltage drop and resistance during the explosion of copper wires with diameters of 25, 40, and 100 µm, surrounded by nitrogen at 1 bar and exposed to a 150 A current, before plasma formation. The absorption of electrical energy in an exploding wire is greatly diminished once the plasma is formed. This study shows the importance of considering radiation and heat conductivity for the accuracy of the circuit simulations. The radiation of the dense plasma formed on the wire surface is modeled with the net emission coefficient (NEC) and combined with heat conductivity through the PLASIMO® software. A time-transient code for analyzing wire explosions driven by a slow current rise rate is developed. It solves a circuit equation coupled with one-dimensional (1D) equations for the copper electrical conductivity as a function of its physical state and NEC radiation. At first, the initial voltage drop over the copper wire, the current, and the temperature distribution at the time of expansion are derived. The experiments have demonstrated that wires remain rather uniform lengthwise during the explosion and can therefore be treated with 1D simulations. Data from the first stage are then used as the initial conditions of the second stage, in which a simplified 1D model for high-Mach-number flows is adopted to describe the expansion of the core. The current is carried by the vaporized wire material before it is dispersed into the nitrogen by the shock wave. In the third stage, using a three-dimensional model of the test bench, the streamer threshold is estimated. The electrical breakdown voltage is calculated without solving a full-blown plasma model by integrating Townsend growth coefficients (TdGC) along electric field lines. The BOLSIG⁺ and LAPLACE databases are used to calculate the TdGC at different mixture ratios of nitrogen and copper vapor. The simulations show that both radiation and heat conductivity should be considered for an adequate description of the wire resistance, and that gaseous discharges start at lower voltages than expected due to ultraviolet radiation and the exploding shocks, which may have ionized the nitrogen.
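The streamer-threshold estimate can be sketched with a Meek-type criterion: integrate the effective Townsend coefficient along a field line and compare against a critical avalanche exponent K of roughly 18. The alpha(E/N) fit and field profile below are crude illustrative stand-ins for the BOLSIG⁺ data and the 3D field the authors use.

```python
# Minimal sketch of a Meek-type streamer criterion along a field line;
# alpha(E/N) fit and geometry are illustrative, not the BOLSIG+ data.
import numpy as np

N = 2.45e25          # nitrogen number density at 1 bar, ~295 K [1/m^3]
K_CRIT = 18.0        # critical avalanche exponent (typical Meek value)

def alpha_eff(E):
    """Crude effective ionisation coefficient [1/m]:
    alpha/N = A * exp(-B / (E/N)), with A, B assumed for an N2-like gas."""
    return N * 2e-20 * np.exp(-7e-19 / (E / N))

a, b = 1e-4, 2e-3                      # wire radius, outer gap [m] (assumed)
x = np.linspace(a, b, 2000)
for V in (2e3, 4e3, 8e3):              # applied voltage [V]
    E = V / (x * np.log(b / a))        # coaxial-geometry field along the line
    K = np.trapz(alpha_eff(E), x)
    print(f"V = {V/1e3:.0f} kV: integral = {K:5.1f} -> "
          f"{'streamer' if K >= K_CRIT else 'no breakdown'}")
```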

Keywords: exploding wire, Townsend breakdown mechanism, streamer, metal vapor, shock waves

Procedia PDF Downloads 89
167 Teaching Linguistic Humour Research Theories: Egyptian Higher Education EFL Literature Classes

Authors: O. F. Elkommos

Abstract:

“Humour studies” is a relatively recent interdisciplinary research area. It interests researchers from the disciplines of psychology, sociology, medicine, nursing, workplace studies, and gender studies, among others, and certainly teaching, language learning, linguistics, and literature. Linguistic theories of humour research are numerous, some of which are of interest to the present study. Although humour courses are now taught in universities around the world, in the Egyptian context they are not included. The purpose of the present study is twofold: to review the state of the art and to show how linguistic theories of humour can be used as an art and craft of teaching and of learning in EFL literature classes. In the present study, linguistic theories of humour were applied to selected literary texts to interpret humour as an intrinsic artistic communicative competence challenge. In linguistics, humour is seen as a fifth component of the communicative competence of the second language learner; in literature, it is studied as satire, irony, wit, or comedy. Linguistic theories of humour now describe its linguistic structure, mechanism, function, and linguistic deviance. The Semantic Script Theory of Humour (SSTH), the General Theory of Verbal Humour (GTVH), the Audience Based Theory of Humour (ABTH), their extensions and subcategories, as well as the pragmatic perspective, were employed in the analyses. This research analysed the linguistic semantic structure of humour, its mechanism, and how the audience reader (teacher or learner) becomes an interactive interpreter of the humour. This promotes humour competence together with linguistic, social, cultural, and discourse communicative competence. Studying humour as part of literary texts, and the perception of its function in the work, also brings its positive associations into class for educational purposes. Humour is by default a laughter-provoking device. Recognizing, perceiving, and resolving incongruity is a cognitive mastery. This cognitive process involves a humour experience that lightens up the classroom and the mind, establishing connections necessary for the learning process. In this context, the study examined selected narratives to exemplify the application of the theories. It is, therefore, recommended that the theories be taught and applied to literary texts for a better understanding of the language; students will then develop their language competence. Teachers in EFL/ESL classes will teach the theories, assist students in applying them to interpret texts, and in the process will also use humour themselves, thus easing students' acquisition of the second language and making the classroom an enjoyable, cheerful, self-assuring, and self-illuminating experience for both themselves and their students. It is further recommended that courses in humour research studies become an integral part of higher education curricula in Egypt.

Keywords: ABTH, deviance, disjuncture, episodic, GTVH, humour competence, humour comprehension, humour in the classroom, humour in the literary texts, humour research linguistic theories, incongruity-resolution, isotopy-disjunction, jab line, longer text joke, narrative story line (macro-micro), punch line, six knowledge resource, SSTH, stacks, strands, teaching linguistics, teaching literature, TEFL, TESL

Procedia PDF Downloads 303
166 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make staying globally competitive more difficult without a shift in focus on how science is taught in US classes. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze, and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is necessary to design resources that enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students' representational competence, the visualization skills necessary to process science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students' visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional Near-Infrared Spectroscopy (fNIR) data. These data were previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of STEM Education scholars from three US universities (NSF award 1540888), using mental rotation tasks to assess student visual literacy. Hemodynamic response data from fNIRsoft were exported as an Excel file, with 80 each of 2D Wedge and Dash models (dash) and 3D Stick and Ball models (BL). Complexity data were stored in an Excel workbook separated by participant (ID), containing information for both types of tasks. After converting strings to numbers for analysis, spreadsheets with measurement data and complexity data were uploaded to RapidMiner's TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of its predictions were accurate. The ANN determined the biggest predictors of a successful mental rotation to be the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, which is important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements with an ANN for analysis, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, and therefore science literacy.
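The modelling step can be sketched with a gradient-boosted-trees classifier configured as in the abstract (140 trees, maximum depth 7); scikit-learn stands in for RapidMiner here, and the feature matrix is synthetic since the Project LENS dataset is not public.

```python
# Minimal sketch of the gradient-boosted-trees step (140 trees, depth 7)
# on synthetic stand-in features; not the Project LENS data or pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.integers(1, 81, n),        # problem number
    rng.normal(3.0, 1.0, n),       # response time [s]
    rng.normal(0.0, 1.0, n),       # optode-16 hemodynamic feature (synthetic)
])
# synthetic label: 'successful rotation' driven by time + optode feature
y = (X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 0.5, n) < 3.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(n_estimators=140, max_depth=7)
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.3f}")
print("feature importances:", np.round(model.feature_importances_, 3))
```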

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 121
165 Characteristics of Plasma Synthetic Jet Actuator in Repetitive Working Mode

Authors: Haohua Zong, Marios Kotsonis

Abstract:

The plasma synthetic jet actuator (PSJA) is a new concept of zero-net-mass-flow actuator that utilizes a pulsed arc/spark discharge to rapidly pressurize gas in a small cavity under constant-volume conditions. The unique combination of high exit jet velocity (>400 m/s) and high actuation frequency (>5 kHz) provides a promising solution for high-speed, high-Reynolds-number flow control. This paper focuses on the performance of the PSJA in repetitive working mode, which is more relevant to future flow control applications. A two-electrode PSJA (cavity volume: 424 mm³, orifice diameter: 2 mm) together with a capacitive discharge circuit (discharge energy: 50 mJ-110 mJ) is designed to enable repetitive operation. A Time-Resolved Particle Image Velocimetry (TR-PIV) system working at 10 kHz is used to investigate the influence of discharge frequency on the performance of the PSJA. In total, seven cases are tested, covering a wide range of discharge frequencies (20 Hz-560 Hz). The pertinent flow features (shock wave, vortex ring, and jet) remain the same for single-shot mode and repetitive working mode. A shock wave is issued prior to jet eruption, and two distinct vortex rings are formed in one cycle: the first is produced by the starting jet, whereas the second is related to the shock wave reflection in the cavity. A sudden pressure rise is induced at the throat inlet by the reflection of the primary shock wave, promoting the shedding of the second vortex ring. In one cycle, the jet exit velocity first increases sharply and then decreases almost linearly; afterwards, multiple jet stages and refresh stages occur alternately. By monitoring the dynamic evolution of the exit velocity in one cycle, some integral performance parameters of the PSJA can be deduced. As frequency increases, the jet intensity in the steady phase decreases monotonically. In the investigated frequency range, the jet duration time drops from 250 µs to 210 µs and the peak jet velocity decreases from 53 m/s to approximately 39 m/s. The jet impulse and the expelled gas mass (0.69 µN∙s and 0.027 mg at 20 Hz) decline by 48% and 40%, respectively. However, the electro-mechanical efficiency of the PSJA, defined as the ratio of jet mechanical energy to capacitor energy, does not show a significant difference (O(0.01%)). Fourier transformation of the temporal exit velocity signal indicates two dominant frequencies: one corresponds to the discharge frequency, while the other accounts for the alternation frequency of the jet stage and refresh stage within one cycle. The alternation period (approximately 300 µs) is independent of the discharge frequency, and is possibly determined intrinsically by the actuator geometry. A simple analytical model is established to interpret the alternation of jet stage and refresh stage. Results show that the dynamic response of the exit velocity to a small-scale disturbance (a jump in cavity pressure) can be treated as a second-order underdamped system. The oscillation frequency of the exit velocity, namely the alternation frequency, is positively proportional to the exit area but inversely proportional to the cavity volume and throat length. The theoretical value of the alternation period (approximately 305 µs) agrees well with the experimental value.
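One way to read the analytical model's scaling (frequency proportional to exit area, inversely proportional to cavity volume and throat length) is as a Helmholtz resonator, f = (c / 2π) √(A / (V L)). That identification is our reading, not stated in the abstract, and the throat length and cavity sound speed below are assumed values; with plausible numbers the period lands near the reported ~300 µs.

```python
# Minimal sketch: Helmholtz-resonator reading of the alternation frequency;
# throat length and cavity sound speed are assumed, V and d from the abstract.
import numpy as np

c = 600.0                  # speed of sound in the hot cavity gas [m/s] (assumed)
V = 424e-9                 # cavity volume [m^3] (abstract)
d = 2e-3                   # orifice diameter [m] (abstract)
L = 3e-3                   # throat length [m] (assumed)
A = np.pi * (d / 2.0) ** 2 # exit area [m^2]

f = c / (2.0 * np.pi) * np.sqrt(A / (V * L))
print(f"resonance frequency ~ {f/1e3:.1f} kHz, period ~ {1e6/f:.0f} us")
```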

Keywords: plasma, synthetic jet, actuator, frequency effect

Procedia PDF Downloads 254
164 Towards Sustainable Concrete: Maturity Method to Evaluate the Effect of Curing Conditions on the Strength Development in Concrete Structures under Kuwait Environmental Conditions

Authors: F. Al-Fahad, J. Chakkamalayath, A. Al-Aibani

Abstract:

Conventional determination of concrete strength under controlled laboratory conditions does not accurately represent the actual strength of concrete developed under site curing conditions. This difference in strength measurement is greater in the extreme environment of Kuwait, which is characterized by a hot marine climate with summer temperatures exceeding 50°C, accompanied by dry wind in desert areas and salt-laden wind in marine and onshore areas. Therefore, test methods are required to measure the in-place properties of concrete, both for quality assurance and for the development of durable concrete structures. The maturity method, which expresses the strength of a given concrete mix as a function of its age and temperature history, is an approach to quality control for the production of sustainable and durable concrete structures. The unique harsh environmental conditions in Kuwait make it impractical to adopt experiences and empirical equations developed from maturity methods in other countries. Concrete curing, especially at early ages, plays an important role in developing and improving the strength of the structure. This paper investigates the use of the maturity method to assess the effectiveness of three different curing methods on the compressive and flexural strength development of one high-strength concrete mix of 60 MPa produced with silica fume. The maturity approach was used to accurately predict the concrete compressive and flexural strengths at later ages under different curing conditions. Maturity curves were developed for compressive and flexural strengths for a concrete mix commonly used in Kuwait, cured under three different conditions: water curing, external spray coating, and the use of an internal curing compound during concrete mixing. It was observed that the maturity curve developed for the same mix depends on the type of curing condition and can be used to predict the concrete strength under different exposure and curing conditions. This study showed that the external spray curing method cannot be recommended, as it failed to bring the concrete to accepted strength values, especially for flexural strength. Using an internal curing compound led to accepted strength levels when compared with water curing. Utilization of the developed maturity curves will help contractors and engineers determine the in-place concrete strength at any time and under different curing conditions. This will help in deciding the appropriate time to remove formwork. The resulting reduction in construction time and cost has positive impacts on sustainable construction.
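
The maturity concept can be illustrated with the classical Nurse-Saul temperature-time factor (as standardized, for example, in ASTM C1074). The Python sketch below is a generic illustration with an assumed datum temperature and a synthetic temperature history, not the calibration developed in this study.

    # Nurse-Saul maturity index: M(t) = sum((T_a - T_0) * dt) over the curing
    # history, where T_a is the average concrete temperature in an interval and
    # T_0 is the datum temperature below which strength gain is assumed to stop.
    T0 = 0.0          # datum temperature in deg C (assumed)
    dt_hours = 1.0    # logging interval

    # Synthetic hourly temperature record of in-place concrete (illustrative only).
    temperatures = [45, 44, 42, 40, 38, 36, 35, 34, 33, 32, 31, 30]

    maturity = sum(max(T - T0, 0.0) * dt_hours for T in temperatures)
    print(f"temperature-time factor: {maturity:.0f} deg C-hours")

    # Strength is then read from a mix-specific maturity curve, e.g. a fitted
    # strength-vs-log(maturity) relation calibrated under each curing condition.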

Keywords: curing, durability, maturity, strength

Procedia PDF Downloads 306
163 A One-Dimensional Model for Contraction in Burn Wounds: A Sensitivity Analysis and a Feasibility Study

Authors: Ginger Egberts, Fred Vermolen, Paul van Zuijlen

Abstract:

One of the common complications of post-burn scars is contraction. Depending on the extent of contraction and the wound dimensions, the contracture can cause a limited range of motion of joints. A one-dimensional morphoelastic continuum model describing post-burn scar contraction is considered. The advantage of the one-dimensional model is its speed; it quickly yields new results and, therefore, insight. The model describes the movement of the skin and the development of the strain present. Besides these mechanical components, the model also contains chemical components that play a major role in the wound healing process: fibroblasts, myofibroblasts, the so-called signaling molecules, and collagen. The dermal layer is modeled as an isotropic morphoelastic solid, and pulling forces are generated by myofibroblasts. The solution to the model equations is approximated by the finite-element method using linear basis functions. One of the major challenges in biomechanical modeling is the estimation of parameter values; therefore, this study provides a comprehensive description of skin mechanical parameter values and a sensitivity analysis. Further, since skin mechanical properties change with aging, it is important that the model be feasible for predicting the development of contraction in burn patients of different ages, and hence this study also provides a feasibility study. The variability in the solutions is produced by varying the values of some parameters simultaneously over the domain of computation, for which the results of the sensitivity analysis are used. The sensitivity analysis shows that the most sensitive parameters are the equilibrium concentration of collagen, the apoptosis rate of fibroblasts and myofibroblasts, and the secretion rate of signaling molecules. This suggests that most of the variability in the evolution of contraction in burns in patients of different ages might be caused by the decreasing equilibrium collagen concentration. As expected, the feasibility study shows that this model can reproduce distinct extents of contraction in burns in patients of different ages. Nevertheless, contraction formation in children differs from that in adults because of growth. This factor has not yet been incorporated in the model, and therefore the feasibility results for children differ from what is seen in the clinic.
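
A one-at-a-time sensitivity analysis of the kind reported here can be sketched as follows in Python; the toy contraction function and the parameter names are placeholders for illustration, not the authors' morphoelastic equations.

    def contraction(params):
        # Placeholder output: a toy scalar "relative surface area reduction"
        # standing in for the full morphoelastic finite-element solution.
        c_eq, apoptosis, secretion = params["c_eq"], params["apoptosis"], params["secretion"]
        return secretion / (c_eq * (1.0 + apoptosis))

    baseline = {"c_eq": 1.0, "apoptosis": 0.5, "secretion": 2.0}
    y0 = contraction(baseline)

    # Perturb each parameter by +10% and record the normalized sensitivity
    # S_p = (dy/y0) / (dp/p0), a standard one-at-a-time measure.
    for name in baseline:
        perturbed = dict(baseline)
        perturbed[name] *= 1.10
        S = ((contraction(perturbed) - y0) / y0) / 0.10
        print(f"{name}: normalized sensitivity {S:+.2f}")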

Keywords: biomechanics, burns, feasibility, fibroblasts, morphoelasticity, sensitivity analysis, skin mechanics, wound contraction

Procedia PDF Downloads 160
162 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Partitioned Solution Approach and an Exponential Model

Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino

Abstract:

The solution of the nonlinear dynamic equilibrium equations of base-isolated structures by a conventional monolithic approach, i.e., an implicit single-step time integration method employed with an iteration procedure, and the use of existing nonlinear analytical models, such as differential equation models, to simulate the dynamic behavior of seismic isolators can require significant computational effort. In order to reduce the numerical computations, a partitioned solution method and a one-dimensional nonlinear analytical model are presented in this paper. A partitioned solution approach can be easily applied to base-isolated structures in which the base isolation system is much more flexible than the superstructure. Thus, in this work, the explicit, conditionally stable central difference method is used to evaluate the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method is adopted to predict the linear response of the superstructure, with the benefit of avoiding iterations within each time step of a nonlinear dynamic analysis. The proposed mathematical model is able to simulate the dynamic behavior of seismic isolators without requiring the solution of a nonlinear differential equation, as in the case of the widely used differential equation models. The proposed mixed explicit-implicit time integration method and nonlinear exponential model are adopted to analyze a three-dimensional seismically isolated structure with a lead rubber bearing system subjected to earthquake excitation. The numerical results show the good accuracy and the significant computational efficiency of the proposed solution approach and analytical model compared to the conventional solution method and mathematical model considered in this work. Furthermore, the low stiffness of the base isolation system with lead rubber bearings allows a critical time step considerably larger than the imposed ground acceleration time step, thus avoiding stability problems in the proposed mixed method.
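
The explicit half of the scheme can be illustrated on a toy single-degree-of-freedom isolator. The Python sketch below applies the central difference method to an assumed exponential-type restoring force; the force law, masses, and ground motion are only stand-ins, not the paper's exponential model or test structure.

    import numpy as np

    # Toy SDOF isolator: m*u'' + c*u' + f(u) = -m*ag(t), integrated with the
    # explicit central difference method (conditionally stable: dt < 2/w_max).
    m, c = 1.0e5, 2.0e4      # mass (kg) and damping (N*s/m), illustrative
    k0, uy = 2.0e7, 0.05     # initial stiffness (N/m) and reference displacement (m)

    def f_restoring(u):
        # Assumed smooth exponential-type softening law (illustrative only).
        return k0 * uy * np.sign(u) * (1.0 - np.exp(-abs(u) / uy))

    dt, n = 1.0e-3, 5000
    ag = 0.3 * 9.81 * np.sin(2 * np.pi * 1.0 * np.arange(n) * dt)  # toy ground motion

    u = np.zeros(n)           # start from rest: u[0] = u[1] = 0
    for i in range(1, n - 1):
        v = (u[i] - u[i - 1]) / dt   # backward-difference velocity estimate
        acc = (-m * ag[i] - c * v - f_restoring(u[i])) / m
        u[i + 1] = 2 * u[i] - u[i - 1] + acc * dt**2

    print(f"peak isolator displacement: {np.max(np.abs(u))*1000:.1f} mm")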

Keywords: base-isolated structures, earthquake engineering, mixed time integration, nonlinear exponential model

Procedia PDF Downloads 281
161 Geochemistry and Tectonic Framework of Malani Igneous Suite and Their Effect on Groundwater Quality of Tosham, India

Authors: Naresh Kumar, Savita Kumari, Naresh Kochhar

Abstract:

The objective of the study was to assess the role of mineralogy and subsurface structure on the water quality of Tosham, Malani Igneous Suite (MIS), Western Rajasthan, India. The MIS is the largest (55,000 km²) A-type, anorogenic, high-heat-producing acid magmatism in peninsular India and owes its origin to hotspot tectonics. Apart from agricultural and industrial wastes, geogenic activity causes fluctuations in the quality parameters of water resources. Twenty (20) water samples collected from Tosham and surrounding areas were analyzed for As, Pb, B, Al, Zn, Fe, and Ni using inductively coupled plasma emission spectrometry, and for F by ion chromatography. The concentrations of As, Pb, B, Ni, and F were above the stipulated levels specified by the BIS (Bureau of Indian Standards, IS-10500, 2012). The concentrations of As and Pb in the areas surrounding Tosham ranged from 1.2 to 4.1 mg/l and from 0.59 to 0.9 mg/l, respectively, which are higher than the limits of 0.05 mg/l (As) and 0.01 mg/l (Pb). Excess trace metal accumulation in water is toxic to humans and adversely affects the central nervous system, kidneys, gastrointestinal tract, and skin, and causes mental confusion. Groundwater quality is defined by the nature of the rock formation, mineral-water reactions, physiography, soils, environment, and the recharge and discharge conditions of the area. The fluoride content in groundwater is due to the solubility of fluoride-bearing minerals like fluorite, cryolite, topaz, and mica. Tosham is comprised of quartz mica schist, quartzite, schorl, tuff, quartz porphyry, and associated granites; thus, fluoride is leached out and dissolved in groundwater. In the study area, the Ni concentration ranged from 0.07 to 0.5 mg/l (permissible limit 0.02 mg/l). The primary source of nickel in drinking water is nickel leached from ore-bearing rocks. Higher concentrations of As are found in some igneous rocks, specifically those containing minerals such as arsenopyrite (AsFeS), realgar (AsS), and orpiment (As₂S₃). The MIS consists of granite (hypersolvus and subsolvus), rhyolite, dacite, trachyte, andesite, pyroclasts, basalt, gabbro, and dolerite, which increase the trace element concentrations in groundwater. Nakora, a part of the MIS, has high concentrations of trace and rare earth elements (Ni, Rb, Pb, Sr, Y, Zr, Th, U, La, Ce, Nd, Eu, and Yb), from which Ni and Pb percolate into groundwater through weathering, contacts, and joints/fractures in the rocks. Additionally, the geological setting of the MIS causes dissolution of trace elements in water resources beneath the surface. The NE–SW tectonic lineament, the radial pattern of dykes, and the volcanic vent at Nakora created pathways for the leaching of these elements into groundwater. Rainwater quality might be altered by the major mineral constituents of the host Tosham rocks during percolation through rock fractures and joints before becoming an integral part of the groundwater aquifer. Weathering processes such as hydration, hydrolysis, and solution might be the cause of the change in water chemistry of the area. These studies suggest that the geological relation of the soil-water horizon with MIS rocks, via mineralogical variations, structures, and tectonic setting, affects the water quality of the studied area.

Keywords: geochemistry, groundwater, malani igneous suite, tosham

Procedia PDF Downloads 219
160 Comparing Community Health Agents, Physicians and Nurses in Brazil's Family Health Strategy

Authors: Rahbel Rahman, Rogério Meireles Pinto, Margareth Santos Zanchetta

Abstract:

Background: Existing shortcomings of current health-service delivery include poor teamwork, competencies that do not address consumer needs, and episodic rather than continuous care. Brazil’s Sistema Único de Saúde (Unified Health System, UHS) is acknowledged worldwide as a model for delivering community-based care through Estratégia Saúde da Família (FHS; Family Health Strategy) interdisciplinary teams, comprised of Community Health Agents (in Portuguese, Agentes Comunitários de Saúde, ACS), nurses, and physicians. FHS teams are mandated to collectively offer clinical care, disease prevention services, vector control, health surveillance, and social services. Our study compares medical providers (nurses and physicians) and community-based providers (ACS) on their perceptions of work environment, professional skills, cognitive capacities, and job context. Global health administrators and policy makers can draw on similarities and differences across care providers to develop interprofessional training for community-based primary care. Methods: Cross-sectional data were collected from 168 ACS, 62 nurses, and 32 physicians in Brazil. We compared providers’ demographic characteristics (age, race, and gender) and job context variables (caseload, work experience, work proximity to community, length of commute, and familiarity with the community). Providers’ perceptions were compared regarding their work environment (work conditions and work resources), professional skills (consumer input, interdisciplinary collaboration, efficacy of FHS teams, work methods, and decision-making autonomy), and cognitive capacities (knowledge and skills, skill variety, confidence, and perseverance). Descriptive and bivariate analyses, such as Pearson chi-square tests and Analysis of Variance (ANOVA) F-tests, were performed to draw comparisons across providers. Results: The majority of participants were ACS (64%), followed by nurses (24%) and physicians (12%). The majority of nurses and ACS identified as mixed race (ACS, n=85; nurses, n=27); most physicians identified as male (n=16; 52%) and white (n=18; 58%). Physicians were less likely to incorporate consumer input and demonstrated greater decision-making autonomy than nurses and ACS. ACS reported the highest levels of knowledge and skills but the least confidence compared to nurses and physicians. ACS, nurses, and physicians agreed that FHS teams improved the quality of health in their catchment areas, though nurses tended to disagree that interdisciplinary collaboration facilitated their work. Conclusion: To our knowledge, no study has compared key demographic and cognitive variables across ACS, nurses, and physicians in the context of their work environment and professional training. We suggest that global health systems can leverage the diverse perspectives of providers to implement a community-based primary care model grounded in interprofessional training. Our study underscores the need for in-service trainings that instill reflective skills in providers, improve the communication skills of medical providers, and strengthen the curative skills of ACS. Greater autonomy needs to be extended to community-based providers to offer care integral to addressing consumer and community needs.

Keywords: global health systems, interdisciplinary health teams, community health agents, community-based care

Procedia PDF Downloads 234
159 Laminar Periodic Vortex Shedding over a Square Cylinder in Pseudoplastic Fluid Flow

Authors: Shubham Kumar, Chaitanya Goswami, Sudipto Sarkar

Abstract:

Pseudoplastic fluid flow (n < 1, n being the power-law index) can be found in the food, pharmaceutical, and process industries and has a very complex flow nature. To our knowledge, little research has been done on this kind of flow, even at very low Reynolds numbers. In the present computation, we consider unsteady laminar flow over a square cylinder in a pseudoplastic flow environment. For Newtonian fluid flow, the laminar vortex shedding range lies between Re = 47 and 180. In this problem, we consider Re = 100 (Re = U∞a/ν, where U∞ is the free stream velocity, a is the side of the cylinder, and ν is the kinematic viscosity of the fluid). The pseudoplastic fluid range has been chosen from close to Newtonian (n = 0.8) to very high pseudoplasticity (n = 0.1). The flow domain is constructed using Gambit 2.2.30, and this software is also used to generate the mesh and to impose the boundary conditions. In all cases, the domain size is 36a × 16a, with 280 × 192 grid points in the streamwise and flow-normal directions, respectively. The domain and the grid points were selected after a thorough grid independence study at n = 1.0. Fine, equal grid spacing is used close to the square cylinder to capture the upper and lower shear layers shed from the cylinder; away from the cylinder, the grid is unequal in size and stretched in all directions. A velocity inlet (u = U∞), a pressure outlet (Neumann condition), and symmetry conditions (free-slip: du/dy = 0, v = 0) at the upper and lower domain boundaries are used for this simulation, with a wall boundary (u = v = 0) on the square cylinder surface. The fully conservative 2-D unsteady Navier-Stokes equations are discretized and then solved by Ansys Fluent 14.5 to understand the flow nature. The SIMPLE algorithm, a finite volume pressure-velocity coupling scheme and the default solver in Fluent, is selected for this purpose. The results obtained for Newtonian fluid flow agree well with previous work, supporting Fluent’s usefulness in academic research. A detailed analysis of the instantaneous and time-averaged flow fields is presented for both Newtonian and pseudoplastic fluid flow. It has been observed that the drag coefficient increases continuously as n is reduced. Also, the vortex shedding phenomenon changes at n = 0.4 due to flow instability. These are some of the remarkable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
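
The pseudoplastic rheology underlying such simulations is typically the Ostwald-de Waele power-law model, in which the apparent viscosity is eta = K * gamma_dot^(n - 1). The short Python sketch below illustrates how the apparent viscosity falls with shear rate as n decreases; the consistency index K and the shear rates are illustrative values, not the paper's settings.

    # Ostwald-de Waele power-law fluid: eta = K * gamma_dot**(n - 1).
    # For n < 1 (pseudoplastic), eta decreases as the shear rate increases.
    K = 1.0                                   # consistency index (illustrative)

    for n in (1.0, 0.8, 0.4, 0.1):            # power-law indices studied in the paper
        for gamma_dot in (0.1, 1.0, 10.0):    # shear rates (1/s), illustrative
            eta = K * gamma_dot ** (n - 1.0)
            print(f"n={n:3.1f}  gamma_dot={gamma_dot:5.1f}  eta={eta:8.3f}")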

Keywords: Ansys Fluent, CFD, periodic vortex shedding, pseudoplastic fluid flow

Procedia PDF Downloads 207
158 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Primary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

Finding algorithms to predict the growth of tumors has piqued the interest of researchers ever since the early days of cancer research. A number of studies have been carried out in an attempt to obtain reliable data on the natural history of breast cancer growth, and mathematical modeling can play a very important role in the prognosis of the tumor process in breast cancer. However, existing mathematical models describe primary tumor growth and metastases growth separately. Consequently, we propose a mathematical growth model for the primary tumor and primary metastases which may help to improve the predictive accuracy of breast cancer progression, using an original mathematical model referred to as CoM-IV and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and primary metastases; 2) developing an adequate and precise CoM-IV which reflects the relations between the primary tumor (PT) and metastases (MTS); 3) analyzing the CoM-IV scope of application; 4) implementing the model as a software tool. The CoM-IV is based on an exponential tumor growth model, consists of a system of determinate nonlinear and linear equations, and corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and primary metastases: 1) the ‘non-visible period’ for the primary tumor; 2) the ‘non-visible period’ for primary metastases; 3) the ‘visible period’ for primary metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes its forecast using only current patient data, while the others rely on additional statistical data. Thus, the CoM-IV model and predictive software: a) detect different growth periods of the primary tumor and primary metastases; b) forecast the period of primary metastases appearance; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of breast cancer survival and facilitate the optimization of diagnostic tests. The following are calculated by CoM-IV: the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of primary metastases, and the tumor volume doubling time (days) for the ‘non-visible’ and ‘visible’ growth periods of primary metastases. The CoM-IV enables, for the first time, prediction of the whole natural history of primary tumor and primary metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) CoM-IV correctly describes primary tumor and primary distant metastases growth of stage IV (T1-4N0-3M1) disease, with (N1-3) or without (N0) regional metastases in lymph nodes; b) it facilitates the understanding of the appearance period and manifestation of primary metastases.
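
The exponential-growth bookkeeping behind such models can be sketched as follows in Python; the tumor diameters and the doubling time are illustrative numbers, not parameters fitted by CoM-IV.

    import math

    # Exponential tumor growth: V(t) = V0 * 2**(t / DT), with DT the volume
    # doubling time. The number of doublings between two volumes is log2(V1/V0).
    def sphere_volume(diameter_mm):
        return math.pi / 6.0 * diameter_mm ** 3

    d0, d1 = 1.0, 10.0   # diameters in mm: small cluster -> detectable tumor (illustrative)
    DT_days = 100.0      # volume doubling time in days (illustrative)

    doublings = math.log2(sphere_volume(d1) / sphere_volume(d0))
    growth_period_days = doublings * DT_days

    print(f"doublings: {doublings:.1f}")   # ~10 doublings for a 10x diameter increase
    print(f"'non-visible' growth period: {growth_period_days:.0f} days")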

Keywords: breast cancer, exponential growth model, mathematical modelling, primary metastases, primary tumor, survival

Procedia PDF Downloads 335
157 Development of Market Penetration for High Energy Efficiency Technologies in Alberta’s Residential Sector

Authors: Saeidreza Radpour, Md. Alam Mondal, Amit Kumar

Abstract:

Market penetration of high-energy-efficiency technologies has key impacts on energy consumption and GHG mitigation, and it is useful for managing the policies formulated by public or private organizations to achieve energy or environmental targets. Energy intensity in Alberta’s residential sector was 148.8 GJ per household in 2012, 39% more than the Canadian average of 106.6 GJ and the highest per-household energy consumption among the provinces. Appliance energy intensity in Alberta was 15.3 GJ per household in 2012, 14% higher than the average appliance energy demand intensity of the other provinces and territories in Canada. In this research, a framework has been developed to analyze the market penetration and market share of high-energy-efficiency technologies in the residential sector. The overall methodology was based on the development of data-intensive models for estimating the market penetration of appliances in the residential sector over a time period. The developed models are functions of a number of macroeconomic and technical parameters, and the mathematical equations were developed based on twenty-two years of historical data (1990-2011). The models were analyzed through a series of statistical tests. The market shares of high-efficiency appliances were estimated based on related variables such as capital and operating costs, discount rate, appliance lifetime, annual interest rate, incentives, and maximum achievable efficiency for the period 2015 to 2050. Results show that the market penetration of refrigerators is higher than that of other appliances. The stock of refrigerators per household is anticipated to increase from 1.28 in 2012 to 1.314 in 2030 and 1.328 in 2050. Modelling results show that the market penetration rate of stand-alone freezers will decrease between 2012 and 2050; freezer stock per household will decline from 0.634 in 2012 to 0.556 in 2030 and 0.515 in 2050. The stock of dishwashers per household is expected to increase from 0.761 in 2012 to 0.865 in 2030 and 0.960 in 2050. The increases in the market penetration rates of clothes washers and clothes dryers are nearly parallel: the stocks of clothes washers and clothes dryers per household are expected to rise from 0.893 and 0.979 in 2012 to 0.960 and 1.0 in 2050, respectively. The presentation will include a detailed discussion of the modelling methodology and results.
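
One common way to turn the capital cost, operating cost, discount rate, and lifetime variables mentioned above into a market share is an annualized life-cycle cost comparison with a logit choice rule. The Python sketch below is a generic illustration of that approach with made-up costs; it is not the authors' calibrated model.

    import math

    def annualized_cost(capital, operating, rate, lifetime_years):
        # Capital recovery factor spreads the capital cost over the lifetime.
        crf = rate * (1 + rate) ** lifetime_years / ((1 + rate) ** lifetime_years - 1)
        return capital * crf + operating

    # Two competing options: standard vs. high-efficiency appliance (illustrative costs).
    standard = annualized_cost(capital=800.0, operating=120.0, rate=0.06, lifetime_years=15)
    efficient = annualized_cost(capital=1100.0, operating=70.0, rate=0.06, lifetime_years=15)

    # Logit market-share rule: a cheaper annualized cost attracts a larger share.
    beta = 0.05  # cost-sensitivity parameter (assumed)
    share_eff = math.exp(-beta * efficient) / (
        math.exp(-beta * efficient) + math.exp(-beta * standard)
    )
    print(f"high-efficiency market share: {share_eff:.1%}")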

Keywords: appliances efficiency improvement, energy star, market penetration, residential sector

Procedia PDF Downloads 288
156 Surge in U. S. Citizens Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale

Authors: Marco Sewald

Abstract:

Compared to the past, the number of Americans renouncing U.S. citizenship has risen. Even though these numbers are small compared to immigration, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the published lists and numbers from the U.S. government seem incomplete, with many renunciants not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how big the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. While there is no single explanation, anecdotal evidence suggests this uptick is caused by global tax law and the increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project, the question arose as to why a constantly growing number of U.S. citizens are expatriating; the answers are believed to help explain the underlying governmental policy rationale leading to such activities. Since it is impossible to locate former U.S. citizens to conduct a survey on their reasons, and the U.S. government does not comment on the reasons given within the process of expatriation, the chosen methodology is Structural Equation Modeling (SEM), in the first step re-using recent surveys conducted by different researchers within the population of U.S. citizens residing abroad: surveys questioning the personal situation in the context of tax, compliance, citizenship, and the likelihood of repatriating to the U.S. In general, SEM allows: (1) representing, estimating, and validating a theoretical model with linear (unidirectional or not) relationships; (2) modeling causal relationships between multiple predictors (exogenous variables) and multiple dependent variables (endogenous variables); (3) including unobservable latent variables; (4) modeling measurement error, i.e., the degree to which the observable variables describe the latent variables. Moreover, SEM is appealing since the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of each construct are caused by various latent variables. The given surveys delivered a high correlation, and it is therefore impossible to identify the distinct effect of each indicator on the latent variable, which was one desired result. Since every SEM comprises two parts, (1) a measurement model (outer model) and (2) a structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model and obtain the desired results.
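
As an illustration of the two-part SEM structure described above, the sketch below specifies a toy measurement and structural model in Python with the semopy package (one of several SEM tools). The latent constructs, indicator names, and data file are hypothetical, not the surveys used in this study.

    import pandas as pd
    from semopy import Model

    # Hypothetical specification in lavaan-style syntax:
    # two latent constructs, each measured by three survey items,
    # plus one structural (inner-model) regression between them.
    desc = """
    ComplianceBurden =~ item_tax1 + item_tax2 + item_tax3
    ExpatIntent =~ item_exp1 + item_exp2 + item_exp3
    ExpatIntent ~ ComplianceBurden
    """

    df = pd.read_csv("expat_survey.csv")  # placeholder survey data file
    model = Model(desc)
    model.fit(df)
    print(model.inspect())  # loadings, path coefficient, and estimated errors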

Keywords: expatriation of U. S. citizens, SEM, structural equation modeling, validating

Procedia PDF Downloads 222
155 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics

Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic

Abstract:

Lake Victoria is the second largest fresh water body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of the shallow (40-80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with recent, more extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual precipitation, evaporation, inflow, and outflow data were applied in a fifty-year simulation. It should be noted that the water balance is dominated by rain and evaporation, and the model simulations were validated with Matlab and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to mean water level, except for a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical hydrodynamic model can evaluate the effects of wind stress exerted on the lake surface, which impacts the lake water level. The model can also evaluate the effects of expected climate change, as manifested in changes to rainfall over the catchment area of Lake Victoria in the future.
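
The lake-level bookkeeping described above, with rain and evaporation dominating and a single outflow following a linear control law in mean water level, can be sketched with a simple lumped water balance in Python. All the numbers below except the lake surface area are illustrative assumptions, not the calibrated COMSOL model.

    # Simple lumped water balance for a lake: dV/dt = A*(P - E) + Q_in - Q_out(h),
    # with the single outflow modeled as a linear control law in mean water level h.
    A = 68_800e6                  # lake surface area in m^2 (68,800 km^2)
    h = 40.0                      # initial mean water level in m (illustrative)
    h_ref, k_out = 40.0, 5.0e9    # reference level and outflow gain (assumed)

    P, E = 1.8, 1.7               # annual rain and evaporation in m/yr (near balance)
    Q_in = 2.0e10                 # total river inflow in m^3/yr (illustrative)

    for year in range(50):
        Q_out = max(Q_in + k_out * (h - h_ref), 0.0)  # linear law around equilibrium
        dV = A * (P - E) + Q_in - Q_out               # volume change over one year
        h += dV / A
    print(f"mean water level after 50 years: {h:.2f} m")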

Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress

Procedia PDF Downloads 227
154 Forecasting Regional Data Using Spatial VARs

Authors: Taisiia Gorshkova

Abstract:

Since the 1980s, spatial correlation models have been used increasingly often to model regional indicators. An increasingly popular method for studying regional indicators is modeling that takes into account spatial relationships between objects belonging to the same economic zone. In the 2000s, a new class of models, spatial vector autoregressions, was developed. The main difference between standard and spatial vector autoregressions is that in a spatial VAR (SpVAR), the values of indicators at time t may depend on the values of explanatory variables at the same time t in neighboring regions and on the values of explanatory variables at time t-k in neighboring regions. Thus, the VAR is a special case of the SpVAR in the absence of spatial lags, and the spatial panel data model is a special case of the spatial VAR in the absence of time lags. Two specifications of the SpVAR were applied to Russian regional data for 2000-2017. The values of GRP and the regional CPI are used as endogenous variables; the lags of GRP, CPI, and the unemployment rate are used as explanatory variables. For comparison purposes, a standard VAR without spatial correlation was used as a “naïve” model. In the first specification of the SpVAR, the unemployment rate and the values of the dependent variables, GRP and CPI, in neighboring regions at the same moment t were included in the equations for GRP and CPI, respectively. To account for the values of indicators in neighboring regions, an adjacency weight matrix is used, in which regions with a common sea or land border are assigned a value of 1 and the rest a value of 0. In the second specification, the values of the dependent variables in neighboring regions at time t were replaced by their values at the previous time t-1. According to the results obtained, when the inflation and GRP of neighbors are added to the model, both inflation and GRP are significantly affected by their own previous values; inflation is also positively affected by an increase in unemployment in the previous period and negatively affected by an increase in GRP in the previous period, which corresponds to economic theory. GRP is affected by neither the inflation lag nor the unemployment lag. When the model takes into account lagged values of GRP and inflation in neighboring regions, the results of the inflation modeling are practically unchanged: all indicators except the unemployment lag are significant at the 5% significance level. For GRP, in turn, the GRP lags in neighboring regions also become significant at the 5% significance level. The RMSE was calculated for both the spatial and “naïve” VARs; the minimum RMSE is obtained with the SpVAR with lagged explanatory variables. Thus, according to the results of the study, it can be concluded that SpVARs can accurately model both the actual values of macro indicators (particularly CPI and GRP) and the general situation in the regions.
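
The second SpVAR specification (own lags plus spatially lagged neighbors' values at t-1) reduces to a least-squares regression once the spatial lag Wy is constructed. The Python sketch below with NumPy illustrates this on synthetic data for a handful of regions; the weight matrix and the series are made up, not the Russian regional data used in the study.

    import numpy as np

    rng = np.random.default_rng(0)
    T, R = 18, 4                       # years and regions (synthetic, e.g. 2000-2017)

    # Binary adjacency matrix (1 = common border), row-normalized as is standard.
    W = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    W /= W.sum(axis=1, keepdims=True)

    y = rng.normal(size=(T, R))        # synthetic regional indicator, e.g. GRP growth

    # SpVAR(1) equation for each region i:
    #   y[t, i] = a + b * y[t-1, i] + c * (W @ y[t-1])[i] + e
    Y = y[1:].ravel()                              # stacked dependent variable
    own_lag = y[:-1].ravel()                       # own value at t-1
    spatial_lag = (y[:-1] @ W.T).ravel()           # neighbors' average value at t-1
    X = np.column_stack([np.ones_like(Y), own_lag, spatial_lag])

    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    print("intercept, own lag, spatial lag:", np.round(coef, 3))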

Keywords: forecasting, regional data, spatial econometrics, vector autoregression

Procedia PDF Downloads 143