Search results for: dummy injury assessment reference values
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14777


1007 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs

Authors: M. De Filippo, J. S. Kuang

Abstract:

In the construction industry, reinforced concrete (RC) slabs are fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early decades of the last century, the yield-line method was proposed to solve this problem; simple geometries could be solved by traditional hand analyses based on plasticity theory. Nowadays, advanced finite element (FE) analyses have found their way into many engineering fields owing to the wide range of geometries to which they can be applied. In such cases, the choice between an elastic and a plastic constitutive model completely changes the approach of the analysis. Elastic methods are popular because they are easily automated; however, elastic analyses are limited, since they do not consider any aspect of material behaviour beyond the yield limit, which is an essential aspect of RC structural performance. Non-linear analyses that model plastic behaviour, by contrast, give very reliable results; per contra, this type of analysis is computationally quite expensive, i.e. not well suited for solving everyday engineering problems. In recent years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper proposes a numerical procedure through which a pseudo-lower-bound solution, not violating the yield criterion, is achieved. The advantages of moment distribution are exploited, so the increase in strength provided by plastic behaviour is taken into account. The lower-bound solution is improved by detecting over-yielded moments, which are used to artificially govern the moment distribution among the remaining non-yielded elements. The proposed technique obeys Nielsen's yield criterion.
The outcome of this analysis is a simple, accurate, and fast tool for predicting the lower-bound solution of the collapse load of RC slabs. With this method, structural engineers can find the fracture patterns and the ultimate load-bearing capacity. The collapse-triggering mechanism is found by detecting yield-lines. An application to the simple case of a square clamped slab is shown, and a good match with the exact collapse-load values was found.
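The redistribution step described above, clipping over-yielded moments to capacity and sharing the excess among not-yet-yielded elements, can be sketched numerically. This is a minimal illustration under assumptions, not the paper's actual algorithm: the proportional sharing rule, the element discretization, and all numerical values are invented for demonstration.

```python
import numpy as np

def redistribute_moments(m_elastic, m_yield, tol=1e-9, max_iter=100):
    """Hypothetical pseudo-lower-bound redistribution sketch: clip
    over-yielded moments to capacity and share the excess among
    elements still below yield, in proportion to their spare capacity."""
    m = np.asarray(m_elastic, dtype=float).copy()
    cap = np.asarray(m_yield, dtype=float)
    for _ in range(max_iter):
        excess = np.clip(m - cap, 0.0, None)   # over-yielded part of each element
        if excess.sum() < tol:
            break                              # yield criterion satisfied everywhere
        m = np.minimum(m, cap)                 # enforce the yield criterion
        free = cap - m                         # spare capacity of non-yielded elements
        if free.sum() < tol:
            break                              # fully plastic: a mechanism has formed
        m += excess.sum() * free / free.sum()  # redistribute excess proportionally
    return m

# Three-element toy example: one element over-yields, excess flows to the others.
m = redistribute_moments([12.0, 4.0, 3.0], [8.0, 8.0, 8.0])
```

The loop conserves the total moment while driving every element onto or below its yield surface, mimicking how plastic redistribution raises the admissible load above the elastic estimate.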

Keywords: computational mechanics, lower bound method, reinforced concrete slabs, yield-line

Procedia PDF Downloads 179
1006 Nurture Early for Optimal Nutrition: A Community-Based Randomized Controlled Trial to Improve Infant Feeding and Care Practices Using Participatory Learning and Actions Approach

Authors: Priyanka Patil, Logan Manikam

Abstract:

Background: The first 1000 days of life are a critical window in which inadequate nutrition can result in adverse health consequences. South-Asian (SA) communities face significant health disparities, particularly in maternal and child health. Community-based interventions, often employing the Participatory Learning and Action (PLA) approach, have effectively addressed health inequalities in lower-income nations. The aim of this study was to assess the feasibility of implementing a PLA intervention to improve infant feeding and care practices in SA communities living in London. Methods: Comprehensive analyses were conducted to assess the feasibility and fidelity of this pilot randomized controlled trial. Summary statistics were computed to compare key metrics, including participant consent rates, attendance, retention, intervention support, and perceived effectiveness, against predefined progression rules guiding toward a definitive trial. Secondary outcomes were analyzed, drawing insights from multiple sources, including the Children's Eating Behaviour Questionnaire (CEBQ), the Parental Feeding Style Questionnaire (PFSQ), food diaries, and the Equality Impact Assessment (EIA) tool. A video analysis of trends in children's mealtime behavior was conducted, and feedback interviews were collected from study participants. Results: Process-outcome measures met the predefined progression rules for a definitive trial, deeming the intervention feasible and acceptable. The secondary-outcome analysis revealed no significant changes in children's BMI z-scores, which could be attributed to the follow-up period being shortened from 12 months to 6 months due to COVID-19-related delays. CEBQ analysis showed increased food responsiveness, along with decreased emotional over- and undereating; a similar trend was observed in the PFSQ. The EIA tool found no potential areas of discrimination, and video analysis revealed a decrease in force-feeding practices.
Participant feedback revealed improved awareness and knowledge sharing. Conclusion: This study demonstrates that a co-adapted PLA intervention is feasible and well-received in optimizing infant-care practices among South-Asian community members in a high-income country. These findings highlight the potential of community-based interventions to enhance health outcomes, promoting health equity.

Keywords: child health, childhood obesity, community-based, infant nutrition

Procedia PDF Downloads 57
1005 Implication of Fractal Kinetics and Diffusion Limited Reaction on Biomass Hydrolysis

Authors: Sibashish Baksi, Ujjaini Sarkar, Sudeshna Saha

Abstract:

In the present study, hydrolysis of Pinus roxburghii wood powder was carried out with Viscozyme, and the kinetics of the hydrolysis were investigated. Finely ground sawdust was submerged in 2% aqueous peroxide solution (pH = 11.5) and pretreated through autoclaving, probe sonication, and alkaline peroxide pretreatment. Afterward, the pretreated material was subjected to hydrolysis. A series of experiments was executed with delignified biomass (50 g/l) and varying enzyme concentrations (24.2–60.5 g/l). A maximum of 14.32 g/l of glucose, along with 7.35 g/l of xylose, was recovered at a Viscozyme concentration of 48.8 g/l, and this was treated as the optimum condition. Additionally, thermal deactivation of Viscozyme was investigated and found to decrease gradually with escalated enzyme loading, from a dissociation constant of 0.05 h⁻¹ at 48.4 g/l to 0.02 h⁻¹ at 60.5 g/l. The hydrolysis is a pseudo-first-order reaction, and the rate of hydrolysis can therefore be expressed as a fractal-like kinetic equation relating product concentration to hydrolysis time t. The value of the rate constant (K) increases from 0.008 to 0.017 as the enzyme concentration is augmented from 24.2 g/l to 60.5 g/l; a greater value of K is associated with a stronger enzyme-binding capacity of the substrate mass. An escalated concentration of supplied enzyme ensures improved interaction with more substrate molecules, resulting in enhanced depolymerization of the polymeric sugar chains per unit time, which eventually modifies the physicochemical structure of the biomass. All fractal dimensions lie between 0 and 1: the lower the fractal dimension, the more easily the biomass is hydrolyzed. With increased enzyme concentration from 24.2 g/l to 48.4 g/l, the fractal dimension falls from 0.1 to 0.044.
This indicates that the presence of more enzyme molecules allows the substrate to be hydrolyzed more easily. However, an increased value was observed on further incrementing the enzyme concentration to 60.5 g/l because of diffusional limitation. Evidently, the hydrolysis reaction system is heterogeneous, and the product-formation rate depends strongly on the enzyme diffusion resistances caused by the rate-limiting structures of the substrate-enzyme complex. The value of the rate constant increases from 1.061 to 2.610 as the enzyme concentration escalates from 24.2 to 48.4 g/l. As the rate constant is proportional to Fick's diffusion coefficient, it can be assumed that, at a higher enzyme concentration, a larger enzyme mass dM diffuses into the substrate through the surface dF per unit time dt; a higher rate constant is therefore associated with faster diffusion of enzyme into the substrate. Regression analysis of the time curves at various enzyme concentrations shows that the diffusion resistance constant increases from 0.3 to 0.51 over the first two enzyme concentrations and decreases again at 60.5 g/l, since at a differential scale the enzyme also experiences greater resistance when diffusing a larger dM through dF in dt.
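One common fractal-like form, assumed here for illustration, integrates a time-decaying rate coefficient k·t^(−h) into a pseudo-first-order law; the sketch below fits it to synthetic data. The functional form, parameter values, and library choice are assumptions, not taken from the study itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def fractal_hydrolysis(t, c_inf, k, h):
    # Pseudo-first-order product formation with a fractal-like rate
    # coefficient k * t**(-h); integrating dC/dt = k*t**(-h)*(c_inf - C)
    # gives the closed form below (valid for 0 < h < 1).
    return c_inf * (1.0 - np.exp(-k * t**(1.0 - h) / (1.0 - h)))

t = np.linspace(0.5, 48.0, 40)                 # hydrolysis time grid (hours)
c = fractal_hydrolysis(t, 14.3, 0.15, 0.05)    # noiseless synthetic "data"
popt, _ = curve_fit(fractal_hydrolysis, t, c, p0=(10.0, 0.1, 0.1))
```

With real time-course data, the fitted exponent h plays the role of the fractal dimension discussed above: a small h means the rate coefficient stays nearly constant, while a larger h reflects growing diffusional resistance.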

Keywords: viscozyme, glucose, fractal kinetics, thermal deactivation

Procedia PDF Downloads 111
1004 A Battle of Identity(ies): Deconstructing Spaces of Belonging in Saleem Haddad’s Guapa and Hasan Namir’s God in Pink

Authors: Nour Aladdin

Abstract:

This paper explores the interconnectedness of belonging, space, and identity in Anglo-Arab literature, particularly Saleem Haddad's Guapa and Hasan Namir's God in Pink. It suggests that Rasa and Ramy, the respective queer Arab protagonists, belong in neither the Middle East nor the West. Using Amin Maalouf's analysis of Arab identity, specifically his argument that an individual identifies most strongly with the aspect of their identity that is under attack, this paper argues that all of Rasa and Ramy's spaces are politically charged, a term denoting that all values and beliefs instilled in Arabs and their spaces are heavily influenced by Arab politics, culture, and, oftentimes, religion. The politically charged environments Rasa and Ramy inhabit will therefore always be set against one part of their identity, which is why they cannot identify as queer and Arab simultaneously. For Rasa, the unnamed Middle Eastern country, his home environment, and the so-called safe-space nightclub condemn his queerness, leading him to connect more with his sexual orientation. However, Rasa associates himself with his Arab roots when he migrates to America, a different form of politically charged space, one that minoritizes his ethnicity. Similarly, Ramy's spaces became naturally religiopolitical as Islam's role heightened in Iraq during the Iraq War; as a result, Ramy's home environment, Sheikh Ammar's house, the mosque, and the nightclub are shaped by religiopolitics and obstruct his ability to identify not only as a queer Arab but as a queer Arab Muslim. Ultimately, because Rasa and Ramy are constantly in movement, their identity attributes are also in movement. This paper is divided into three sections. The first section focuses on Guapa and the politics of the Arab Spring, mainly its influence on queer Arabs in and around the Middle East.
Drawing from a number of queer and Arab gender theories, I analyze all of Rasa's spaces as politically charged spaces that deny him the means to be queer and Arab. The second section examines God in Pink in close connection to the 2003 invasion of Iraq: Ramy's spaces are religiopolitically charged spaces that prevent him from embracing all of his identity attributes, namely nationality, ethnicity, sexual orientation, and religious affiliation, concomitantly. The last section considers the rapid spread of technology and social media in the Middle East as a means of providing deviant heterotopic spaces for queer Arabs. With the rise of subtle and covert queer heterotopias, there is a slow and steady shift toward queer tolerance in the Arab world.

Keywords: belonging, identity, spaces, queer, arabness, middle east, orientalism

Procedia PDF Downloads 115
1003 Thinking Differently about Diversity: A Literature Review

Authors: Natalie Rinfret, Francine Tougas, Ann Beaton

Abstract:

Conventions No. 100 and 111 of the International Labour Organization, passed in 1951 and 1958 respectively, established the principles of equal pay for men and women for work of equal value and of freedom from discrimination in employment. Governments of different countries followed suit: for example, in 1964, the Civil Rights Act was passed in the United States, and in 1972, Canada ratified Convention 100. Thus, laws were enacted and programs were implemented to combat discrimination in the workplace, and, over time, more than 90% of the member countries of the International Labour Organization have ratified these conventions, implementing programs such as Canada's employment equity program aimed at groups recognized as being discriminated against in the labor market, including women. Although legislation has been in place for several decades, employment discrimination has not gone away. In this study, we pay particular attention to the hidden side of the effects of employment discrimination: the emergence of subtle forms of discrimination that often fly under the radar but nevertheless have adverse effects on the attitudes and behaviors of members of targeted groups. Researchers have identified two forms of racial and gender bias. On the one hand, there are traditional prejudices, referring to beliefs about the inferiority and innate differences of women and racial minorities compared to White men. These have the effect of confining the two groups to job categories suited to their perceived limited abilities and can result in degrading, if not violent and hateful, language and actions. On the other hand, subtler prejudices are better suited to current social norms. This subtlety, however, harbors a conflict between values of equality and remnants of negative beliefs and feelings toward women and racial minorities.
Our literature review also takes into account an overlooked segment of the groups targeted by existing programs, senior workers, and highlights the quantifiable and observable effects of prejudice and discriminatory behaviors in employment. The study proposes a hybrid model of interventions that takes into account the organizational system (employment equity practices), discriminatory attitudes and behaviors, and the type of leadership to be advocated. This hybrid model includes, in the first instance, the implementation of initiatives aimed at both promoting employment equity and combating discrimination and, in the second instance, the establishment of practices that foster inclusion: the full and complete participation of all, including seniors, in the mission of their organization.

Keywords: employment discrimination, gender bias, the hybrid model of interventions, senior workers

Procedia PDF Downloads 221
1002 Language Education Policy in Arab Schools in Israel

Authors: Fatin Mansour Daas

Abstract:

Language education responds to and reflects emerging social and political trends. Language policies and practices are shaped by political, economic, social and cultural considerations. Accordingly, Israeli language education policy as implemented in Arab schools in Israel is influenced by the particular political and social situation of Arab-Palestinian citizens of Israel. This national group remained in their homeland following the 1948 war between Israel and its Arab neighbors and became Israeli citizens upon the establishment of the State of Israel. This study examines language policy in Arab schools in Israel from 1948 to the present in light of the unique experience of the Palestinian Arab homeland minority in Israel, with a particular focus on questions of politics and identity. The establishment of the State of Israel triggered far-reaching political, social and educational transformations within Arab Palestinian society in Israel, including in the area of language and language studies. Since 1948, the linguistic repertoire of Palestinian Arabs in Israel has become more complex and diverse, while the place and status of different languages have changed. Following the establishment of the State of Israel, only Hebrew and Arabic were retained as official languages, and Israeli policy reflected this in schools as well: with the advent of the Jewish state, Hebrew-language education among Palestinians in Israel increased. In Arab Palestinian schools in Israel, English is taught as a third language, Hebrew as a second language, and Arabic as a first language, even though Arabic has become less important to its native speakers. This research focuses on language studies and language policy in the Arab school system in Israel from 1948 onwards.
It will analyze the relative focus of language education among the different languages, the rationale of the various language education policies, the pedagogic approach used to teach each language, and student achievement in language skills. The study seeks to understand the extent to which Arab schools in Israel are multilingual by examining successes, challenges and difficulties in acquiring the respective languages. This qualitative study analyzes five components of language education policy: (1) curricula; (2) learning materials; (3) assessment; (4) interviews; and (5) archives. Firstly, it consists of an analysis of the language education curricula, learning materials and assessments used in Arab schools in Israel from 1948 to 2018, including a selection of language textbooks for the compulsory years of study and the final matriculation (Bagrut) examinations. The findings will also be based on archival material tracing the evolution of language education policy in Arab schools in Israel over the years 1948-2018. This archival research will, furthermore, reveal power relations and general decision-making in the Arab education system in Israel. The research also includes interviews with Ministry of Education staff who provide instructional oversight of the three languages in the Arab education system in Israel. These interviews will shed light on the goals of language education as understood by those in charge of implementing policy.

Keywords: language education policy, languages, multilingualism, language education, educational policy, identity, Palestinian-Arabs, Arabs in Israel, educational school system

Procedia PDF Downloads 92
1001 The Scientific Study of the Relationship Between Physicochemical and Microstructural Properties of Ultrafiltered Cheese: Protein Modification and Membrane Separation

Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh

Abstract:

The loss of curd cohesiveness and syneresis are two common problems in the ultrafiltered cheese industry. In this study, using membrane technology and protein modification, a modified cheese was developed and its properties were compared with a control sample. In order to decrease the lactose content and adjust the protein, acidity, dry matter and milk minerals, a combination of ultrafiltration, nanofiltration and reverse osmosis technologies was employed. For protein modification, a two-stage chemical and enzymatic reaction was employed before and after ultrafiltration. The physicochemical and microstructural properties of the modified ultrafiltered cheese were compared with those of the control. Results showed that the modified protein enhanced the functional properties of the final cheese significantly (p-value < 0.05), even though its protein content was 50% lower than that of the control. The modified cheese showed 21 ± 0.70, 18 ± 1.10 and 25 ± 1.65% higher hardness, cohesiveness and water-holding capacity values, respectively, than the control sample. This behavior could be explained by the developed microstructure of the gel network. Furthermore, chemical-enzymatic modification of the milk protein induced a significant change in the network parameters of the final cheese: the indices of network linkage strength, network linkage density, and time scale of junctions were 10.34 ± 0.52, 68.50 ± 2.10 and 82.21 ± 3.85% higher than in the control sample, whereas the distance between adjacent linkages was 16.77 ± 1.10% lower. These results were supported by the textural analysis. A non-linear viscoelastic study showed a triangular stress waveform for the cheese containing modified protein, while the control sample showed a rectangular stress waveform, suggesting better sliceability of the modified cheese. Moreover, to study the shelf life of the products, the acidity as well as the mold and yeast populations were determined over 120 days.
It is worth mentioning that the lactose content of the modified cheese was adjusted to 2.5% before fermentation, while that of the control was 4.5%. The control sample showed a shelf life of 8 weeks, while the shelf life of the modified cheese was 18 weeks in the refrigerator. Over 18 weeks, the acidity of the modified and control samples increased from 82 ± 1.50 to 94 ± 2.20 °D and from 88 ± 1.64 to 194 ± 5.10 °D, respectively. The mold and yeast populations followed the semicircular shape model over time (R² = 0.92, adjusted R² = 0.89, RMSE = 1.25). Furthermore, the mold and yeast counts and their growth rate in the modified cheese were lower than those of the control; this result could be explained by the shortage of energy sources for microorganisms in the modified cheese, whose lactose content was less than 0.2 ± 0.05% at the end of fermentation, compared with 3.7 ± 0.68% in the control sample.

Keywords: non-linear viscoelastic, protein modification, semicircular shape model, ultrafiltered cheese

Procedia PDF Downloads 75
1000 A Review on Assessment on the Level of Development of Macedonia and Iran Organic Agriculture as Compared to Nigeria

Authors: Yusuf Ahmad Sani, Adamu Alhaji Yakubu, Alhaji Abdullahi Jamilu, Joel Omeke, Ibrahim Jumare Sambo

Abstract:

With the rising global threat of food insecurity and of cancer and related (carcinogenic) diseases because of the increased use of inorganic substances in agricultural food production, the Ministry of Food, Agriculture and Livestock of the Republic of Turkey organized an International Workshop on Organic Agriculture from 8 to 12 December 2014 at the International Agricultural Research and Training Center, Izmir. Some 21 countries, including Nigeria, were invited to attend the training workshop. Renowned scholars presented several topics on organic agriculture, ranging from regulation, certification, and crop, animal and seed production to pest and disease management, soil composting, and the marketing of organic agricultural products. This paper selects two of the 21 countries, Macedonia and Iran, to assess their level of development in organic agriculture as compared to Nigeria. Macedonia, with a population of only 2.1 million people as of 2014, started organic agriculture in 2005 with only 266 ha of land, which had grown significantly to over 5,000 ha by 2010, covering crops such as cereals (62%), forage (20%), fruit orchards (7%), vineyards (5%), vegetables (4%), and oil seed and industrial crops (1% each). Organic beekeeping likewise grew from 110 hives to over 15,000 certified colonies. As part of the government's commitment, the level of government subsidy for organic products was set at 30%, compared with the direct support for conventional agricultural products. About 19 by-laws on organic agricultural production were introduced, fully consistent with European Union regulations. The Republic of Iran, on the other hand, embarked on organic agriculture because the country recorded the highest rate of cancer in the world, with over 30,000 people dying every year and 297 diagnosed every day.
The host country, Turkey, meanwhile, is well advanced in organic agricultural production and is now the largest exporter of organic products to Europe and other parts of the globe. A technical trip to one of the villages under the government scheme on organic agriculture revealed that organic agriculture there was market-demand driven and that government support was very visible, linking the farmers with private companies that provide them with inputs while purchasing the products at harvest at a high premium price. In Nigeria, however, research on organic agriculture is very recent, and information on it is very scanty owing to poor documentation and very low awareness, even among the elites. The paper therefore recommends that the government provide funds to NARIs to conduct research on organic agriculture and establish a clear government policy and good preconditions for sustainable organic agricultural production in the country.

Keywords: organic agriculture, food security, food safety, food nutrition

Procedia PDF Downloads 52
999 Creating Renewable Energy Investment Portfolio in Turkey between 2018-2023: An Approach on Multi-Objective Linear Programming Method

Authors: Berker Bayazit, Gulgun Kayakutlu

Abstract:

The World Energy Outlook shows that energy markets will change substantially within the next few decades. First, action plans determined under COP21 and the aim of CO₂ emission reduction already have an impact on countries' policies. Secondly, swiftly changing technological developments in the field of renewable energy will influence the medium- and long-term energy generation and consumption behaviors of countries. Furthermore, the share of electricity in global energy consumption is expected to be as high as 40 percent in 2040. Electric vehicles, heat pumps, new electronic devices and digital improvements will be the outstanding technologies, and such innovations will bear witness to the market's transformation. In order to meet the sharply increasing electricity demand caused by these technologies, countries have to make new investments in electricity production, transmission and distribution. Specifically, the electricity generation mix becomes vital both for the prevention of CO₂ emissions and for the reduction of power prices. The majority of research and development investments are made in the field of electricity generation; hence, the diversity and planning of the prime sources of electricity generation are crucial for improving citizens' quality of life. Approaches considering only CO₂ emissions and the total cost of generation are necessary but not sufficient to evaluate and construct the generation mix; employment and positive contributions to macroeconomic values are also important factors that have to be taken into consideration. This study aims to constitute new investments in renewable energies (solar, wind, geothermal, biogas and hydropower) between 2018 and 2023 under four different goals. Therefore, a multi-objective programming model is proposed to optimize the goals of minimizing CO₂ emissions, investment amount and electricity sales price while maximizing total employment and the positive contribution to the current deficit.
In order to avoid user preference among the goals, Dinkelbach's algorithm and Guzel's approach have been combined. The achievements are discussed in comparison with current policies. Our study shows that new policies such as huge capacity allotments might be debatable, although the obligation for local production is positive. Improvements in grid infrastructure and redesigned support for biogas and geothermal can be recommended.
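As a simplified illustration of how such competing goals can be scalarized into a single linear program, the sketch below uses a plain weighted sum rather than the Dinkelbach/Guzel combination the abstract describes; all coefficients, weights, and bounds are invented placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: new capacity (GW) of solar, wind, geothermal, biogas, hydro.
# All coefficients are hypothetical placeholders for demonstration only.
cost = np.array([0.7, 1.0, 2.8, 3.0, 1.5])   # investment cost proxy per GW
co2  = np.array([45., 11., 38., 20., 24.])   # lifecycle gCO2/kWh proxy
jobs = np.array([1.3, 0.8, 0.9, 1.6, 0.5])   # employment proxy per unit capacity

# Analyst-chosen weights scalarize the goals; employment enters negatively
# because linprog minimizes and we want to maximize jobs.
w_cost, w_co2, w_jobs = 1.0, 0.5, 0.8
c = w_cost * cost + w_co2 * co2 / 100.0 - w_jobs * jobs

# Meet at least 20 GW of new capacity; cap each source at 8 GW.
res = linprog(c, A_ub=[-np.ones(5)], b_ub=[-20.0], bounds=[(0.0, 8.0)] * 5)
```

Varying the weight vector traces out different points on the Pareto front; parameter-free schemes such as Dinkelbach-type algorithms avoid having to pick those weights by hand, which is the motivation the abstract gives.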

Keywords: energy generation policies, multi-objective linear programming, portfolio planning, renewable energy

Procedia PDF Downloads 245
998 Optimization of Artisanal Fishing Waste Fermentation for Volatile Fatty Acids Production

Authors: Luz Stella Cadavid-Rodriguez, Viviana E. Castro-Lopez

Abstract:

Fish waste (FW) has a high content of potentially biodegradable components, so it is amenable to anaerobic digestion. In this line, anaerobic digestion (AD) of FW has been studied for biogas production. Nevertheless, intermediate products such as volatile fatty acids (VFA), generated during the acidogenic stage, have been scarcely investigated, even though they have high potential as a renewable source of carbon. In the literature, there are few studies on the effect of the inoculum-to-substrate (I/S) ratio on acidogenesis. On the other hand, it is well known that pH is a critical factor in the production of VFA. The optimum pH for the production of VFA appears to change depending on the substrate and can vary between 5.25 and 11. Nonetheless, the literature on VFA production from protein-rich waste, such as FW, is scarce. In this context, it is necessary to refine the determination of the optimal operating conditions of acidogenic fermentation for VFA production from protein-rich waste. Therefore, the aim of this research was to optimize volatile fatty acid production from artisanal fishing waste by studying the effect of pH and the I/S ratio on the acidogenic process. The inoculum used was a methanogenic sludge (MS) obtained from a UASB reactor treating slaughterhouse wastewater, and the FW was collected in the port of Tumaco (Colombia) from local artisanal fishers. The acidogenic fermentation experiments were conducted in batch mode in 500 mL glass bottles serving as anaerobic reactors, equipped with rubber stoppers fitted with a valve to release biogas. The effective volume was 300 mL. The experiments were carried out for 15 days at a mesophilic temperature of 37 ± 2 °C and constant agitation of 200 rpm. The effect of three pH levels (5, 7 and 9), coupled with five I/S ratios (0.20, 0.15, 0.10, 0.05 and 0.00), was evaluated, taking the production of VFA as the response variable.
A completely randomized block design was selected for the experiments in a 5 × 3 factorial arrangement, with two repetitions per treatment. At the beginning of and during the process, the pH in the experimental reactors was adjusted to the corresponding values of 5, 7 and 9 using 1 M NaOH or 1 M H₂SO₄, as appropriate. In addition, once the optimum I/S ratio was determined, the process was evaluated at this condition without pH control. The results indicated that pH is the main factor in the production of VFA, with the highest concentration obtained at neutral pH. By reducing the I/S ratio to as low as 0.05, it was possible to maximize VFA production. Thus, the optimum conditions found were natural pH (6.6-7.7) and an I/S ratio of 0.05, at which a maximum total VFA concentration of 70.3 g Ac/L was reached, with acetic acid (35%) and butyric acid (32%) as the major components. The findings showed that acidogenic fermentation of FW is an efficient way of producing VFA and that the operating conditions can be simple and economical.

Keywords: acidogenesis, artisanal fishing waste, inoculum to substrate ratio, volatile fatty acids

Procedia PDF Downloads 126
997 The Inclusive Human Trafficking Checklist: A Dialectical Measurement Methodology

Authors: Maria C. Almario, Pam Remer, Jeff Resse, Kathy Moran, Linda Theander Adam

Abstract:

The identification of victims of human trafficking and the consequent provision of services are characterized by a significant disconnect between the estimated prevalence of this issue and the number of cases identified. This poses a tremendous problem for human rights advocates, as it hinders data collection, information sharing, the allocation of resources, and opportunities for international dialogue. The current paper introduces the Inclusive Human Trafficking Checklist (IHTC) as a measurement methodology with theoretical underpinnings derived from dialectic theory. The presence of human trafficking in a person's life is conceptualized as a dynamic and dialectic interaction between vulnerability and exploitation. The current paper explores the operationalization of exploitation and vulnerability, evaluates the metric qualities of the instrument, evaluates whether there are differences in assessment based on the participant's profession, level of knowledge, and training, and assesses whether users of the instrument perceive it as useful. A total of 201 participants were asked to rate three vignettes predetermined by experts as qualifying, or not, as human trafficking cases. The participants were placed in three conditions: business as usual, and utilization of the IHTC with and without training. The results revealed a statistically significant level of agreement between the experts' diagnostic and the application of the IHTC, with a 40% improvement in identification compared with the business-as-usual condition. While there was an improvement in identification in the group with training, the difference was found to have a small effect size. Participants who utilized the IHTC showed an increased ability to identify elements of identity-based vulnerability as well as elements of fraud, which, according to the results, are distinctive variables in cases of human trafficking.
In terms of perceived utility, the results revealed higher mean scores for the groups utilizing the IHTC when compared to the business-as-usual condition. These findings suggest that the IHTC improves appropriate identification of cases and that it is perceived as a useful instrument. The application of the IHTC as a multidisciplinary instrument that can be utilized in legal and human services settings is discussed as pivotal to helping victims restore their sense of dignity and advocate for legal, physical, and psychological reparations. It is noteworthy that this study was conducted with a sample in the United States and later re-tested in Colombia. The implications of the instrument for treatment conceptualization and intervention in human trafficking cases are discussed as opportunities for enhancement of victim well-being, restoration engagement, and activism. With the idea that what is personal is also political, we believe that careful observation and data collection in specific cases can inform new areas of human rights activism.

Keywords: exploitation, human trafficking, measurement, vulnerability, screening

Procedia PDF Downloads 331
996 Assessing Mycotoxin Exposure from Processed Cereal-Based Foods for Children

Authors: Soraia V. M. de Sá, Miguel A. Faria, José O. Fernandes, Sara C. Cunha

Abstract:

Cereals play a vital role in fulfilling the nutritional needs of children, supplying essential nutrients crucial for their growth and development. However, concerns arise from children's heightened vulnerability, owing to their unique physiology, specific dietary requirements, and relatively higher intake relative to body weight. This vulnerability exposes them to harmful food contaminants, particularly mycotoxins, prevalent in cereals. Because of the thermal stability of mycotoxins, conventional industrial food processing often falls short of eliminating them. Children, especially those aged 4 months to 12 years, frequently encounter mycotoxins through the consumption of specialized food products, such as instant foods, breakfast cereals, bars, cookie snacks, fruit puree, and various dairy items. Close monitoring of this demographic group's exposure to mycotoxins is essential, as toxin ingestion may weaken children’s immune systems, reduce their resistance to infectious diseases, and potentially lead to cognitive impairments. The severe toxicity of mycotoxins, some of which are classified as carcinogenic, has spurred the establishment and ongoing revision of legislative limits on mycotoxin levels in food and feed globally. While EU Commission Regulation 1881/2006 addresses well-known mycotoxins in processed cereal-based foods and infant foods, the absence of regulations specifically addressing emerging mycotoxins underscores a glaring gap in the regulatory framework, necessitating immediate attention. Emerging mycotoxins have come under mounting scrutiny in recent years due to their pervasive presence in various foodstuffs, notably cereals and cereal-based products. Alarmingly, exposure to multiple mycotoxins is hypothesized to exhibit higher toxicity than isolated effects, raising particular concerns for products primarily aimed at children.
This study scrutinizes the presence of 22 mycotoxins from a diverse range of chemical classes in 148 processed cereal-based foods, including 39 breakfast cereals, 25 infant formulas, 27 snacks, 25 cereal bars, and 32 cookies commercially available in Portugal. The analytical approach employed a modified QuEChERS procedure followed by ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) analysis. Given the paucity of information on children's risk from multiple mycotoxins in cereals and cereal-based products, this study pioneers the risk assessment of such products consumed by children in Portugal. Overall, aflatoxin B1 (AFB1) and aflatoxin G2 (AFG2) emerged as the most prevalent regulated mycotoxins, while enniatin B (ENNB) and sterigmatocystin (STG) were the most frequently detected emerging mycotoxins.

Keywords: cereal-based products, children's nutrition, food safety, UPLC-MS/MS analysis

Procedia PDF Downloads 73
995 Assessment of Nuclear Medicine Radiation Protection Practices Among Radiographers and Nurses at a Small Nuclear Medicine Department in a Tertiary Hospital

Authors: Nyathi Mpumelelo; Moeng Thabiso Maria

Abstract:

BACKGROUND AND OBJECTIVES: Radiopharmaceuticals are used for the diagnosis, treatment, staging, and follow-up of various diseases. However, there is concern that the ionizing radiation (gamma rays, α and ß particles) emitted by radiopharmaceuticals may result in exposure of radiographers and nurses with limited knowledge of the principles of radiation protection and safety, raising the risk of cancer induction. This study aimed to investigate radiation safety awareness levels among radiographers and nurses at a small tertiary hospital in South Africa. METHODS: This was an analytical cross-sectional study. A validated two-part questionnaire was administered to consenting radiographers and nurses working in a Nuclear Medicine Department. Part 1 gathered demographic information (age, gender, work experience, attendance at or passing of ionizing radiation protection courses). Part 2 covered questions related to knowledge and awareness of radiation protection principles. RESULTS: Six radiographers and five nurses participated (27% males and 73% females). The mean age was 45 years (age range 20-60 years). The study revealed that neither professional development courses nor radiation protection courses are offered at the Nuclear Medicine Department under study. However, 6/6 (100%) radiographers exhibited a high level of awareness of radiation safety principles in handling and working with radiopharmaceuticals, which correlated with their years of experience. As for the nurses, 4/5 (80%) showed limited knowledge and awareness of radiation protection principles, irrespective of the number of years in the profession. CONCLUSION: Despite their major role in caring for patients undergoing diagnostic and therapeutic treatments, the nurses showed limited knowledge of ionizing radiation and its associated side effects. This was not surprising, since they had never received any formal basic radiation safety course. These findings are not unique to this Centre.
A study conducted in a Kuwaiti radiology department also established that the vast majority of nurses did not understand the risks of working with ionizing radiation. Similarly, nurses in an Australian hospital exhibited knowledge limitations; however, nursing managers did provide the necessary radiation safety training when requested. In Guatemala and Saudi Arabia, where there was a shortage of professional radiographers, nurses underwent radiography training, a course that equipped them with basic radiation safety principles. The radiographers in the Centre under study, unlike others in various parts of the world, demonstrated substantial knowledge and awareness of radiation protection. Radiation safety courses attended when opportunities arose played a critical role in their awareness. The knowledge and awareness levels of these radiographers were comparable to those of their counterparts in Sudan, but well above those of their counterparts in Jordan, Nigeria, Nepal, and Iran, who were found to have limited awareness and inadequate knowledge of radiation dose. Formal radiation safety and awareness courses and workshops can play a crucial role in raising the awareness of nurses and radiographers on radiation safety, for their personal benefit and that of their patients.

Keywords: radiation safety, radiation awareness, training, nuclear medicine

Procedia PDF Downloads 81
994 Unionisation, Participation and Democracy: Forms of Convergence and Divergence between Union Membership and Civil and Political Activism in European Countries

Authors: Silvia Lucciarini, Antonio Corasaniti

Abstract:

The issue of democracy in capitalist countries has once again become the focus of debate in recent years. A number of socio-economic and political tensions have triggered discussion of this topic from various perspectives and disciplines. Political developments, the rise of right-wing parties and populism, and the constant growth of inequalities in a context of welfare downsizing have led scholars to question whether European capitalist countries are really capable of creating and redistributing resources, and to look for elements that might make democratic capital in European countries denser. The aim of this work is to shed light on the trajectories, intensity, and convergence or divergence between political and associative participation, on the one hand, and union organization, on the other, as these constitute two of the main points of connection between the norms, values, and actions that bind citizens to the state. Using the European Social Survey database, some studies have sought to analyse degrees of unionization by investigating the relationship between systems of industrial relations and vulnerable groups (in terms of value-oriented practices or political participation). This paper instead aims to investigate the relationship between union participation and civil/political participation, first comparing union members and non-members and then distinguishing between employees and self-employed professionals to better understand participatory behaviors among different workers. The first component of the research will employ a multilinear logistic model to examine a sample of 10 countries selected according to a grid that combines the industrial relations models identified by Visser (2006) and the welfare state systems identified by Esping-Andersen (1990). On the basis of this sample, we propose to compare the choices made by workers and their propensity to join trade unions, together with their level of social and political participation, from 2002 to 2016.
In the second component, we aim to verify whether workers within the same system of industrial relations and welfare show a similar propensity to engage in civil participation through political bodies and associations, or whether these tendencies instead take on more specific and varied forms. The results will allow us to see: (1) whether political participation is higher among unionized workers than among the non-unionized; (2) what differences in unionisation and civil/political participation exist between self-employed, temporary, and full-time employees; and (3) whether the trajectories within industrial relations and welfare models display greater inclusiveness and participation, thereby confirming or disproving the patterns that have been documented among the different European countries.
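Question (1) above reduces to comparing the odds of political activity between union members and non-members, which can be summarized with an odds ratio from a 2x2 table. The sketch below uses purely hypothetical counts, not ESS data:

```python
# Odds ratio from a 2x2 table: rows = union member (yes/no),
# columns = politically active (yes/no). Counts are illustrative only.
def odds_ratio(a, b, c, d):
    """OR = (a/b) / (c/d) = (a*d) / (b*c)."""
    return (a * d) / (b * c)

# hypothetical: members 300 active / 200 not; non-members 250 / 350
or_union = odds_ratio(300, 200, 250, 350)
print(round(or_union, 2))  # 2.1 -> members have higher odds of activity
```

An odds ratio above 1 under these made-up counts would correspond to higher political participation among unionized workers.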

Keywords: union membership, participation, democracy, industrial relations, welfare systems

Procedia PDF Downloads 142
993 Contraceptive Uptake among Women in Low Socio-Economic Areas in Kenya: Quantitative Analysis of Secondary Data

Authors: J. Waita, S. Wamuhu, J. Makoyo, M. Rachel, T. Ngangari, W. Christine, M. Zipporah

Abstract:

Contraceptive use is one of the key global strategies to alleviate maternal mortality. Global efforts to advocate for contraceptive uptake and service provision have led to improved contraceptive prevalence. In Kenya, maternal mortality has remained a challenge despite efforts by government and non-governmental organizations. Objective: To describe the uptake of contraceptives among women in Tunza clinics, Kenya. Design and Methods: PS Kenya, through a health care marketing fund, is implementing a family planning program among its 350 Tunza fractional franchise facilities. Through private partnership, privately owned facilities in low socio-economic areas are recruited and trained on contraceptive technology updates. The providers are supported through facilitative supervision via a mobile-based application, the Health Network Quality Improvement System (HNQIS), and through interpersonal communication by 150 community-based volunteers. The data analyzed in this paper were collected between January and July 2017 to show the uptake of modern contraceptives among women in the Tunza franchise, the method mix, and the distribution across age brackets. A further analysis compares two different service delivery strategies: outreach and walk-ins. Supportive supervision HNQIS scores were also analyzed. Results: During the time period, a total of 132,121 family planning clients were attended in 350 facilities. The average age of clients was 29.6 years, and the average number of clients attended in the facilities per month was 18,874. Of the clients attended in the Tunza facilities, 73.7% were aged above 25 years, 22.1% were 20-24 years, and 4.2% were 15-19 years. On contraceptive method mix, intrauterine device (IUD) insertions accounted for 7.5% of clients, implant insertions 15.3%, pills 11.2%, and injections 62.7%, while condoms and emergency pills accounted for 2.7% and 0.6%, respectively.
Analysis of service delivery strategy indicated that 79% of the clients were walk-ins, while 21% were attended to during outreaches. During outreaches, long-term contraceptive methods accounted for 73% of uptake, while short-term modern methods accounted for 27%. HNQIS assessment scores indicated that 51% of the facilities scored over 90%, 25% scored 80-89%, and 21% scored below 80%. Conclusion: Women's preference for short-term methods is possibly associated with cost, as these methods are cheaper and easier to administer. When the cost of IUDs and implants is made affordable during outreaches, uptake is observed to increase. Making IUDs and implants affordable to women is a key strategy for increasing contraceptive prevalence and hence averting maternal mortality.
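The reported method mix can be recomputed from the quoted percentages. In the sketch below, the shares are taken from the abstract, while the long-acting (IUD + implant) grouping is our own illustrative aggregation:

```python
# Recomputing the reported method mix; shares are from the abstract,
# the long-acting (LARC) grouping is an illustrative addition.
total_clients = 132121
mix = {"IUD": 0.075, "implant": 0.153, "pills": 0.112,
       "injections": 0.627, "condoms": 0.027, "emergency": 0.006}

counts = {m: round(total_clients * s) for m, s in mix.items()}
larc_share = mix["IUD"] + mix["implant"]   # long-acting methods

print(round(sum(mix.values()), 3))  # shares sum to 1.0
print(round(larc_share, 3))         # 0.228 -> 22.8% long-acting overall
```

The 22.8% facility-wide long-acting share contrasts with the 73% long-acting uptake reported for outreaches, which is the cost effect discussed in the conclusion.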

Keywords: contraceptives, contraceptive uptake, low socio-economic, supportive supervision

Procedia PDF Downloads 169
992 Performance Assessment of a Three-Staged Natural Treatment Technology for On-Site Domestic Sewage Treatment

Authors: Harshvardhan Soni, Anil Kumar Dikshit, R. K. Pathak

Abstract:

Nowadays, a large amount of wastewater is generated in cities and travels very long distances from its point of generation to its point of treatment, i.e., conventional centralized wastewater treatment plants (CCWTPs). This results in several operational troubles due to heavily mechanized systems; moreover, large CCWTPs are sometimes unable to handle the large volumes of wastewater being generated, so the wastewater is either partially treated or even disposed of directly, without any treatment, into water bodies, causing environmental problems. To overcome the operational troubles of heavily mechanized centralized treatment systems, there is a need for on-spot, safe, and complete treatment of wastewater generated from residential areas and from sites such as holiday homes, industries, and resorts. Several municipal corporations have already started requiring proposed residential, commercial, or industrial projects (i.e., where a CCWTP is absent, not working, or malfunctioning, or where there is a scarcity of freshwater supply) to take care of their wastewater within their premises, so that the effluent can be reused for a variety of non-potable purposes, including agriculture, irrigation, landscaping, surface storage, and domestic, commercial, urban, environmental, recreational, and industrial applications, and hence the freshwater demand of the area can be reduced. There is thus a need to design units for specific social needs and to assess and verify that they are capable not only of treating the sewage but also of recycling the associated resources. Hence, there is scope for decentralized/on-site treatment of sewage, which forms the basis for the research/innovation proposed in this study.
In view of the above requirements, a decentralized wastewater treatment plant (DWTP) for residential areas, completely based on natural treatment technology to avoid the heavily mechanized systems of CCWTPs, was developed and deployed at the Indian Institute of Technology Bombay (IIT Bombay) campus, Mumbai, Maharashtra, India, to assess and evaluate its efficacy in the long run. The system was deployed at the sewage pumping station of the campus to provide a continuous 24-hour sewage flow into the system. The reactor configuration consists of an aerobic, a facultative, and an anaerobic tank as the pre-treatment unit, followed by a planted gravel bed as the post-treatment unit, in series. Results of the start-up period indicated that the system was highly effective in treating the wastewater. The COD of the final effluent was found to be 29.7 mg/l, BOD was 0.7 mg/l, turbidity was 1.7 NTU, and the nitrate concentration was 1 mg/l, while the phosphorus concentration was 4.6 mg/l; nearly all the parameters complied very well with the Indian reuse standards. On a daily basis, turbidity met the reuse standards around 92% of the time, COD around 84% of the time, and BOD and nitrates at all times.
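The compliance check behind the daily-basis figures can be sketched as a parameter-by-parameter comparison against limits. The limit values below are assumed placeholders, not the actual Indian reuse standards:

```python
# Effluent-vs-limit check; LIMITS are assumed placeholder values,
# not the actual Indian reuse standards.
LIMITS = {"COD": 50.0, "BOD": 10.0, "turbidity": 5.0, "nitrate": 10.0}
effluent = {"COD": 29.7, "BOD": 0.7, "turbidity": 1.7, "nitrate": 1.0}

compliant = {p: effluent[p] <= LIMITS[p] for p in LIMITS}
print(all(compliant.values()))  # True under these assumed limits
```

Running this check on each day's sample and averaging the booleans per parameter would reproduce the kind of percentage-of-time compliance figures reported above.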

Keywords: centralized wastewater treatment systems, decentralized wastewater treatment systems, reuse, effluent

Procedia PDF Downloads 5
991 The Dynamics of Planktonic Crustacean Populations in an Open Access Lagoon, Bordered by Heavy Industry, Southwest, Nigeria

Authors: E. O. Clarke, O. J. Aderinola, O. A. Adeboyejo, M. A. Anetekhai

Abstract:

Aims: The study is aimed at establishing the influence of some physical and chemical parameters on the abundance, distribution pattern, and seasonal variations of the planktonic crustacean populations. Place and Duration of Study: A premier investigation into the dynamics of planktonic crustacean populations in Ologe lagoon was carried out from January 2011 to December 2012. Study Design: The study covered identification, temporal abundance, spatial distribution, and diversity of the planktonic crustacea. Methodology: Standard techniques were used to collect samples from eleven stations covering five proximal satellite towns (Idoluwo, Oto, Ibiye, Obele, and Gbanko) bordering the lagoon. Data obtained were statistically analyzed using linear regression and hierarchical clustering. Results: Thirteen (13) planktonic crustacean populations were identified. Total percentage abundance was highest for Bosmina species (20%) and lowest for Polyphemus species (0.8%). The Pearson’s correlation coefficients (“r” values) between the total planktonic crustacean population and some physical and chemical parameters showed that positive correlations with a low level of significance occurred with salinity (r = 0.042, sig = 0.184) and with surface water dissolved oxygen (r = 0.299, sig = 0.155). Linear regression plots indicated that the total population of planktonic crustacea increased mainly with surface water temperature (Rsq = 0.791) and conductivity (Rsq = 0.589). The total population of planktonic crustacea had a near-zero correlation with surface water dissolved oxygen and thus did not change significantly with its level. The correlations were positive with NO3-N (midstream) at Ibiye (Rsq = 0.022) and (downstream) at Gbanko (Rsq = 0.013), PO4-P at Ibiye (Rsq = 0.258), K at Idoluwo (Rsq = 0.295), and SO4-S at Oto (Rsq = 0.094) and Gbanko (Rsq = 0.457).
The Berger-Parker Dominance Index (BPDI) showed that the most dominant species was the Bosmina species (BPDI = 1.000), followed by the Calanus species (BPDI = 1.254). Clusters by squared Euclidean distances using average linkage between groups showed proximities transcending the borders of genera. Conclusion: The results revealed that the planktonic crustacean populations in Ologe lagoon undergo seasonal perturbations, are highly influenced by nutrient, metal, and organic matter inputs from the river Owoh, the Agbara industrial estate, and surrounding farmlands, and are patchy in spatial distribution.
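The Berger-Parker index in its standard form is simply the proportion of the total sample contributed by the most abundant species (the values quoted above, where the most dominant species has the lowest value, suggest a reciprocal-style variant). A minimal sketch with hypothetical counts:

```python
# Standard Berger-Parker dominance: d = N_max / N, the proportion of
# the most abundant species. Counts below are hypothetical.
def berger_parker(counts):
    return max(counts) / sum(counts)

counts = [200, 150, 80, 40, 30]  # e.g. Bosmina as the most abundant
d = berger_parker(counts)
print(round(d, 2))       # 0.4
print(round(1 / d, 2))   # 2.5 -> reciprocal form, higher = more even
```

In the reciprocal form 1/d, larger values indicate a less dominated (more even) community, which would match the ordering of the BPDI values reported in the abstract.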

Keywords: diversity, dominance, perturbations, richness, crustacea, lagoon

Procedia PDF Downloads 723
990 Adjustment of the Level of Vibrational Force on Targeted Teeth

Authors: Amin Akbari, Dongcai Wang, Huiru Li, Xiaoping Du, Jie Chen

Abstract:

The effect of vibrational force (VF) on accelerating orthodontic tooth movement depends on the level of stimulation delivered to the tooth in terms of peak load (PL), which requires contact between the tooth and the VF device. A personalized device ensures the contacts, but the resulting PL distribution on the teeth is unknown. Furthermore, it is unclear whether the PL on particular teeth can be adjusted to prescribed values. The objective of this study was to investigate the efficacy of a personalized VF device in controlling the level of stimulation on two teeth, the mandibular canines and 2nd molars. A 3-D finite element (FE) model of human dentition, including teeth, PDL, and alveolar bone, was created from the cone beam computed tomography images of an anonymous subject. The VF was applied to the teeth through a VF device consisting of a mouthpiece with the engraved tooth profile of the subject and a VF source that applied a 0.3 N force at a frequency of 30 Hz. The dentition and mouthpiece were meshed using 10-node tetrahedral elements. Interface elements were created at the interfaces between the teeth and the mouthpiece. The upper and lower teeth bite on the mouthpiece to receive the vibration. The depth of the engraved individual tooth profile could be adjusted, which was accomplished by adding a layer of material as an interference or removing a layer of material as a clearance to change the PL on the tooth. The interference increases the PL, while the clearance decreases it. Five mouthpiece design cases were simulated: a mouthpiece without interference/clearance; mouthpieces with bilateral interferences on both mandibular canines and 2nd molars with magnitudes of 0.1, 0.15, and 0.2 mm, respectively; and a mouthpiece with bilateral 0.3-mm clearances on the four teeth. The force distributions on the entire dentition were then compared corresponding to these adjustments.
The PL distribution on the teeth is uneven when there is no interference or clearance. Among all teeth, the anterior segment receives the highest PL. Adding 0.1, 0.15, and 0.2-mm interferences to the canines and 2nd molars bilaterally increases the PL on the canines by 10, 62, and 73 percent and on the 2nd molars by 14, 55, and 87 percent, respectively. Adding clearances to the canines and 2nd molars, by removing the contacts between these teeth and the mouthpiece, results in zero PL on them. Moreover, introducing interference to the mandibular canines and 2nd molars redistributes the PL across the entire dentition; the share of the PL on the anterior teeth is reduced. The use of the personalized mouthpiece ensures contact of the teeth with the mouthpiece so that all teeth can be stimulated. However, the PL distribution is uneven. Adding interference between a tooth and the mouthpiece increases the PL, while introducing clearance decreases it. As a result, the PL is redistributed. This study confirms that the level of VF stimulation on an individual tooth can be adjusted to a prescribed value.
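The interference/clearance mechanism can be illustrated with a one-dimensional linear-contact toy model, in which the peak load grows with contact engagement and vanishes once a clearance removes contact. The stiffness and engagement values below are hypothetical and unrelated to the authors' FE model:

```python
# 1D linear-contact toy model: PL rises with interference and drops
# to zero once a clearance removes tooth-mouthpiece contact.
def peak_load(k, engagement_mm):
    """k: contact stiffness (N/mm, hypothetical); engagement = nominal
    bite depth + interference (negative values model a clearance)."""
    return k * max(0.0, engagement_mm)

k = 10.0    # hypothetical stiffness
bite = 0.1  # hypothetical nominal engagement without adjustment

print(peak_load(k, bite + 0.1))  # interference raises the PL
print(peak_load(k, bite - 0.3))  # clearance: no contact, PL = 0
```

This captures only the qualitative trend reported above (interference increases PL, sufficient clearance zeroes it); the actual PL percentages come from the 3-D FE contact analysis.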

Keywords: finite element method, orthodontic treatment, stress analysis, tooth movement, vibrational force

Procedia PDF Downloads 224
989 Metadiscourse in EFL, ESP and Subject-Teaching Online Courses in Higher Education

Authors: Maria Antonietta Marongiu

Abstract:

Propositional information in discourse is made coherent, intelligible, and persuasive through metadiscourse. The linguistic and rhetorical choices that writers/speakers make to organize and negotiate content matter are intended to help relate a text to its context. Besides, they help the audience to connect to and interpret a text according to the values of a specific discourse community. Based on these assumptions, this work aims to analyse the use of metadiscourse in the spoken performance of teachers in online EFL, ESP, and subject-teacher courses taught in English to non-native learners in higher education. In point of fact, the global spread of Covid-19 has forced universities to transition their in-class courses to online delivery. This has inevitably placed on the instructor a heavier interactional responsibility compared to in-class courses. Accordingly, online delivery needs greater structuring as regards establishing the reader/listener’s resources for text understanding and negotiating. Indeed, in online as well as in in-class courses, lessons are social acts which take place in contexts where interlocutors, as members of a community, affect the ways ideas are presented and understood. Following Hyland’s Interactional Model of Metadiscourse (2005), this study intends to investigate Teacher Talk in online academic courses during the Covid-19 lock-down in Italy. The selected corpus includes the transcripts of online EFL and ESP courses and subject-teacher online courses taught in English. The objective of the investigation is, firstly, to ascertain the presence of metadiscourse in the form of interactive devices (to guide the listener through the text) and interactional features (to involve the listener in the subject).
Previous research on metadiscourse in academic discourse, in college students' presentations in EAP (English for Academic Purposes) lessons, as well as in online teaching methodology courses and MOOC (Massive Open Online Courses) has shown that instructors use a vast array of metadiscoursal features intended to express the speakers’ intentions and standing with respect to discourse. Besides, they tend to use directions to orient their listeners and logical connectors referring to the structure of the text. Accordingly, the purpose of the investigation is also to find out whether metadiscourse is used as a rhetorical strategy by instructors to control, evaluate and negotiate the impact of the ongoing talk, and eventually to signal their attitudes towards the content and the audience. Thus, the use of metadiscourse can contribute to the informative and persuasive impact of discourse, and to the effectiveness of online communication, especially in learning contexts.
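A first-pass quantitative screening of such transcripts could simply count candidate markers of each type. The marker lists below are tiny illustrative samples, not Hyland's full taxonomy, and the sample sentence is invented:

```python
# Toy count of Hyland-style metadiscourse markers; the word lists are
# small illustrative samples, not a full taxonomy.
INTERACTIVE = {"first", "then", "therefore"}      # guide the listener
INTERACTIONAL = {"perhaps", "clearly", "we"}      # involve the listener

def count_markers(text, markers):
    words = text.lower().split()
    return sum(words.count(m) for m in markers)

talk = "First we define the term then we give examples perhaps clearly"
print(count_markers(talk, INTERACTIVE))    # 2
print(count_markers(talk, INTERACTIONAL))  # 4
```

A real analysis would also need multi-word markers ("in other words"), part-of-speech disambiguation, and normalization by transcript length, but the interactive/interactional split above mirrors the two categories investigated in the study.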

Keywords: discourse analysis, metadiscourse, online EFL and ESP teaching, rhetoric

Procedia PDF Downloads 129
988 Scalable Performance Testing: Facilitating The Assessment Of Application Performance Under Substantial Loads And Mitigating The Risk Of System Failures

Authors: Solanki Ravirajsinh

Abstract:

In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools such as JMeter, EC2 instances (virtual machines), CloudWatch logs (for monitoring errors and logs), EFS (file storage), and security groups offers several key benefits. Such a framework helps optimize resource utilization, enables effective benchmarking, increases reliability, and saves costs by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests against the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services such as EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance.
The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master. The master then consolidates these results into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application.
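The consolidation step the master performs can be sketched as a simple aggregation over per-slave summaries. The field names below are illustrative and do not reflect JMeter's actual result schema:

```python
# Sketch of a master consolidating per-slave result summaries into one
# report; field names are illustrative, not JMeter's actual schema.
def consolidate(slave_reports):
    total = {"requests": 0, "errors": 0}
    latencies = []
    for r in slave_reports:
        total["requests"] += r["requests"]
        total["errors"] += r["errors"]
        latencies.extend(r["latencies_ms"])
    total["avg_latency_ms"] = sum(latencies) / len(latencies)
    return total

slaves = [
    {"requests": 1000, "errors": 3, "latencies_ms": [120, 130]},
    {"requests": 1000, "errors": 1, "latencies_ms": [110, 140]},
]
report = consolidate(slaves)
print(report["requests"], report["errors"], report["avg_latency_ms"])
```

In an actual distributed run, the master launches the slaves with something like `jmeter -n -t plan.jmx -R slave1,slave2 -l results.jtl` (non-GUI mode, remote hosts listed via `-R`), and JMeter itself performs this kind of aggregation over the collected samples.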

Keywords: identifying application crashes under heavy load, JMeter with cloud services, scalable performance testing, JMeter master-slave using cloud services

Procedia PDF Downloads 30
987 Mycotoxin Bioavailability in Sparus Aurata Muscle After Human Digestion and Intestinal Transport (Caco-2/HT-29 Cells) Simulation

Authors: Cheila Pereira, Sara C. Cunha, Miguel A. Faria, José O. Fernandes

Abstract:

The increasing world population brings several concerns, one of which is food security and sustainability. To meet this challenge, aquaculture, the farming of aquatic animals and plants, including fish, mollusks, bivalves, and algae, has experienced sustained growth and development in recent years. Recent advances in the industry have focused on reducing its economic and environmental costs, for example, through the substitution of protein sources in fish feed. Plant-based proteins are now a common approach, and while they are a greener alternative to animal-based proteins, there are some disadvantages, such as their putative content of contaminants such as mycotoxins. These naturally occurring plant contaminants can, upon exposure, cause health problems in fish, stunted growth, or even death, resulting in economic losses for producers and health concerns for consumers. Different works have demonstrated the presence of both AFB1 (aflatoxin B1) and ENNB1 (enniatin B1) in fish feed and their capacity to be absorbed and to bioaccumulate in the fish organism after digestion, further reaching humans through fish ingestion. The aim of this work was to evaluate the bioaccessibility of both mycotoxins in samples of Sparus aurata muscle using a static digestion model based on the INFOGEST protocol. The samples were subjected to different cooking procedures (raw, grilled, and fried) and different seasonings (none, thyme, and ginger) in order to evaluate their potential effect on reducing mycotoxin bioaccessibility, followed by the evaluation of the intestinal transport of both compounds with an in vitro cell model composed of Caco-2/HT-29 co-culture monolayers, simulating the human intestinal epithelium. The bioaccessible fractions obtained in the digestion studies were used in the transport studies for a more realistic approach to bioavailability evaluation.
Results demonstrated the effect of the different cooking procedures and seasonings on the toxins' bioavailability. Sparus aurata was chosen for this study because of its large production in aquaculture and high consumption in Europe. With the continued evolution of fish farming practices and the more common usage of novel plant-based feed ingredients, there is growing concern about less studied contaminants in aquaculture and their consequences for human health. In step with greener advances in the industry, there is a convergence towards alternative research methods, such as in vitro applications. In the case of bioavailability studies, both in vitro digestion protocols and intestinal transport assessments are excellent alternatives to in vivo studies. These methods provide fast, reliable, and comparable results without ethical restraints.
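Bioaccessibility in such digestion studies is commonly expressed as the percentage of the toxin released from the matrix into the simulated digestive fluid. A minimal sketch with hypothetical amounts (not measurements from this study):

```python
# Bioaccessibility as the percentage of a contaminant released into
# the digestive fluid; the amounts below are hypothetical.
def bioaccessibility_pct(released_ng, total_ng):
    return 100.0 * released_ng / total_ng

# e.g. 12 ng of AFB1 released out of 40 ng present in the cooked sample
print(bioaccessibility_pct(12.0, 40.0))  # 30.0
```

Comparing this percentage across cooking procedures and seasonings is how the reduction effects described above would be quantified; the transport step then further reduces the fraction that actually crosses the Caco-2/HT-29 monolayer.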

Keywords: AFB1, aquaculture, bioaccessibility, ENNB1, intestinal transport

Procedia PDF Downloads 67
986 Combining Patients' Pain Score Reports with Functionality Scales in Chronic Low Back Pain Patients

Authors: Ivana Knezevic, Kenneth D. Candido, N. Nick Knezevic

Abstract:

Background: While pain intensity scales remain a generally accepted assessment tool, the numeric pain rating score is highly subjective; we nevertheless rely on it to make judgments about treatment effects. Misinterpretation of pain can lead practitioners to underestimate or overestimate the patient’s medical condition. The purpose of this study was to analyze how the numeric rating pain scores given by patients with low back pain correlate with their functional activity levels. Methods: We included 100 consecutive patients with radicular low back pain (LBP) after Institutional Review Board (IRB) approval. Pain scores, numeric rating scale (NRS) responses at rest and during movement, and Oswestry Disability Index (ODI) questionnaire answers were collected 10 times over 12 months. The ODI questionnaire targets a patient’s activities and physical limitations as well as a patient’s ability to manage stationary everyday duties. Statistical analysis was performed using SPSS Software version 20. Results: The average duration of LBP was 14±22 months at the beginning of the study. All patients included in the study were between 24 and 78 years old (average 48.85±14); 56% were women and 44% men. Differences between ODI and pain scores in the range from -10% to +10% were considered “normal”. Discrepancies in pain scores were graded as mild between -30% and -11% or +11% and +30%; moderate between -50% and -31% or +31% and +50%; and severe if differences exceeded -50% or +50%. Our data showed that pain scores at rest correlated well with ODI in 65% of patients. In 30% of patients, mild discrepancies were present (negative in 21% and positive in 9%); 4% of patients had moderate and 1% severe discrepancies. “Negative discrepancy” means that patients graded their pain scores much higher than their functional ability, and most likely exaggerated their pain.
“Positive discrepancy” means that patients graded their pain scores much lower than their functional ability, and most likely underrated their pain. Comparisons between ODI and pain scores during movement showed normal correlation in only 39% of patients. Mild discrepancies were present in 42% (negative in 39% and positive in 3%); moderate in 14% (all negative); and severe in 5% (all negative) of patients. In total, 58% of patients unknowingly exaggerated their pain during movement. Inconsistencies were equally common in male and female patients (p=0.606 and p=0.928). Our results showed a negative correlation between patients’ satisfaction and the degree of pain reporting inconsistency. Furthermore, patients taking opioids showed more discrepancies in reporting pain intensity scores than did patients taking non-opioid analgesics or no medications for LBP (p=0.038). There was a highly statistically significant correlation between morphine equivalent doses and the level of discrepancy (p<0.0001). Conclusion: We put emphasis on patient education in pain evaluation as a vital step in accurate pain level reporting, and showed a direct correlation with patients’ satisfaction. Furthermore, we must identify other parameters for defining our patients’ chronic pain conditions, such as functionality scales and quality of life questionnaires, and should move away from an overly simplistic subjective rating scale.
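The discrepancy grading described above can be sketched as a simple rule. In this sketch the sign convention (difference taken as ODI minus the normalized pain score, so that over-rated pain yields a negative discrepancy) is our assumption, inferred from the definitions in the abstract:

```python
def grade_discrepancy(pain_pct, odi_pct):
    """Grade the mismatch between a 0-100% pain rating and the ODI score.

    Assumed sign convention: diff = ODI - pain, so a negative discrepancy
    means the pain score exceeded what functional disability suggests.
    """
    diff = odi_pct - pain_pct
    magnitude = abs(diff)
    if magnitude <= 10:
        return "normal"
    elif magnitude <= 30:
        grade = "mild"
    elif magnitude <= 50:
        grade = "moderate"
    else:
        grade = "severe"
    sign = "negative" if diff < 0 else "positive"
    return f"{grade} ({sign})"
```

For example, a patient rating pain at 90% with an ODI of 30% would fall in the severe negative band, i.e., a likely exaggeration under the study's interpretation.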

Keywords: pain score, functionality scales, low back pain, lumbar

Procedia PDF Downloads 235
985 Quality of Life Responses of Students with Intellectual Disabilities Entering an Inclusive, Residential Post-Secondary Program

Authors: Mary A. Lindell

Abstract:

Adults with intellectual disabilities (ID) are increasingly attending postsecondary institutions, including inclusive residential programs at four-year universities. Legislation, national organizations, and researchers support developing postsecondary education (PSE) options for this historically underserved population. Simultaneously, researchers are assessing quality of life (QOL) indicators for people with ID. This study explores the quality of life characteristics of individuals with ID entering a two-year PSE program. A survey aligned with the PSE program was developed and administered to participants before they began their college program (in future studies, the same survey will be administered 6 months and 1 year after graduation). Employment, income, and housing are frequently cited QOL measures. People with disabilities, and especially people with ID, are more likely to experience unemployment and low wages than people without disabilities. PSE improves adult outcomes (e.g., employment, income, housing) for people with and without disabilities. Similarly, adults with ID who attend PSE are more likely to be employed than their peers who do not; however, adults with ID are the least likely among their typical peers and other students with disabilities to attend PSE. There is increased attention to providing individuals with ID access to PSE, and more research is needed regarding the characteristics of students attending PSE. This study focuses on the participants of a fully residential two-year program for individuals with ID. Students earn an Applied Skills Certificate while focusing on five benchmarks: self-care, home care, relationships, academics, and employment. To create a QOL measure, the goals of the PSE program were identified, and possible assessment items aligned with the five program goals were initially selected from the National Core Indicators (NCI) and the National Transition Longitudinal Survey 2 (NTLS2).
Program staff and advisory committee members offered input on potential item alignment with program goals and expected value to students with ID in the program. National experts in researching QOL outcomes of people with ID were consulted and concurred that the items selected would be useful in measuring the outcomes of postsecondary students with ID. The measure was piloted, modified, and administered to incoming students with ID. Research questions: (1) In what ways are students with ID entering a two-year PSE program similar to individuals with ID who complete the NCI and NTLS2 surveys? (2) In what ways are students with ID entering a two-year PSE program different from individuals with ID who completed the NCI and NTLS2 surveys? The process of developing a QOL measure specific to a PSE program for individuals with ID revealed that many of the items in comprehensive national QOL measures are not relevant to stakeholders of this two-year residential inclusive PSE program. Specific responses of students with ID entering an inclusive PSE program will be presented, as well as a comparison to similar items on national QOL measures. This study explores the characteristics of students with ID entering a residential, inclusive PSE program. This information is valuable for researchers, educators, and policy makers as PSE programs become more accessible for individuals with ID.

Keywords: intellectual disabilities, inclusion, post-secondary education, quality of life

Procedia PDF Downloads 101
984 Numerical Investigation of Solid Subcooling on a Low Melting Point Metal in Latent Thermal Energy Storage Systems Based on Flat Slab Configuration

Authors: Cleyton S. Stampa

Abstract:

This paper addresses the perspectives of using low melting point metals (LMPMs) as phase change materials (PCMs) in latent thermal energy storage (LTES) units, through a numerical approach. This is a new class of PCMs that has been one of the most promising alternatives for LTES, because these materials present high thermal conductivity and elevated heat of fusion per unit volume. The chosen type of LTES consists of several horizontal parallel slabs filled with PCM. The heat transfer fluid (HTF) circulates through the channel formed between each two consecutive slabs in a laminar forced-convection regime. The study deals with the LTES charging process (heat storing) using pure gallium as the PCM, and it considers heat conduction in the solid phase during melting driven by natural convection in the melt. The transient heat transfer problem is analyzed in one arbitrary slab under the influence of the HTF. The mathematical model for simulating the isothermal phase change is based on a volume-averaged enthalpy method, which is successfully verified by comparing its predictions with experimental data from works available in the pertinent literature. Regarding the convective heat transfer problem in the HTF, it is assumed that the flow is thermally developing, whereas the velocity profile is already fully developed. The study aims to determine the effect of solid subcooling on the melting rate through comparisons with the melting process of a solid that starts to melt from its fusion temperature. To best understand this effect in a metallic compound such as pure gallium, the study also evaluates, under the same conditions established for the gallium, the melting process of commercial paraffin wax (an organic compound) and of calcium chloride hexahydrate (CaCl₂·6H₂O, an inorganic compound).
The present work adopts the choices that several researchers have established in parametric studies of this type of LTES as leading to high thermal efficiency. Concerning the geometry, these include the gap of the channel formed by two consecutive slabs and the thickness and length of each slab; concerning the HTF, the type of fluid, the mass flow rate, and the inlet temperature.
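The volume-averaged enthalpy method at the heart of such a model can be illustrated with a minimal one-dimensional explicit sketch. The property values below are rounded, gallium-like placeholders rather than the paper's inputs, and natural convection in the melt is omitted; the point is only how subcooling enters through the initial enthalpy field:

```python
import numpy as np

# Minimal 1D explicit enthalpy method for isothermal melting with solid
# subcooling. Properties are rounded, gallium-like placeholder values.
rho, cp, k = 6.0e3, 400.0, 30.0      # kg/m^3, J/(kg K), W/(m K)
Lf, Tm = 8.0e4, 29.8                 # latent heat [J/kg], melting point [C]
N, L = 50, 0.02                      # cells, slab depth [m]
dx = L / N
dt = 0.4 * rho * cp * dx**2 / k      # stable explicit time step
T_wall, T_init = 50.0, 20.0          # hot face; subcooled initial state

def T_of_H(H):
    """Temperature from volumetric enthalpy (H = 0 at the onset of melting)."""
    return np.where(H < 0.0, Tm + H / (rho * cp),
           np.where(H > rho * Lf, Tm + (H - rho * Lf) / (rho * cp), Tm))

H = np.full(N, rho * cp * (T_init - Tm))     # subcooled solid everywhere
for _ in range(2000):
    T = T_of_H(H)
    Tp = np.concatenate(([T_wall], T, [T[-1]]))  # Dirichlet left, adiabatic right
    H += dt * k * (Tp[2:] - 2.0 * Tp[1:-1] + Tp[:-2]) / dx**2

melt_fraction = float(np.clip(H / (rho * Lf), 0.0, 1.0).mean())
```

Setting `T_init = Tm` reproduces the no-subcooling case with the same loop, so the two melting histories can be compared directly, which is the comparison the study performs.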

Keywords: flat slab, heat storing, pure metal, solid subcooling

Procedia PDF Downloads 141
983 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS

Authors: Eunsu Jang, Kang Park

Abstract:

In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against an enemy’s attack is a very important step. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which would damage internal components or injure crews. The penetration equations are derived from penetration experiments, which require long times and great effort. Moreover, they usually hold only for the specific target material and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option for calculating penetration depth. However, the targets must be modeled and the input parameters selected carefully in order to obtain an accurate penetration depth. This paper performed a sensitivity analysis of ANSYS input parameters on the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters for ANSYS was performed and the RMS error with respect to the experimental data was calculated. The input parameters, which include mesh size, boundary condition, material properties, and target diameter, were tested and selected to minimize the error between the simulation results and the experimental data from published papers on the penetration equation. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted for optimized overall performance. The analysis found the following: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase.
2) As the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives a greater penetration depth than the one with both the side and rear surfaces fixed. Using these findings, the input parameters can be tuned to minimize the error between simulation and experiments. With the simulation tool ANSYS and carefully tuned input parameters, penetration analysis can be done on a computer without actual experiments. Penetration experiment data are usually hard to obtain for security reasons, and published papers provide them only for a limited set of target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating among the known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the early stages of the AGCV design process, during the modelling and simulation stage.
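The accuracy-scoring step, ranking candidate parameter sets by their RMS error against published penetration data, can be sketched as follows. The depth values here are illustrative placeholders, not data from the cited experiments:

```python
import math

# Hypothetical measured penetration depths (mm) for three test shots, and
# simulated depths for two candidate mesh sizes; all numbers are invented
# for illustration, not taken from the paper's experiments.
experiments = [12.0, 18.5, 25.0]
candidates = {
    0.9: [10.8, 17.0, 23.1],   # coarse mesh (0.9 mm)
    0.5: [11.9, 18.2, 24.6],   # fine mesh (0.5 mm)
}

def rms_error(sim, exp):
    """Root-mean-square error between simulated and measured depths."""
    return math.sqrt(sum((s - e) ** 2 for s, e in zip(sim, exp)) / len(exp))

# Pick the parameter value whose simulations best match the experiments.
best_mesh = min(candidates, key=lambda m: rms_error(candidates[m], experiments))
```

In practice the same scoring would be repeated over boundary conditions, material properties, and target diameters, with the calculation-time cost weighed against each candidate's error.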

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis

Procedia PDF Downloads 402
982 Transition towards a Market Society: Commodification of Public Health in India and Pakistan

Authors: Mayank Mishra

Abstract:

A market economy can be broadly defined as an economic system in which supply and demand regulate the economy, and in which decisions pertaining to production, consumption, allocation of resources, price, and competition are made by the collective actions of individuals or organisations with limited government intervention. A market society, on the other hand, is one in which, instead of the economy being embedded in social relations, social relations are embedded in the economy. A market economy becomes a market society when all of land, labour, and capital are commodified. This transition also affects people’s attitudes and values, as it begins to impact non-material aspects of life such as public education and public health. The inception of neoliberal policies in non-market domains altered the nature of social goods like public health and raised the following questions: What impact would the transition to a market society make on people’s access to public health? Is healthcare a commodity that can be subjected to a competitive marketplace? What kinds of private investments are being made in public health, and how do private investments alter the nature of a public good like healthcare? This research problem will employ an empirical-analytical approach that includes deductive reasoning, using the existing concepts of market economy and market society as a foundation for the analytical framework and the hypotheses to be examined. The research also intends to incorporate the naturalistic elements of qualitative methodology, which refers to studying real-world situations as they unfold. The research will analyse the existing literature on the subject. Concomitantly, the research intends to access primary literature, which includes reports from the World Bank, the World Health Organisation (WHO), and the relevant ministries of the respective countries.
This paper endeavours to highlight how the commodification of public health would lead to a perpetual increase in its inaccessibility, producing a stratification of healthcare services in which better services are available only to the extent of one’s ability to pay. Since the fundamental maxim of private investment is to generate profit, such trends would have a detrimental effect on society at large, perpetuating the gap between the haves and the have-nots. The increasing private investments, both domestic and foreign, in the public health sector are leading to increasing inaccessibility of public health services. Despite the increase in various public health schemes, the quality and impact of government public health services are in continuous decline.

Keywords: commodity, India and Pakistan, market society, public health

Procedia PDF Downloads 314
981 Harnessing Artificial Intelligence for Early Detection and Management of Infectious Disease Outbreaks

Authors: Amarachukwu B. Isiaka, Vivian N. Anakwenze, Chinyere C. Ezemba, Chiamaka R. Ilodinso, Chikodili G. Anaukwu, Chukwuebuka M. Ezeokoli, Ugonna H. Uzoka

Abstract:

Infectious diseases continue to pose significant threats to global public health, necessitating advanced and timely detection methods for effective outbreak management. This study explores the integration of artificial intelligence (AI) in the early detection and management of infectious disease outbreaks. Leveraging vast datasets from diverse sources, including electronic health records, social media, and environmental monitoring, AI-driven algorithms are employed to analyze patterns and anomalies indicative of potential outbreaks. Machine learning models, trained on historical data and continuously updated with real-time information, contribute to the identification of emerging threats. The implementation of AI extends beyond detection, encompassing predictive analytics for disease spread and severity assessment. Furthermore, the paper discusses the role of AI in predictive modeling, enabling public health officials to anticipate the spread of infectious diseases and allocate resources proactively. Machine learning algorithms can analyze historical data, climatic conditions, and human mobility patterns to predict potential hotspots and optimize intervention strategies. The study evaluates the current landscape of AI applications in infectious disease surveillance and proposes a comprehensive framework for their integration into existing public health infrastructures. The implementation of an AI-driven early detection system requires collaboration between public health agencies, healthcare providers, and technology experts. Ethical considerations, privacy protection, and data security are paramount in developing a framework that balances the benefits of AI with the protection of individual rights. The synergistic collaboration between AI technologies and traditional epidemiological methods is emphasized, highlighting the potential to enhance a nation's ability to detect, respond to, and manage infectious disease outbreaks in a proactive and data-driven manner. 
The findings of this research underscore the transformative impact of harnessing AI for early detection and management, offering a promising avenue for strengthening the resilience of public health systems in the face of evolving infectious disease challenges. This paper advocates for the integration of artificial intelligence into the existing public health infrastructure for early detection and management of infectious disease outbreaks. The proposed AI-driven system has the potential to revolutionize the way we approach infectious disease surveillance, providing a more proactive and effective response to safeguard public health.
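A minimal stand-in for the anomaly-detection step described above is a rolling z-score on daily case counts, flagging days that depart sharply from the recent baseline. A production system would fold in the richer data sources the paper lists (health records, social media, environmental monitoring); this sketch only illustrates the core idea, and all thresholds are assumptions:

```python
import statistics

def detect_anomalies(counts, window=7, z_thresh=3.0):
    """Flag days whose case count exceeds the rolling mean by z_thresh
    standard deviations of the preceding `window` days. Returns one
    boolean per day from index `window` onward."""
    flags = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mu = statistics.fmean(history)
        sd = statistics.pstdev(history) or 1.0  # guard against zero variance
        flags.append((counts[i] - mu) / sd > z_thresh)
    return flags
```

For example, a week of roughly stable counts followed by a sudden spike would yield a single flagged day, the kind of signal that would then trigger the predictive-modeling and resource-allocation steps discussed above.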

Keywords: artificial intelligence, early detection, disease surveillance, infectious diseases, outbreak management

Procedia PDF Downloads 68
980 An Application of Quantile Regression to Large-Scale Disaster Research

Authors: Katarzyna Wyka, Dana Sylvan, JoAnn Difede

Abstract:

Background and significance: Following a disaster, population-based screening programs are routinely established to assess the physical and psychological consequences of exposure. These data sets are highly skewed, as only a small percentage of trauma-exposed individuals develop health issues. Commonly used statistical methodology in post-disaster mental health generally involves population-averaged models. Such models aim to capture the overall response to the disaster and its aftermath; however, they may not be sensitive enough to accommodate population heterogeneity in symptomatology, such as post-traumatic stress or depressive symptoms. Methods: We use an archival longitudinal data set from the Weill-Cornell 9/11 Mental Health Screening Program established following the World Trade Center (WTC) terrorist attacks in New York in 2001. Participants are rescue and recovery workers who took part in the site cleanup and restoration (n=2960). The main outcome is post-traumatic stress disorder (PTSD) symptom severity assessed via clinician interviews (CAPS). For a detailed understanding of the response to the disaster and its aftermath, we adapt quantile regression methodology, with particular focus on predictors of extreme distress and of resilience to trauma. Results: The response variable was defined as the quantile of the CAPS score for each individual under two different scenarios, specifying the unconditional quantiles based on: 1) clinically meaningful CAPS cutoff values and 2) the CAPS distribution in the population. We present graphical summaries of the differential effects. For instance, we found that the WTC exposures, namely seeing bodies and feeling that one’s life was in danger during rescue/recovery work, were associated with very high PTSD symptoms. A similar effect was apparent in individuals with a prior psychiatric history. Differential effects were also present for age and education level.
Conclusion: We evaluate the utility of quantile regression in disaster research in contrast to the commonly used population-averaged models. We focused on assessing the distribution of risk factors for post-traumatic stress symptoms across quantiles. This innovative approach provides a comprehensive understanding of the relationship between dependent and independent variables and could be used for developing tailored training programs and response plans for different vulnerability groups.
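With a single binary exposure, quantile regression reduces to comparing group quantiles, which makes the key idea easy to demonstrate. The sketch below uses invented, synthetic score distributions (a heavier upper tail in the exposed group, loosely mimicking the pattern reported above), not the study's data:

```python
import random
import statistics

random.seed(7)
# Synthetic CAPS-like scores: the exposed group is shifted upward and has a
# heavier upper tail, so the estimated exposure effect grows across quantiles,
# which a population-averaged (mean) model would summarize as a single number.
unexposed = [random.gauss(30, 5) for _ in range(2000)]
exposed = [random.gauss(35, 12) for _ in range(2000)]

def quantile(data, p):
    """p-th quantile via the 99 cut points from statistics.quantiles."""
    return statistics.quantiles(data, n=100)[int(p * 100) - 1]

# Exposure "effect" at three quantiles: small near the 25th percentile,
# large in the upper tail where the most distressed individuals sit.
effects = {p: quantile(exposed, p) - quantile(unexposed, p)
           for p in (0.25, 0.50, 0.90)}
```

The growing effect across quantiles is exactly the kind of heterogeneity a single population-averaged coefficient would mask, which motivates the quantile approach advocated here.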

Keywords: disaster workers, post traumatic stress, PTSD, quantile regression

Procedia PDF Downloads 285
979 Investigations on Pyrolysis Model for Radiatively Dominant Diesel Pool Fire Using Fire Dynamic Simulator

Authors: Siva K. Bathina, Sudheer Siddapureddy

Abstract:

Pool fires are formed when a flammable liquid accidentally spills on the ground or water and ignites. A pool fire is a buoyancy-driven diffusion flame. Many pool fire accidents have occurred during the processing, handling, and storage of liquid fuels in the chemical and oil industries. Such accidents cause enormous damage to property as well as loss of life. Pool fires are complex in nature due to the strong interaction among combustion, heat and mass transfer, and pyrolysis at the fuel surface. Moreover, the experimental study of such large complex fires involves fire safety issues and practical difficulties. In the present work, large eddy simulations of such complex fire scenarios are performed using the fire dynamic simulator. A 1 m diesel pool fire is considered, with diesel chosen as the fuel most commonly involved in fire accidents. Fire simulations are performed with two different boundary conditions: in one, the fuel is in the liquid state and a pyrolysis model is invoked; in the other, the fuel is assumed to be initially in the vapor state and the mass loss rate is prescribed. A domain of size 11.2 m × 11.2 m × 7.28 m with a uniform structured grid is chosen for the numerical simulations. A grid sensitivity analysis is performed, and a non-dimensional grid size of 12, corresponding to an 8 cm grid, is adopted. Flame properties such as the mass burning rate, irradiance, and time-averaged axial flame temperature profile are predicted. The predicted steady-state mass burning rate is 40 g/s, within the uncertainty limits of the previously reported experimental data (39.4 g/s). The profile of irradiance with height at a distance from the fire is broadly in line with the experimental data, though the location of the maximum irradiance is shifted upward.
This may be due to the lack of sophisticated models for species transport along with combustion and radiation in the continuous zone. Furthermore, the axial temperatures are not predicted well, for either boundary condition, in any of the zones. The present study shows that the existing models are not sufficient for modeling blended fuels like diesel. The predictions depend strongly on the experimental values of the soot yield. Future experiments are necessary to generalize the soot yield for different fires.
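The non-dimensional grid size quoted above is the standard FDS resolution metric, the ratio of the characteristic fire diameter D* to the cell size. A quick consistency check, in which the roughly 1 MW heat release rate is our assumption chosen to be consistent with the reported ratio of about 12 at 8 cm:

```python
# Characteristic fire diameter D* = (Q / (rho * cp * T_inf * sqrt(g)))^(2/5)
# and the non-dimensional resolution D*/dx used in FDS grid sensitivity studies.
g = 9.81          # gravitational acceleration [m/s^2]
rho = 1.204       # ambient air density [kg/m^3]
cp = 1.005        # air specific heat [kJ/(kg K)]
T_inf = 293.0     # ambient temperature [K]
Q = 1000.0        # heat release rate [kW]; assumed value, not from the paper

D_star = (Q / (rho * cp * T_inf * g ** 0.5)) ** (2.0 / 5.0)
dx = 0.08                     # grid size [m]
resolution = D_star / dx      # non-dimensional grid size, ~12 here
```

Ratios in the 10–16 range are commonly treated as adequately resolved for plume quantities, which is consistent with the 8 cm grid adopted in the study.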

Keywords: burning rate, fire accidents, fire dynamic simulator, pyrolysis

Procedia PDF Downloads 201
978 Limiting Freedom of Expression to Fight Radicalization: The 'Silencing' of Terrorists Does Not Always Allow Rights to 'Speak Loudly'

Authors: Arianna Vedaschi

Abstract:

This paper addresses the relationship between freedom of expression, national security, and radicalization. Is it still possible to talk about a balance between the first two elements? Or, due to the intrusion of the third, is it more appropriate to consider freedom of expression as “permanently disfigured” by securitarian concerns? In this study, both the legislative and the judicial levels are taken into account, and the comparative method is employed in order to provide the reader with a complete framework of the relevant issues and a workable set of solutions. The analysis starts from the finding that the tension between free speech and national security has become a major issue in democratic countries, whose very essence is continuously endangered by the ever-changing and multi-faceted threat of international terrorism. In particular, a change in terrorist groups’ recruiting patterns, attracting more and more people by way of a cutting-edge communicative strategy that often employs sophisticated technology as a radicalization tool, has called on law-makers to modify their approach to dangerous speech. While traditional constitutional and criminal law used to punish speech only if it explicitly and directly incited the commission of a criminal action (the “cause-effect” model), so-called glorification offences, punishing mere ideological support for terrorism, often on the web, are becoming commonplace in the comparative scenario. Although this is a direct, and even somewhat understandable, consequence of the impending terrorist menace, this research shows many problematic issues connected to such a preventive approach. First, from a predominantly theoretical point of view, this trend negatively impacts the already blurred line between permissible and prohibited speech. Second, from a pragmatic point of view, such legislative tools are not always suited to keeping up with the ongoing developments of both terrorist groups and their use of technology.
In other words, there is a risk that such measures become outdated even before their application. Indeed, it seems hard to still talk about a proper balance: what was previously clearly perceived as a balancing of values (freedom of speech v. public security) has turned, in many cases, into a hierarchy with security at its apex. In light of these findings, this paper concludes that such a complex issue would perhaps be better dealt with through a combination of policies: not only criminalizing ‘terrorist speech,’ which should be relegated to a last resort tool, but acting at an even earlier stage, i.e., trying to prevent dangerous speech itself. This might be done by promoting social cohesion and the inclusion of minorities, so as to reduce the probability of people considering terrorist groups as a “viable option” to deal with the lack of identification within their social contexts.

Keywords: radicalization, free speech, international terrorism, national security

Procedia PDF Downloads 199