Search results for: parallel mechanism
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4147


367 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by phenomena such as photon emission, absorption, and scattering. Solving the governing integro-differential equation of radiative transfer is a complex process, more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the solution. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other, increases the computational cost. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The histories of some randomly sampled photon bundles were recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed for the QMC method over the standard PMC method. However, the ANN method resulted in greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to further reduce the computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the problem environment can be fully represented to the ANN model. Better results can be achieved in this unexplored domain.
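The contrast between pseudo-random and low-discrepancy sampling described above can be sketched in a few lines. The integrand below is a stand-in chosen for illustration only (the study's actual target is the radiative transfer problem, which is far more involved), assuming SciPy's `scipy.stats.qmc` module is available:

```python
import numpy as np
from scipy.stats import qmc

def f(x):
    # Stand-in integrand with a known integral; not the radiative
    # transfer equation from the abstract.
    return np.exp(x)

exact = np.e - 1.0                      # closed form of the 0..1 integral of exp(x)

rng = np.random.default_rng(0)
x_mc = rng.random(1024)                 # pseudo-random (plain MC) samples
x_qmc = qmc.Sobol(d=1, seed=0).random_base2(m=10).ravel()  # 1024 Sobol points

err_mc = abs(f(x_mc).mean() - exact)
err_qmc = abs(f(x_qmc).mean() - exact)
print(err_mc, err_qmc)                  # the Sobol estimate is typically far closer
```

On smooth low-dimensional integrands like this, the Sobol error shrinks roughly like O(log n / n) versus O(1/√n) for plain Monte Carlo, which is the variance-reduction effect the abstract reports.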

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 190
366 A Systemic Review and Comparison of Non-Isolated Bi-Directional Converters

Authors: Rahil Bahrami, Kaveh Ashenayi

Abstract:

This paper presents a systematic classification and comparative analysis of non-isolated bi-directional DC-DC converters. The increasing demand for efficient energy conversion in diverse applications has spurred the development of various converter topologies. In this study, we categorize bi-directional converters into three distinct classes: inverting, non-inverting, and interleaved. Each category is characterized by its unique operational characteristics and benefits. Furthermore, a practical comparison is conducted by evaluating simulation results for each bi-directional converter. BDCs can be classified into isolated and non-isolated topologies. Non-isolated converters share a common ground between input and output, making them suitable for applications with minimal voltage change. They are easy to integrate, lightweight, and cost-effective, but have limitations such as limited voltage gain, switching losses, and no protection against high voltages. Isolated converters use transformers to separate input and output, offering safety benefits, high voltage gain, and noise reduction. They are larger and more costly but are essential for automotive designs where safety is crucial. This paper focuses on non-isolated systems. It discusses the classification of non-isolated bi-directional converters based on several criteria. Common factors used for classification include topology, voltage conversion, control strategy, power capacity, voltage range, and application. These factors serve as a foundation for categorizing converters, although the specific scheme might vary depending on contextual, application-, or system-specific requirements. The paper presents a three-category classification for non-isolated bi-directional DC-DC converters: inverting, non-inverting, and interleaved.
In the inverting category, converters produce an output voltage with reversed polarity compared to the input voltage, achieved through specific circuit configurations and control strategies. This is valuable in applications such as motor control and grid-tied solar systems. The non-inverting category consists of converters maintaining the same voltage polarity, useful in scenarios like battery equalization. Lastly, the interleaved category employs parallel converter stages to enhance power delivery and reduce current ripple. This classification framework enhances the comprehension and analysis of non-isolated bi-directional DC-DC converters. The findings contribute to a deeper understanding of the trade-offs and merits associated with different converter types. As a result, this work aids researchers, practitioners, and engineers in selecting appropriate bi-directional converter solutions for specific energy-conversion requirements. The proposed classification framework and experimental assessment collectively enhance the comprehension of non-isolated bi-directional DC-DC converters, fostering advancements in efficient power management and utilization. The simulation process uses PSIM to model and simulate non-isolated bi-directional converters from both the inverting and non-inverting categories. The aim is to conduct a comprehensive comparative analysis of these converters, considering key performance indicators such as rise time, efficiency, ripple factor, and maximum error. This systematic evaluation provides valuable insights into the dynamic response, energy efficiency, output stability, and overall precision of the converters. The results of this comparison facilitate informed decision-making and potential optimizations, ensuring that the chosen converter configuration aligns effectively with the designated operational criteria and performance goals.
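As context for the inverting/non-inverting distinction, the ideal steady-state conversion ratios in continuous conduction mode are the standard textbook relations below (lossless-converter assumptions, not results from the paper itself):

```python
# Ideal (lossless, continuous-conduction-mode) steady-state conversion
# ratios for the two polarity classes, as functions of duty cycle D.

def inverting_ratio(d: float) -> float:
    """Inverting buck-boost: Vout/Vin = -D / (1 - D)."""
    return -d / (1.0 - d)

def non_inverting_ratio(d: float) -> float:
    """Non-inverting (cascaded) buck-boost: Vout/Vin = D / (1 - D)."""
    return d / (1.0 - d)

for d in (0.25, 0.5, 0.75):
    # Same magnitude, opposite sign: the polarity class is visible
    # directly in the sign of the conversion ratio.
    print(d, inverting_ratio(d), non_inverting_ratio(d))
```

For D = 0.5 both topologies pass the input voltage magnitude through unchanged; above D = 0.5 they boost, below it they buck, with the inverting class reversing polarity throughout.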

Keywords: bi-directional, DC-DC converter, non-isolated, energy conversion

Procedia PDF Downloads 60
365 Formation of Human Resources in the Light of Sustainable Development and the Achievement of Full Employment

Authors: Kaddour Fellague Mohammed

Abstract:

In recent years, the world has seen significant developments that have affected various aspects of life and influenced different types of institutions. The result is a new world of globalization, dominated by the scientific revolution and tremendous technological developments, which have contributed to the re-formation of human resources in contemporary organizations, created new regulatory patterns, and at the same time strongly raised new values and ideas. Organizations have become more flexible and faster to respond to consumers and environmental conditions; they have overcome the constraints of time and place through communication, human interaction, and the use of advanced information technology, adopted as the main mechanism for running their operations. They focus on performance and rely on strategic thinking and planning in order to achieve their strategic goals with high degrees of superiority and excellence. This new reality has created an increasing need for a new type of human resource: one that aims at renewal, aspires to be a strategic player in managing the organization and drafting its various strategies, thinks globally and acts locally so as to accommodate local variables in international markets, and is able to work across different cultures. 
Human resources management is among the most important management functions because it focuses on the human element, which is considered the most valuable resource of the organization and the most influential in productivity. The management and development of human resources is a cornerstone in the majority of organizations: it aims to strengthen organizational capacity and enable companies to attract and develop the necessary competencies so that they can keep up with current and future challenges. Human resources can contribute strongly to achieving the organization's objectives and profit, and beyond that to the creation of new jobs that alleviate unemployment and achieve full employment. In short, human resources management means the optimal use of the available and expected human element, since the efficiency, capabilities, experience, and enthusiasm of this human element determine the organization's efficiency and success in reaching its goals. Management scientists have therefore developed principles and foundations that help the organization make the most of each individual through human resources management. These foundations begin with planning and selection and continue through training, incentives, and evaluation; they are not separate from each other but are integrated as a system, in order to reach an efficiently functioning human resources management, and an efficiently functioning organization as a whole, in the context of sustainable development.

Keywords: configuration, training, development, human resources, operating

Procedia PDF Downloads 405
364 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs

Authors: M. De Filippo, J. S. Kuang

Abstract:

In the construction industry, reinforced concrete (RC) slabs are fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early years of the last century, the yield-line method was proposed as an attempt to solve this problem. Simple geometric problems could easily be solved using traditional hand analyses that incorporate plasticity theories. Nowadays, advanced finite element (FE) analyses have found their way into applications in many engineering fields due to the wide range of geometries to which they can be applied. In such cases, the choice of an elastic or a plastic constitutive model completely changes the approach of the analysis itself. Elastic methods are popular due to their easy applicability in automated computations. However, elastic analyses are limited because they do not consider any aspect of material behaviour beyond the yield limit, which turns out to be an essential aspect of RC structural performance. Extending the analysis to non-linear models of plastic behaviour gives very reliable results; per contra, this type of analysis is computationally quite expensive, i.e., not well suited to solving everyday engineering problems. In past years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper proposes a numerical procedure through which a pseudo-lower-bound solution, not violating the yield criterion, is achieved. The advantages of moment distribution are taken into account; hence the increase in strength provided by plastic behaviour is considered. The lower-bound solution is improved by detecting over-yielded moments, which are used to artificially govern the moment distribution among the remaining non-yielded elements. The proposed technique obeys Nielsen's yield criterion. 
The outcome of this analysis provides a simple, accurate, and non-time-consuming tool for predicting the lower-bound solution of the collapse load of RC slabs. By using this method, structural engineers can find the fracture patterns and the ultimate load-bearing capacity. The collapse-triggering mechanism is found by detecting yield-lines. An application to the simple case of a square clamped slab is shown, and a good match is found with the exact values of the collapse load.
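Detecting "over-yielded moments" amounts to checking each element's moment field against Nielsen's yield criterion, (mpx − mx)(mpy − my) ≥ mxy², together with its hogging counterpart. The sketch below is a minimal illustration with assumed capacities and sign conventions, not the authors' implementation:

```python
def satisfies_nielsen(mx, my, mxy, mpx, mpy, mnx, mny):
    """True if the moment field (mx, my, mxy) lies inside the yield surface.

    mpx, mpy: sagging (positive) moment capacities per unit width;
    mnx, mny: hogging (negative) capacities, entered as positive numbers.
    """
    sagging_ok = mx <= mpx and my <= mpy and (mpx - mx) * (mpy - my) >= mxy**2
    hogging_ok = mx >= -mnx and my >= -mny and (mnx + mx) * (mny + my) >= mxy**2
    return sagging_ok and hogging_ok

print(satisfies_nielsen(10, 10, 5, 20, 20, 20, 20))    # inside the surface
print(satisfies_nielsen(10, 10, 15, 20, 20, 20, 20))   # twisting moment violates it
```

Elements failing this check are the "over-yielded" ones whose excess moment the proposed procedure redistributes to neighbouring non-yielded elements.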

Keywords: computational mechanics, lower bound method, reinforced concrete slabs, yield-line

Procedia PDF Downloads 150
363 Transitioning Towards a Circular Economy in the Textile Industry: Approaches to Address Environmental Challenges

Authors: Atefeh Salehipoor

Abstract:

Textiles play a vital role in human life, particularly in the form of clothing. However, the alarming rate at which textiles end up in landfills presents a significant environmental risk. With approximately one garbage truck per second being filled with discarded textiles, urgent measures are required to mitigate this trend. Governments and responsible organizations are calling upon various stakeholders to shift from a linear economy to a circular economy model in the textile industry. This article highlights several key approaches that can be undertaken to address this pressing issue. These approaches include the creation of renewable raw material sources, rethinking production processes, maximizing the use and reuse of textile products, implementing reproduction and recycling strategies, exploring redistribution to new markets, and finding innovative means to extend the lifespan of textiles. However, the rapid accumulation of textiles in landfills poses a significant threat to the environment. This article explores the urgent need for the textile industry to transition from a linear economy model to a circular economy model. The linear model, characterized by the creation, use, and disposal of textiles, is unsustainable in the long term. By adopting a circular economy approach, the industry can minimize waste, reduce environmental impact, and promote sustainable practices. This article outlines key approaches that can be undertaken to drive this transition. Approaches to Address Environmental Challenges: 1. Creation of Renewable Raw Materials Sources: Exploring and promoting the use of renewable and sustainable raw materials, such as organic cotton, hemp, and recycled fibers, can significantly reduce the environmental footprint of textile production. 2. 
Rethinking Production Processes: Implementing cleaner production techniques, optimizing resource utilization, and minimizing waste generation are crucial steps in reducing the environmental impact of textile manufacturing. 3. Maximizing Use and Reuse of Textile Products: Encouraging consumers to prolong the lifespan of textile products through proper care, maintenance, and repair services can reduce the frequency of disposal and promote a culture of sustainability. 4. Reproduction and Recycling Strategies: Investing in innovative technologies and infrastructure to enable efficient reproduction and recycling of textiles can close the loop and minimize waste generation. 5. Redistribution of Textiles to New Markets: Exploring opportunities to redistribute textiles to new and parallel markets, such as resale platforms, can extend their lifecycle and prevent premature disposal. 6. Innovating Means to Extend Textile Lifespan: Encouraging design practices that prioritize durability, versatility, and timeless aesthetics can contribute to prolonging the lifespan of textiles. Conclusion: The textile industry must urgently transition from a linear economy to a circular economy model to mitigate the adverse environmental impact caused by textile waste. By implementing the outlined approaches, such as sourcing renewable raw materials, rethinking production processes, promoting reuse and recycling, exploring new markets, and extending the lifespan of textiles, stakeholders can work together to create a more sustainable and environmentally friendly textile industry. These measures require collective action and collaboration between governments, organizations, manufacturers, and consumers to drive positive change and safeguard the planet for future generations.

Keywords: textiles, circular economy, environmental challenges, renewable raw materials, production processes, reuse, recycling, redistribution, textile lifespan extension

Procedia PDF Downloads 53
362 Effect of Fast and Slow Tempo Music on Muscle Endurance Time

Authors: Rohit Kamal, Devaki Perumal Rajaram, Rajam Krishna, Sai Kumar Pindagiri, Silas Danielraj

Abstract:

Introduction: According to the WHO Global Health Observatory, at least 2.8 million people die each year as a result of obesity and overweight, mainly because of their adverse metabolic effects on blood pressure, lipid profile (especially cholesterol), and insulin resistance. To achieve optimum health, the WHO has set the BMI range at 18.5 to 24.9 kg/m². Due to the modernization of lifestyles, physical exercise in the form of work is no longer a given, so an effective way to burn calories and achieve an optimum BMI is the need of the hour. Studies have shown that exercising for more than 60 minutes per day helps to maintain weight, and that to reduce weight, exercise should be done for 90 minutes a day. Moderate exercise for about 30 minutes is essential for burning calories. People with low endurance fail to perform even low-intensity exercise for a minimal time. Hence, it is necessary to find an effective method to increase endurance time. Methodology: This study was approved by the Institutional Ethical Committee of our college. After obtaining written informed consent, 25 apparently healthy males in the age group of 18-20 years were selected. Subjects with muscular disorders, hypertension, or diabetes, as well as smokers, alcoholics, and those taking drugs affecting muscle strength, were excluded. To determine the endurance time, maximum voluntary contraction (MVC) was measured by asking the participants to squeeze a hand-grip dynamometer as hard as possible and hold it for 3 seconds. This procedure was repeated thrice, and the average of the three readings was taken as the maximum voluntary contraction. 
The participant was then asked to squeeze the dynamometer and hold it at 70% of the maximum voluntary contraction while listening to fast-tempo music, which was played for about ten minutes; the participant was then asked to relax for ten minutes and was made to hold the hand-grip dynamometer at 70% of the maximum voluntary contraction while listening to slow-tempo music. To avoid the bias of habituation to the procedure, the order of the fast- and slow-tempo music was varied. The time for which participants could hold at 70% of MVC was determined using a stopwatch and taken as the endurance time. Results: The mean values of the endurance time during fast- and slow-tempo music were compared across all subjects. The mean MVC was 34.92 N. The mean endurance time was 21.8 (16.3) seconds with slow-tempo music, which was longer than with fast-tempo music, for which the mean endurance time was 20.6 (11.7) seconds. The preference was greater for slow-tempo music than for fast-tempo music. Conclusion: Music played during exercise, by some as yet unknown mechanism, helps to increase endurance time by alleviating the symptoms of lactic acid accumulation.
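Since every subject was tested under both tempos, the natural analysis of the two mean endurance times is a paired (within-subject) comparison. The sketch below uses synthetic data generated to resemble the reported means and standard deviations (the study's raw data are not given), and one plausible test, SciPy's `ttest_rel`:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
slow = rng.normal(21.8, 16.3, 25).clip(min=1.0)   # endurance (s), slow tempo
fast = slow - rng.normal(1.2, 3.0, 25)            # same 25 subjects, fast tempo

t_stat, p_value = stats.ttest_rel(slow, fast)     # paired comparison
print(t_stat, p_value)
```

A paired test uses each subject as their own control, which matters here because between-subject variability (SDs of 16.3 and 11.7 s) dwarfs the ~1.2 s mean difference between conditions.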

Keywords: endurance time, fast tempo music, maximum voluntary contraction, slow tempo music

Procedia PDF Downloads 279
361 Micromechanism of Ionization Effects on Metal/Gas Mixing Instability at Extreme Shock Compressing Conditions

Authors: Shenghong Huang, Weirong Wang, Xisheng Luo, Xinzhu Li, Xinwen Zhao

Abstract:

Understanding material mixing induced by the Richtmyer-Meshkov instability (RMI) at extreme shock-compression conditions (high-energy-density environments: P >> 100 GPa, T >> 10000 K) is of great significance in engineering and science, for example in inertial confinement fusion (ICF) and supersonic combustion. Turbulent mixing induced by RMI is a complex fluid-dynamics process closely related to the hydrodynamic conditions, thermodynamic states, material physical properties such as compressibility, strength, surface tension, and viscosity, as well as the initial perturbation on the interface. For phenomena under ordinary thermodynamic conditions (low-energy-density environments), many investigations have been conducted and much progress has been reported, while for mixing under extreme thermodynamic conditions, the evolution may be very different due to ionization as well as large differences in material physical properties; this regime is full of open scientific problems and academic interest. In this investigation, a first-principles-based molecular dynamics method is applied to study metal lithium and hydrogen gas (Li-H₂) interface mixing in the micro/meso-scale regime at shock-compression loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) Unlike the low-speed cases, in high-speed shock compression (>9 km/s) a strong acceleration of the metal/gas interface after strong shock compression is observed numerically, leading to a strong phase inversion and a spike growing at a relatively large linear rate. More specifically, the spike growth rate is observed to increase with shock loading speed, presenting a large discrepancy with available empirical RMI models. 2) Ionization occurs in the shock-front zone in the high-speed loading cases (>9 km/s). 
An additional local electric field, induced by the inhomogeneous diffusion of electrons and nuclei behind the shock front, is observed near the metal/gas interface, leading to a large acceleration of the nuclei in this zone. 3) In conclusion, the work of this additional electric field contributes a mechanism to RMI in the micro/meso-scale regime at extreme shock-compression conditions: a Rayleigh-Taylor instability (RTI) is induced by the additional electric field during the RMI mixing process, and thus a larger linear growth rate of the interface spike results.
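For orientation on the "empirical RMI models" against which the molecular dynamics results are said to deviate, the classic impulsive estimate for the linear-stage growth rate is Richtmyer's model, da/dt = k · A · Δu · a₀, with wavenumber k, post-shock Atwood number A, interface velocity jump Δu, and post-shock amplitude a₀. The numbers below are purely illustrative, not taken from the paper:

```python
import math

def richtmyer_growth_rate(wavelength, atwood, delta_u, a0):
    """Impulsive-model linear growth rate da/dt = k * A * delta_u * a0 (m/s)."""
    k = 2.0 * math.pi / wavelength   # perturbation wavenumber
    return k * atwood * delta_u * a0

# Illustrative micro-scale perturbation under a ~9 km/s velocity jump.
rate = richtmyer_growth_rate(wavelength=1e-6, atwood=0.9,
                             delta_u=9e3, a0=5e-8)
print(rate)   # spike/bubble growth rate in m/s during the linear stage
```

This classical estimate contains no field-driven term, which is why an additional electric-field acceleration, as reported above, shows up as a discrepancy with such models.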

Keywords: ionization, micro/meso scale, material mixing, shock

Procedia PDF Downloads 203
360 Air Breakdown Voltage Prediction in Post-arcing Conditions for Compact Circuit Breakers

Authors: Jing Nan

Abstract:

The air breakdown voltage in compact circuit breakers is a critical factor in the design and reliability of electrical distribution systems. This voltage determines the threshold at which the air insulation between conductors will fail or 'break down,' leading to an arc. The phenomenon is highly sensitive to the conditions within the breaker, such as the temperature and the distance between electrodes. Typically, air breakdown voltage models have been reliable for predicting failure under standard operational temperatures. However, in post-arcing conditions, where temperatures can soar above 2000 K, these models face challenges due to the complex physics of ionization and electron behaviour at such high-energy states. Building upon the foundational understanding that the breakdown mechanism is initiated by free electrons and propelled by electric fields, which lead to ionization and, potentially, to avalanche or streamer formation, we acknowledge the complexity introduced by high-temperature environments. Recognizing the limitations of existing experimental data, a notable research gap exists in the accurate prediction of breakdown voltage at the elevated temperatures typically observed post-arcing, where temperatures exceed 2000 K. To bridge this knowledge gap, we present a method that integrates gap distance and high-temperature effects into the air breakdown voltage assessment. The proposed model is grounded in the physics of ionization, accounting for the dynamic behaviour of free electrons which, under intense electric fields at elevated temperatures, lead to thermal ionization and potentially reach the threshold for streamer formation given by Meek's criterion. Employing the Saha equation, our model calculates equilibrium electron densities, adapting to the atmospheric pressure and the high-temperature regions indicative of post-arc conditions. 
Our model is rigorously validated against established experimental data, demonstrating substantial improvements in predicting air breakdown voltage in the high-temperature regime. This work significantly improves the predictive power for air breakdown voltage under conditions that closely mimic operational stressors in compact circuit breakers. Looking ahead, the proposed methods are poised for further exploration in alternative insulating media, like SF6, enhancing the model's utility for a broader range of insulation technologies and contributing to the future of high-temperature electrical insulation research.
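The Saha-equation step described above can be sketched as follows for a single ionization stage under quasi-neutrality (n_e = n_i). The species (first ionization of atomic nitrogen, 14.5 eV, statistical-weight ratio taken as 1) and the total number density are illustrative assumptions, not the paper's actual parameters:

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
M_E = 9.1093837e-31    # electron mass, kg
H   = 6.62607015e-34   # Planck constant, J s
EV  = 1.602176634e-19  # electron volt, J

def saha_electron_density(T, n_total, chi_eV, g_ratio=1.0):
    """Solve n_e^2 / (n_total - n_e) = S(T) for n_e (m^-3), n_e = n_i."""
    S = (2.0 * g_ratio
         * (2.0 * math.pi * M_E * K_B * T / H**2) ** 1.5
         * math.exp(-chi_eV * EV / (K_B * T)))
    # Positive root of n_e^2 + S*n_e - S*n_total = 0:
    return (-S + math.sqrt(S * S + 4.0 * S * n_total)) / 2.0

n_tot = 2.5e25   # m^-3, roughly atmospheric number density
fractions = {T: saha_electron_density(T, n_tot, chi_eV=14.5) / n_tot
             for T in (2000.0, 5000.0, 10000.0)}
for T, frac in fractions.items():
    print(T, frac)   # ionization fraction rises steeply with temperature
```

The steep growth of the equilibrium electron density above 2000 K is precisely what degrades the dielectric strength of the hot gas and motivates coupling the Saha step into the breakdown-voltage model.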

Keywords: air breakdown voltage, high-temperature insulation, compact circuit breakers, electrical discharge, Saha equation

Procedia PDF Downloads 55
359 Effect of Carbide Precipitates in Tool Steel on Material Transfer: A Molecular Dynamics Study

Authors: Ahmed Tamer AlMotasem, Jens Bergström, Anders Gåård, Pavel Krakhmalev, Thijs Jan Holleboom

Abstract:

In sheet metal forming processes, the accumulation and transfer of sheet material to tool surfaces, often referred to as galling, is the major cause of tool failure. The initiation of galling is assumed to occur due to local adhesive wear between two surfaces. Therefore, reducing adhesion between the tool and the work sheet has great potential to improve a tool material's galling resistance. Experimental observations and theoretical studies show that the presence of primary micro-sized carbides and/or nitrides in alloyed steels may significantly improve galling resistance. Generally, decreased adhesion between the ceramic precipitates and the sheet-material counter-surface is cited as the main reason for the latter observations. On the other hand, adhesion processes occur at the atomic scale, and hence a fundamental understanding of galling can be obtained via atomic-scale simulations. In the present study, molecular dynamics simulations are used, utilizing a second-nearest-neighbour embedded-atom-method potential, to investigate the influence of nano-sized cementite precipitates embedded in the tool. The main aim of the simulations is to gain new fundamental knowledge of galling initiation mechanisms. Two tool/work-piece configurations, iron/iron and iron-cementite/iron, are studied under dry sliding conditions. We find that the average frictional force decreases whereas the normal force increases for the iron-cementite/iron system in comparison with the iron/iron configuration. Moreover, the average friction coefficient between the tool and work-piece decreases by about 10% in the iron-cementite/iron case. The increase of the normal force in the iron-cementite/iron system may be attributed to the high stiffness of cementite compared to bcc iron. 
To qualitatively explain the effect of cementite on adhesion, the adhesion force between self-mated iron/iron and cementite/iron surfaces was determined, and we found that the cementite/iron pair exhibits a lower adhesive force than the iron/iron pair. The variation of the adhesion force with temperature was investigated up to 600 K, and we found that the adhesive force generally decreases with increasing temperature. Structural analyses show that plastic deformation is the main deformation mechanism of the work-piece, accompanied by dislocation generation.
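Reporting an "average friction coefficient" from a sliding MD run implies a post-processing step of time-averaging the lateral and normal contact forces over the steady-sliding window. A sketch with synthetic force traces (standing in for the actual MD output, which is not given in the abstract) is:

```python
import numpy as np

def friction_coefficient(f_lateral, f_normal):
    """mu = <|F_t|> / <F_n>, averaged over the steady-sliding window."""
    return np.mean(np.abs(f_lateral)) / np.mean(f_normal)

# Synthetic placeholder traces: 1000 samples of fluctuating forces
# (arbitrary units), not data from the simulations described above.
rng = np.random.default_rng(1)
f_n = 100.0 + rng.normal(0.0, 5.0, 1000)   # normal force
f_t = 30.0 + rng.normal(0.0, 5.0, 1000)    # lateral (friction) force

mu = friction_coefficient(f_t, f_n)
print(mu)
```

With this definition, the reported ~10% drop for iron-cementite/iron follows from the lateral force decreasing while the normal force simultaneously increases.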

Keywords: adhesion, cementite, galling, molecular dynamics

Procedia PDF Downloads 279
358 Differentiation of Drug Stereoisomers by Their Stereostructure-Selective Membrane Interactions as One of Pharmacological Mechanisms

Authors: Maki Mizogami, Hironori Tsuchiya, Yoshiroh Hayabuchi, Kenji Shigemi

Abstract:

Since drugs exhibit significant structure-dependent differences in activity and toxicity, their differentiation based on the mechanism of action should have implications for comparative drug efficacy and safety. We aimed to differentiate drug stereoisomers by the stereostructure-selective membrane interactions underlying their pharmacological and toxicological effects. Biomimetic lipid bilayer membranes were prepared with phospholipids and sterols (either cholesterol or epicholesterol) to mimic the lipid compositions of neuronal and cardiomyocyte membranes and to provide these membranes with chirality. The membrane preparations were treated with different classes of stereoisomers at clinically and pharmacologically relevant concentrations (25-200 μM), followed by measuring fluorescence polarization to determine the membrane interactivity of the drugs, i.e., their ability to change the physicochemical properties of the membranes. All the tested drugs acted on lipid bilayers to increase or decrease membrane fluidity. Drug stereoisomers could not be differentiated when interacting with membranes consisting of phospholipids alone. However, they interacted stereostructure-selectively with neuro-mimetic and cardio-mimetic membranes containing 40 mol% cholesterol ((3β)-cholest-5-en-3-ol), showing the relative potencies: local anesthetic R(+)-bupivacaine > rac-bupivacaine > S(‒)-bupivacaine, α2-adrenergic agonist D-medetomidine > rac-medetomidine > L-medetomidine, β-adrenergic antagonist R(+)-propranolol > rac-propranolol > S(–)-propranolol, NMDA receptor antagonist S(+)-ketamine > rac-ketamine, analgesic monoterpenoid (+)-menthol > (‒)-menthol, non-steroidal anti-inflammatory S(+)-ibuprofen > rac-ibuprofen > R(‒)-ibuprofen, and bioactive flavonoid (+)-epicatechin > (‒)-epicatechin. All of these rank orders of membrane interactivity correlated with those of the beneficial and adverse effects of the tested stereoisomers. 
In contrast, membranes prepared with epicholesterol ((3α)-cholest-5-en-3-ol), an epimeric form of cholesterol, reversed the rank order of membrane interactivity to S(‒)-enantiomeric > racemic > R(+)-enantiomeric bupivacaine, L-enantiomeric > racemic > D-enantiomeric medetomidine, S(–)-enantiomeric > racemic > R(+)-enantiomeric propranolol, racemic > S(+)-enantiomeric ketamine, (‒)-enantiomeric > (+)-enantiomeric menthol, R(‒)-enantiomeric > racemic > S(+)-enantiomeric ibuprofen, and (‒)-enantiomeric > (+)-enantiomeric epicatechin. The opposite sterol configuration thus allows drug molecules to interact with chiral sterol membranes enantiomer-selectively. From these comparative results, it is speculated that the 3β-hydroxyl group of cholesterol is responsible for the enantioselective interactions of drugs. In conclusion, the differentiation of drug stereoisomers by their stereostructure-selective membrane interactions would be useful for designing and predicting drugs with higher activity and/or lower toxicity.

Keywords: chiral membrane, differentiation, drug stereoisomer, enantioselective membrane interaction

Procedia PDF Downloads 198
357 Post Liberal Perspective on Minorities Visibility in Contemporary Visual Culture: The Case of Mizrahi Jews

Authors: Merav Alush Levron, Sivan Rajuan Shtang

Abstract:

From as early as their emergence in Europe and the US, the postmodern and post-colonial paradigms have formed the backbone of the field of visual culture studies. The self-representation project of political minorities is studied, described and explained within the premises and perspectives drawn from these paradigms, addressing the key issue they raised: modernism’s crisis of representation. The struggle for self-representation, agency and multicultural visibility sought to challenge the liberal pretense of universality and equality, hitting at its different blind spots on issues such as class, gender, race, sex, and nationality. This struggle yielded subversive identity and hybrid performances, including reclaiming, mimicry and masquerading. These performances sought to defy the uniform, universal self which forms the basis for the liberal, rational, enlightened subject. This research argues that this politics of representation is itself confined within liberal thought. Alongside post-colonialism and multiculturalism’s contribution to undermining oppressive structures of power, generating diversity in cultural visibility, and exposing the failure of liberal colorblindness, this subversion is constituted in the visual field by way of confrontation, flying in the face of the universal law and relying on its ongoing comparison and attribution to this law. Relying on Deleuze and Guattari, this research sets out to draw theoretical and empirical attention to an alternative, post-liberal occurrence which has been taking place in the visual field in parallel to the contra-hegemonic phase and as a product of political reality in the aftermath of the crisis of representation. It is no longer a counter-representation; rather, it is a motion of organic minor desire, progressing in the form of flows and generating what Deleuze and Guattari termed deterritorialization of social structures.
This discussion focuses on current post-liberal performances of ‘Mizrahim’ (Jewish Israelis of Arab and Muslim extraction) in the visual field in Israel. In television, video art and photography, these performances challenge the issue of representation and generate concrete peripheral Mizrahiness, realized in the visual organization of the photographic frame. Mizrahiness then transforms from ‘confrontational’ representation into a ‘presence’, flooding the visual sphere in our plain sight, in a process of ‘becoming’. The Mizrahi desire is exerted on the planes of sound, spoken language, the body and the space where they appear. It removes from these planes the coding and stratification engendered by European dominance and rational, liberal enlightenment. This stratification, adhering to the hegemonic surface, is flooded not by way of resisting false consciousness or employing hybridity, but by way of the Mizrahi identity’s own productive, material, immanent yearning. The Mizrahi desire reverberates with Mizrahi peripheral ‘worlds of meaning’, where post-colonial interpretation almost invariably identifies a product of internalized oppression, and a recurrence thereof, rather than a source in itself - an ‘offshoot, never a wellspring’, as Nissim Mizrachi clarifies in his recent pioneering work. The peripheral Mizrahi performance ‘unhooks itself’, in Deleuze and Guattari’s words, from the point of subjectification and interpretation and does not correspond with the partialness, absence, and split that mark post-colonial identities.

Keywords: desire, minority, Mizrahi Jews, post-colonialism, post-liberalism, visibility, Deleuze and Guattari

Procedia PDF Downloads 300
356 Prenatal Use of Serotonin Reuptake Inhibitors (SRIs) and Congenital Heart Anomalies (CHA): An Exploratory Pharmacogenetics Study

Authors: Aizati N. A. Daud, Jorieke E. H. Bergman, Wilhelmina S. Kerstjens-Frederikse, Pieter Van Der Vlies, Eelko Hak, Rolf M. F. Berger, Henk Groen, Bob Wilffert

Abstract:

Prenatal use of SRIs was previously associated with Congenital Heart Anomalies (CHA). The aim of this study was to explore whether pharmacogenetics plays a role in this teratogenicity using a gene-environment interaction study. A total of 33 case-mother dyads and 2 mothers only (children deceased) registered in EUROCAT Northern Netherlands were included in a case-only study. Five case-mother dyads and two mothers only were exposed to SRIs (paroxetine=3, fluoxetine=2, venlafaxine=1, paroxetine and venlafaxine=1) in the first trimester of pregnancy. The remaining 28 case-mother dyads were not exposed to SRIs. Ten genes that encode the enzymes or proteins important in determining fetal exposure to SRIs or their mechanism of action were selected: CYPs (CYP1A2, CYP2C9, CYP2C19, CYP2D6), ABCB1 (placental P-glycoprotein), SLC6A4 (serotonin transporter) and serotonin receptor genes (HTR1A, HTR1B, HTR2A, and HTR3B). All included subjects were genotyped for 58 genetic variations in these ten genes. Logistic regression analyses were performed to determine the interaction odds ratio (OR) between genetic variations and SRI exposure on the risk of CHA. Due to the low phenotype frequencies of CYP450 poor metabolizers among exposed cases, the OR could not be calculated. For ABCB1, there was no indication of changes in the risk of CHA with any of the ABCB1 SNPs in the children or their mothers. Several genetic variations of the serotonin transporter and receptors (SLC6A4 5-HTTLPR and 5-HTTVNTR, HTR1A rs1364043, HTR1B rs6296 & rs6298, HTR3B rs1176744) were associated with an increased risk of CHA, but the sample size was too small to reach statistical significance. For SLC6A4 genetic variations, the mean genetic score of the exposed case-mothers tended to be higher than that of the unexposed mothers (2.5 ± 0.8 and 1.88 ± 0.7, respectively; p=0.061).
For SNPs of the serotonin receptors, the mean genetic score of the exposed cases (children) tended to be higher than that of the unexposed cases (3.4 ± 2.2 and 1.9 ± 1.6, respectively; p=0.065). This study might be among the first to explore the potential gene-environment interaction between pharmacogenetic determinants and SRI use on the risk of CHA. With small sample sizes, it was not possible to find a significant interaction. However, there were indications of a role for serotonin receptor polymorphisms in the fetal risk of CHA among fetuses exposed to SRIs, which warrants further investigation.
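In a case-only design like the one described above, the gene-environment interaction odds ratio reduces to the cross-product ratio of genotype versus exposure among cases, valid under the assumption that genotype and exposure are independent in the source population. A minimal sketch; the counts below are hypothetical placeholders, not the study's data:

```python
def case_only_interaction_or(exposed_variant, exposed_wildtype,
                             unexposed_variant, unexposed_wildtype):
    """Case-only estimator of the gene-environment interaction OR:
    the cross-product ratio of genotype vs exposure counted among
    cases only (assumes genotype-exposure independence in the
    source population)."""
    return (exposed_variant * unexposed_wildtype) / (exposed_wildtype * unexposed_variant)

# Hypothetical counts among CHA cases, for illustration only
print(case_only_interaction_or(exposed_variant=4, exposed_wildtype=3,
                               unexposed_variant=8, unexposed_wildtype=18))
```

With more than a handful of cases, the same quantity would normally be obtained (with confidence intervals) from a logistic regression of exposure on genotype restricted to cases.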

Keywords: gene-environment interaction, heart defects, pharmacogenetics, serotonin reuptake inhibitors, teratogenicity

Procedia PDF Downloads 196
355 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure

Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer

Abstract:

The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed based on video data aimed at uncovering so-called “multimodal gestalts”, patterns of linguistic and embodied conduct that reoccur in specific sequential positions employed for specific purposes. Multimodal analyses (and other disciplines using video) have so far depended on time- and resource-intensive processes of manually transcribing each component from video materials. Automating these tasks requires advanced programming skills, which are often not within the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated in one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data.
The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, as well as grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures, in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format and enable the import of the main existing formats of annotated video data and the export to other formats used in the field, while integrating different data source formats in a way that they can be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable parallel search across many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (and potentially to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented, combining them with the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken and grammatical information from videos, to correlate those different levels, and to perform queries and analyses.
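The kind of parallel, multi-layer query described above can be illustrated with a minimal sketch that aligns a token layer with a gesture layer by time overlap. The data model (start/end/label triples) is a hypothetical simplification for illustration, not VIAN-DH's actual format:

```python
def overlapping(tokens, gestures):
    """Align two annotation layers chronologically: yield (word, gesture)
    pairs whose time intervals intersect. Each annotation is a
    (start_sec, end_sec, label) triple -- a hypothetical data model."""
    for w_start, w_end, word in tokens:
        for g_start, g_end, label in gestures:
            # two intervals intersect iff each starts before the other ends
            if w_start < g_end and g_start < w_end:
                yield word, label

# Toy layers: two word tokens and one pointing gesture
tokens = [(0.0, 0.4, "there"), (0.5, 0.9, "look")]
gestures = [(0.45, 1.2, "pointing")]
print(list(overlapping(tokens, gestures)))  # → [('look', 'pointing')]
```

A production system would index the intervals (e.g., with an interval tree) rather than use the quadratic scan shown here.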

Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition

Procedia PDF Downloads 80
354 Sources of Precipitation and Hydrograph Components of the Sutri Dhaka Glacier, Western Himalaya

Authors: Ajit Singh, Waliur Rahaman, Parmanand Sharma, Laluraj C. M., Lavkush Patel, Bhanu Pratap, Vinay Kumar Gaddam, Meloth Thamban

Abstract:

The Himalayan glaciers are the potential source of perennial water supply to Asia’s major river systems like the Ganga, Brahmaputra and the Indus. In order to improve our understanding of the sources of precipitation and hydrograph components in the interior Himalayan glaciers, it is important to decipher the sources of moisture and their contribution to the glaciers in this river system. In doing so, we conducted an extensive pilot study in the Sutri Dhaka glacier, western Himalaya, during 2014-15. To determine the moisture sources, rain, surface snow, ice, and stream meltwater samples were collected and analyzed for stable oxygen (δ¹⁸O) and hydrogen (δD) isotopes. A two-component hydrograph separation was performed for the glacier stream using these isotopes, assuming that the contributions of rain, groundwater and spring water are negligible based on field studies and the available literature. To validate the results obtained from hydrograph separation using the above method, snow and ice ablation were measured using a network of bamboo stakes and snow pits. The δ¹⁸O and δD in rain samples range from -5.3‰ to -20.8‰ and -31.7‰ to -148.4‰ respectively. It is noteworthy that the rain samples showed enriched values in the early season (July-August) and progressively became depleted towards the end of the season (September). This could be due to the ‘amount effect’. Similarly, old snow samples showed enriched isotopic values compared to fresh snow. This could be because of sublimation processes operating on the old surface snow. The δ¹⁸O and δD values in glacier ice samples range from -11.6‰ to -15.7‰ and -31.7‰ to -148.4‰, whereas in the Sutri Dhaka meltwater stream they range from -12.7‰ to -16.2‰ and -82.9‰ to -112.7‰ respectively. The mean deuterium excess (d-excess) value in all collected samples exceeds 16‰, which suggests that the predominant moisture source of precipitation is the Western Disturbances.
Our estimates of the Sutri Dhaka meltwater hydrograph components from isotope hydrograph separation and glaciological field methods agree within their uncertainties; the stream meltwater budget is dominated by glacier ice melt over snowmelt. The present study provides insights into the sources of moisture and the mechanisms controlling the isotopic characteristics of Sutri Dhaka glacier water, and helps in understanding the snow and ice melt components in the Chandra basin, Western Himalaya.
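The two-component isotope hydrograph separation used above follows the standard mixing model δ_stream = f·δ_ice + (1−f)·δ_snow, solved for the ice-melt fraction f. A minimal sketch; the end-member values below are hypothetical round numbers within the reported ranges, not the study's computed means:

```python
def ice_melt_fraction(delta_stream, delta_ice, delta_snow):
    """Two-component isotope hydrograph separation:
    stream = f * ice + (1 - f) * snow, solved for the
    ice-melt fraction f of the stream discharge."""
    return (delta_stream - delta_snow) / (delta_ice - delta_snow)

# Hypothetical d18O end-member values in per mil, for illustration only
f = ice_melt_fraction(delta_stream=-14.0, delta_ice=-14.5, delta_snow=-10.0)
print(f"ice-melt fraction of streamflow: {f:.2f}")
```

In practice the end members carry measurement and spatial-variability uncertainties, which are propagated into f; the separation is only valid when the two end members are isotopically distinct.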

Keywords: D-excess, hydrograph separation, Sutri Dhaka, stable water isotope, western Himalaya

Procedia PDF Downloads 132
353 Developing Gifted Students’ STEM Career Interest

Authors: Wing Mui Winnie So, Tian Luo, Zeyu Han

Abstract:

To fully explore and develop the potential of gifted students systematically and strategically by providing them with opportunities to receive education at appropriate levels, schools in Hong Kong are encouraged to adopt the "Three-Tier Implementation Model" to plan and implement school-based gifted education, with Level Three referring to the provision of learning opportunities for exceptionally gifted students in the form of specialist training outside the school setting by post-secondary institutions, non-government organisations, professional bodies and technology enterprises. Due to the growing concern worldwide about low interest among students in pursuing STEM (Science, Technology, Engineering, and Mathematics) careers, cultivating and boosting STEM career interest has become an emerging research focus. Although numerous studies have explored its critical contributors, little research has examined the effectiveness of comprehensive interventions such as “studying with a STEM professional”. This study aims to examine the effect on gifted students’ career interest of participation in an off-school support programme designed and supervised by a team of STEM educators and STEM professionals from a university. Gifted students were provided with opportunities and tasks to experience STEM career topics that are not included in the school syllabus, and to experience how to think and work like a STEM professional in their learning. Participants were 40 primary school students who joined the intervention programme outside the normal school setting. Research methods included adopting the STEM career interest survey and drawing tasks supplemented with writing before and after the programme, as well as interviews before the end of the programme.
The semi-structured interviews focused on students’ views regarding STEM professionals; what it is like to learn with a STEM professional; what it is like to work and think like a STEM professional; and students’ STEM identity and career interest. The changes in gifted students’ STEM career interest and its well-recognised significant contributors, for example, STEM stereotypes, self-efficacy for STEM activities, and STEM outcome expectations, were collectively examined from the pre- and post-surveys using t-tests. Thematic analysis was conducted on the interview records to explore how the studying-with-a-STEM-professional intervention can help students understand STEM careers, build a STEM identity, and learn to think and work like a STEM professional. Results indicated a significant difference in STEM career interest before and after the intervention. The influencing mechanism was also identified from the measurement of the related contributors and the analysis of drawings and interviews. The potential of off-school support programmes supervised by STEM educators and professionals to develop gifted students’ STEM career interest should be further explored in future research and practice.
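The pre/post comparison described above rests on a paired-samples t-test. A minimal sketch using only the standard library; the scores below are hypothetical placeholders, not the study's data:

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(pre, post):
    """Paired-samples t statistic on matched pre/post scores:
    t = mean(diff) / (sd(diff) / sqrt(n)), with sd the sample
    standard deviation of the within-subject differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical pre/post STEM career interest scores, for illustration only
pre = [3.1, 2.8, 3.5, 2.9, 3.0, 3.2]
post = [3.6, 3.1, 3.9, 3.4, 3.3, 3.8]
print(round(paired_t(pre, post), 2))
```

The resulting t value is then compared against the t distribution with n−1 degrees of freedom (here 5) to obtain the p-value.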

Keywords: gifted students, STEM career, STEM education, STEM professionals

Procedia PDF Downloads 48
352 Examining the Mediating and Moderating Role of Relationships in the Association between Poverty and Children’s Subjective Well-Being

Authors: Esther Yin-Nei Cho

Abstract:

There is inconsistency among studies about whether there is an association between poverty and the subjective wellbeing of children. Some have found a positive association, though its magnitude could be limited, while others have found no association. One possible explanation for this inconsistency is that household income, an often-adopted measure of child poverty, may not accurately and stably reflect the actual life experience of children. Some studies have suggested, however, that material deprivation covering various dimensions of children’s lives could be a better measure of child poverty. Another possible explanation for the inconsistency is that the link between poverty and the subjective wellbeing of children may not be that straightforward, as there could be underlying mechanisms, such as mediation and moderation, influencing its direction or strength. While a mediator refers to the mechanism through which an independent variable affects a dependent variable, a moderator changes the direction or strength of the relationship between an independent variable and a dependent variable. As suggested by empirical evidence, family relationships and friendships could be potential mediators or moderators of the link between poverty and subjective well-being: poverty affects relationships; relationships are an important element in children’s subjective well-being; and economic status affects child outcomes, though not necessarily subjective wellbeing, through relationships. Since the potential links have not been adequately understood, this study fills this gap by examining the possible role of family relationships and friendships as mediators or moderators between poverty (using child-derived material deprivation as the measure) and the subjective wellbeing of children. Improving subjective wellbeing is increasingly considered a policy goal.
The finding of no or a limited association between poverty and the subjective wellbeing of children could be a justification for less effort to address poverty in this regard. But if the observed magnitude of that association is due to some underlying mechanisms at work, the effect of poverty may be underestimated, and potentially useful strategies that take into account both poverty and other mediators or moderators for improving children’s subjective well-being may be overlooked. Multiple mediation and multiple moderation models, based on regression analyses, are performed on a sample of approximately 1,600 children, aged 10 to 15, from the wellbeing survey conducted by The Children’s Society in England from 2010 to 2011. Results show that the effect of children’s material deprivation on their subjective well-being is mediated by their family relationships and friendships. Moreover, family relationships are a significant moderator: the negative impact of child deprivation on subjective wellbeing could be exacerbated if family relationships are not going well, while good family relationships may prevent a further decline in subjective well-being. Policy implications of the findings are discussed. In particular, policy measures that focus on strengthening family relationships or nurturing the home environment through supporting households’ economic security and parental time with children could promote the subjective wellbeing of children.
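The mediation logic described above can be sketched with the standard product-of-coefficients estimator (a single-mediator simplification of the multiple-mediation models actually used; the variable names and synthetic data below are illustrative assumptions, not the survey data):

```python
import numpy as np

def mediation_effect(x, m, y):
    """Product-of-coefficients estimate of a simple mediation effect:
    a = slope of mediator on predictor;
    b = slope of outcome on mediator, controlling for the predictor;
    indirect (mediated) effect = a * b."""
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
    X2 = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][2]
    return a * b

# Synthetic data where deprivation lowers well-being partly *through*
# relationships: true indirect effect is (-0.5) * 0.8 = -0.4
rng = np.random.default_rng(0)
deprivation = rng.normal(size=500)
relationships = -0.5 * deprivation + rng.normal(scale=0.5, size=500)
wellbeing = 0.8 * relationships - 0.2 * deprivation + rng.normal(scale=0.5, size=500)
print(round(mediation_effect(deprivation, relationships, wellbeing), 2))
```

In applied work the indirect effect's significance is usually assessed with bootstrapped confidence intervals rather than read directly from the point estimate.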

Keywords: child poverty, mediation, moderation, subjective well-being of children

Procedia PDF Downloads 299
351 Always Keep in Control: The Pattern of TV Policy Changes in China

Authors: Shan Jiang

Abstract:

China is a country with a distinct cultural system. The Chinese Communist Party (CCP) is the central factor in everything, including culture. China has a large body of cultural policies, and TV drama is no exception. This paper traces the evolution of Chinese TV drama policy since 1986, examines the realistic situation behind the changes, and explores the structure and role of the government in shaping the process. Using historical documents and media reports, it first analyzes four key time nodes: 1986, 2003, 2012, and 2022. It shows how policy shifted from restricting private production to opening up to public participation, from imposing one form of censorship to another, and from promoting some content to restricting other areas. It finds that the policy process is not simply rectilinear but rather wanders between deregulation and strengthened control. Secondly, it divides the policies into "basic" policies that establish the overall layout and more refined "strategic" policies that respond to specific needs. It argues that the "basic" policy process is driven by the reform of China's political, economic, and cultural system, while the "strategic" policy process is affected by more environmental factors, such as the government's follow-up development strategy, industrial development, technological innovation, and specific situations. Thirdly, it analyzes the issuing bodies of the 104 policies from 2000 to 2021 and places these subjects within China's power structure and cultural system, revealing that the policy issuers are all under the highest leadership of the CCP Central Committee. Further, the paper challenges the typical description of Chinese cultural policy, which focuses exclusively on state control, identifies the forces within and outside the system that participate in or affect the policy-making process, and reveals the inter-subjective mechanism of policy change.
In conclusion, the paper shows that China's TV drama policy is under the unified leadership of the Party and the government, which largely guarantees the consistency of the overall direction of cultural policy, that is, keeping the right to speak firmly in hand. Forces within the system can sometimes promote policy changes due to common development needs. Folk discourse, however, is only the object of control: when it breeds a certain amount of industrial space, the government strengthens control over this space, suppresses its potential "adverse effects", and instead provides protection and creates conditions for the cultivation and growth of its mainstream discourse. The combination of basic and strategic policies, while having a strong effect and emergency capacity, also inhibits the innovation and diversification of the TV drama market. Nevertheless, the state's substantial regulation will continue to exist in the future.

Keywords: TV Policy, China, policy process, cultural policy, culture management

Procedia PDF Downloads 60
350 Sukh Initiative: A Family Planning Reproductive Health Project for Squatter Settlement of Karachi, Pakistan

Authors: Arshad Hussain

Abstract:

Background: Sukh Initiative is a multi-donor funded family planning and reproductive health project, led by Aman Healthcare Services and implemented through a consortium of local and international organizations in a selected one million underserved peri-urban population of Karachi, Sindh; it aims at increasing the modern contraceptive prevalence rate by 15 percentage points. Objective: To empower women to access contraception by increasing knowledge, improving quality of services and expanding the basket of choices, contributing to the goals of FP2020. Methods: The five-year project has a multi-pronged approach, with door-to-door services by LHWs and CHWs in an LHW-covered population and provision of quality FP/RH services at both public and private health care facilities. The project engages youth (12-16 years) both in the community and at secondary schools to mentor them for responsible adulthood with a life-skills-based initiative. A 24/7 youth and FP helpline service provides counselling and referrals, in addition to a follow-up mechanism. Results: 131,810 MWRAs were reached by 191 community health workers through 29,693 community support group meetings and 166,775 household visits. These MWRAs were counselled on FP-related myths and misconceptions and referred to 216 providers trained in quality family planning services, maintaining average 64% quality scores in 43 public and 35 private facilities in the project area. Of those referred, 26% of MWRAs opted for modern contraception, with 17.56% choosing LARCs and 41% PPFP as compared to baseline. Aman TeleHealth provides 24/7 counselling, referrals and post-service follow-ups to clients, with FP calls making up 14% of call volume. Sukh has a unique role in engaging all partners on youth SRHR issues through family life education sessions: 30 higher secondary
schools in the Sukh area have delivered LSBE to 16,000 students (aged 15-17), and in the community approximately 10,496 girls and boys have received SRHR information. Conclusion: Through individual counselling, access to quality family planning services and the involvement of stakeholders, Sukh created an enabling environment for a rapid increase in family planning in the project intervention area.

Keywords: family planning and reproductive health, married women with reproductive age, urban squatter, Pakistan

Procedia PDF Downloads 289
349 A Perspective of Digital Formation in the Solar Community as a Prototype for Finding Sustainable Algorithmic Conditions on Earth

Authors: Kunihisa Kakumoto

Abstract:

“Purpose”: Global environmental issues are now being raised in a global dimension. By predicting, with algorithms, sprawl phenomena that exceed the limits of nature, we can expect to protect our social life within those limits. The sustainable state of the planet consists in maintaining a balance between the capabilities of nature and the demands of our social life. The amount of water on earth is finite, so sustainability depends strongly on water capacity. A certain amount of water is stored in forests by planting and green space, and the amount of water can be considered in relation to the green area. CO2 is also absorbed by green plants. “Possible measurements and methods”: The concept of the solar community has been introduced in technical papers at many international conferences. The solar community concept is based on data collected from one solar model house. This algorithmic study simulates the amount of water stored by lush green vegetation. In addition, we calculated and compared the amount of CO2 emitted by the solar community and the amount of CO2 reduced by greening. Based on these trial calculation results for solar communities, we simulate the sustainable state of the earth as an algorithmic trial calculation. We believe that the composition of groups of solar communities should also be considered using digital technology as a control technology. “Conclusion”: We consider the solar community a prototype for finding sustainable conditions for the planet. The role of water is very important, as the supply capacity of water is limited. However, the circulation of social life is not constructed according to the mechanisms of nature. This simulation trial calculation is explained using the total water supply volume as an example.
According to this process, the algorithmic calculation considers the total capacity of the water supply and the population and habitable numbers of the area. Green vegetated land is very important for keeping enough water, and green vegetation is also very important for maintaining the CO2 balance. A simulation trial calculation is possible from the relationship between the CO2 emissions of the solar community and the amount of CO2 reduction due to greening. In order to find this total balance and the sustainable conditions, the algorithmic simulation calculation takes into account lush vegetation and the total water supply. Research to find sustainable conditions is done by simulating an algorithmic model of the solar community as a prototype; in this one prototype example, the balance holds. The activities of our social life must take place within the permissible limits of natural mechanisms. Of course, we aim for a more ideal balance by utilizing auxiliary digital control technology such as AI.
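The two balance checks described above (water supply versus population, CO2 emissions versus absorption by greening) can be sketched as simple ratios and differences. All figures below are hypothetical placeholders, not the paper's trial-calculation values:

```python
def supportable_population(total_supply_m3, per_capita_demand_m3):
    """Upper bound on the population supportable by a finite annual
    water supply: total supply divided by per-capita demand."""
    return total_supply_m3 / per_capita_demand_m3

def co2_balance(emissions_t, reduction_per_hectare_t, green_hectares):
    """Net annual CO2 balance of the community: emissions minus the
    reduction attributed to greened land (zero or below means the
    greening offsets the emissions)."""
    return emissions_t - reduction_per_hectare_t * green_hectares

# Hypothetical community figures, for illustration only
print(supportable_population(total_supply_m3=5.0e6, per_capita_demand_m3=125.0))
print(co2_balance(emissions_t=1200.0, reduction_per_hectare_t=3.0, green_hectares=400.0))
```

A fuller simulation would iterate these balances over time and couple them, since green area affects both the stored water and the CO2 absorption.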

Keywords: solar community, sustainability, prototype, algorithmic simulation

Procedia PDF Downloads 39
348 Psychotherapeutic Narratives and the Importance of Truth

Authors: Spencer Jay Knafelc

Abstract:

Some mental health practitioners and theorists have suggested that we approach remedying psychological problems by centering and intervening upon patients’ narrations. Such theorists and their corresponding therapeutic approaches see persons as narrators of their lives, where the stories they tell constitute and reflect their sense-making of the world. Psychological problems, according to these approaches to therapy, are often the result of problematic narratives. The solution is the construction of more salubrious narratives through therapy. There is trouble lurking within the history of these narrative approaches. These thinkers tend to denigrate the importance of truth, insisting that narratives are not to be thought of as aiming at truth, and thus that the truth of our self-narratives is not important. There are multiple motivations for the tendency to eschew truth’s importance within the tradition of narrative approaches to therapy. The most plausible and interesting motivation comes from the observation that, in general, all dominant approaches to therapy are equally effective. The theoretical commitments of each approach are quite different and are often ostensibly incompatible (psychodynamic therapists see psychological problems as resulting from unconscious conflict and repressed desires; cognitive-behavioral approaches see them as resulting from distorted cognitions). This strongly suggests that there must be some cases in which therapeutic efficacy does not depend on truth and that insisting that patients’ therapeutic narratives be true in all instances is a mistake. Lewis’ solution is to suggest that narratives are metaphors. Lewis’ account appreciates that there are many ways to tell a story and that many different approaches to mental health treatment can be appropriate without committing us to any contradictions, providing us with an ostensibly coherent way to treat narratives as non-literal, instead of seeing them as tools that can be more or less apt.
Here, it is argued that Lewis’ metaphor approach fails. Narratives do not have the right kind of structure to be metaphors. Still, another way to understand Lewis’ view might be that self-narratives, especially when articulated in the language of any specific approach, should not be taken literally. This is an idea at the core of the narrative theorists’ tendency to eschew the importance of the ordinary understanding of truth. This very tendency will be critiqued. The view defended in this paper more accurately captures the nature of self-narratives. The truth of one’s self-narrative is important. Not only do people care about having the right conception of their abilities, who they are, and the way the world is, but self-narratives are composed of beliefs, and the nature of belief is to aim at truth. This view also allows the recognition of the importance of developing accurate representations of oneself and reality for one’s psychological well-being. It is also argued that in many cases, truth factors in as a mechanism of change over the course of therapy. Therapeutic benefit can be achieved by coming to have a better understanding of the nature of oneself and the world. Finally, the view defended here allows for the recognition of the nature of the tension between values: truth and efficacy. It is better to recognize this tension and develop strategies to navigate it as opposed to insisting that it doesn’t exist.

Keywords: philosophy, narrative, psychotherapy, truth

Procedia PDF Downloads 75
347 Litigating Innocence in the Era of Forensic Law: The Problem of Wrongful Convictions in the Absence of Effective Post-Conviction Remedies in South Africa

Authors: Tapiwa Shumba

Abstract:

The right to fairness and access to appeals and reviews enshrined under the South African Constitution seeks to ensure that justice is served. In essence, the constitution and the law have put in place mechanisms to ensure that a miscarriage of justice through wrongful convictions does not occur. However, once an accused has been convicted and sentenced on appeal, the procedural safeguards seem to fall away, as if to say the accused has met his fate. The challenge with this construction is that even within an ideally perfect legal system, wrongful convictions would still occur. Therefore, it is not so much the failings of a legal system that demand attention but mechanisms to redress the results of such failings where evidence becomes available that a wrongful conviction occurred. In this context, this paper looks at the South African criminal procedural mechanisms for litigating innocence post-conviction. The discussion focuses on the role of section 327 of the South African Criminal Procedure Act and its apparent shortcomings in providing an avenue for victims of miscarriages of justice to litigate their innocence by adducing new evidence at any stage during their wrongful incarceration. By looking at developments in other jurisdictions, such as the United Kingdom, from which South African criminal procedure draws much of its history, and the North Carolina example, which was itself inspired by the UK Criminal Cases Review Commission, this paper is able to make comparisons and draw invaluable lessons for the South African criminal justice system. Lessons from these foreign jurisdictions show that South African post-conviction criminal procedures need reform in line with the constitutional values of human dignity, equality before the law, openness and transparency. The paper proposes an independent review of the current post-conviction procedures under section 327.
The review must look into the effectiveness of the current system and how it can be improved in line with new substantive legal provisions creating access to DNA evidence for post-conviction exonerations. Although the UK CCRC should not be slavishly followed, its operations and the process leading to its establishment certainly provide a good point of reference and invaluable lessons for the South African criminal justice system, seeing that South African law on this aspect has generally followed the English approach, except that the current provisions under section 327 mirror the discredited system of the UK’s previous dispensation. A new independent mechanism that treats innocent victims of the criminal justice system with dignity, removed from the current political process, is proposed to enable the South African criminal justice system to benefit fully from recent and upcoming advances in science and technology.

Keywords: innocence, forensic law, post-conviction remedies, South African criminal justice system, wrongful conviction

Procedia PDF Downloads 218
346 Understanding the Reasons for Flooding in Chennai and Strategies for Making It Flood Resilient

Authors: Nivedhitha Venkatakrishnan

Abstract:

Flooding in urban areas in India has become a recurring phenomenon and a nightmare for most cities, a consequence of man-made disruption culminating in disaster. City planning in India falls short of withstanding hydrologically generated disasters. This has become a barrier to and challenge for the development driven by urbanization: high population density, expanding informal settlements, and environmental degradation from uncollected and untreated waste flowing into natural drains and water bodies have disrupted natural hazard-protection mechanisms such as drainage channels, wetlands and floodplains. The magnitude and impact of the disaster were high because of the failure of the development policies, strategies and plans that the city had adopted. In the current scenario, cities are becoming the home of the future, with economic diversification bringing more investment into cities, especially in the domains of urban infrastructure, planning and design. The uncertain urban futures of these low-elevation coastal zones face unprecedented risk and threat. The study focuses on three major pillars of resilience: Recover, Resist and Restore. This process of preparedness bridges the gap between disaster response management and risk reduction, and it requires a paradigm shift. The study involved qualitative research and a system design approach (framework). The initial stage involved mapping the urban water morphology with respect to spatial growth, which gave an insight into the water bodies that have gone missing over the years during the process of urbanization. The major finding of the study was that the missing links in the traditional water-harvesting network were a major reason this became a man-made disaster. The research conceptualized a sponge-city framework that would guide growth through institutional frameworks at different levels.
The next stage was understanding the implementation process at various stages to ensure the paradigm shift, demonstrating the concepts at a neighborhood level: where, how and what the functions and benefits of each component are. Design decisions were quantified in terms of rainwater harvested and surface runoff: how much water is collected, and how it could be collected, stored and reused. The study closes with further recommendations for Water Mitigation Spaces that would revive the traditional harvesting network.
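The quantification step described above is not spelled out in the abstract. A minimal sketch of how neighbourhood-level runoff and harvest volumes might be estimated, using the standard rational-method relation (volume = coefficient × rainfall depth × area), could look like the following; all coefficients, areas, and rainfall figures are purely hypothetical, not values from the study:

```python
# Hedged sketch: quantifying neighbourhood-level surface runoff and
# rainwater-harvest potential, as the study describes doing. The
# formulas are the standard rational-method volume relation and a
# simple rooftop-harvest estimate; every number below is illustrative.

def surface_runoff_m3(rainfall_mm: float, area_m2: float, runoff_coeff: float) -> float:
    """Runoff volume = C * rainfall depth (m) * catchment area (m^2)."""
    return runoff_coeff * (rainfall_mm / 1000.0) * area_m2

def harvest_potential_m3(rainfall_mm: float, roof_area_m2: float,
                         collection_eff: float = 0.8) -> float:
    """Harvestable rooftop volume, allowing for first-flush and losses."""
    return collection_eff * (rainfall_mm / 1000.0) * roof_area_m2

# Hypothetical neighbourhood: 50,000 m^2 paved area (C ~ 0.9),
# 20,000 m^2 of roofs, one 100 mm storm event.
runoff = surface_runoff_m3(100, 50_000, 0.9)    # 4500 m^3 uncollected runoff
harvest = harvest_potential_m3(100, 20_000)     # 1600 m^3 collectable
print(runoff, harvest)
```

Comparing the two volumes per storm event is one way such a framework could grade where harvesting structures recover the most water.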

Keywords: flooding, man made disaster, resilient city, traditional harvesting network, waterbodies

Procedia PDF Downloads 123
345 Incidence of Breast Cancer and Enterococcus Infection: A Retrospective Analysis

Authors: Matthew Cardeiro, Amalia D. Ardeljan, Lexi Frankel, Dianela Prado Escobar, Catalina Molnar, Omar M. Rashid

Abstract:

Introduction: Enterococci comprise part of the natural flora of nearly all animals and are ubiquitous in food manufacturing and probiotics. However, their role in the microbiome remains controversial. The gut microbiome has been shown to play an important role in immunology and cancer. Further, recent data have suggested a relationship between gut microbiota and breast cancer: these studies have shown that the gut microbiome of patients with breast cancer differs from that of healthy patients. Research regarding enterococcus infection and its sequelae is limited, and further research is needed in order to understand the relationship between infection and cancer. Enterococcus may prevent the development of breast cancer (BC) through complex immunologic and microbiotic adaptations following an enterococcus infection. This study investigated the effect of enterococcus infection on the incidence of BC. Methods: A retrospective study (January 2010 - December 2019) was conducted using a Health Insurance Portability and Accountability Act (HIPAA) compliant national human health insurance database. International Classification of Disease (ICD) 9th and 10th revision codes, Current Procedural Terminology (CPT) codes, and National Drug Codes were used to identify BC diagnoses and enterococcus infections. Patients were matched for age, sex, Charlson Comorbidity Index (CCI), antibiotic treatment, and region of residence. Chi-squared tests, logistic regression, and odds ratios were used to assess significance and estimate relative risk. Results: 671 out of 28,518 (2.35%) patients with a prior enterococcus infection and 1,459 out of 28,518 (5.12%) patients without enterococcus infection subsequently developed BC, and the difference was statistically significant (p<2.2x10⁻¹⁶). Logistic regression also indicated that enterococcus infection was associated with a decreased incidence of BC (RR=0.60, 95% CI [0.57, 0.63]).
Treatment for enterococcus infection was analyzed and controlled for in both the infected and noninfected populations. 398 out of 11,523 (3.34%) patients with a prior enterococcus infection who were treated with antibiotics were compared to 624 out of 11,523 (5.41%) patients with no history of enterococcus infection (control) who received antibiotic treatment; in both groups, these were the patients who subsequently developed BC. Results remained statistically significant (p<2.2x10⁻¹⁶) with a relative risk of 0.57 (95% CI [0.54, 0.60]). Conclusion & Discussion: This study shows a statistically significant correlation between enterococcus infection and a decreased incidence of breast cancer. Further exploration is needed to identify and understand not only the role of enterococcus in the microbiome but also the protective mechanism(s) and impact enterococcus infection may have on breast cancer development. Ultimately, further research is needed in order to understand the complex and intricate relationship between the microbiome, immunology, bacterial infections, and carcinogenesis.
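From the counts reported above (671/28,518 BC cases after infection versus 1,459/28,518 in the matched uninfected group), the crude risk ratio and a Pearson chi-squared statistic can be recomputed directly. A sketch of that arithmetic follows; note the crude ratio (about 0.46) is lower than the adjusted logistic-regression estimate of 0.60 quoted in the abstract, as is expected for a covariate-adjusted model:

```python
# Recomputing crude 2x2 statistics from the counts quoted in the
# abstract. This is plain arithmetic on the published numbers, not the
# study's adjusted analysis.

a, n1 = 671, 28_518     # BC cases / total, prior enterococcus infection
c, n2 = 1_459, 28_518   # BC cases / total, no enterococcus infection

risk_infected = a / n1                    # ~0.0235 (2.35%)
risk_control = c / n2                     # ~0.0512 (5.12%)
crude_rr = risk_infected / risk_control   # ~0.46

# Pearson chi-squared for the 2x2 table, no continuity correction:
b, d = n1 - a, n2 - c
n = n1 + n2
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
print(round(crude_rr, 3), round(chi2, 1))  # chi2 ~ 300, hence the tiny p-value
```

A chi-squared statistic of this size on one degree of freedom is consistent with the p<2.2x10⁻¹⁶ the authors report.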

Keywords: breast cancer, enterococcus, immunology, infection, microbiome

Procedia PDF Downloads 149
344 Ergonomic Assessment of Workplace Environment of Flour Mill Workers

Authors: Jayshree P. Zend, Ashatai B. Pawar

Abstract:

The study was carried out in the Parbhani district of Maharashtra state, India, with the objectives of studying the environmental problems faced by flour mill workers, the prevalence of work-related health hazards, and the physiological cost to workers while performing flour mill work by the traditional method as well as an improved method. The use of a flour presser, dust-controlling bag, and noise- and dust-controlling mask developed by the AICRP College of Home Science, VNMKV, Parbhani was considered the improved method. The investigation consisted of a survey and an experiment conducted at the respective flour mill locations. Thirty healthy, non-smoking flour mill workers aged 20-50 years (16 female and 14 male), working at flour mills for 4-8 hours/day and 6 days/week with a minimum of five years' experience, were selected for the study. Pulmonary function tests of the flour mill workers were carried out by a trained technician at Dr. Shankarrao Chavan Government Medical College, Nanded, using an electronic spirometer. The data regarding heart rate (resting, working and recovery), energy expenditure, musculoskeletal problems, and occupational health hazards and accidents were recorded using a pretested questionnaire. Scientific equipment used in the experiment included a Polar sport heart rate monitor, hygrometer, goniometer, dial thermometer, sound level meter, lux meter, ambient air sampler, and air quality monitor. The collected data were subjected to appropriate statistical analysis, such as the 't' test and the correlation coefficient test. Results indicated that the improved method, i.e., use of the noise- and dust-controlling mask, flour presser, and dust-controlling bag, was effective in reducing the physiological cost of work of flour mill workers.
Lung function tests of the flour mill workers showed decreased values on all parameters; hence, the results of the present study support attention to the use of personal protective noise- and dust-controlling masks by flour mill workers, and also to working conditions in flour mills, where ventilation and illumination levels especially need to be enhanced. The study also emphasizes the need to develop a mechanism for lifting loads of grain and unloading them into the hopper. It is also suggested that flour mill workers should use a flour presser suited to their height to avoid frequent bending, and should fit a dust-controlling bag to the flour outlet of the machine to reduce the inhalable flour dust level in the mill.
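The 't' test mentioned above, applied to matched readings from the same worker under the traditional versus improved method, can be sketched with the Python standard library. The heart-rate figures below are hypothetical illustrations, not the study's data:

```python
# Hedged sketch of a paired t-test like the one the study applies.
# All heart-rate values are invented for illustration.
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t-statistic and degrees of freedom for matched samples
    (e.g., each worker's working heart rate under two methods)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical working heart rates (beats/min) for six workers:
traditional = [118, 122, 115, 130, 125, 119]
improved    = [110, 114, 111, 121, 118, 112]
t, df = paired_t(traditional, improved)
print(round(t, 2), df)   # a large positive t favours the improved method
```

With 5 degrees of freedom, a t-statistic this large would correspond to a very small p-value, which is the kind of evidence the study reports for the improved method.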

Keywords: physiological cost, energy expenditure, musculoskeletal problems

Procedia PDF Downloads 376
343 Antiulcer Potential of Heme Oxygenase-1 Inducers

Authors: Gaweł Magdalena, Lipkowska Anna, Olbert Magdalena, Frąckiewicz Ewelina, Librowski Tadeusz, Nowak Gabriel, Pilc Andrzej

Abstract:

Heme oxygenase-1 (HO-1), also known as heat shock protein 32 (HSP32), has been shown to be implicated in cytoprotection in various organs. Its activation plays a significant role in acute and chronic inflammation, protecting cells from oxidative injury and apoptosis. This inducible isoform of HO catalyzes the first and rate-limiting step in heme degradation to produce equimolar quantities of biologically active products: carbon monoxide (CO), free iron and biliverdin. CO has been reported to possess anti-apoptotic properties. Moreover, it inhibits the production of proinflammatory cytokines and stimulates the synthesis of the anti-inflammatory interleukin-10 (IL-10), as well as promotes vasodilatation at sites of inflammation. The second product of catalytic HO-1 activity, free cytotoxic iron, is promptly sequestered into the iron storage protein ferritin, which lowers the pro-oxidant state of the cell. The third product, biliverdin, is subsequently converted by biliverdin reductase into the bile pigment bilirubin, the most potent endogenous antioxidant among the constituents of human serum, which modulates immune effector functions and suppresses inflammatory response. Furthermore, being one of the so-called stress proteins, HO-1 adaptively responds to different stressors, such as reactive oxygen species (ROS), inflammatory cytokines and heavy metals and thus protects cells against such conditions as ischemia, hemorrhagic shock, heat shock or hypoxia. It is suggested that pharmacologic modulation of HO-1 may represent an effective strategy for prevention of stress and drug-induced gastrointestinal toxicity. HO-1 is constitutively expressed in normal gastric, intestinal and colonic mucosa and up-regulated during inflammation. It has been proven that HO-1 up-regulated by hemin, heme and cobalt-protoporphyrin ameliorates experimental colitis. 
In addition, the up-regulation of HO-1 partially explains the mechanism of action of 5-aminosalicylic acid (5-ASA), which is used clinically as an anti-colitis agent. In 2009, Ueda et al. reported for the first time that mucosal protection by Polaprezinc, a chelate compound of zinc and L-carnosine used as an anti-ulcer drug in Japan, is also attributable to induction of HO-1 in the stomach. Since then, inducers of HO-1 have been a sought-after subject of research, as they may constitute therapeutically effective anti-ulcer drugs.

Keywords: heme oxygenase-1, gastric lesions, gastroprotection, Polaprezinc

Procedia PDF Downloads 485
342 Force Sensing Resistor Testing of Hand Forces and Grasps during Daily Functional Activities in the Covid-19 Pandemic

Authors: Monique M. Keller, Roline Barnes, Corlia Brandt

Abstract:

Introduction: Scientific evidence on hand forces and the types of grasps used during daily tasks is lacking, leaving a gap in the fields of hand rehabilitation and robotics. Measuring the grasp forces and types produced by the individual fingers during daily functional tasks is valuable for informing and grading rehabilitation practices for second to fifth metacarpal fractures with robust scientific evidence. Feix et al. (2016) conducted the most extensive and complete grasp study, which resulted in the GRASP taxonomy. The Covid-19 pandemic changed data collection across the globe, and safety precautions in research are essential to ensure the health of participants and researchers. Methodology: A cross-sectional study investigated the hand forces of six healthy adult pilot participants, aged 20 to 59 years, during 105 tasks. The tasks were categorized into five sections, namely personal care, transport and moving around, home environment and inside, gardening and outside, and office. The predominant grasp of each task was identified, guided by the GRASP taxonomy. Grasp forces were measured with 13 mm force-sensing resistors (FSRs) glued onto a glove and attached to each of the individual fingers of the dominant and non-dominant hands. Testing equipment included FlexiForce 13 mm (0.5" circle) FSRs, calibrated prior to testing; 10 kΩ 1/4 W resistors; an Arduino Pro Mini 5.0 V compatible board; an ESP-01 kit; an Arduino Uno R3 compatible board; a 1 m USB A-B cable; an FTDI FT232 mini USB-to-serial converter; SIL 40 inline connectors; ribbon cable with male header pins, female-to-female and male-to-female; two gloves; glue to attach the FSRs to the gloves; and the Arduino software downloaded on a laptop. Grip strength measurements with a Jamar dynamometer were taken prior to testing and after every 25 daily tasks to avoid fatigue and ensure reliability in testing.
Covid-19 precautions included wearing face masks at all times, screening questionnaires, temperature checks, wearing surgical gloves before putting on the testing gloves, and 1.5-metre-long wires attaching the FSRs to the Arduino to maintain social distance. Findings: Predominant grasps observed during the 105 tasks included adducted thumb (17), lateral tripod (10), prismatic three fingers (12), small diameter (9), prismatic two fingers (9), medium wrap (7), fixed hook (5), sphere four fingers (4), palmar (4), parallel extension (4), index finger extension (3), distal (3), power sphere (2), tripod (2), quadpod (2), prismatic four fingers (2), lateral (2), large diameter (2), ventral (2), precision sphere (1), palmar pinch (1), light tool (1), inferior pincher (1), and writing tripod (1). The ranges of forces applied per category were: personal care (1-25 N), transport and moving around (1-9 N), home environment and inside (1-41 N), gardening and outside (1-26.5 N), and office (1-20 N). Conclusion: Scientific measurement of finger forces, with careful consideration of the types of grasps used in daily tasks, should guide rehabilitation practices and robotic design to ensure a return to full participation of the individual in the community.
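The read-out chain described above (13 mm FSR in a voltage divider with a 10 kΩ resistor feeding a 10-bit Arduino ADC) implies a standard conversion from ADC counts to sensor resistance and then to force. A hedged Python sketch of that arithmetic follows; the power-law calibration constants are hypothetical placeholders, since a real FSR must be calibrated against known weights:

```python
# Hedged sketch of the FSR read-out math implied by the setup above:
# FSR on the high side of a divider with a 10 kOhm resistor into a
# 10-bit, 5 V ADC. Calibration constants k and n are illustrative only.

VCC = 5.0          # supply voltage (V)
R_FIXED = 10_000   # fixed divider resistor (ohms)

def adc_to_resistance(adc_count: int) -> float:
    """FSR resistance (ohms) from a 10-bit ADC reading."""
    v_out = VCC * adc_count / 1023.0
    if v_out <= 0:
        return float('inf')   # unloaded FSR is effectively open circuit
    return R_FIXED * (VCC - v_out) / v_out

def resistance_to_force_n(r_fsr: float, k: float = 2.0e5, n: float = 1.0) -> float:
    """Approximate force via an inverse power-law FSR response,
    F ~ (k / R)^n, with hypothetical calibration constants k, n."""
    if r_fsr == float('inf'):
        return 0.0
    return (k / r_fsr) ** n

reading = 512                      # mid-scale ADC count
r = adc_to_resistance(reading)     # ~10 kOhm at mid-scale
force = resistance_to_force_n(r)
print(round(r), round(force, 1))
```

At mid-scale the divider output is half the supply, so the FSR resistance roughly equals the fixed resistor, which is a convenient sanity check when wiring the glove.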

Keywords: activities of daily living (ADL), Covid-19, force-sensing resistors, grasps, hand forces

Procedia PDF Downloads 167
341 A 1T1R Nonvolatile Memory with Al/TiO₂/Au and Sol-Gel Processed Barium Zirconate Nickelate Gate in Pentacene Thin Film Transistor

Authors: Ke-Jing Lee, Cheng-Jung Lee, Yu-Chi Chang, Li-Wen Wang, Yeong-Her Wang

Abstract:

To avoid the cross-talk issue of a resistive random access memory (RRAM)-only cell, a one-transistor, one-resistor (1T1R) architecture, with a TiO₂-based RRAM cell connected to a solution-processed barium zirconate nickelate (BZN) organic thin film transistor (OTFT), is successfully demonstrated. The OTFT was fabricated on a glass substrate. Aluminum (Al) as the gate electrode was deposited via a radio-frequency (RF) magnetron sputtering system. The barium acetate, zirconium n-propoxide, and nickel(II) acetylacetonate precursors were combined using the sol-gel method. After the BZN solution was completely prepared using the sol-gel process, it was spin-coated onto the Al/glass substrate as the gate dielectric. The BZN layer was baked at 100 °C for 10 minutes under ambient air conditions. The pentacene thin film was thermally evaporated onto the BZN layer at a deposition rate of 0.08 to 0.15 nm/s. Finally, gold (Au) electrodes were deposited using an RF magnetron sputtering system and defined through shadow masks as both the source and drain. The channel length and width of the transistors were 150 and 1500 μm, respectively. For the 1T1R configuration, the RRAM device was fabricated directly on the drain electrode of the TFT device. A simple metal/insulator/metal structure, consisting of Al/TiO₂/Au, was fabricated: first, Au was deposited as the bottom electrode of the RRAM device by the RF magnetron sputtering system; then, the TiO₂ layer was deposited on the Au electrode by sputtering; finally, Al was deposited as the top electrode. The electrical performance of the BZN OTFT was studied, showing superior transfer characteristics with a low threshold voltage of −1.1 V, good saturation mobility of 5 cm²/V·s, and a low subthreshold swing of 400 mV/decade. The integration of the BZN OTFT and TiO₂ RRAM devices was finally completed to form the 1T1R configuration, with a low power consumption of 1.3 μW, a low operation current of 0.5 μA, and reliable data retention.
Based on the I-V characteristics, the different polarities of bipolar switching are found to be determined by the compliance current, with different distributions of the internal oxygen vacancies in the RRAM and 1T1R devices. This phenomenon can be well explained by the proposed mechanism model. These results make the 1T1R configuration promising for practical applications in low-power active-matrix flat-panel displays.
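As a quick consistency check on the figures above, the reported power and operation current imply an operating voltage and an effective cell resistance by simple Ohmic arithmetic; neither derived number is stated in the abstract:

```python
# Sanity-check arithmetic on the reported 1T1R operating point:
# P = V * I, so the implied operating voltage is V = P / I, and the
# effective resistance at that point is R = V / I. Derived values,
# not figures from the paper.

P = 1.3e-6   # reported power consumption (W)
I = 0.5e-6   # reported operation current (A)

V = P / I    # implied operating voltage: 2.6 V
R = V / I    # effective cell resistance: 5.2 MOhm
print(V, R)
```

An implied bias of about 2.6 V is in the range typical for OTFT-driven memory cells, which is consistent with the reported low-power operation.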

Keywords: one transistor and one resistor (1T1R), organic thin-film transistor (OTFT), resistive random access memory (RRAM), sol-gel

Procedia PDF Downloads 329
340 A Concept in Addressing the Singularity of the Emerging Universe

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its earliest times has been studied; this account is known as the big bang theory. According to this theory, moments after creation the universe was an extremely hot and dense environment, and its rapid expansion led to a reduction in its temperature and density. This is evidenced through the cosmic microwave background and the structure of the universe at large scales. However, extrapolating back further from this early state reaches a singularity which cannot be explained by modern physics, where the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion, yet highly accurate measurements reveal an equal temperature mapping across the universe, which is contradictory to big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations from this stage provides a means for studying the types of imperfections the universe would have begun with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed.
This research series aims to address the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a “neutral state”, with an energy level referred to as “base energy”, capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture the origin of the base energy should also be identified. This matter is the subject of the first study in the series, “A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing”, where it is discussed in detail. The proposed concept in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation

Procedia PDF Downloads 57
339 Neuronal Mechanisms of Observational Motor Learning in Mice

Authors: Yi Li, Yinan Zheng, Ya Ke, Yungwing Ho

Abstract:

Motor learning is a process that frequently occurs in humans and rodents and is defined as a change in the capability to perform a skill, confirmed by a relatively permanent improvement through practice or experience. There are many ways to learn a behavior, among which is observational learning. Observational learning is the process of learning by watching the behaviors of others: for example, a child imitating parents, learning a new sport by watching training videos, or solving puzzles by watching the solutions. Much research explores observational learning in humans and primates. However, its neuronal mechanism, especially for observational motor learning, remains uncertain. It is well accepted that mirror neurons are essential in the observational learning process. These neurons fire both when a primate performs a goal-directed action and when it sees someone else demonstrating the same action, showing high firing activity both while completing and while watching the behavior. Mirror neurons are assumed to mediate imitation and to play a critical and fundamental role in action understanding. They are distributed across many brain areas of primates, i.e., the posterior parietal cortex (PPC), premotor cortex (M2), and primary motor cortex (M1) of the macaque brain. However, few researchers have reported the existence of mirror neurons in rodents. To verify the existence of mirror neurons and their possible role in motor learning in rodents, we performed a customised string-pulling behavior task combined with multiple behavior analysis methods, photometry, electrophysiology recording, c-fos staining and optogenetics in healthy mice. After five days of training, the demonstrator (demo) mice showed a significantly quicker response and shorter time to reach the string; fast, steady and accurate performance in pulling down the string; and more precise grasping of the beads.
During three days of observation, the observer mice showed more facial motions while the demo mice performed the behaviors. On the first training day, the observer mice required fewer trials to find and pull the string. However, the time to find the beads and to pull down the string was unchanged in the successful attempts on the first day and on the other training days, indicating successful action understanding but failed motor learning through observation in mice. After observation, post-hoc staining revealed that c-fos expression was increased in cognition-related brain areas (medial prefrontal cortex) and motor cortices (M1, M2). In conclusion, this project indicated that observation led to a better understanding of behaviors and activated the cognitive and motor-related brain areas, suggesting the possible existence of mirror neurons in these brain areas.

Keywords: observation, motor learning, string-pulling behavior, prefrontal cortex, motor cortex, cognitive

Procedia PDF Downloads 63
338 Prediction of Terrorist Activities in Nigeria using Bayesian Neural Network with Heterogeneous Transfer Functions

Authors: Tayo P. Ogundunmade, Adedayo A. Adepoju

Abstract:

Terrorist attacks in liberal democracies bring about several pessimistic results: for example, sabotaged public support for the governments they target, disturbance of the peace of a protected environment underwritten by the state, and a limitation on individuals' ability to contribute to the advancement of the country, among others. Hence, seeking techniques to understand the different factors involved in terrorism, and how to deal with those factors in order to completely stop or reduce terrorist activities, is a top priority of the government in every country. The aim of this research is to develop an efficient deep learning-based predictive model for the prediction of future terrorist activities in Nigeria, addressing the low prediction accuracy associated with existing solution methods. The proposed predictive AI-based model, as a counterterrorism tool, will be useful to governments and law enforcement agencies in protecting the lives of individuals in society and improving the quality of life in general. A Heterogeneous Bayesian Neural Network (HETBNN) model was derived with a Gaussian normal error distribution. Three primary transfer functions (HOTTFs), as well as two derived transfer functions (HETTFs) arising from the convolution of the HOTTFs, were used, namely: the Symmetric Saturated Linear transfer function (SATLINS), the Hyperbolic Tangent transfer function (TANH), the Hyperbolic Tangent Sigmoid transfer function (TANSIG), the Symmetric Saturated Linear and Hyperbolic Tangent transfer function (SATLINS-TANH), and the Symmetric Saturated Linear and Hyperbolic Tangent Sigmoid transfer function (SATLINS-TANSIG). Data on terrorist activities in Nigeria, gathered through questionnaires for the purpose of this study, were used. Mean Square Error (MSE), Mean Absolute Error (MAE) and test error were the forecast evaluation criteria. The results showed that the HETTFs performed better in terms of prediction, and the factors associated with terrorist activities in Nigeria were determined.
The proposed predictive deep learning-based model will be useful to governments and law enforcement agencies as an effective counterterrorism mechanism to understand the parameters of terrorism and to design strategies to deal with terrorism before an incident actually happens and potentially causes the loss of precious lives. The proposed predictive AI-based model will reduce the chances of terrorist activities and is particularly helpful for security agencies to predict future terrorist activities.
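The abstract names the transfer functions but not their formulas. A minimal sketch of the standard definitions follows: TANSIG is the Matlab-style sigmoid 2/(1+e^(-2x))-1, which is algebraically equal to tanh(x), and SATLINS clips its input to [-1, 1]. How the paper combines primary functions into SATLINS-TANH and SATLINS-TANSIG is not specified here, so function composition is used purely as an illustration:

```python
# Hedged sketch of the named transfer functions. The combined
# (heterogeneous) unit is illustrated as a composition; the paper's
# exact "convolution" rule is not given in the abstract.
import math

def satlins(x: float) -> float:
    """Symmetric saturating linear: clip x to [-1, 1]."""
    return max(-1.0, min(1.0, x))

def tanh(x: float) -> float:
    """Hyperbolic tangent transfer function."""
    return math.tanh(x)

def tansig(x: float) -> float:
    """Hyperbolic tangent sigmoid, 2/(1+exp(-2x)) - 1 (equals tanh(x))."""
    return 2.0 / (1.0 + math.exp(-2.0 * x)) - 1.0

def satlins_tanh(x: float) -> float:
    """Illustrative heterogeneous unit: saturate, then squash."""
    return tanh(satlins(x))

print(satlins(3.0), satlins_tanh(3.0))  # 1.0 tanh(1.0)
```

Because TANSIG and TANH coincide mathematically, any measured performance difference between them in a trained network would come from training dynamics or numerics rather than the functions themselves.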

Keywords: activation functions, Bayesian neural network, mean square error, test error, terrorism

Procedia PDF Downloads 139