Search results for: applying
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2252

302 The Environmental Impact Assessment of Land Use Planning (Case Study: Tannery Industry in Al-Garma District)

Authors: Husam Abdulmuttaleb Hashim

Abstract:

Environmental pollution represents a great challenge to the world, threatening to undo the progress humankind has achieved. Organizations and associations concerned with the environment warn of the dangers arising from the excessive consumption of natural resources without regard for the damage caused by their misuse. Most urban centers suffer from environmental pollution problems and the health, economic, and social hazards that result from this pollution. Land use planning is responsible for distributing different uses within urban centers and controlling the interactions between them so that the various activities in a city reach a homogeneous and balanced state. The occurrence of environmental problems under an existing land use planning operation therefore points to a disorder or insufficiency in that operation, namely the lack of sufficient attention to environmental considerations during land use planning and the preparation of the master plan. This research sets out to study this problem and find solutions for it. It assumes that using accurate, scientific methods in the early stages of the land use planning operation will prevent the occurrence of environmental pollution problems in the future, and it aims to demonstrate the importance of environmental impact assessment (EIA) as a planning tool for investigating and predicting the pollution ranges of polluting land uses within the land use planning operation.
The research covers the concept of environmental assessment and its kinds, clarifies environmental impact assessment and its contents, and addresses the concepts of urban planning and land use planning. It then examines the current situation of the case study (Al-Garma district) and its land use planning, identifying the most environmentally polluting use, the industrial land use represented by the tannery industries. The current situation of this land use, its contents, and its resulting environmental impacts are described. Water and soil tests conducted by the researcher are analyzed, and an environmental evaluation is performed by applying an environmental impact assessment matrix using the direct method to reveal the pollution ranges affecting the environment surrounding the industrial land use. Environmental and site limits and standards are then applied using GIS and AutoCAD to select the best alternative site for the industrial region in Al-Garma district, after the research established the unsuitability of its current location with respect to the environmental and site limitations. The research concludes with findings and recommendations that clarify the established facts and set out appropriate solutions.
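The matrix-based evaluation described in this abstract can be illustrated with a minimal sketch of a Leopold-style impact matrix. Whether the study's "direct method" matrix took this exact form is not stated, and all activities, factors, and scores below are hypothetical:

```python
# Minimal Leopold-style EIA matrix sketch.
# Rows: project activities; columns: environmental factors.
# Each cell holds (magnitude, importance) on a 1-10 scale; a composite
# impact score per factor is the sum of magnitude * importance.

activities = ["effluent discharge", "solid waste dumping", "chemical storage"]
factors = ["groundwater", "soil", "air"]

# Hypothetical (magnitude, importance) scores -- not the study's data.
matrix = {
    ("effluent discharge", "groundwater"): (8, 9),
    ("effluent discharge", "soil"): (6, 7),
    ("solid waste dumping", "soil"): (7, 8),
    ("solid waste dumping", "air"): (3, 4),
    ("chemical storage", "groundwater"): (4, 6),
}

def factor_impact(factor):
    """Sum of magnitude * importance over all activities for one factor."""
    return sum(m * i for (act, fac), (m, i) in matrix.items() if fac == factor)

for fac in factors:
    print(fac, factor_impact(fac))
```

Comparing the per-factor totals in this way is what lets such a matrix rank which environmental components are most affected by a polluting land use.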

Keywords: EIA, pollution, tannery industry, land use planning

Procedia PDF Downloads 447
301 Effects of Plasma Technology in Biodegradable Films for Food Packaging

Authors: Viviane P. Romani, Bradley D. Olsen, Vilásia G. Martins

Abstract:

Biodegradable films for food packaging have gained growing attention due to the environmental pollution caused by synthetic films and the interest in making better use of natural resources. Important research advances have been made in the development of materials from proteins, polysaccharides, and lipids. However, the commercial use of this new generation of sustainable materials for food packaging is still limited by their low mechanical and barrier properties, which could compromise food quality and safety. Strategies to improve the performance of these materials have therefore been tested, such as chemical modifications and the incorporation of reinforcing structures. Cold plasma is a versatile, fast, and environmentally friendly technology. It consists of a partially ionized gas containing free electrons, ions, radicals, and neutral particles able to react with polymers and initiate different reactions, leading to polymer degradation, functionalization, etching, and/or cross-linking. In the present study, biodegradable films from fish protein prepared through the casting technique were plasma treated using AC glow discharge equipment. The reactor was first evacuated to ~7 Pa, and the films were exposed to air plasma for 2, 5, and 8 min. The films were evaluated for their mechanical and water vapor permeability (WVP) properties, and changes in the protein structure were observed using Scanning Electron Microscopy (SEM) and X-ray diffraction (XRD). Potential cross-linking and the elimination of surface defects by etching might explain the observed increase in tensile strength and decrease in elongation at break. Among the plasma exposure times tested, no differences were observed at the longer exposures. The X-ray pattern showed a broad peak at 2θ = 19.51°, corresponding to a spacing of 4.6 Å by Bragg's law. This distance corresponds to the average backbone distance within the α-helix.
Thus, the changes observed in the films might indicate that the helical configuration of the fish protein was disturbed by the plasma treatment. SEM images showed surface damage in the films treated for 5 and 8 min, indicating that 2 min was the most adequate treatment time. It was verified that plasma removes water from the films, since a weight loss of 4.45% was recorded for films treated for 2 min. However, after 24 h at 50% relative humidity, the lost water was recovered. WVP increased from 0.53 to 0.65 g·mm/(h·m²·kPa) after 2 min of plasma treatment, which is desirable for some food applications that require water passage through the packaging. In general, plasma technology affects the properties and structure of fish protein films. Since this technology changes the surface of polymers, these films might be used to develop multilayer materials, as well as to incorporate active substances on the surface to obtain active packaging.
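The d-spacing quoted in this abstract can be recovered from Bragg's law, nλ = 2d·sin θ. The short check below assumes Cu Kα radiation (λ = 1.5406 Å) and first-order diffraction (n = 1), neither of which the abstract states explicitly:

```python
import math

# Bragg's law with n = 1: d = lambda / (2 * sin(theta)).
# The abstract reports a broad XRD peak at 2-theta = 19.51 degrees
# and a spacing of ~4.6 angstrom.
wavelength = 1.5406            # angstrom (assumed Cu K-alpha)
two_theta = 19.51              # degrees
theta = math.radians(two_theta / 2)
d = wavelength / (2 * math.sin(theta))
print(round(d, 2))             # ~4.55 angstrom, consistent with the reported 4.6
```

Under the Cu Kα assumption, the computed spacing (~4.55 Å) agrees with the ~4.6 Å backbone distance reported for the α-helix.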

Keywords: fish protein films, food packaging, improvement of properties, plasma treatment

Procedia PDF Downloads 161
300 Single-Case Experimental Design: Exploratory Pilot Study on the Feasibility and Effect of Virtual Reality for Pain and Anxiety Management During Care

Authors: Corbel Camille, Le Cerf Flora, Corveleyn Xavier

Abstract:

Introduction: Aging is a physiological phenomenon accompanied by anatomical and cognitive changes that can lead to anxiety and pain. These can have significant impacts on quality of life, life expectancy, and the progression of cognitive disorders. Virtual Reality Intervention (VRI) is increasingly recognized as a non-pharmacological approach to alleviating pain and anxiety in children and young adults. However, while recent studies have explored the feasibility of applying VRI in older populations, further studies are still required to establish its benefits in various contexts. Objective: This pilot study, following international clinical trial methodology recommendations for VRI in healthcare, aims to evaluate the feasibility and effects of VRI with a 101-year-old woman residing in a nursing home who undergoes weekly painful and anxiety-inducing wound dressing changes. Methods: Following the international recommendations, this study focused on feasibility and preliminary results. The Single-Case Experimental Design protocol consisted of two distinct phases, control (Phase A) and personalized VRI (Phase B), each lasting 6 sessions. Data were collected before, during, and after care, using measures of pain (Algoplus and numerical scale), anxiety (Hospital Anxiety Scale and numerical scale), VRI experience (semi-structured interview), and physiological measures. Results: The results suggest that the use of VRI is both feasible and well tolerated by the participant. VRI contributed to a decrease in pain and anxiety during care sessions, with a more pronounced effect on pain than on anxiety, which showed a gradual and slight decrease. Physiological data, particularly those related to stress, also indicate a reduction in physiological activity during VRI. Conclusion: This pilot study confirms the feasibility and benefits of using virtual reality in managing pain and anxiety in an older adult in a nursing home.
In light of these results, it is essential that future studies focus on setting up randomized controlled trials (RCTs). These studies should involve a representative number of older adults to ensure generalizable data. This rigorous, controlled methodology will enable us to assess the effectiveness of virtual reality more accurately in various care settings, measure its impact on clinical parameters such as pain and anxiety, and explore the long-term implications of this intervention.

Keywords: anxiety reduction, nursing home, older adult, pain management, virtual reality

Procedia PDF Downloads 57
299 Overcoming Obstacles in UHT High-Protein Whey Beverages by Microparticulation Process: Scientific and Technological Aspects

Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh, Seyed Jalal Razavi Zahedkolaei

Abstract:

Herein, a shelf-stable (no refrigeration required), UHT-processed, aseptically packaged whey protein drink was formulated using a new microparticulation strategy. By applying thermal and two-dimensional mechanical treatments simultaneously, a modified protein (MWPC-80) was produced. The physical, thermal, and thermodynamic properties of MWPC-80 were then assessed using particle size analysis, dynamic temperature sweep (DTS), and differential scanning calorimetry (DSC) tests. Finally, a new ready-to-drink (RTD) beverage was formulated using MWPC-80, and its shelf stability was assessed for three months at ambient temperature (25 °C). A non-isothermal dynamic temperature sweep was performed, and the results were analyzed using a combination of the classic rate equation, the Arrhenius equation, and a time-temperature relationship. Overall, the temperature dependency of the modified sample was significantly (P < 0.05) lower than that of the control containing WPC-80. The elastic modulus of the MWPC did not show any critical point at any processing stage, whereas the control sample showed two critical points, during the heating (82.5 °C) and cooling (71.10 °C) stages. Thermal properties of the samples (WPC-80 and MWPC-80) were assessed by DSC at a heating rate of 4 °C/min over the 20-90 °C range. The MWPC DSC curve showed no thermal peak, suggesting high thermal resistance. In contrast, the WPC-80 sample showed a significant thermal peak with thermodynamic properties of ΔG: 942.52 kJ/mol, ΔH: 857.04 kJ/mol, and ΔS: −1.22 kJ/(mol·K). Dynamic light scattering showed average particle sizes of 0.7 µm and 15 nm for the MWPC-80 and WPC-80 samples, respectively. Moreover, the particle size distributions of MWPC-80 and WPC-80 were Gaussian-Lorentzian and normal, respectively.
After verification of the microparticulation process by the DTS, PSD, and DSC analyses, a 10% whey protein beverage (10% w/w MWPC-80, 0.6% w/w vanilla flavoring agent, 0.1% masking flavor, 0.05% stevia natural sweetener, and 0.25% citrate buffer) was formulated, and UHT treatment was performed at 137 °C for 4 s. The shelf life study did not show any gelation or precipitation of the MWPC-80-containing beverage during three months of storage at ambient temperature, whereas the WPC-80-containing beverage showed significant precipitation and gelation after thermal processing, even at a 3% w/w concentration. Growing consumer awareness of the nutritional advantages of whey protein has increased demand for this protein in different food systems, especially RTD beverages. These results could make a substantial difference in this industry.

Keywords: high-protein whey beverage, microparticulation, two-dimensional mechanical treatments, thermodynamic properties

Procedia PDF Downloads 67
298 A Content Analysis of the Introduction to the Philosophy of Religion Literature Published in the West between 1950-2010 in Terms of Definition, Method and Subjects

Authors: Fatih Topaloğlu

Abstract:

Although philosophy is inherently a theoretical and intellectual activity, it should not be denied that environmental conditions influence the formation and shaping of philosophical thought. In this context, it should be noted that the Philosophy of Religion has been influential in debates in the West, especially since the beginning of the 20th century, and that this influence has dimensions that cannot be limited to academic or intellectual fields. The issues and problems within the field of the Philosophy of Religion are followed with interest by a significant proportion of society through popular publications, and the Philosophy of Religion has had its share in many social, economic, cultural, scientific, political, and ethical developments. In the most general sense, the Philosophy of Religion can be defined as a philosophical approach to religion, or a philosophical way of thinking and arguing about religion. It tries to explain the epistemological foundations of concepts such as belief and faith, which shape religious life, by revealing their meaning for the individual, and it evaluates the effect of beliefs on the individual's values, judgments, and behaviors with a comprehensive and critical eye. Seeking new solutions and perspectives by applying the methods of philosophy to religious problems, it addresses these problems not by appealing to scripture or religious teachings but by logical arguments built on reason and on evidence subjected to critical scrutiny. Although there is no standard method for doing Philosophy of Religion, an approach that can be described as thinking about religion in a rational, objective, and consistent way is generally accepted. Evaluations made within the scope of the Philosophy of Religion have two stages: the first is the definition stage, and the second is the evaluation stage.
In the first stage, the data of different scientific disciplines, especially the other religious sciences, are utilized to define the issues objectively. In the second stage, philosophical evaluations are made on this foundation. During these evaluations, the question of how the relationship between religion and philosophy should be established is extremely sensitive. The main thesis of this paper is that the Philosophy of Religion, as a branch of philosophy, has been affected by the conditions of the historical experience through which it has passed and, under the influence of these conditions, has differentiated its subjects and methods over time. This study will attempt to evaluate the validity of this thesis based on the "Introduction to the Philosophy of Religion" literature, which we assume reflects this differentiation. The examination aims to reach factual conclusions about the nature of both philosophical and religious thought, to determine the phases the Philosophy of Religion has gone through as a discipline since it emerged, and to investigate the possibilities of a holistic view of the field.

Keywords: content analysis, culture, history, philosophy of religion, method

Procedia PDF Downloads 53
297 Characteristics of Rock Glacier Deposits in the Southern Carpathians, Romania

Authors: Petru Urdea

Abstract:

As a distinct part of the mountain system, the rock glacier system is a particular periglacial debris system. Being an open system, it is interconnected with other subsystems, such as the glacial, cliff, rocky slope, and talus slope subsystems, which are sources of sediments. One characteristic is that, over long periods of time, it acts as a storage unit for debris and ice, and temporarily for snow and water. In the Southern Carpathians, 306 rock glaciers have been identified. The vast majority of these, 74%, are talus rock glaciers, and 26% are debris rock glaciers. The areas underlain by granites and granodiorites host 49% of all the rock glaciers, representing 61% of the area occupied by Southern Carpathian rock glaciers. This lithological dependence also leaves its mark on the character of the deposits, which bear the imprint of the particular way the rocks respond to physical weathering processes under a periglacial regime. Whereas in the granite and granodiorite domain the blocks are large, of metric order, even 10 m³, in the metamorphic domain only the gneisses yield similar sizes. Amphibolites, amphibolitic schists, micaschists, sericite-chlorite schists, and phyllites break into much smaller blocks, of decimetric order, mostly in the form of slabs. In rock glaciers made up of large blocks with an open-work structure, the density and volume of voids between the blocks are greater, while smaller debris generates more compact structures with fewer voids. All of these influence the thermal regime, which is associated with a particular pattern of seasonal air circulation and with the conditions for permafrost formation. The rock glaciers are fed by rock falls, rock avalanches, debris flows, and snow avalanches, so their structure is heterogeneous, which is also reflected in their detailed topography.
This heterogeneity is also influenced by the spatial arrangement of the rock bodies in the supply area and, an element that cannot be omitted, by the behavior of the rocks during periglacial weathering. The production of small gelifracts fills the voids and produces more compact structures, with effects on the creep process. In general, surface deposits are coarser and deeper deposits finer, and their characteristics can be detected by applying geophysical methods. Electrical resistivity tomography (ERT) and ground-penetrating radar (GPR) investigations carried out in the Făgăraş, Retezat, and Parâng Mountains, each with a different lithological character, allowed the identification of some differentiations, including the presence of permafrost bodies.

Keywords: rock glacier deposits, structure, lithology, permafrost, Southern Carpathians, Romania

Procedia PDF Downloads 20
296 Prismatic Bifurcation Study of a Functionally Graded Dielectric Elastomeric Tube Using Linearized Incremental Theory of Deformations

Authors: Sanjeet Patra, Soham Roychowdhury

Abstract:

In recent times, functionally graded dielectric elastomers (FGDEs) have gained significant attention within the realm of soft actuation due to their dual capacity to exert highly localized stresses while maintaining compliant characteristics under electro-mechanical loading. Nevertheless, the full potential of dielectric elastomers (DEs) has not been explored because of their susceptibility to instabilities under electro-mechanical loads. As a result, the study and analysis of such instabilities become crucial for the design and realization of dielectric actuators. Prismatic bifurcation is a type of instability that has been recognized in DE tubes. Though several studies have analyzed prismatic bifurcation in isotropic DE tubes, studies of prismatic bifurcation in FGDE tubes are lacking. Therefore, this paper aims to determine the onset of prismatic bifurcations in an incompressible FGDE tube subjected to electrical loading across its thickness and to internal pressurization. The analysis has been conducted by imposing two axial boundary conditions on the tube: axially free ends and axially clamped ends. Additionally, the rigidity modulus of the tube has been linearly graded through the thickness, with the inner surface of the tube having a lower stiffness than the outer surface. The static equilibrium equations for the deformation of the axisymmetric tube are derived and solved numerically. The condition for prismatic bifurcation of the axisymmetric static equilibrium solutions has been obtained using the linearized incremental constitutive equations. Two modes of bifurcation, corresponding to two different non-circular cross-sectional geometries, have been explored in this study. The outcomes reveal that the FGDE tube experiences prismatic bifurcation before the Hessian criterion of failure is satisfied.
It is observed that the lower mode of bifurcation can be triggered at a lower critical voltage than the higher mode. Furthermore, tubes with larger stiffness gradients require higher critical voltages to trigger the bifurcation, and as the stiffness gradient increases, the critical voltage varies linearly with the thickness of the tube. It has been found that applying internal pressure to a thin-walled tube makes it less susceptible to bifurcation. At the higher mode of bifurcation, a thicker tube with axially free ends is found to be more stable than one with axially clamped ends.

Keywords: critical voltage, functionally graded dielectric elastomer, linearized incremental approach, modulus of rigidity, prismatic bifurcation

Procedia PDF Downloads 75
295 Designing a Socio-Technical System for Groundwater Resources Management, Applying Smart Energy and Water Meter

Authors: S. Mahdi Sadatmansouri, Maryam Khalili

Abstract:

The world today faces a serious water scarcity problem. In recent years, with the advent of the Smart Energy and Water Meter (SEWM) and its installation at the electro-pumps of water wells, it was believed that this could be the golden key to addressing the over-pumping of groundwater resources. In practice, the implementation of these smart meters controlled water table drawdown only briefly; it was not a sustainable approach. SEWM was initially conceived as a law enforcement facility; however, solving a complex socioeconomic problem such as shared groundwater resources management requires more than enforcement: it requires participation to conserve common resources. The well owners or farmers, as water consumers, are the main and direct stakeholders of this system; other stakeholders include government sectors, investors, technology providers, the private sector, and ordinary people. Designing a socio-technical system not only defines the role of each stakeholder but also facilitates the communication needed to reach the system's goals while the benefits of each stakeholder are considered and provided. Farmers, as the key participants in solving the groundwater problem, do not trust governments, but they would trust a fair system in which responsibilities, privileges, and benefits are clear. Technology can help this system remain impartial and productive, while the social dimension provides the rules, regulations, and social objects that make it more human-centered. As the design methodology, Design Thinking provides probable solutions for challenging problems and ongoing conflicts and can illuminate the way the final system could be designed. Using IDEO's Human-Centered Design approach helps keep farmers at the center of the solution and provides a vision by which stakeholders' requirements and needs are addressed effectively.
Farmers can be expected to trust the system and participate in managing their groundwater resources if they find its rules and tools fair and effective. Moreover, implementation of the socio-technical system could change farmers' behavior so that they care more about their valuable shared water resources as well as their farm profit. This socio-technical system contains nine main subsystems: 1) Measurement and Monitoring system, 2) Legislation and Governmental system, 3) Information Sharing system, 4) Knowledge-based NGOs, 5) Integrated Farm Management system (using IoT), 6) Water Market and Water Banking system, 7) Gamification, 8) Agribusiness ecosystem, and 9) Investment system.

Keywords: human centered design, participatory management, smart energy and water meter (SEWM), social object, socio-technical system, water table drawdown

Procedia PDF Downloads 291
294 Colored Image Classification Using Quantum Convolutional Neural Networks Approach

Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins

Abstract:

Recently, quantum machine learning (QML) has received significant attention. Numerous QML models have been created and are being tested on various types of data, including text and images. Images are exceedingly complex data components that demand considerable processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has changed how machine learning is thought of by employing quantum features to address optimization issues. Since current quantum hardware is extremely noisy, it is not practicable to run machine learning algorithms on it without risking inaccurate results. To discover the advantages of quantum versus classical approaches, this research concentrates on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets such as MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 have been compared for binary classification, and the comparison showed that MNIST was classified more accurately than the colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed for colored images to determine how much better they are than classical models; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were converted to grayscale as 28 × 28-pixel images, and 50,000 training and 10,000 test images were used.
The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may further increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be superior to classical machine learning approaches in terms of processing speed and accuracy when used to perform classification on colored classes.
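The encoding step described above, mapping pre-processed pixel values to quantum gate rotations before measurement, can be illustrated with a minimal single-qubit angle-encoding sketch in plain NumPy. This shows the general technique only; it is not the authors' PennyLane circuit:

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def encode_pixel(pixel):
    """Angle-encode a pixel in [0, 1] as an RY rotation of |0>,
    then return the Pauli-Z expectation value of the rotated state."""
    state = ry(np.pi * pixel) @ np.array([1.0, 0.0])   # RY(pi*p)|0>
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(state @ z @ state)                    # <psi|Z|psi>

# A black pixel (0.0) maps to +1, a white pixel (1.0) to -1,
# and mid-gray (0.5) to ~0.
print(encode_pixel(0.0), encode_pixel(1.0), encode_pixel(0.5))
```

In a hybrid QCNN pipeline, many such rotations (one or more per pixel or patch) form the feature-extraction circuit, and the classical network is trained on the measured expectation values.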

Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning

Procedia PDF Downloads 125
293 British Female Muslim Converts: An Investigation into Their De-Conversions from Islam

Authors: Mona Alyedreessy

Abstract:

This study, based on a qualitative sample of thirty-four British converts in London of different ages, ethnicities, social classes, areas, and religious backgrounds, investigates the common challenges, problems, and abuse in the name of Islam that many British female Muslim converts experienced during their time as Muslims and that caused them to leave the faith. It is an important study, as it creates awareness of the weaknesses found in western Muslim societies and in various Islamic educational programs that cause people to leave Islam and contribute to its negative reputation in the media. The women in this study shared common problems regarding gender and racial discrimination, identity development, feminism, marriage, parenting, Muslim culture, isolation, extremism, belonging, and practising Islam in both Muslim and non-Muslim societies, with differing sacrifices and consequences that caused them to de-convert. The study argues that many of the personal, religious, and social problems female Muslim converts experience are due to a lack of knowledge about Islam and their rights as Muslim women, which often leaves them vulnerable to the opinions, attitudes, and actions of uneducated, abusive, non-practising, and extremist Muslims. For example, it was found that young female converts in particular were often taken advantage of and manipulated into believing that many of the negative actions of patriarchal Muslim husbands were a part of Islam. This created much confusion, especially when their husbands used specific Quranic texts and Hadiths to justify the abuse, authority, and attitudes that made the women miserable.
As a result and based on the positive experiences of some converts, the study found that obtaining a broad Islamic education that started with an intimate study of the Prophet Muhammad’s biography alongside being guided by the teachings of western Muslim scholars contributed greatly towards a more enjoyable conversion journey, as women were able to identify and avoid problematic Muslims and abuse in the name of Islam. This in turn helped to create a healthier family unit and Muslim society. Those who enjoyed being Muslims were able to create a balanced western Muslim identity by negotiating and applying their own morals and western values to their understanding of The Prophet’s biography and The Quran and integrated Islamic values into their own secular western environments that were free from foreign cultural practices. The outcomes of the study also highlight some effective modern approaches to da’wah based on the teachings of The Prophet Mohammad and other prophets for young Arab and Asian Muslims who marry, study and live among non-Muslims and converts.

Keywords: abuse, apostasy, converts, Muslims

Procedia PDF Downloads 227
292 A Hybrid Artificial Intelligence and Two Dimensional Depth Averaged Numerical Model for Solving Shallow Water and Exner Equations Simultaneously

Authors: S. Mehrab Amiri, Nasser Talebbeydokhti

Abstract:

Modeling sediment transport processes by means of numerical approach often poses severe challenges. In this way, a number of techniques have been suggested to solve flow and sediment equations in decoupled, semi-coupled or fully coupled forms. Furthermore, in order to capture flow discontinuities, a number of techniques, like artificial viscosity and shock fitting, have been proposed for solving these equations which are mostly required careful calibration processes. In this research, a numerical scheme for solving shallow water and Exner equations in fully coupled form is presented. First-Order Centered scheme is applied for producing required numerical fluxes and the reconstruction process is carried out toward using Monotonic Upstream Scheme for Conservation Laws to achieve a high order scheme.  In order to satisfy C-property of the scheme in presence of bed topography, Surface Gradient Method is proposed. Combining the presented scheme with fourth order Runge-Kutta algorithm for time integration yields a competent numerical scheme. In addition, to handle non-prismatic channels problems, Cartesian Cut Cell Method is employed. A trained Multi-Layer Perceptron Artificial Neural Network which is of Feed Forward Back Propagation (FFBP) type estimates sediment flow discharge in the model rather than usual empirical formulas. Hydrodynamic part of the model is tested for showing its capability in simulation of flow discontinuities, transcritical flows, wetting/drying conditions and non-prismatic channel flows. In this end, dam-break flow onto a locally non-prismatic converging-diverging channel with initially dry bed conditions is modeled. The morphodynamic part of the model is verified simulating dam break on a dry movable bed and bed level variations in an alluvial junction. 
The results show that the model is capable of capturing flow discontinuities, solving wetting/drying problems even in non-prismatic channels, and producing proper results for movable bed situations. It can also be deduced that applying an Artificial Neural Network, instead of common empirical formulas, for estimating sediment flow discharge leads to more accurate results.
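As an illustrative sketch (not the authors' implementation), the First-Order Centered (FORCE) flux used to produce the numerical fluxes can be written, for the 1-D shallow water part of the system, as the average of the Lax-Friedrichs and Lax-Wendroff two-step fluxes. The function names below are chosen for illustration only:

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def swe_flux(U):
    """Physical flux of the 1-D shallow water equations, U = (h, hu).

    Assumes a wet state (h > 0)."""
    h, hu = U
    return np.array([hu, hu**2 / h + 0.5 * G * h**2])

def force_flux(UL, UR, dx, dt):
    """First-Order Centered (FORCE) flux at a cell interface:
    the average of the Lax-Friedrichs and Lax-Wendroff fluxes."""
    FL, FR = swe_flux(UL), swe_flux(UR)
    # Lax-Friedrichs flux
    F_lf = 0.5 * (FL + FR) - 0.5 * (dx / dt) * (UR - UL)
    # Lax-Wendroff intermediate state and its flux
    U_lw = 0.5 * (UL + UR) - 0.5 * (dt / dx) * (FR - FL)
    F_lw = swe_flux(U_lw)
    return 0.5 * (F_lf + F_lw)
```

With identical left and right states, the FORCE flux reduces to the physical flux, which serves as a quick consistency check for the scheme.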

Keywords: artificial neural network, morphodynamic model, sediment continuity equation, shallow water equations

Procedia PDF Downloads 184
291 Threading Professionalism Through Occupational Therapy Curriculum: A Framework and Resources

Authors: Ashley Hobson, Ashley Efaw

Abstract:

Professionalism is an essential skill for clinicians, particularly for Occupational Therapy Providers (OTPs). The World Federation of Occupational Therapy (WFOT) Guiding Principles for Ethical Occupational Therapy and the American Occupational Therapy Association (AOTA) Code of Ethics establish expectations for professionalism among OTPs, emphasizing its importance in the field. However, the teaching and assessment of professionalism vary across OTP programs. The flexibility provided by country standards allows programs to determine their own approaches to meeting these standards, resulting in inconsistency. Educators in both academic and fieldwork settings face challenges in objectively assessing and providing feedback on student professionalism. Although they observe instances of unprofessional behavior, there is no standardized assessment measure to evaluate professionalism in OTP students. While most students are committed to learning and applying professionalism skills, they enter OTP programs with varying levels of proficiency in this area. Consequently, they lack a uniform understanding of professionalism and an objective means to self-assess their current skills and identify areas for growth. It is crucial to explicitly teach professionalism, have students self-assess their professionalism skills, and have OTP educators assess student professionalism. This approach is necessary for fostering students' professional development. Traditionally, there has been no objective way for students to self-assess their professionalism or for educators to provide objective assessments and feedback. To establish a uniform approach to professionalism, the authors incorporated professionalism content into their curriculum. Utilizing an operational definition of professionalism, the authors integrated professionalism into didactic, fieldwork, and capstone courses.
The complexity of the content and the professionalism skills expected of students increase each year to ensure students graduate with the skills to practice in accordance with the WFOT Guiding Principles for Ethical Occupational Therapy Practice and the AOTA Code of Ethics. Two professionalism assessments were developed based on the expectations outlined in both documents. The Professionalism Self-Assessment allows students to evaluate their professionalism, reflect on their performance, and set goals. The Professionalism Assessment for Educators is a modified version of the same tool designed for educators. The purpose of this workshop is to provide educators with a framework and tools for assessing student professionalism. The authors discuss how to integrate professionalism content into OTP curricula and utilize professionalism assessments to provide constructive feedback and equitable learning opportunities for OTP students in academic, fieldwork, and capstone settings. By adopting these strategies, educators can enhance the development of professionalism among OTP students, ensuring they are well-prepared to meet the demands of the profession.

Keywords: professionalism, assessments, student learning, student preparedness, ethical practice

Procedia PDF Downloads 37
290 Effect of Starch and Plasticizer Types and Fiber Content on Properties of Polylactic Acid/Thermoplastic Starch Blend

Authors: Rangrong Yoksan, Amporn Sane, Nattaporn Khanoonkon, Chanakorn Yokesahachart, Narumol Noivoil, Khanh Minh Dang

Abstract:

Polylactic acid (PLA) is the most commercially available bio-based and biodegradable plastic at present. PLA has been used in plastics-related industries, including single-use containers and disposable, environmentally friendly packaging, owing to its renewability, compostability, biodegradability, and safety. Although PLA demonstrates reasonably good optical, physical, mechanical, and barrier properties, comparable to existing petroleum-based plastics, its brittleness and mold shrinkage, as well as its price, are points of concern for the production of rigid and semi-rigid packaging. Blending PLA with other bio-based polymers, including thermoplastic starch (TPS), is an alternative not only to achieve a completely bio-based plastic, but also to reduce the brittleness, shrinkage during molding, and production cost of PLA-based products. TPS is a material produced mainly from starch, which is cheap, renewable, biodegradable, compostable, and non-toxic. It is commonly prepared by plasticization of starch under heat and shear force. Although glycerol has been reported as one of the plasticizers most used for preparing TPS, its migration causes surface stickiness in TPS products. In some cases, mixed plasticizers or natural fibers have been applied to impede the retrogradation of starch or reduce the migration of glycerol. The introduction of fibers into TPS-based materials can also reinforce the polymer matrix. Therefore, the objective of the present research is to study the effects of starch type (i.e., native starch and phosphate starch), plasticizer type (i.e., glycerol and xylitol, with glycerol-to-xylitol weight ratios of 100:0, 75:25, 50:50, 25:75, and 0:100), and fiber content (i.e., in the range of 1-25 wt%) on the properties of PLA/TPS blends and composites. The PLA/TPS blends and composites were prepared using a twin-screw extruder and then converted into dumbbell-shaped specimens using an injection molding machine.
The PLA/TPS blends prepared using phosphate starch showed higher tensile strength and stiffness than the blends prepared using the native one. In contrast, the blends from native starch exhibited higher extensibility and heat distortion temperature (HDT) than those from the modified starch. Increasing the xylitol content enhanced the tensile strength, stiffness, and water resistance, but decreased the extensibility and HDT of the PLA/TPS blend. The tensile properties and hydrophobicity of the blend could be improved by incorporating silane-treated jute fibers.

Keywords: polylactic acid, thermoplastic starch, jute fiber, composite, blend

Procedia PDF Downloads 419
289 Modelling the Antecedents of Supply Chain Enablers in Online Groceries Using Interpretive Structural Modelling and MICMAC Analysis

Authors: Rose Antony, Vivekanand B. Khanapuri, Karuna Jain

Abstract:

Online groceries have transformed the way supply chains are managed. They face numerous challenges, including product wastage, low margins, a long time to break even, and low market penetration, to mention a few. E-grocery chains need to overcome these challenges in order to survive the competition. The purpose of this paper is to carry out a structural analysis of the enablers in e-grocery chains by applying Interpretive Structural Modeling (ISM) and MICMAC analysis in the Indian context. The research design is descriptive-explanatory in nature. The enablers were identified from the literature and through semi-structured interviews conducted among managers with relevant experience in e-grocery supply chains. The experts were contacted through professional/social networks by adopting a purposive snowball sampling technique. The interviews were transcribed, and manual coding was carried out using the open and axial coding method. The key enablers were categorized into themes, and the contextual relationships between these and the performance measures were sought from industry veterans. Using ISM, a hierarchical model of the enablers was developed, and MICMAC analysis identified their driving and dependence powers. Based on driving-dependence power, the enablers were categorized into four clusters, namely independent, autonomous, dependent, and linkage. The analysis found that information technology (IT) and manpower training act as key enablers in reducing lead time and enhancing online service quality. Many of the enablers fall under the linkage cluster, viz., frequent software updating, branding, the number of delivery boys, order processing, benchmarking, product freshness, and customized applications for different stakeholders, marking these as critical in online food/grocery supply chains. Considering the perishable nature of the product being handled, the impact of the enablers on product quality was also identified.
Hence, the study serves as a tool to identify and prioritize the vital enablers in the e-grocery supply chain. The work is perhaps unique in identifying the complex relationships among supply chain enablers for fresh food in e-groceries and linking them to performance measures. It contributes to the knowledge of supply chain management in general and e-retailing in particular. The approach focuses on fresh food supply chains in the Indian context and hence will be applicable in the context of developing economies, where supply chains are still evolving.
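As a minimal sketch of the MICMAC step described above (the enabler names and the reachability matrix here are hypothetical, not the study's data), driving power is the row sum and dependence power the column sum of the final reachability matrix, and the four clusters follow from comparing both against the midpoint:

```python
import numpy as np

def micmac_clusters(R, labels):
    """Classify enablers from a final 0/1 reachability matrix R.

    Driving power = row sum; dependence power = column sum.
    Quadrants: linkage (high/high), independent (high drive),
    dependent (high dependence), autonomous (low/low)."""
    R = np.asarray(R)
    driving = R.sum(axis=1)
    dependence = R.sum(axis=0)
    mid = R.shape[0] / 2  # midpoint splitting the MICMAC quadrants
    clusters = {}
    for i, name in enumerate(labels):
        if driving[i] > mid and dependence[i] > mid:
            clusters[name] = "linkage"
        elif driving[i] > mid:
            clusters[name] = "independent"
        elif dependence[i] > mid:
            clusters[name] = "dependent"
        else:
            clusters[name] = "autonomous"
    return clusters
```

A key driver such as IT would appear with high driving power and low dependence, landing in the independent cluster.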

Keywords: interpretive structural modelling (ISM), India, online grocery, retail operations, supply chain management

Procedia PDF Downloads 201
288 Critical Analysis of International Protections for Children from Sexual Abuse and Examination of Indian Legal Approach

Authors: Ankita Singh

Abstract:

Sex trafficking and child pornography are borderless crimes that cannot be effectively prevented through the laws and efforts of one country alone, because they require proper and smooth collaboration among countries. Eradicating international human trafficking syndicates, criminalising international cyber offenders, and effectively banning child pornography are not possible without effective universal laws; hence, continuous collaboration among all countries is much needed to adopt and routinely update such laws. Countries must congregate on an international platform from time to time, where they can adopt international agendas and create powerful universal laws to prevent sex trafficking and child pornography in this modern digital era. In the past, some international steps have been taken through the Convention on the Rights of the Child (CRC) and the Optional Protocol to the Convention on the Rights of the Child on the Sale of Children, Child Prostitution, and Child Pornography, but in reality, these measures are quite weak and are not capable of effectively protecting children from sexual abuse in this highly advanced digital era. The uncontrolled growth of artificial intelligence (AI) and its misuse, the lack of proper legal jurisdiction over foreign child abusers and difficulties in their extradition, and improper control over the international trade of digital child pornographic content are some prominent issues that can only be controlled through new, effective, and powerful universal laws. Due to the lack of effective international standards and of proper collaboration among countries, Indian laws are also not capable of taking effective action against child abusers. This research will be conducted through both doctrinal and empirical methods.
Various literary sources will be examined, and a questionnaire survey will be conducted to analyse the effectiveness of international standards and Indian laws against child pornography. Participants in this survey will be Indian university students. In this work, the existing international norms made for protecting children from sexual abuse will be critically analysed. It will explore why effective and strong collaboration between countries is required in modern times. It will be analysed whether the existing international steps are enough to protect children from being trafficked or subjected to pornography, and if these steps are found to be insufficient, suggestions will be given on how international standards and protections can be made more effective and powerful in this digital era. The approach of India towards the existing international standards, the Indian laws protecting children from being subjected to pornography, and the contributions and capabilities of India in strengthening the international standards will also be analysed.

Keywords: child pornography, prevention of children from sexual offences act, the optional protocol to the convention on the rights of the child on the sale of children, child prostitution and child pornography, the convention on the rights of the child

Procedia PDF Downloads 38
287 The Development of Traffic Devices Using Natural Rubber in Thailand

Authors: Weeradej Cheewapattananuwong, Keeree Srivichian, Godchamon Somchai, Wasin Phusanong, Nontawat Yoddamnern

Abstract:

Natural rubber used for traffic devices in Thailand has been developed and researched for several years. Compared with Dry Rubber Content (DRC) rubber, the quality of Ribbed Smoked Sheet (RSS) is better. However, the cost of admixtures, especially CaCO₃ and sulphur, is higher than the cost of the RSS itself. In this research, flexible guideposts and Rubber Fender Barriers (RFB) are taken into consideration. For the flexible guideposts, both RSS and DRC60% are used, but for RFB, only RSS is used, owing to the controlled performance tests. The objective of the flexible guideposts and RFB is to decrease the number of accidents, fatality rates, and serious injuries. Both devices serve to protect road users and vehicles by absorbing impact forces from vehicles, thereby reducing the severity of road accidents. This leads to mitigation methods that remedy motorists' injuries, reducing them from severe to moderate. The solution is to find the best practice for traffic devices using natural rubber under engineering concepts. In addition, the performance of the materials, such as tensile strength and durability, is evaluated to determine the modulus of elasticity and related properties. In the laboratory, crash simulation, finite element analysis of materials, LRFD, and concrete technology methods are taken into account. After calculation, trial compositions of the materials are mixed and tested in the laboratory. Tensile, compressive, and weathering (durability) tests follow ASTM standards. Furthermore, a cycle-repetition test of the flexible guideposts will be taken into consideration. The final step is to fabricate all materials and build a real test section in the field. In the RFB test, there will be 13 crash tests: 7 pickup truck tests and 6 motorcycle tests.
Vehicular crash testing of this kind is happening for the first time in Thailand, applying trial-and-error methods; for example, the road crash test standard was changed from NCHRP TL-3 (100 kph) to MASH 2016. This is because MASH 2016 is more demanding than NCHRP in terms of the speed, types, and weights of vehicles and the crash angle. In the MASH procedures, Test Level 6 (TL-6), which comprises a 2,270 kg pickup truck, 100 kph, and a 25-degree crash angle, is selected. The final real crash test will be carried out, and the whole system will be evaluated again in Korea. The researchers hope that the number of road accidents will decrease and that Thailand will no longer be among the top ten countries for road accidents in the world.

Keywords: LRFD, load and resistance factor design, ASTM, american society for testing and materials, NCHRP, national cooperation highway research program, MASH, manual for assessing safety hardware

Procedia PDF Downloads 126
286 The Significance of Islamic Concept of Good Faith to Cure Flaws in Public International Law

Authors: M. A. H. Barry

Abstract:

The concepts of good faith (husn al-niyyah) and fair dealing (Nadl) are the fundamental guiding elements in all contracts and other agreements under Islamic law. The teachings of the Quran and of Prophet Muhammad (Peace Be upon Him) firmly command people to act in good faith in all dealings. Several Quranic verses and sayings of the Prophet stress the significance of dealing honestly and fairly in all transactions. Under English law, good faith is not considered a fundamental requirement for the formation of a legal contract. However, the concept of good faith in private contracts is recognized by the civil law system and in Article 7(1) of the Convention on Contracts for the International Sale of Goods (CISG, Vienna Convention, 1980). It took several centuries for the international trading community to recognize the significance of the concept of good faith for international sale of goods transactions. Nevertheless, the recognition of good faith in civil law is confined to commercial contracts. Subsequent to the CISG, the concept has made inroads into private international law. There are submissions in favour of applying the good faith concept to public international law, based on tacit recognition by international conventions and tribunals. However, under public international law, the concept of good faith is not recognized as a source of rights or obligations. This weakens the spirit of the good faith concept, particularly when international disputes are determined. It also creates a fundamental flaw, because the absence of good faith means that breaches tainted by bad faith are tolerated. The objective of this research is to evaluate, examine, and analyze the application of the concept of good faith in modern laws and identify its limitations, in comparison with the Islamic concept of good faith.
This paper also identifies the problems and issues connected with the non-application of this concept in public international law. The research consists of three key components: (1) the preliminary inquiry, (2) subject analysis and discovery of research results, and (3) examination of the challenging problems, concluding with proposals. The preliminary inquiry is based on both primary and secondary sources. The same sources are used for the subject analysis. The research also has both inductive and deductive features. The Islamic concept of good faith covers all situations and circumstances where bad faith causes unfairness to the affected parties, especially the weak parties. Under Islamic law, the concept of good faith is a source of rights and obligations, as Islam prohibits any person from committing wrongful or delinquent acts in any dealing, whether in private or public life. This rule applies not only to individuals but also to institutions, states, and international organizations. This paper explains how unfairness is caused by the non-recognition of the good faith concept as a source of rights or obligations under public international law and provides legal and non-legal reasons why the Islamic formulation is important.

Keywords: good faith, the civil law system, the Islamic concept, public international law

Procedia PDF Downloads 143
285 Applying the Global Trigger Tool in German Hospitals: A Retrospective Study in Surgery and Neurosurgery

Authors: Mareen Brosterhaus, Antje Hammer, Steffen Kalina, Stefan Grau, Anjali A. Roeth, Hany Ashmawy, Thomas Gross, Marcel Binnebosel, Wolfram T. Knoefel, Tanja Manser

Abstract:

Background: The identification of critical incidents in hospitals is an essential component of improving patient safety. To date, various methods have been used to measure and characterize such critical incidents. These methods are often viewed by physicians and nurses as external quality assurance, which creates obstacles to the reporting of events and the implementation of recommendations in practice. One way to overcome this problem is to use tools that directly involve staff in measuring indicators of the quality and safety of care in the department. One such instrument is the global trigger tool (GTT), which helps physicians and nurses identify adverse events by systematically reviewing randomly selected patient records. Based on so-called 'triggers' (warning signals), indications of adverse events can be found. While the tool is already used internationally, its implementation in German hospitals has been very limited. Objectives: This study aimed to assess the feasibility and potential of the global trigger tool for identifying adverse events in German hospitals. Methods: A total of 120 patient records were randomly selected from two surgical departments and one neurosurgical department of three university hospitals in Germany, over a period of two months per department, between January and July 2017. The records were reviewed using an adaptation of the German version of the Institute for Healthcare Improvement Global Trigger Tool to identify triggers and adverse event rates per 1,000 patient days and per 100 admissions. The severity of adverse events was classified using the National Coordinating Council for Medication Error Reporting and Prevention index. Results: A total of 53 adverse events were detected in the three departments. This corresponded to adverse event rates of 25.5 to 72.1 per 1,000 patient days and of 25.0 to 60.0 per 100 admissions across the three departments.
98.1% of the identified adverse events were associated with non-permanent harm, either without (Category E, 71.7%) or with (Category F, 26.4%) the need for prolonged hospitalization. One adverse event (1.9%) was associated with potentially permanent harm to the patient. We also identified practical challenges in the implementation of the tool, such as the need to adapt the global trigger tool to the respective department. Conclusions: The global trigger tool is feasible and an effective instrument for quality measurement when adapted to departmental specifics. Based on our experience, we recommend continuous use of the tool, thereby directly involving clinicians in quality improvement.
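For illustration, the two rates reported above can be computed as follows; the example figures in the usage note are hypothetical and do not reproduce the study's department-level data:

```python
def adverse_event_rates(n_events, patient_days, admissions):
    """Adverse event rates as conventionally reported with the
    IHI Global Trigger Tool: per 1,000 patient days and per 100
    admissions for a given review period."""
    per_1000_days = 1000.0 * n_events / patient_days
    per_100_admissions = 100.0 * n_events / admissions
    return per_1000_days, per_100_admissions
```

For example, a hypothetical department with 12 adverse events over 480 patient days and 40 admissions would report 25.0 events per 1,000 patient days and 30.0 per 100 admissions.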

Keywords: adverse events, global trigger tool, patient safety, record review

Procedia PDF Downloads 246
284 A Review of Gas Hydrate Rock Physics Models

Authors: Hemin Yuan, Yun Wang, Xiangchun Wang

Abstract:

Gas hydrate is drawing attention because its worldwide reserves are enormous, almost twice the conventional hydrocarbon reserves, making it a potential alternative source of energy. It is widely distributed in permafrost and on continental shelves, and many countries have launched national programs for investigating gas hydrate. Gas hydrate is mainly explored through seismic methods, which include bottom simulating reflectors (BSR), amplitude blanking, and polarity reversal. These seismic methods are effective at finding gas hydrate formations but usually carry large uncertainties when applied to invert the micro-scale petrophysical properties of the formations, due to a lack of constraints. Rock physics modeling links the micro-scale structures of the rocks to their macro-scale elastic properties and can provide effective constraints for the seismic methods. A number of rock physics models have been proposed for gas hydrate modeling, which address different mechanisms and applications. However, these models are generally not well classified, and it can be confusing to determine the appropriate model for a specific study. Moreover, since modeling usually involves multiple models and steps, it is difficult to determine the sources of uncertainty. To solve these problems, we summarize the developed models/methods and classify them into four categories according to the hydrate micro-scale morphology in sediments, the purpose of reservoir characterization, the stage of gas hydrate generation, and the lithology of the hosting sediments. Some sub-categories may overlap, but they have different priorities. We also analyze the priorities of the different models, point out their shortcomings, and explain the appropriate application scenarios.
Moreover, by comparing the models, we summarize a general modeling workflow, which includes forming the rock matrix, generating the dry rock frame, mixing the pore fluids, and finally substituting the fluid into the rock frame. These procedures have been widely used in various gas hydrate modeling studies and have been confirmed to be effective. We also analyze the potential sources of uncertainty in each modeling step, which enables us to recognize clearly the potential uncertainties in the modeling. In the end, we explicate the general problems of the current models, including the influences of pressure and temperature, pore geometry, hydrate morphology, and rock structure changes during gas hydrate dissociation and re-generation. We also point out that attenuation is severely affected by gas hydrate in sediments and may work as an indicator to map gas hydrate concentration. Our work classifies rock physics models of gas hydrate into different categories, generalizes the modeling workflow, and analyzes modeling uncertainties and potential problems, which can facilitate the rock physics characterization of gas hydrate-bearing sediments and provide hints for future studies.
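The final fluid-mixing and fluid-substitution steps of the workflow summarized above are commonly carried out with a Reuss (isostress) average and the Gassmann equation; the following sketch assumes that standard textbook formulation (bulk moduli in GPa), not any model specific to this review:

```python
def reuss_fluid_mix(k_fluids, saturations):
    """Reuss (isostress) average for mixing pore fluids, e.g. water
    and free gas; saturations should sum to 1."""
    return 1.0 / sum(s / k for s, k in zip(saturations, k_fluids))

def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Gassmann fluid substitution: saturated bulk modulus from the
    dry frame modulus k_dry, mineral modulus k_min, pore fluid
    modulus k_fl, and porosity phi."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min**2
    return k_dry + num / den
```

In the zero-frame limit (k_dry = 0) the expression collapses to the Reuss bound of fluid and mineral, a standard sanity check on the implementation.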

Keywords: gas hydrate, rock physics model, modeling classification, hydrate morphology

Procedia PDF Downloads 152
283 Ethicality of Algorithmic Pricing and Consumers’ Resistance

Authors: Zainab Atia, Hongwei He, Panagiotis Sarantopoulos

Abstract:

Over the past few years, firms have witnessed a massive increase in sophisticated algorithmic deployment, which has become quite pervasive in modern society. With the wide availability of data for retailers, the ability to track consumers using algorithmic pricing has become an integral option on online platforms. As more companies transform their businesses and rely on massive technological advancement, algorithmic pricing systems have drawn attention and seen wide adoption, with many accompanying benefits and challenges. With the overall aim of increasing organizational profits, algorithmic pricing is becoming a sound option, enabling suppliers to cut costs, offer better services, improve efficiency and product availability, and enhance overall consumer experiences. The adoption of algorithms in retail has been pioneered and widely studied across varied fields, including marketing, computer science, engineering, economics, and public policy. More alarming today, however, is the limited understanding of this technology and its ethical influence on consumers' perceptions and behaviours. Indeed, due to ethical concerns about algorithms, consumers are in some instances reluctant to share their personal data with retailers, which reduces retention and leads to negative consumer outcomes. This, in turn, raises the question of whether firms can achieve consumer acceptance of such technologies while minimizing the ethical transgressions that accompany their deployment. As recent research in this area of marketing and consumer behaviour remains modest, the current study advances the literature on algorithmic pricing, pricing ethics, consumers' perceptions, and price fairness.
With its empirical focus, this paper contributes to the literature by applying the distinction between the two common types of algorithmic pricing, dynamic and personalized, and measuring their relative effects on consumers' behavioural outcomes. From a managerial perspective, this research offers significant implications for providing a better human-machine interactive environment (whether online or offline) to improve both overall business performance and consumer wellbeing. By allowing more transparent pricing systems, businesses can harness ethical strategies that foster consumer loyalty and extend post-purchase behaviour. Thus, by striking the correct balance of pricing and the right measures, whether dynamic or personalized (or both), managers can approach consumers more ethically while taking a critical stance on their expectations and responses.

Keywords: algorithmic pricing, dynamic pricing, personalized pricing, price ethicality

Procedia PDF Downloads 86
282 Effect of Pulsed Electrical Field on the Mechanical Properties of Raw, Blanched and Fried Potato Strips

Authors: Maria Botero-Uribe, Melissa Fitzgerald, Robert Gilbert, Kim Bryceson, Jocelyn Midgley

Abstract:

French fry manufacturing involves a series of processes in which the structural properties of potatoes are modified to produce crispy french fries that consumers enjoy. In addition to the traditional french fry manufacturing process, the industry is applying a relatively new process, pulsed electric field (PEF) treatment, to whole potatoes. There is a wealth of information on the technical treatment conditions of PEF; however, there is a lack of information about its effect on the structural properties that affect texture and about its synergistic interactions with the other manufacturing steps of french fry production. The effect of PEF on the starch gelatinisation properties of Russet Burbank potato was measured using a Differential Scanning Calorimeter. Cation content (K+, Ca2+, and Mg2+) was determined by inductively coupled plasma optical emission spectrophotometry. The firmness and toughness of raw and blanched potatoes were determined in a uniaxial compression test. Moisture content was determined in a vacuum oven, and oil content was measured using the Soxhlet system with hexane. The final texture of the french fries, crispness, was determined using a three-point bend test. Triangle tests were conducted to determine whether consumers could perceive sensory differences between french fries that were PEF-treated and those that were not. The concentrations of K+, Ca2+, and Mg2+ decreased significantly in the raw potatoes after the PEF treatment. The PEF treatment significantly increased the modulus of elasticity, compression strain, compression force, and toughness of the raw potato. The PEF-treated raw potatoes were firmer and stiffer; their structural integrity held together longer, resisted a higher force before fracture, and stretched further than that of the untreated ones.
The stress-strain relationship exhibited by the PEF-treated raw potato could be due to an increase in the permeability of the plasmalemma and tonoplast, allowing Ca2+ and Mg2+ cations to reach the cell wall and middle lamella and become available for cross-linking with pectin molecules. The PEF-treated raw potatoes exhibited slightly higher onset gelatinisation temperatures, similar peak temperatures, and narrower gelatinisation ranges than the untreated raw potatoes. The final moisture content of the french fries was not significantly affected by the PEF treatment. Oil content in the PEF-treated potatoes was lower than in the untreated french fries, although not statistically significant at the 5% level. The PEF treatment did not have an overall significant effect on french fry crispness (modulus of elasticity), flexural stress, or strain. The triangle tests show that most consumers could not distinguish french fries that received a PEF treatment from those that did not.
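As a side note on the sensory analysis, the significance of a triangle test is conventionally judged with an exact one-sided binomial test, since the chance probability of a correct identification is 1/3. This sketch illustrates that convention and is not the specific statistical procedure reported by the authors:

```python
from math import comb

def triangle_test_pvalue(n_correct, n_panelists):
    """Exact one-sided p-value for a sensory triangle test: the
    probability of at least n_correct correct picks out of
    n_panelists if panelists are guessing (success prob. 1/3)."""
    p = 1.0 / 3.0
    return sum(comb(n_panelists, k) * p**k * (1 - p)**(n_panelists - k)
               for k in range(n_correct, n_panelists + 1))
```

A panel whose correct-pick count yields a p-value above the chosen significance level, as reported here, fails to demonstrate a perceivable difference between treatments.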

Keywords: french fries, mechanical properties, PEF, potatoes

Procedia PDF Downloads 232
281 Single Cell Sorter Driven by Resonance Vibration of Cell Culture Substrate

Authors: Misa Nakao, Yuta Kurashina, Chikahiro Imashiro, Kenjiro Takemura

Abstract:

The Research Goal: With the growing demand for regenerative medicine, an effective mass cell culture process is required. In a repetitive subculture process for proliferating cells, preparing a single-cell suspension that does not contain any cell aggregates is highly desirable, because cell aggregates often cause various undesirable phenomena, e.g., apoptosis and decreased cell proliferation. Since cell aggregates often occur in cell suspensions during conventional subculture processes, this study proposes a single cell sorter driven by the resonance vibration of a cell culture substrate. The Method and the Result: The single cell sorter is simply composed of a cell culture substrate and a glass pipe placed vertically against the substrate with a gap corresponding to a cell diameter. The cell culture substrate is made of biocompatible stainless steel with a piezoelectric ceramic disk glued to its bottom side. By applying AC voltage to the piezoelectric ceramic disk, an out-of-plane resonance vibration of the cell culture substrate with a single nodal circle can be excited at 5.5 kHz. In this way, acoustic radiation force is emitted, and cell suspension containing only single cells is pumped into the pipe and collected. This single cell sorter effectively collects single cells selectively despite its quite simple structure. We collected C2C12 myoblast cell suspension with the single cell sorter at a vibration amplitude of 12 µmp-p and evaluated the ratio of single cells in number against all the cells in the suspension. Additionally, we cultured the collected cells for 72 hrs and measured the number of cells after cultivation in order to evaluate their proliferation. As a control sample, we also collected cell suspension by conventional pipetting and evaluated the ratio of single cells and the number of cells after the 72-hour cultivation.
The ratio of single cells in the cell suspension collected by the single cell sorter was 98.2%. This ratio was 9.6 percentage points higher than that obtained by conventional pipetting (statistically significant). Moreover, the cells cultured for 72 hrs after collection by the single cell sorter yielded statistically more cells than those collected by pipetting, resulting in a 13.6% increase in proliferated cells. These results suggest that the cell suspension collected by the single cell sorter driven by the resonance vibration hardly contains cell aggregates whose diameter is larger than the gap between the cell culture substrate and the pipe. Consequently, the cell suspension collected by the single cell sorter maintains high cell proliferation. Conclusions: In this study, we developed a single cell sorter capable of sorting and pumping single cells by a resonance vibration of a cell culture substrate. The experimental results show that the single cell sorter collects a single cell suspension which hardly contains cell aggregates. Furthermore, the collected cells show higher proliferation than cells collected by conventional pipetting. This means the resonance vibration of the cell culture substrate can increase the efficiency of mass cell culture processes for clinical applications.
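The statistical comparison of single-cell ratios described above can be sketched as a standard two-proportion z-test. The counts below are invented to make the proportions (98.2% vs. 9.6 points lower) concrete, since the abstract does not report sample sizes; this is an illustrative sketch, not the authors' analysis.

```python
from math import erf, sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def two_sided_p(z):
    """Two-sided p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical counts: 491/500 = 98.2% (sorter) vs 443/500 = 88.6% (pipetting)
z = two_proportion_z(491, 500, 443, 500)
print(f"z = {z:.2f}, p = {two_sided_p(z):.2g}")
```

With these assumed counts the difference is highly significant, consistent with the reported result.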

Keywords: acoustic radiation force, cell proliferation, regenerative medicine, resonance vibration, single cell sorter

Procedia PDF Downloads 261
280 Light-Controlled Gene Expression in Yeast

Authors: Peter M. Kusen, Georg Wandrey, Christopher Probst, Dietrich Kohlheyer, Jochen Büchs, Jörg Pietruszka

Abstract:

Light as a stimulus provides the capability to develop regulation techniques for customizable gene expression. A great advantage is the extremely flexible and accurate dosing that can be performed in a non-invasive and sterile manner, even for high-throughput technologies. Therefore, light regulation in a multiwell microbioreactor system was realized, providing the opportunity to control gene expression with outstanding complexity. A light-regulated gene expression system in Saccharomyces cerevisiae was designed applying the strategy of caged compounds. These compounds are regulator molecules carrying photo-labile protecting groups, and therefore biologically inactive, which can be reactivated by irradiation under certain light conditions. The "caging" of a repressor molecule which is consumed after deprotection was essential to create a flexible expression system. Thereby, gene expression could be temporally repressed by irradiation and the subsequent release of the active repressor molecule. Afterwards, the repressor molecule is consumed by the yeast cells, leading to reactivation of gene expression. A yeast strain harboring a construct with the corresponding repressible promoter in combination with a fluorescent marker protein was applied in a Photo-BioLector platform, which allows individual irradiation as well as online fluorescence and growth detection. This device was used to precisely control the repression duration by adjusting the amount of released repressor via different irradiation times. With the presented screening platform, the regulation of complex expression procedures was achieved by combining several repression/derepression intervals. In particular, a stepwise increase of temporally constant expression levels was demonstrated, which could be used to study concentration-dependent effects on cell functions.
Linear expression rates with variable slopes could also be shown, representing a possible solution for challenging protein productions in which excessive production rates lead to misfolding or intoxication. Finally, the very flexible regulation enabled accurate control over expression induction, even though a repressible promoter was used. Summing up, the continuous online regulation of gene expression has the potential to synchronize expression levels to optimize metabolic flux, artificial enzyme cascades, growth rates for co-cultivations, and many other applications that depend on complex expression regulation. The developed light-regulated expression platform represents an innovative screening approach for finding optimization potential in production processes.

Keywords: caged-compounds, gene expression regulation, optogenetics, photo-labile protecting group

Procedia PDF Downloads 324
279 Developing and Shake Table Testing of Semi-Active Hydraulic Damper as Active Interaction Control Device

Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung

Abstract:

Semi-active control systems for structures under earthquake excitation are adaptable and require little energy. A DSHD (Displacement Semi-Active Hydraulic Damper) was previously developed by our research team. Shake table tests of this DSHD installed in a full-scale test structure demonstrated that the device brought its energy-dissipating performance into full play under earthquake excitation. The objective of this research is to develop a new AIC (Active Interaction Control device) and to verify its energy-dissipation capability by shake table testing. The proposed AIC converts an improved DSHD into an AIC through the addition of an accumulator. The main concept of this energy-dissipating AIC is to use the interaction between an affiliated structure (sub-structure) and the protected structure (main structure) to transfer the input seismic force into the sub-structure, thereby reducing the structural deformation of the main structure. This concept is tested using a full-scale multi-degree-of-freedom test structure, installed with the proposed AIC and subjected to external forces of various magnitudes, to examine the shock-absorption influence of predictive control, sub-structure stiffness, synchronous control, non-synchronous control, and insufficient control position. The test results confirm: (1) the developed device diminishes the structural displacement and acceleration responses effectively; (2) even a low-precision semi-active control method achieved twice the seismic-proofing efficacy of the passive control method; (3) the active control method does not necessarily amplify the acceleration response of the structure; (4) the AIC exhibits the same time-delay problem as ordinary active control methods.
The proposed predictive control method can overcome this defect; (5) the condition switch is an important characteristic of the control type. The test results show that synchronous control is easy to implement and avoids exciting high-frequency responses. These laboratory results confirm that the developed device exploits the mutual interaction between the subordinate structure and the protected main structure to transfer the earthquake energy applied to the main structure into the subordinate structure, so that the deformation of the main structure is minimized.

Keywords: DSHD (Displacement Semi-Active Hydraulic Damper), AIC (Active Interaction Control Device), shake table test, full scale structure test, sub-structure, main-structure

Procedia PDF Downloads 514
278 Applying Quadrant Analysis in Identifying Business-to-Business Customer-Driven Improvement Opportunities in Third Party Logistics Industry

Authors: Luay Jum'a

Abstract:

Many challenges face third-party logistics (3PL) providers in domestic and global markets, creating a volatile decision-making environment. Challenges such as managing changes in consumer behaviour, demanding customer expectations, and time compression have turned into complex problems for 3PL providers. Since the movement towards increased outsourcing outpaces the movement towards insourcing, the need to achieve a competitive advantage over competitors in the 3PL market increases. This trend continues to grow over the years, and as a result, areas of strength and improvement are highlighted through the analysis of the LSQ factors that lead to B2B customer satisfaction, which has become a priority for 3PL companies. Consequently, 3PL companies are increasingly focusing on the issues most important from the perspective of their customers and relying more on this information in making managerial decisions. Therefore, this study is concerned with providing guidance for improving logistics service quality (LSQ) levels in the context of the 3PL industry in Jordan. The study focused on the most important LSQ factors and used a managerial tool that guides 3PL companies in making LSQ improvements based on a quadrant analysis of two main dimensions: declared LSQ importance and inferred LSQ importance. Although a considerable amount of research has investigated the relationship between LSQ and customer satisfaction, there remains a lack of managerial tools to aid LSQ improvement decision-making. Moreover, the main reason companies use 3PL service providers is the realised percentage of cost reduction in the total cost of logistics operations and the incremental improvement in customer service.
In this regard, a managerial tool that helps 3PL service providers manage the portfolio of LSQ factors effectively and efficiently would be a valuable investment. One way of suggesting LSQ improvement actions for 3PL service providers is the adoption of analysis tools that perform attribute categorisation, such as the Importance-Performance matrix. With the above in mind, quadrant analysis provides a valuable opportunity for 3PL service providers to identify improvement opportunities, as the importance of customer service attributes is identified through two different techniques that complement each other. The data were collected through a survey, and 293 questionnaires were returned from business-to-business (B2B) customers of 3PL companies in Jordan. The results showed that the LSQ factors vary in their importance, and 3PL companies should focus on some factors more than others. In particular, the ordering procedures and timeliness/responsiveness factors were found to be crucial in 3PL businesses and therefore need more focus and development by 3PL service providers in the Jordanian market.
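The quadrant categorisation described above can be sketched as follows: each attribute is placed in one of four quadrants according to whether it is above or below the median on declared importance (stated by customers) and on inferred importance (derived statistically). The attribute names and scores below are hypothetical, not the study's data.

```python
from statistics import median

# Hypothetical LSQ attributes: (declared importance, inferred importance)
scores = {
    "ordering procedures": (4.6, 0.71),
    "timeliness/responsiveness": (4.5, 0.65),
    "personnel contact quality": (4.1, 0.30),
    "order condition": (3.8, 0.52),
    "order discrepancy handling": (3.6, 0.22),
}

decl_med = median(d for d, _ in scores.values())
inf_med = median(i for _, i in scores.values())

def quadrant(declared, inferred):
    """Classify an attribute relative to the median on each dimension."""
    if declared >= decl_med and inferred >= inf_med:
        return "keep up the good work"  # high/high: crucial factor
    if declared >= decl_med:
        return "declared-only"          # stated but weak satisfaction driver
    if inferred >= inf_med:
        return "latent driver"          # drives satisfaction unnoticed
    return "low priority"

for name, (d, i) in scores.items():
    print(f"{name}: {quadrant(d, i)}")
```

Splitting at the median is one common convention; the scale midpoint or mean could be used instead, which shifts which attributes land in each quadrant.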

Keywords: logistics service quality, managerial decisions, quadrant analysis, third party logistics service provider

Procedia PDF Downloads 124
277 Simo-syl: A Computer-Based Tool to Identify Language Fragilities in Italian Pre-Schoolers

Authors: Marinella Majorano, Rachele Ferrari, Tamara Bastianello

Abstract:

Recent technological advances allow innovative, multimedia screen-based assessment tools to be applied to test children's language and early literacy skills, monitor their growth over the preschool years, and test their readiness for primary school. A computer-based assessment tool offers several advantages over paper-based tools. Firstly, computer-based tools that use games, videos, and audio may be more motivating and engaging for children, especially for those with language difficulties. Secondly, computer-based assessments are generally less time-consuming than traditional paper-based assessments: this makes them less demanding for children and provides clinicians and researchers, but also teachers, with the opportunity to test children multiple times over the same school year and thus to monitor their language growth more systematically. Finally, while paper-based tools require offline coding, computer-based tools sometimes provide automatically calculated scores, producing less subjective evaluations of the assessed skills and giving immediate feedback. Nonetheless, using computer-based assessment tools to test meta-phonological and language skills in children is not yet common practice in Italy. The present contribution aims to estimate the internal consistency of a computer-based assessment (i.e., the Simo-syl assessment). Sixty-three Italian pre-schoolers aged between 4;10 and 5;9 years were tested at the beginning of the last year of preschool through paper-based standardised tools on their lexical (Peabody Picture Vocabulary Test), morpho-syntactic (Grammar Repetition Test for Children), meta-phonological (Meta-Phonological Skills Evaluation test), and phono-articulatory skills (non-word repetition). The same children were tested through the Simo-syl assessment on their phonological and meta-phonological skills (e.g., recognising syllables and vowels and reading syllables and words).
The internal consistency of the computer-based tool was acceptable (Cronbach's alpha = .799). Children's scores obtained in the paper-based assessment and the scores obtained in each task of the computer-based assessment were correlated. Significant positive correlations emerged between all tasks of the computer-based assessment and the scores obtained in the CMF (r = .287 - .311, p < .05) and in the correct sentences of the RCGB (r = .360 - .481, p < .01); the non-word repetition standardised test correlated significantly with the reading tasks only (r = .329 - .350, p < .05). Further tasks should be included in the current version of Simo-syl to achieve a comprehensive, multi-dimensional approach to assessing children. However, such a tool represents a good opportunity for teachers to identify language-related problems early, even in the school environment.
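The internal-consistency statistic reported above, Cronbach's alpha, can be sketched directly from its definition: the number of items, the sum of the per-item score variances, and the variance of the respondents' total scores. The item scores below are invented for illustration; they are not Simo-syl data.

```python
def cronbach_alpha(items):
    """items: one list of scores per task/item, aligned across the same
    respondents (all inner lists have equal length)."""
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents
    sum_item_var = sum(var(item) for item in items)
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum_item_var / var(totals))

# Three items scored by four children (hypothetical values)
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 2, 4, 4], [1, 3, 3, 5]])
print(f"alpha = {alpha:.3f}")
```

Values above roughly .7, such as the .799 reported for Simo-syl, are conventionally considered acceptable.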

Keywords: assessment, computer-based, early identification, language-related skills

Procedia PDF Downloads 179
276 Evaluation of Regional Anaesthesia Practice in Plastic Surgery: A Retrospective Cross-Sectional Study

Authors: Samar Mousa, Ryan Kerstein, Mohanad Adam

Abstract:

Regional anaesthesia has been associated with favourable outcomes in patients undergoing a wide range of surgeries. Beneficial effects have been demonstrated in terms of postoperative respiratory and cardiovascular endpoints, 7-day survival, time to ambulation and hospital discharge, and postoperative analgesia. Our project aimed to assess regional anaesthesia practice in the plastic surgery department of Buckinghamshire Trust and to find ways to improve the service in collaboration with the anaesthesia team. It is a retrospective study accompanied by a questionnaire filled out by plastic surgeons and anaesthetists to get the full picture behind the numbers. The study period was between 1/3/2022 and 23/5/2022 (12 weeks). The operative notes of all patients who had an operation under plastic surgery, whether emergency or elective, were reviewed. The criteria for suitable candidates for a regional block were set by the consultant anaesthetists as follows: age above 16, single surgical site (arm, forearm, leg, foot), no drug allergy, no pre-existing neuropathy, no bleeding disorders, not on anti-coagulation, and no infection at the site of the block. Over 12 weeks, 1061 operations were performed by plastic surgeons. Local anaesthetic cases were excluded, leaving 319 cases. Of these 319, 102 patients were suitable candidates for a regional block after applying the criteria above. However, only seven patients had their operations under a regional block; the rest had general anaesthesia that could have been easily avoided. An online questionnaire was filled out by both plastic surgeons and anaesthetists of different training levels to find out the reasons behind the obvious preference for general over regional anaesthesia, even when this was against the patients' interest.
The questionnaire covered the following points: training level, time taken to give GA or RA, factors that influence the decision, the estimated percentage of RA candidates who had GA, the reasons behind this percentage, and recommendations. Forty-four clinicians filled out the questionnaire: 23 plastic surgeons and 21 anaesthetists. By training level, there were 21 consultants, 4 associate specialists, 9 registrars, and 10 senior house officers. The actual percentage of patients who were good candidates for RA but had GA instead was 93%; the replies estimated this percentage at between 10% and 30%. 29% of respondents attributed this to surgeons preferring GA over RA for their operations without medical justification, 37% thought that anaesthetists prefer giving GA even when the patient is a suitable candidate for RA, 22.6% thought that patients refused RA, and 11.3% cited other causes. The recommendations fell into five main areas: protocols and pathways for regional blocks, more training opportunities for anaesthetists in regional blocks, a separate block room in the hospital, better communication between surgeons and anaesthetists, and patient education about the benefits of regional blocks.

Keywords: regional anaesthesia, regional block, plastic surgery, general anaesthesia

Procedia PDF Downloads 81
275 Pulsed-Wave Doppler Ultrasonographic Assessment of the Maximum Blood Velocity in Common Carotid Artery in Horses after Administration of Ketamine and Acepromazine

Authors: Saman Ahani, Aboozar Dehghan, Roham Vali, Hamid Salehian, Amin Ebrahimi

Abstract:

Pulsed-wave (PW) Doppler ultrasonography is a non-invasive, relatively accurate imaging technique that can measure blood velocity. Images can be obtained via the common carotid artery, one of the main vessels supplying blood to vital organs. In horses, factors such as susceptibility to depression of the cardiovascular system and their large muscular mass render them vulnerable to changes in blood velocity. One of the most important factors causing blood velocity changes is the administration of anaesthetic drugs, including ketamine and acepromazine. Thus, in this study, the pulsed-wave Doppler technique was performed to assess the maximum blood velocity in the common carotid artery following administration of ketamine and acepromazine. Six male and six female healthy Kurdish horses weighing 351 ± 46 kg (mean ± SD) and aged 9.2 ± 1.7 years (mean ± SD) were housed under animal welfare guidelines. After fasting for six hours, the normal blood flow velocity in the common carotid artery was measured using a pulsed-wave Doppler ultrasonography machine (BK Medical, Denmark) and a high-frequency linear transducer (12 MHz) without applying any sedative drugs, as a control. The same procedure was repeated after each individual received the following medications: 1.1 and 2.2 mg/kg ketamine (Pfizer, USA), and 0.5 and 1 mg/kg acepromazine (RACEHORSE MEDS, Ukraine), with an interval of 21 days between the administration of each dose and/or drug. The ultrasonographic study was done five (T5) and fifteen (T15) minutes after injecting each dose intravenously. Lastly, statistical analysis was performed using SPSS software version 22 for Windows, and a P value less than 0.05 was considered statistically significant.
Five minutes after administration of ketamine (1.1 and 2.2 mg/kg), blood velocity decreased to 38.44 and 34.53 cm/s in males and 39.06 and 34.10 cm/s in females, compared with the control group (39.59 and 40.39 cm/s in males and females, respectively), whereas administration of 0.5 mg/kg acepromazine led to a significant rise (73.15 and 55.80 cm/s in males and females, respectively) (p<0.05). Thus, the most drastic change in blood velocity, regardless of sex, was produced by the latter dose/drug. For both medications and both sexes, the higher dose led to a lower blood velocity than the lower dose of the same drug. In all experiments in this study, blood velocity approached its normal value by T15. In another study comparing the blood velocity changes caused by ketamine and acepromazine in the femoral arteries, the most drastic changes were attributed to ketamine; in the present experiment, however, the maximum blood velocity was observed following administration of acepromazine via the common carotid artery. Therefore, further experiments with the same medications are suggested, using pulsed-wave Doppler to measure blood velocity changes in the femoral and common carotid arteries simultaneously.

Keywords: Acepromazine, common carotid artery, horse, ketamine, pulsed-wave doppler ultrasonography

Procedia PDF Downloads 124
274 Using Soil Texture Field Observations as Ordinal Qualitative Variables for Digital Soil Mapping

Authors: Anne C. Richer-De-Forges, Dominique Arrouays, Songchao Chen, Mercedes Roman Dobarco

Abstract:

Most digital soil mapping (DSM) products rely on machine learning (ML) prediction models and/or on pedotransfer functions (PTF) whose calibration data come from soil analyses performed in labs. However, many other observations (often qualitative, nominal, or ordinal) could be used as proxies of lab measurements or as input data for ML or PTF predictions. DSM and ML are briefly described, with some examples taken from the literature. Then, we explore the potential of an ordinal qualitative variable, i.e., the hand-feel soil texture (HFST), which estimates the particle-size distribution (PSD), i.e., the % of clay (0-2 µm), silt (2-50 µm), and sand (50-2000 µm), in 15 classes. The PSD can also be determined by lab measurements (LAST), giving the exact proportions of these particle sizes. However, due to cost constraints, HFST observations are much more numerous and spatially dense than LAST. Soil texture (ST) is a very important soil parameter to map, as it controls many soil properties and functions. Therefore, an essential question arises: is it possible to use HFST as a proxy of LAST for the calibration and/or validation of DSM predictions of ST? To answer this question, the first step is to compare HFST with LAST on a representative set where both types of information are available. This comparison was made on ca 17,400 samples representative of a French region (34,000 km²). The accuracy of HFST was assessed, and each HFST class was characterized by a probability distribution function (PDF) of its LAST values. This makes it possible to randomly replace HFST observations by LAST values while respecting the previously calculated PDF, and results in a very large increase of observations available for the calibration or validation of PTF and ML predictions. Some preliminary results are shown. First, the comparison between HFST classes and LAST analyses showed that accuracies could be considered very good when compared to other studies.
The causes of some inconsistencies were explored, and most of them were well explained by other soil characteristics. We then show some examples applying these relationships, and the increase of data, to several issues related to DSM. The first issue is: do the established PDFs enable the use of HFST class observations to improve LAST soil texture prediction? For this objective, we replaced all topsoil HFST observations by values drawn from the PDFs (100 replicates). Results were promising for the PTF we tested (a PTF predicting soil water holding capacity). For the ML prediction of LAST soil texture over the region, we made the same kind of replacement but implemented a 10-fold cross-validation using the points where LAST values were available. We obtained only preliminary results, but they were rather promising. We then show another example illustrating the potential of using HFST as validation data. As in numerous countries, HFST observations are very numerous; these promising results pave the way to an important improvement of DSM products in all the countries of the world.
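The replacement step described above can be sketched as follows: each HFST class is characterised by an empirical distribution of LAST (clay, silt, sand) values, and every HFST observation is replaced by a random draw from its class's distribution. The class names and triplets below are illustrative, not the French dataset.

```python
import random

# Empirical LAST values (clay%, silt%, sand%) observed for each HFST class;
# these example triplets are hypothetical.
last_by_class = {
    "silt loam": [(18, 60, 22), (20, 58, 22), (16, 62, 22)],
    "clay loam": [(32, 34, 34), (35, 30, 35), (30, 36, 34)],
}

def replace_hfst(hfst_observations, rng=random):
    """Replace each HFST class label with a (clay%, silt%, sand%) triplet
    drawn from that class's empirical LAST distribution."""
    return [rng.choice(last_by_class[c]) for c in hfst_observations]

draws = replace_hfst(["silt loam", "clay loam", "silt loam"])
print(draws)
```

Repeating the draw (e.g., 100 replicates, as in the study) propagates the within-class variability of LAST into the calibration or validation of the PTF and ML models.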

Keywords: digital soil mapping, improvement of digital soil mapping predictions, potential of using hand-feel soil texture, soil texture prediction

Procedia PDF Downloads 218
273 The Validation of RadCalc for Clinical Use: An Independent Monitor Unit Verification Software

Authors: Junior Akunzi

Abstract:

For patient treatment planning quality assurance in 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT or RapidArc), the independent monitor unit verification calculation (MUVC) is an indispensable part of the process. For 3D-CRT treatment planning, the MUVC can be performed manually by applying the standard ESTRO formalism. However, due to the complex shapes and the number of beams in advanced treatment planning techniques such as RapidArc, manual independent MUVC is inadequate. Therefore, commercially available software such as RadCalc can be used to perform the MUVC for complex treatment plans. Indeed, RadCalc (version 6.3, LifeLine Inc.) uses a simplified Clarkson algorithm to compute the dose contribution of individual RapidArc fields to the isocenter. The purpose of this project is the validation of RadCalc in 3D-CRT and RapidArc for treatment planning dosimetry quality assurance at the Antoine Lacassagne centre (Nice, France). Firstly, the interfaces between RadCalc and our treatment planning systems (TPS), Isogray (version 4.2) and Eclipse (version 13.6), were checked for data transfer accuracy. Secondly, we created test plans in both Isogray and Eclipse featuring open fields, wedged fields, and irregular MLC fields. These test plans were transferred from the TPSs, following the DICOM RT protocol, to RadCalc and to the linac via Mosaiq (version 2.5). Measurements were performed in a water phantom using a PTW cylindrical Semiflex ionisation chamber (0.3 cm³, type 31010) and compared with the TPS and RadCalc calculations. Finally, 30 3D-CRT plans and 40 RapidArc plans created with patient CT scans were recalculated using the CT scan of a solid PMMA water-equivalent phantom for 3D-CRT and the Octavius II phantom (PTW) CT scan for RapidArc.
Next, we measured the doses delivered in these phantoms for each plan with a 0.3 cm³ PTW 31010 cylindrical Semiflex ionisation chamber (3D-CRT) and a 0.015 cm³ PTW PinPoint ionisation chamber (RapidArc). For our test plans, good agreement was found between calculation (RadCalc and TPSs) and measurement (mean: 1.3%; standard deviation: ± 0.8%). Regarding the patient plans, the measured doses were compared to the calculations in RadCalc and in our TPSs, and RadCalc calculations were compared to the Isogray and Eclipse ones. Agreement better than (2.8%; ± 1.2%) was found between RadCalc and the TPSs. As for the comparison between calculation and measurement, the agreement for all of our plans was better than (2.3%; ± 1.1%). The independent MU verification calculation software RadCalc has thus been validated for clinical use for both 3D-CRT and RapidArc techniques. Future work includes the validation of RadCalc for the TomoTherapy machine installed at the centre Antoine Lacassagne.
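The agreement metric quoted above (mean percent difference ± standard deviation between calculated and measured dose) can be sketched in a few lines. The dose values below are invented for illustration; they are not the study's measurements.

```python
from statistics import mean, stdev

def percent_diffs(calculated, measured):
    """Percent difference of each calculated dose relative to measurement."""
    return [100.0 * (c - m) / m for c, m in zip(calculated, measured)]

calc = [2.02, 1.98, 2.05, 2.00]  # Gy, hypothetical RadCalc doses
meas = [2.00, 2.00, 2.01, 1.99]  # Gy, hypothetical chamber readings

diffs = percent_diffs(calc, meas)
print(f"agreement: {mean(diffs):.2f}% +/- {stdev(diffs):.2f}%")
```

Each plan's difference can then be checked against a clinical tolerance (commonly a few percent) before the plan is approved.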

Keywords: 3D conformal radiotherapy, intensity modulated radiotherapy, monitor unit calculation, dosimetry quality assurance

Procedia PDF Downloads 213