Acceleration of Adsorption Kinetics by Coupling Alternating Current with Adsorption Process onto Several Adsorbents
Authors: A. Kesraoui, M. Seffen
Abstract:
Applications of adsorption onto activated carbon for water treatment are well known. The process has been demonstrated to be widely effective for removing dissolved organic substances from wastewaters, but its major drawback is the high operating cost. The main goal of our research work is to improve the retention capacity of Tunisian biomass for the depollution of industrial wastewater and the retention of pollutants considered toxic. The biosorption process is based on the retention of molecules and ions onto a solid surface composed of biological materials. Evaluating the potential of these materials is important in order to propose them as an alternative to the generally expensive adsorption processes used to remove organic compounds. Indeed, these materials are very abundant in nature and low in cost. The biosorption process is certainly effective at removing pollutants, but it exhibits slow kinetics. Improving biosorption rates is a challenge in making this process competitive with oxidation and with adsorption onto lignocellulosic fibers. In this context, alternating current appears as a new, original and very interesting means of accelerating chemical reactions. Our main goal is to accelerate the retention of dyes (indigo carmine, methylene blue) and phenol by using this new alternative: alternating current. The adsorption experiments were performed in a batch reactor by adding a quantity of adsorbent to 150 mL of pollutant solution at the desired concentration and pH. The electrical part of the setup comprises a current source delivering an alternating voltage of 2 to 15 V, connected to a voltmeter that allows us to read the voltage. Into a 150 mL cell, we immersed two zinc electrodes spaced 4 cm apart.
Thanks to alternating current, we succeeded in improving the performance of activated carbon by increasing the speed of the indigo carmine adsorption process and reducing the treatment time. We also studied the influence of alternating current on the biosorption rate of methylene blue onto Luffa cylindrica fibers and onto the hybrid material Luffa cylindrica-ZnO. The results showed that alternating current accelerated the biosorption of methylene blue onto both Luffa cylindrica and the Luffa cylindrica-ZnO hybrid material and increased the adsorbed amount of methylene blue on both adsorbents. To improve the removal of phenol, we coupled alternating current with biosorption onto the same two adsorbents: Luffa cylindrica and the hybrid material Luffa cylindrica-ZnO. Here too, alternating current improved the performance of the adsorbents by increasing the speed of the adsorption process and the adsorption capacity while reducing the processing time.
Keywords: adsorption, alternating current, dyes, modeling
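Although the abstract does not name a kinetic model, batch dye-adsorption studies of this kind commonly fit the pseudo-first-order (Lagergren) model; the sketch below (function names and rate constants are illustrative, not values from this study) shows how an acceleration of the rate constant k1 directly shortens the time needed to reach a given fractional uptake.

```python
import math

def pseudo_first_order(qe, k1, t):
    """Lagergren pseudo-first-order uptake: q(t) = qe * (1 - exp(-k1 * t))."""
    return qe * (1.0 - math.exp(-k1 * t))

def time_to_fraction(k1, fraction):
    """Time needed to reach a given fraction of the equilibrium uptake qe."""
    return -math.log(1.0 - fraction) / k1

# Illustrative values: doubling k1 (e.g., via an accelerated process)
# halves the time needed to reach 90% of the equilibrium capacity.
t_slow = time_to_fraction(0.05, 0.90)   # k1 in 1/min
t_fast = time_to_fraction(0.10, 0.90)
```

Under this model, any treatment-time reduction reported for a faster process maps one-to-one onto the ratio of the fitted rate constants.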
Procedia PDF Downloads 160
Sterilization Effects of Low Concentration of Hydrogen Peroxide Solution on 3D Printed Biodegradable Polyurethane Nanocomposite Scaffold for Heart Valve Regeneration
Authors: S. E. Mohmad-Saberi, W. Song, N. Oliver, M. Adrian, T.C. Hsu, A. Darbyshire
Abstract:
Biodegradable polyurethane (PU) has emerged as a potential material for promoting the repair and regeneration of damaged/diseased tissues in heart valve regeneration due to its excellent biomechanical profile. Understanding the effects of sterilization on the properties of such scaffolds is vital, since porous structures are more sensitive to sterilization than bulk ones. In this study, the effects of sterilization with a low-concentration hydrogen peroxide (H₂O₂) solution were investigated to determine whether the procedure would be efficient and non-destructive to a porous three-dimensional (3D) elastomeric nanocomposite, polyhedral oligomeric silsesquioxane-terminated poly(ethylene-diethylene glycol succinate-sebacate) urea-urethane (POSS-EDSS-PU) scaffold. All samples were tested for sterility following sterilization, using phosphate buffered saline (PBS) as control and 5% v/v H₂O₂ solution. The samples were incubated in tryptic soy broth for the cultivation of microorganisms under agitation at 37˚C for 72 hours. The effects of the 5% v/v H₂O₂ sterilization were evaluated in terms of morphology, chemical and mechanical properties using scanning electron microscopy (SEM), Fourier transform infrared (FTIR) spectroscopy and a tensile tester. The toxicity of the 5% v/v H₂O₂ decontamination was studied by an in vitro cytotoxicity test, in which the cellular responses of human dermal fibroblasts (HDF) were examined. A clear, uncontaminated broth after the 5% v/v H₂O₂ treatment indicated efficient sterilization after 3 days, while the clouding broth of the non-sterilized control indicated contamination. The morphology of the 3D POSS-EDSS-PU scaffold appeared similar after sterilization with 5% v/v H₂O₂ solution in terms of pore size and surface. FTIR results showed that the sterilized samples and non-sterilized control share the same spectral pattern, confirming no significant alterations of the surface chemistry.
Regarding the mechanical properties of the H₂O₂-treated scaffolds, the tensile strain was not significantly decreased; however, the scaffolds became significantly stiffer after sterilization. No cytotoxic effects were observed after the 5% v/v H₂O₂ sterilization, as confirmed by cell viability assessed with the Alamar Blue assay. The results suggest that a low-concentration (5% v/v) hydrogen peroxide solution can be used as an alternative method for sterilizing biodegradable 3D porous scaffolds with micro/nano-architecture without structural deformation. This study provides an understanding of the effects of sterilization on the biomechanical profile and cell proliferation of 3D POSS-EDSS-PU scaffolds.
Keywords: biodegradable, hydrogen peroxide solution, POSS-EDSS-PU, sterilization
Procedia PDF Downloads 160
Telepsychiatry for Asian Americans
Authors: Jami Wang, Brian Kao, Davin Agustines
Abstract:
COVID-19 highlighted active discrimination against the Asian American population, easily seen through media, social tension, and increased crimes against this group. It is well known that long-term racism can have a large impact on both emotional and psychological well-being. The healthcare disparity during this time also revealed how the Asian American community lacks research data, political support, and medical infrastructure. At a time when Asian Americans fear for their safety while their mental health declines, telepsychiatry is particularly promising. COVID-19 demonstrated how well psychiatry could integrate with telemedicine, psychiatry being the second most utilized specialty for telemedicine visits. However, the Asian American community did not utilize telepsychiatry resources as much as other groups. Because of this, we wanted to understand why the patient population most affected mentally by COVID-19 did not seek out care. To do this, we studied the top telepsychiatry platforms. The current top telepsychiatry companies in the United States include Teladoc and BetterHelp. The Teladoc mental health sector offers only 4 languages (English, Spanish, French, and Danish), none of them an Asian language. Similarly, Teladoc’s top competitor in the telepsychiatry space, BetterHelp, lists a total of only 3 Asian languages (Mandarin, Japanese, and Malaysian), a short list considering it offers over 20 languages overall. The shortage of available physicians who speak multiple languages is concerning, as it can be difficult for members of the Asian American community to relate to their providers. There are limited mental health resources that cater to their cultural needs, further exacerbating structural racism and institutional barriers to appropriate care.
It is important to note that these companies do provide interpreters to comply with federal nondiscrimination and language assistance law. However, interactions through an interpreter are not only more time-consuming but also less personal than talking directly with a physician. Psychiatry is a field that emphasizes interpersonal relationships; the trust between physician and patient is critical in developing the rapport needed to understand the clinical picture and treat the patient appropriately. A language barrier adds a further barrier between physician and patient. Because Asian Americans are one of the fastest-growing patient populations, these telehealth companies have much to gain by catering to the Asian American market. Without providing adequate access to bilingual and bicultural physicians, the current system will only further exacerbate the growing disparity. The healthcare community and telehealth companies need to recognize that the Asian American population is severely underserved in mental health and has much to gain from telepsychiatry. The language gap is one of many reasons why there is a mental health disparity for Asian Americans.
Keywords: telemedicine, psychiatry, Asian American, disparity
Procedia PDF Downloads 105
Experimental Uniaxial Tensile Characterization of One-Dimensional Nickel Nanowires
Authors: Ram Mohan, Mahendran Samykano, Shyam Aravamudhan
Abstract:
Metallic nanowires with sub-micron and hundred-nanometer diameters have a diversity of applications in nano/micro-electromechanical systems (NEMS/MEMS). Characterizing the mechanical properties of such sub-micron and nanoscale metallic wires is tedious; it requires sophisticated, careful experimentation performed within high-powered microscopy systems (scanning electron microscope (SEM), atomic force microscope (AFM)). Also needed are nanoscale devices for placing the nanowires, loading them under the intended conditions, and obtaining load–deflection data during deformation; doing all of this within a high-powered microscopy environment poses significant challenges. Even picking up the grown nanowires and placing them correctly within a nanoscale loading device is not an easy task. Experimental mechanical characterizations of such nanowires are still very limited. Various techniques at different levels of fidelity, resolution, and induced error have been attempted by materials science and nanomaterials researchers. Determining the load and deflection within the nanoscale devices also poses significant problems. The state of the art is thus still in its infancy. All these factors are reflected in the wide differences among the characterization curves and properties reported in the current literature. In this paper, we discuss and present our experimental method, results, and discussion of uniaxial tensile loading and the development of the subsequent stress–strain characteristic curves for nickel nanowires. Nickel nanowires in the diameter range of 220–270 nm were obtained in our laboratory via electrodeposition, a solution-based, template method followed in our present work for growing 1-D nickel nanowires.
Process variables such as the presence and intensity of a magnetic field and varying electrical current density during the electrodeposition process were found to influence the morphological and physical characteristics of the grown nanowires, including crystal orientation and size. To further understand the correlation and influence of electrodeposition process variables and the associated structural features of our grown nickel nanowires on their mechanical properties, careful experiments within a scanning electron microscope (SEM) were conducted. Details of the uniaxial tensile characterization, testing methodology, nanoscale testing device, load–deflection characteristics, microscopy images of failure progression, and the subsequent stress–strain curves are discussed and presented.
Keywords: uniaxial tensile characterization, nanowires, electrodeposition, stress-strain, nickel
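For context, converting measured load–deflection data into an engineering stress–strain curve for a cylindrical wire uses only the initial diameter and gauge length; the helper below is a generic sketch (function names and the sample values are illustrative, not the authors' code or data).

```python
import math

def engineering_stress_strain(load_N, deflection_m, diameter_m, gauge_length_m):
    """Convert load (N) and deflection (m) into engineering stress (Pa) and
    strain (dimensionless) for a cylindrical wire of given initial geometry."""
    area = math.pi * diameter_m ** 2 / 4.0            # initial cross-section
    stress = [f / area for f in load_N]               # sigma = F / A0
    strain = [d / gauge_length_m for d in deflection_m]  # eps = dL / L0
    return stress, strain

# Illustrative: a 250 nm diameter wire with a 10 um gauge length,
# loaded at the micro-newton scale.
stress, strain = engineering_stress_strain(
    load_N=[0.0, 5e-6, 1e-5],
    deflection_m=[0.0, 5e-8, 1e-7],
    diameter_m=250e-9,
    gauge_length_m=10e-6,
)
```

True (logarithmic) stress and strain corrections matter at large strains, but the engineering definitions above are the usual starting point for such curves.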
Procedia PDF Downloads 406
The Growth Role of Natural Gas Consumption for Developing Countries
Authors: Tae Young Jin, Jin Soo Kim
Abstract:
Carbon emissions have emerged as a global concern. The Intergovernmental Panel on Climate Change (IPCC) publishes reports on greenhouse gas (GHG) emissions regularly. The United Nations Framework Convention on Climate Change (UNFCCC) has held a conference yearly since 1995. In particular, COP21, held in December 2015, produced the Paris Agreement, which, unlike the outcomes of previous COPs, has strong binding force: ratified as of 4 November 2016, it is now legally binding. Participating countries set up their own Intended Nationally Determined Contributions (INDCs) and will try to achieve them; thus, carbon emissions must be reduced. The energy sector is among those most responsible for carbon emissions, fossil fuels particularly so. This paper therefore examines the relationship between natural gas consumption and economic growth. To achieve this, we adopted a Cobb-Douglas production function consisting of natural gas consumption, economic growth, capital, and labor, using dependent panel analysis. Data were preprocessed with Principal Component Analysis (PCA) to remove the cross-sectional dependency that can disturb panel results. After confirming the existence of a time-trended component in each variable, we moved to a cointegration test accounting for cross-sectional dependency and structural breaks, to describe more realistically the behavior of volatile international indicators. The cointegration test indicates that there is a long-run equilibrium relationship among the selected variables. The long-run cointegrating vector and Granger causality test results show that while natural gas consumption can contribute to economic growth in the short run, it affects growth adversely in the long run. From these results, we draw the following policy implications. First, since natural gas has a positive economic effect only in the short run, policy makers in developing countries should consider a gradual switch of the major energy source from natural gas to sustainable energy sources.
Second, the technology transfer and financing mechanisms proposed through the COP process must be accelerated.
Acknowledgement: This work was supported by the Energy Efficiency & Resources Core Technology Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), granted financial resource from the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20152510101880), and by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-205S1A3A2046684).
Keywords: developing countries, economic growth, natural gas consumption, panel data analysis
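The Cobb-Douglas specification described above is usually estimated in logarithms, where the output elasticities appear as linear coefficients. The sketch below uses synthetic data (all variable names and parameter values are illustrative, not the study's panel) to show the log-linear form Y = A·K^α·L^β·G^γ fitted by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic data in logs: capital K, labor L, natural gas consumption G
lnK = rng.normal(10.0, 1.0, n)
lnL = rng.normal(8.0, 1.0, n)
lnG = rng.normal(6.0, 1.0, n)
alpha, beta, gamma, lnA = 0.4, 0.5, 0.1, 1.0
# ln Y = ln A + alpha*ln K + beta*ln L + gamma*ln G + noise
lnY = lnA + alpha * lnK + beta * lnL + gamma * lnG + rng.normal(0.0, 0.01, n)

# OLS on the log-linearized Cobb-Douglas form recovers the elasticities
X = np.column_stack([np.ones(n), lnK, lnL, lnG])
coef, *_ = np.linalg.lstsq(X, lnY, rcond=None)
lnA_hat, alpha_hat, beta_hat, gamma_hat = coef
```

The actual study layers PCA preprocessing, cointegration testing, and Granger causality on top of this functional form; the snippet only illustrates why the elasticities are read off as regression slopes.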
Procedia PDF Downloads 234
Application of Micro-Tunneling Technique to Rectify Tilted Structures Constructed on Cohesive Soil
Authors: Yasser R. Tawfic, Mohamed A. Eid
Abstract:
Foundation differential settlement and the resulting tilting of the supported structure is an occasionally occurring engineering problem. It may be caused by overloading, changes in ground soil properties, or unsupported nearby excavations. Engineering thinking points directly toward the logical solution of uplifting the settled side. This can be achieved with deep foundation elements such as micro-piles and macro-piles™, jacked piers and helical piers, jet-grouted soil-crete columns, compaction grout columns, cement or chemical grouting, or traditional pit underpinning with concrete and mortar. Although some of these techniques offer economic, fast and low-noise solutions, many of them are quite the contrary. For tilted structures with limited inclination, it may be much easier to cause a balancing settlement on the less-settled side, which must be done carefully at a proper rate. This principle was applied in the stabilization of the Leaning Tower of Pisa by soil extraction from the ground surface. In this research, the authors attempt to introduce a new solution with a different point of view: micro-tunneling is presented here as an intentional cause of ground deformation. In general, micro-tunneling is expected to induce limited ground deformations; thus, the researchers propose to apply the technique to form small unsupported holes in the ground to produce the target deformations. This shall be done in four phases:
• Application of one or more micro-tunnels, depending on the existing differential settlement value, under the raised side of the tilted structure.
• For each individual tunnel, the lining shall be pulled out from both sides (from the jacking and receiving shafts) at a slow rate.
• If required, according to calculations and site records, an additional surface load can be applied on the raised foundation side.
• Finally, strengthening soil grouting shall be applied for stabilization after adjustment.
A finite-element-based numerical model is presented to simulate the proposed construction phases for different tunneling positions and tunnel groups. For each case, the surface settlements are calculated and the induced plasticity points are checked. These results show the impact of the suggested procedure on the tilted structure and its feasibility. Comparing the results also shows the importance of position selection and of the gradual effect of the tunnel group. Thus, a new engineering solution is presented to one of the challenges of structural and geotechnical engineering.
Keywords: differential settlement, micro-tunneling, soil-structure interaction, tilted structures
Procedia PDF Downloads 208
Solid State Drive End to End Reliability Prediction, Characterization and Control
Authors: Mohd Azman Abdul Latif, Erwan Basiron
Abstract:
A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified by standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team, drawn from all the key stakeholders, embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustained in mass production. This paper will discuss a comprehensive development framework covering the SSD end to end, from design to assembly, in-line inspection and in-line testing, able to predict and validate product reliability at the early stages of new product development. During the design stage, the SSD will go through an intense reliability margin investigation focused on assembly process attributes, process equipment control, in-process metrology, and the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build a reliability prediction model. Next, for design validation, a reliability prediction tool, specifically a solder joint simulator, will be established. The SSDs will be stratified into Non-Operating and Operating tests focused on solder joint reliability and connectivity/component latent failures, with prevention through design intervention and containment through Temperature Cycle Test (TCT). Some of the SSDs will be subjected to physical solder joint analyses, namely Dye and Pry (DP) and cross-section analysis.
The results will be fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven working, the monitor phase will be implemented, whereby Design for Assembly (DFA) rules will be updated. At this stage, the design changes and the process and equipment parameters are in control. Predictable product reliability early in product development will enable on-time sample qualification delivery to the customer, optimize product development validation and development resources, and avoid forced late investment to bandage end-of-life product failures. Understanding the critical-to-reliability parameters earlier will allow focus on increasing the product margin, which will increase customer confidence in product reliability.
Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control
Procedia PDF Downloads 174
Exploring Tweeters’ Concerns and Opinions about FIFA Arab Cup 2021: An Investigation Study
Authors: Md. Rafiul Biswas, Uzair Shah, Mohammad Alkayal, Zubair Shah, Othman Althawadi, Kamila Swart
Abstract:
Background: Social media platforms play a significant role in the mediated consumption of sport, especially for sport mega-events. The characteristics of Twitter data (e.g., user mentions, retweets, likes, hashtags) gather users in one place and spread information widely and quickly. Analysis of Twitter data can reflect public attitudes, behavior, and sentiment toward a specific event on a larger scale than traditional surveys. Qatar will be the first Arab country to host the mega sports event FIFA World Cup 2022 (Q22), and it hosted the FIFA Arab Cup 2021 (FAC21) as a preparation for the mega-event. Objectives: This study investigates public sentiments and experiences regarding FAC21 and provides insights for enhancing public experiences at the upcoming Q22. Method: FAC21-related tweets were downloaded using the Twitter Academic Research API between 01 October 2021 and 18 February 2022. Tweets were divided into three periods: before FAC21, T1 (01 Oct 2021 to 29 Nov 2021); during, T2 (30 Nov 2021 to 18 Dec 2021); and after, T3 (19 Dec 2021 to 18 Feb 2022). The collected tweets were preprocessed in several steps to prepare them for analysis: (1) duplicates and retweets were removed; (2) emojis, punctuation, and stop words were removed; (3) tweets were normalized using word lemmatization. Then, rule-based classification was applied to remove irrelevant tweets. Next, the twitter-XLM-roBERTa-base model from Hugging Face was applied to identify the sentiment of the tweets. Further, state-of-the-art BERTopic modeling will be applied to identify trending topics over the different periods. Results: We downloaded 8,669,875 tweets posted by 2,728,220 unique users in different languages. Of those, 819,813 unique English tweets were selected for this study. After splitting into the three periods, 541,630, 138,876, and 139,307 tweets were from T1, T2, and T3, respectively. Most of the sentiments were neutral, around 60% in each period.
However, the rate of negative sentiment (23%) was high compared to positive sentiment (18%). The analysis indicates negative concerns about FAC21; therefore, we will apply BERTopic to identify public concerns. This study will permit the investigation of people’s expectations before FAC21 (e.g., stadiums, transportation, accommodation, visas, tickets, travel, and other facilities) and ascertain whether these were met. Moreover, it will highlight public expectations and concerns. The findings of this study can assist the event organizers in enhancing implementation plans for Q22. Furthermore, this study can support policymakers in aligning strategies and plans to leverage outstanding outcomes.
Keywords: FIFA Arab Cup, FIFA, Twitter, machine learning
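The preprocessing steps enumerated in the method can be sketched in a few lines of Python; this is a generic illustration only (the regex, the tiny stop-word list, and the reduction of lemmatization to simple lower-casing are placeholder assumptions, not the study's actual pipeline).

```python
import re

STOP_WORDS = {"the", "a", "an", "is", "to", "and", "of", "in"}  # placeholder list

def preprocess(tweets):
    """De-duplicate, drop retweets, strip punctuation/emoji, remove stop words.
    Lemmatization is approximated here by lower-casing, for brevity."""
    seen, cleaned = set(), []
    for t in tweets:
        if t.startswith("RT @"):                      # drop retweets
            continue
        # Non-word, non-space characters (punctuation, emoji) become spaces
        t = re.sub(r"[^\w\s]", " ", t).lower()
        tokens = [w for w in t.split() if w not in STOP_WORDS]
        norm = " ".join(tokens)
        if norm and norm not in seen:                 # drop duplicates
            seen.add(norm)
            cleaned.append(norm)
    return cleaned

sample = ["RT @fan: Great match!", "The stadium is amazing!!", "the stadium is AMAZING"]
result = preprocess(sample)
```

In the real pipeline a proper lemmatizer and language-specific stop-word lists would replace the placeholders, and the cleaned text would then be fed to the sentiment model and to BERTopic.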
Procedia PDF Downloads 100
New Hardy-Type Inequalities in Two Dimensions on Time Scales via the Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood and Pólya formed the first significant systematic treatment of the subject; their work presented fundamental ideas, results and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated via operators: in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation; these were improved upon in 2011 to include the boundedness of integral operators from a weighted Sobolev space to a weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy–Steklov operator. Recently, many integral inequalities have been improved via differential operators, and the Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities involving Copson and Hardy inequalities have appeared on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics.
There are many applications of dynamic equations on time scales in quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale inequalities of Hardy and Copson type in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs proceed by introducing restrictions on the operator in several cases and rely on concepts of time-scale calculus, which allow many problems from the theories of differential and difference equations to be unified and extended; they also use the chain rule, properties of multiple integrals on time scales, Fubini-type theorems, and Hölder's inequality.
Keywords: time scales, Hardy inequality, Copson inequality, Steklov operator
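For reference, the one-dimensional classical Hardy inequality that this study extends to higher-dimensional time-scale versions reads, for p > 1 and nonnegative measurable f:

```latex
\int_0^\infty \left( \frac{1}{x} \int_0^x f(t)\,dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^\infty f(x)^{p}\,dx ,
\qquad p > 1,\ f \ge 0 .
```

The constant (p/(p-1))^p is best possible; time-scale versions replace the integrals with delta (or nabla) integrals over an arbitrary nonempty closed subset of the reals.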
Procedia PDF Downloads 95
The Evaluation of the Cognitive Training Program for Older Adults with Mild Cognitive Impairment: Protocol of a Randomized Controlled Study
Authors: Hui-Ling Yang, Kuei-Ru Chou
Abstract:
Background: Studies show that cognitive training can effectively delay cognitive decline. However, there are several gaps in previous studies of cognitive training in mild cognitive impairment: 1) previous studies enrolled mostly healthy older adults, with few recruiting older adults with cognitive impairment; 2) they had limited generalizability and lacked both long-term follow-up data and measurements of the impact on activities of daily living, and only 37% were randomized controlled trials (RCTs); 3) little cognitive training has been developed specifically for mild cognitive impairment. Objective: This study seeks to investigate the changes in cognitive function, activities of daily living, and degree of depressive symptoms in older adults with mild cognitive impairment after cognitive training. Methods: This double-blind randomized controlled study has a two-arm parallel-group design. Study subjects are older adults diagnosed with mild cognitive impairment in residential care facilities. 124 subjects will be randomized, by permuted block randomization, into an intervention group (cognitive training, CT) or an active control group (passive information activities, PIA). Therapeutic adherence, sample attrition rate, medication compliance and adverse events will be monitored during the study period, and missing data analyzed using intention-to-treat (ITT) analysis. Training sessions of the CT group are 45 minutes/day, 3 days/week, for 12 weeks (36 sessions); the active control group follows the same schedule (45 min/day, 3 days/week, for 12 weeks, for a total of 36 sessions). The primary outcome is cognitive function, measured with the Mini-Mental State Examination (MMSE); the secondary outcomes are: 1) activities of daily living, measured with Lawton’s Instrumental Activities of Daily Living (IADL) scale, and 2) degree of depressive symptoms, measured with the Geriatric Depression Scale-Short Form (GDS-SF).
Latent growth curve modeling will be used in the repeated-measures statistical analysis to estimate the trajectory of improvement, examining the rate and pattern of change in cognitive function, activities of daily living, and degree of depressive symptoms; intervention efficacy will be evaluated immediately post-test and at 3 months, 6 months, and one year after the last session. Conclusions: We constructed a rigorous CT program adhering to the Consolidated Standards of Reporting Trials (CONSORT) reporting guidelines. We expect to determine the improvement in cognitive function, activities of daily living, and degree of depressive symptoms of older adults with mild cognitive impairment after the CT.
Keywords: mild cognitive impairment, cognitive training, randomized controlled study
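The permuted block randomization described in the protocol can be sketched as follows (the block size of 4 and the fixed seed are illustrative assumptions; the protocol does not state its block size, and in practice the seed would be concealed from investigators):

```python
import random

def permuted_block_randomization(n, arms=("CT", "PIA"), block_size=4, seed=42):
    """Allocate n subjects to the given arms using randomly permuted,
    balanced blocks, guaranteeing near-equal group sizes throughout accrual."""
    assert block_size % len(arms) == 0, "block must divide evenly among arms"
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    allocation = []
    while len(allocation) < n:
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)               # random order within each block
        allocation.extend(block)
    return allocation[:n]

assignments = permuted_block_randomization(124)
```

Because 124 is a multiple of the block size, this yields exactly 62 subjects per arm, with the allocation balanced within every block of four consecutive enrollments.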
Procedia PDF Downloads 448
Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model
Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova
Abstract:
The contamination of food by microbial agents is a common problem in the industry, especially in the production of animal source products. Incorrect handling of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce or at least slow down the growth of pathogens, especially spoilage, infectious or toxigenic bacteria. These methods usually combine low temperatures and short processing times (abiotic agents) with the application of antibacterial substances, such as bacteriocins (biotic agents), in a controlled and efficient way that achieves bacterial control without damaging the final product. The objective of the present study is therefore to design a secondary mathematical model that allows the prediction of the impact of both the biotic and abiotic factors associated with animal source food processing. To accomplish this objective, the authors propose a three-dimensional differential equation model whose components are: bacterial growth; release, production and artificial incorporation of bacteriocins; and changes in the pH level of the medium, with all three dimensions constantly influenced by the temperature of the medium. The model is then adapted to an idealized situation of cross-contamination in animal source food processing, the study agents being both the animal product and the contact surface. Finally, stochastic simulations and a parametric sensitivity analysis are compared with reference data. The main finding from the analysis and simulations of the mathematical model is that, although bacterial growth can be stopped at low temperatures, even lower temperatures are needed to eradicate it.
However, this can be not only expensive but also counterproductive in terms of the quality of the raw materials; on the other hand, higher temperatures accelerate bacterial growth. In other respects, the use of bacteriocins is an effective alternative in the short and medium terms. Moreover, a low pH level is an indicator of bacterial growth, since many spoilage bacteria are lactic acid bacteria. Lastly, processing times are a secondary agent of concern when the rest of the aforementioned agents are under control. Our main conclusion is that adapting a mathematical model to the context of the industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing method times. In addition, the proposed mathematical model provides a logistic framework of broad application, which can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact with food allergens.
Keywords: bacteriocins, cross-contamination, mathematical model, temperature
Procedia PDF Downloads 144
543 Investigating Early Markers of Alzheimer’s Disease Using a Combination of Cognitive Tests and MRI to Probe Changes in Hippocampal Anatomy and Functionality
Authors: Netasha Shaikh, Bryony Wood, Demitra Tsivos, Michael Knight, Risto Kauppinen, Elizabeth Coulthard
Abstract:
Background: Effective treatment of dementia will require early diagnosis, before significant brain damage has accumulated. Memory loss is an early symptom of Alzheimer’s disease (AD). The hippocampus, a brain area critical for memory, degenerates early in the course of AD. The hippocampus comprises several subfields. In contrast to healthy aging, where CA3 and the dentate gyrus are the hippocampal subfields with the most prominent atrophy, in AD the CA1 and subiculum are thought to be affected early. Conventional clinical structural neuroimaging is not sufficiently sensitive to identify preferential atrophy in individual subfields. Here, we will explore the sensitivity of new magnetic resonance imaging (MRI) sequences designed to interrogate medial temporal regions as an early marker of Alzheimer’s. As a combination of tests is likely to predict early AD better than any single test, we look at the potential efficacy of such imaging alone and in combination with standard and novel cognitive tasks of hippocampal-dependent memory. Methods: 20 patients with mild cognitive impairment (MCI), 20 with mild-to-moderate AD, and 20 age-matched healthy elderly controls (HC) are being recruited to undergo 3T MRI (with sequences designed to allow volumetric analysis of hippocampal subfields) and a battery of cognitive tasks (including Paired Associates Learning from CANTAB, the Hopkins Verbal Learning Test, and a novel hippocampal-dependent abstract word memory task). AD participants and healthy controls are being tested just once, whereas patients with MCI will be tested twice, a year apart. We will compare subfield size between groups and correlate subfield size with cognitive performance on our tasks. In the MCI group, we will explore the relationship between subfield volume, cognitive test performance, and deterioration in clinical condition over a year.
Results: Preliminary data (currently on 16 participants: 2 AD; 4 MCI; 9 HC) have revealed subfield size differences between subject groups. Patients with AD perform with less accuracy on tasks of hippocampal-dependent memory, and MCI patients’ performance and reaction times also differ from those of healthy controls. With further testing, we hope to delineate how subfield-specific atrophy corresponds with changes in cognitive function, and characterise how this progresses over the time course of the disease. Conclusion: Novel sequences on an MRI scanner, such as those en route to clinical use, can be used to delineate hippocampal subfields in patients with and without dementia. Preliminary data suggest that such subfield analysis, perhaps in combination with cognitive tasks, may be an early marker of AD.
Keywords: Alzheimer's disease, dementia, memory, cognition, hippocampus
Procedia PDF Downloads 573
542 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design
Authors: Mohammad Bagher Anvari, Arman Shojaei
Abstract:
Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to their underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction, which have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new, simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practice.
In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.
Keywords: incremental launching, bridge construction, finite element model, optimization
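The role of a light launching nose in limiting the hogging moment at the last pier can be shown with elementary cantilever statics. This is only a hand-calculation sketch under assumed, illustrative values (deck weight q, nose weight qn, span L, and the moment target), not the paper's FE or "semi-infinite beam" model:

```python
def pier_moment(x, ln, q=1.0, qn=0.1):
    """Hogging moment at the last pier when the front of the deck
    cantilevers a distance x, with a nose of length ln and weight qn
    per unit length replacing deck weight q over the front portion."""
    if x <= ln:
        return qn * x ** 2 / 2.0
    return q * (x - ln) ** 2 / 2.0 + qn * ln * (x - ln / 2.0)

def shortest_nose(target, L=50.0, step=0.1, q=1.0, qn=0.1):
    """Smallest nose length keeping the peak launching moment
    (reached just before the nose lands, x = L) below the target."""
    ln = 0.0
    while ln <= L:
        if pier_moment(L, ln, q, qn) <= target:
            return ln
        ln += step
    return None
```

With q = 1, L = 50, a nose weighing a tenth of the deck, and a target of q*L**2/10, the scan returns a nose of roughly two-thirds of the span length, in the range of the classic rule-of-thumb nose lengths for launched bridges.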
Procedia PDF Downloads 103
541 L2 Anxiety, Proficiency, and L2 Willingness to Communicate in the Classroom, Outside the Classroom, and in Digital Setting: Insights from Ethiopian Preparatory Schools
Authors: Merih Welay Welesilassie, Marianne Nikolov
Abstract:
Research into second and foreign language (L2) acquisition has demonstrated that L2 anxiety, perceived proficiency, and L2 willingness to communicate (L2WTC) profoundly impact language learning outcomes. However, the complex interplay between these variables has yet to be fully explored, as these factors are dynamic and context-specific and can vary across learners and learning environments. This study therefore utilized a cross-sectional quantitative survey design to scrutinise the causal relationships between L2 anxiety, English proficiency, and the L2WTC of 609 Ethiopian preparatory school students. The model for L2WTC, both inside and outside the classroom, was expanded to include an additional sub-scale, L2WTC in a digital setting. Moreover, in contrast to the commonly recognised debilitative-focused view of L2 anxiety, the construct was divided into facilitative and debilitative anxiety. This approach makes it possible to measure not only the presence or absence of anxiety but also whether anxiety helps or hinders the L2 learning experience. A self-assessment proficiency measure was also developed specifically for Ethiopian high school students. The study treated facilitative and debilitative anxiety as independent variables while considering self-assessed English proficiency and L2WTC in the classroom, outside the classroom, and in digital settings as dependent variables. Additionally, self-assessed English proficiency was used as an independent variable to predict L2WTC in these three settings. The proposed model was tested using structural equation modelling (SEM). According to the descriptive analysis, the mean scores of L2WTC in the three settings were generally low, ranging from 2.30 to 2.84. Debilitative anxiety overshadowed the positive aspects of anxiety. Self-assessed English proficiency was also low.
According to SEM, debilitative anxiety displayed a statistically significant negative impact on L2WTC inside the classroom, outside the classroom, in digital settings, and on self-assessed levels of English proficiency. In contrast, facilitative anxiety was found to positively contribute to L2WTC outside the classroom, in digital settings, and to self-assessed English proficiency. Self-assessed English proficiency made a statistically significant and positive contribution to L2WTC within the classroom, outside the classroom, and in digital contexts. L2WTC inside the classroom was found to positively contribute to L2WTC outside the classroom and in digital contexts. The findings were systematically compared with existing studies, and the pedagogical implications, limitations, and potential avenues for future research were elucidated. The outcomes of the study have the potential to significantly contribute to the advancement of theoretical and empirical knowledge about improving English education, learning, and communication not only in Ethiopia but also in similar EFL contexts, thereby providing valuable insights for educators, researchers, and policymakers.
Keywords: debilitative anxiety, facilitative anxiety, L2 willingness to communicate, self-assessed English proficiency
Procedia PDF Downloads 14
540 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections
Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette
Abstract:
A rotary is a traffic circle intersection where vehicles entering from branches give priority to circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exiting branch. This creates merging and diverging conflicts between any entry and its successive exit, i.e., a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not lead to the typical performance characteristics of the intersection, such as the entry average delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on the limitation of the flow entering from a single entrance due to the amount of flow circulating in front of the entrance itself, and they generally lead to a performance evaluation as well. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method to obtain from the latter the single-entry capacity and ultimately the related performance indicators. Put simply, the main objective is to calculate the average delay of each roundabout entrance in order to apply the most common Highway Capacity Manual (HCM) criteria. The paper is organized as follows: first, the rotary and roundabout capacity models are sketched, and a brief introduction to the model combination technique is given with some practical instances. The next section summarizes the TRRL old rotary capacity model and the most recent HCM 7th edition modern roundabout capacity model.
Then, the two models are combined through an iteration-based algorithm, specially set up and linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern leading to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, from which its respective level of service is estimated. In view of further experimental applications, at this research stage a collection of existing rotary intersections operating with the priority-to-circle rule has already started, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, general urban or rural setting, and its main geometrical patterns. Finally, concluding remarks are drawn, and some further research developments are discussed.
Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation
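The final step of the procedure, turning an entry flow and a circulating flow into an average delay and a level of service, can be sketched as follows. The exponential capacity coefficients and the delay expression follow the general HCM shape but with illustrative values; the paper's actual combined TRRL/HCM-7th iteration is not reproduced here:

```python
import math

def entry_capacity(v_c, A=1380.0, B=1.02e-3):
    # HCM-style exponential entry-capacity model, veh/h,
    # as a function of the circulating flow v_c (illustrative coefficients)
    return A * math.exp(-B * v_c)

def control_delay(v, c, T=0.25):
    # HCM-style average control delay for a roundabout entry, s/veh,
    # for entry flow v, capacity c, and analysis period T hours
    x = v / c  # volume-to-capacity ratio
    return (3600.0 / c
            + 900.0 * T * (x - 1.0 + math.sqrt((x - 1.0) ** 2
                                               + (3600.0 / c) * x / (450.0 * T)))
            + 5.0 * min(x, 1.0))

def level_of_service(d):
    # unsignalized-intersection LOS thresholds in seconds of delay
    for grade, limit in zip("ABCDE", (10, 15, 25, 35, 50)):
        if d <= limit:
            return grade
    return "F"
```

For example, an entry carrying 400 veh/h against 500 veh/h of circulating flow yields a capacity near 830 veh/h and a delay of roughly eleven seconds, i.e., level of service B under these assumed coefficients.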
Procedia PDF Downloads 86
539 Building an Opinion Dynamics Model from Experimental Data
Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinions while interacting. Furthermore, it is not clear whether different topics show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people’s opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as a number). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10; however, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (but not the certainty) of another participant and, after a distraction task, repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions.
This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This is a strong violation of what common models suggest, where people starting at, for example, +8 will first move towards 0 instead of directly jumping to -8. We also observed social influence, meaning that people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration, too, differs from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule
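A minimal update rule consistent with these observations, a weak social pull toward the peer's stated side plus a larger random fluctuation acting on the continuous opinion, can be sketched as below. The pull strength mu and noise width sigma are invented for illustration and are not the fitted parameters of the authors' model:

```python
import random

def update(o, peer, mu=0.4, sigma=1.5, lo=-10.0, hi=10.0):
    """One interaction on the continuous opinion scale [-10, +10]:
    a small push toward the peer's side (peer = +1 "agree", -1 "disagree")
    plus a larger Gaussian fluctuation, clipped to the scale."""
    o = o + mu * peer + random.gauss(0.0, sigma)
    return max(lo, min(hi, o))

def mean_after_exposure(peer, n=20000, seed=1):
    """Mean continuous opinion of a synthetic population after one
    exposure; initial opinions are side (+/-1) times certainty 1..10."""
    random.seed(seed)
    pop = [random.choice([-1, 1]) * random.randint(1, 10) for _ in range(n)]
    return sum(update(o, peer) for o in pop) / n
```

Averaged over many synthetic agents, exposure to "agree" shifts the population mean up and exposure to "disagree" shifts it down, even though any single agent's move is dominated by the noise term.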
Procedia PDF Downloads 109
538 Assessing the Material Determinants of Cavity Polariton Relaxation using Angle-Resolved Photoluminescence Excitation Spectroscopy
Authors: Elizabeth O. Odewale, Sachithra T. Wanasinghe, Aaron S. Rury
Abstract:
Cavity polaritons form when molecular excitons strongly couple to photons in carefully constructed optical cavities. These polaritons, which are hybrid light-matter states possessing a unique combination of photonic and excitonic properties, present the opportunity to manipulate the properties of various semiconductor materials. The systematic manipulation of materials through polariton formation could potentially improve the functionalities of many optoelectronic devices such as lasers, light-emitting diodes, photon-based quantum computers, and solar cells. However, the prospects of leveraging polariton formation for novel devices and device operation depend on more complete connections between the properties of molecular chromophores, and the hybrid light-matter states they form, which remains an outstanding scientific goal. Specifically, for most optoelectronic applications, it is paramount to understand how polariton formation affects the spectra of light absorbed by molecules coupled strongly to cavity photons. An essential feature of a polariton state is its dispersive energy, which occurs due to the enhanced spatial delocalization of the polaritons relative to bare molecules. To leverage the spatial delocalization of cavity polaritons, angle-resolved photoluminescence excitation spectroscopy was employed in characterizing light emission from the polaritonic states. Using lasers of appropriate energies, the polariton branches were resonantly excited to understand how molecular light absorption changes under different strong light-matter coupling conditions. Since an excited state has a finite lifetime, the photon absorbed by the polariton decays non-radiatively into lower-lying molecular states, from which radiative relaxation to the ground state occurs. The resulting fluorescence is collected across several angles of excitation incidence. 
By modeling the behavior of the light emission observed from the lower-lying molecular state and combining this result with the output of angle-resolved transmission measurements, inferences are drawn about how the behavior of molecules changes when they form polaritons. These results show how the intrinsic properties of molecules, such as the excitonic lifetime, affect the rate at which the polaritonic states relax. While it is true that the lifetime of the photon mediates the rate of relaxation in a cavity, the results from this study provide evidence that the lifetime of the molecular exciton also limits the rate of polariton relaxation.
Keywords: fluorescence, molecules in cavities, optical cavity, photoluminescence excitation, spectroscopy, strong coupling
Procedia PDF Downloads 73
537 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice
Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer
Abstract:
The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case of the formation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during transient heat conduction increases the salinity of brine pockets until they reach a local equilibrium state. In this process, passing heat through the medium does not only change the sensible heat of the ice and brine pockets; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. Properties of brine-spongy ice are obtained using the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to reach a set of ordinary differential equations. Boundary conditions are chosen using one of the applicable cases of this type of ice: one side is considered a thermally insulated surface, and the other side is assumed to be suddenly subjected to a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted for salinities from 5 to 60 ppt. Time steps and space intervals are chosen to maintain a stable and fast solution. The variations of temperature, brine volume fraction, and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine pocket salinities, from the initial salinity up to 180 ppt.
The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation and decreases as time passes. Brine pockets are smaller at portions closer to the colder side than to the warmer side. At the start of the solution, the numerical scheme tends toward instability because of the sharp variation of temperature at the start of the process; refining the intervals improves this. The analytical model, solved with a numerical scheme, is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures.
Keywords: method of lines, brine-spongy ice, heat conduction, salt water
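The Method of Lines step, spatial discretization of the conduction equation into a set of ODEs with an insulated face on one side and a suddenly imposed temperature on the other, can be sketched for plain conduction as below. Latent heat, brine-pocket physics, and the temperature-dependent properties of the actual model are omitted, and the diffusivity and geometry are placeholder values:

```python
def mol_conduction(alpha=1e-6, L=0.05, n=51, T0=-5.0, T_wall=-20.0,
                   t_end=600.0):
    """Method of Lines for 1-D transient conduction, explicit Euler in time:
    insulated (zero-flux) face at x = 0, temperature T_wall suddenly
    imposed at x = L. alpha is thermal diffusivity in m^2/s."""
    dx = L / (n - 1)
    dt = 0.4 * dx * dx / alpha           # inside the explicit stability limit
    T = [T0] * n
    T[-1] = T_wall                       # Dirichlet cell held fixed
    for _ in range(int(t_end / dt)):
        new = T[:]
        new[0] = T[0] + dt * alpha * 2.0 * (T[1] - T[0]) / dx**2  # insulated face
        for i in range(1, n - 1):
            new[i] = T[i] + dt * alpha * (T[i+1] - 2.0*T[i] + T[i-1]) / dx**2
        T = new                          # new[-1] keeps T_wall unchanged
    return T
```

The resulting profile decreases monotonically from the insulated face toward the cold wall, and the steepest gradients, hence the fastest heat transfer, occur near the wall at early times, consistent with the behavior reported above.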
Procedia PDF Downloads 217
536 Reproductive Governmentality in Mexico: Production, Control and Regulation of Contraceptive Practices in a Public Hospital
Authors: Ivan Orozco
Abstract:
Introduction: Forced contraception constitutes part of an effort to control the lives and reproductive capacity of women through public health institutions. This phenomenon has affected many Mexican women historically and still persists today. The notion of reproductive governmentality refers to the mechanisms through which different historical configurations of social actors (state institutions, churches, donor agents, NGOs, etc.) use legislative controls, economic incentives, moral mandates, direct coercion, and ethical incitements to produce, monitor, and control reproductive behaviors and practices. This research focuses on the use of these mechanisms by the Mexican State to control women's contraceptive practices in a public hospital. Method: An institutional ethnography was carried out with the objective of knowing women's experiences from their own perspective, as they occur in their daily lives, while at the same time uncovering the structural elements that shape the discourses promoting women's contraception, even against their will. The fieldwork consisted of observing the dynamics between different participants within a public hospital and conducting interviews with the medical and nursing staff in charge of family planning services, as well as with women attending the family planning office. Results: Public health institutions in Mexico are state tools to control and regulate reproduction.
Several strategies are used for this purpose: health personnel provide insufficient or misleading information to ensure that women agree to use contraceptives; health institutions provide economic incentives to members of the health staff who reach certain goals in terms of contraceptive placement; young women are forced to go to the family planning service, regardless of the reason they came to the clinic; and health campaigns are carried out, consisting of the application of contraceptives outside the health facilities, directly in the communities of people who visit the hospital less frequently. All these mechanisms push women to use contraceptives; from the women’s perspective, however, the reception of these discourses is ambiguous. While for some women the strategies become coercive mechanisms to use contraceptives against their will, for others they represent an opportunity to take control over their reproductive lives. Conclusion: Since 1974, the Mexican government has implemented campaigns for the promotion of family planning methods as a means to control population growth. Although several pieces of legislation establish that counselling must be carried out with a gender and human rights perspective, always respecting people's autonomy, this research testifies that health personnel use different strategies to force some women to use contraceptive methods, thereby violating their reproductive rights.
Keywords: feminist research, forced contraception, institutional ethnography, reproductive governmentality
Procedia PDF Downloads 164
535 Social Inequality and Inclusion Policies in India: Lessons Learned and the Way Forward
Authors: Usharani Rathinam
Abstract:
Although policies directing the inclusion of marginalized groups were in effect, the majority of the chronically impoverished in India belonged to Scheduled Castes and Scheduled Tribes. Also, taking into account that poverty is gendered, destitute women belonged to the lower social order, whose needs are not largely highlighted at the policy level. This paper discusses the social-relations approach to poverty, which highlights how the social order that exists structurally in society can perpetuate chronic poverty, followed by a critical review of the social inclusion policies of India and their merits and demerits in addressing chronic poverty. A multiple case study design is utilized to address this concern in four districts of India: Jhansi, Tikamgarh, Cuddalore, and Anantapur. These four districts were selected by purposive sampling based on the criteria that the district should either be categorized as a backward district or have a history of high poverty rates. Qualitative methods, including eighty in-depth interviews, six focus group discussions, six social mapping procedures, and three key informant interviews, were conducted in 2011 at each of the locations. Analysis of the data revealed that, irrespective of gender, Scheduled Caste and Scheduled Tribe participants were found to be chronically poor in all districts. Caste-based discrimination is exhibited at both micro and macro levels: village and institutional levels. At the village level, lower-caste respondents had less access to public resources. Also, within institutional settings, unequal access to resources due to confiscation is noticed, especially in fund distribution. This study found that half of the budget intended for Scheduled Castes and Scheduled Tribes was confiscated by upper-caste administrative staff. This implies that power based on social hierarchy prevents lower-caste participants from accessing better economic, social, and political benefits, which has led them to suffer long-term poverty.
This study also explored the traditional ties between caste, social structure, and bonded labour as a cause of long-term poverty. Though equal access is emphasized in constitutional rights, issues at the micro level have not been reflected in the formulation of these rights. Therefore, it is important for a policy to consider the structural complexity and then focus on issues such as the equal distribution of assets and infrastructural facilities, which will reduce exclusion and foster long-term security in areas such as employment, markets, and public distribution.
Keywords: caste, inclusion policies, India, social order
Procedia PDF Downloads 206
534 Design of Photonic Crystal with Defect Layer to Eliminate Interface Corrugations for Obtaining Unidirectional and Bidirectional Beam Splitting under Normal Incidence
Authors: Evrim Colak, Andriy E. Serebryannikov, Pavel V. Usik, Ekmel Ozbay
Abstract:
Working with a dielectric photonic crystal (PC) structure that does not include surface corrugations, unidirectional transmission and dual-beam splitting are observed under normal incidence as a result of the strong diffractions caused by the embedded defect layer. The defect layer has twice the period of the regular PC segments that sandwich it. Although the PC has an even number of rows, the structural symmetry is broken due to the asymmetric placement of the defect layer with respect to the symmetry axis of the regular PC. The simulations verify that efficient splitting and the occurrence of strong diffractions are related to the dispersion properties of the Floquet-Bloch modes of the photonic crystal. Unidirectional and bidirectional splitting, which are associated with asymmetric transmission, arise due to the dominant contribution of the first positive and first negative diffraction orders. The effect of the depth of the defect layer is examined by placing a single defect layer in varying rows, preserving the asymmetry of the PC. Even for a deeply buried defect layer, asymmetric transmission remains valid even if the zeroth order is not coupled. This transmission is due to evanescent waves, which reach the deeply embedded defect layer and couple to higher-order modes. In an additional selected configuration, whichever surface is illuminated, i.e., in both upper- and lower-surface illumination cases, the incident beam is split into two beams of equal intensity at the output surface. That is, although the structure is asymmetric, symmetric bidirectional transmission with equal transmission values is demonstrated, and the structure mimics the behavior of symmetric structures.
Finally, simulation studies including the examination of a coupled-cavity defect for two different permittivity values (close to the permittivity values of GaAs or Si, and alumina) reveal unidirectional splitting over a wider band of operation in comparison to the bandwidth obtained in the case of a single embedded defect layer. Since the dielectric materials that are utilized are low-loss and weakly dispersive in a wide frequency range including microwave and optical frequencies, the studied structures should be scalable to the mentioned ranges.
Keywords: asymmetric transmission, beam deflection, blazing, bi-directional splitting, defect layer, dual beam splitting, Floquet-Bloch modes, isofrequency contours, line defect, oblique incidence, photonic crystal, unidirectionality
Procedia PDF Downloads 184
533 Designing Form, Meanings, and Relationships for Future Industrial Products. Case Study Observation of PAD
Authors: Elisabetta Cianfanelli, Margherita Tufarelli, Paolo Pupparo
Abstract:
The dialectical mediation between desires and objects, or between mass production and consumption, continues to evolve over time. This relationship is influenced both by variable geometries of contexts that are distant from the mere design of product form and by aspects rooted in the very definition of industrial design. In particular, the overcoming of macro-areas of innovation in the technological, social, cultural, formal, and morphological spheres, supported by recent theories in critical and speculative design, seems to be moving further and further away from the design of the formal dimension of advanced products. The articulated fabric of theories and practices that feeds the definition of “hyperobjects”, no longer objects, describes a common tension in all areas of design and production of industrial products. The latter are increasingly detached from the design of their form and meaning in mass production, thus losing the quality of products capable of social transformation. For years, we have been living through a transformative moment regarding the design process in the definition of the industrial product. We are faced with a dichotomy in which there is, on the one hand, a reactionary aversion to the new techniques of industrial production and, on the other hand, a sterile adoption of the techniques of mass production that we can now consider traditional. This ambiguity becomes even more evident when we talk about industrial products and realize that we are moving further and further away from the concept of "form" as the synthesis of a design thought aimed at the aesthetic-emotional component as well as the functional one. The design of forms and their contents, as statutes of social acts, allows us to investigate the tension on mass production that crosses seasons, trends, technicalities, and sterile determinisms.
Design culture has always determined the formal qualities of objects as a sum of aesthetic characteristics, functional relationships, and structural relationships that define a product as a coherent unit. The contribution proposes a reflection and a series of practical research experiences on the form of advanced products. This form is understood as a kaleidoscope of relationships: the search for an identity, the desire for democratization, and, between these two, the exploration of the aesthetic factor. The study of form also corresponds to the study of production processes, technological innovations, the definition of standards, distribution, advertising, and the vicissitudes of taste and lifestyles. Specifically, we will investigate how the genesis of new forms for new meanings introduces a change in the related innovative production techniques. It therefore becomes fundamental to investigate, through the reflections and case studies exposed in the contribution, the new techniques of production and elaboration of product forms as a new immanent and determining element within the planning process. Keywords: industrial design, product advanced design, mass productions, new meanings
Procedia PDF Downloads 123
532 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids
Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje
Abstract:
The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in the heavy oil field. The most commonly used PCP is the driven single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed through experimental and Computational Fluid Dynamic (CFD) approaches using the DCAB031 model installed in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump's rotational speed and power input were controlled using an Invertek Optidrive E3 frequency driver. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an Overset Mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for the validation of the mesh, obtaining a value of 2.5%. In this case, three different rotational speeds were evaluated (200, 300, 400 rpm), showing a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate.
The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% of the pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, a decrease between fluids exists due to viscosity. Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise
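The mesh-validation step mentioned in the abstract (a Grid Convergence Index of 2.5%) follows the standard Roache GCI procedure. The minimal sketch below, assuming three systematically refined mesh solutions with a constant refinement ratio, shows how such a value is typically computed; the numeric inputs are illustrative, not the paper's data.

```python
import math

def grid_convergence_index(f_fine, f_med, f_coarse, r=2.0, Fs=1.25):
    """Roache's Grid Convergence Index from three mesh solutions
    (fine, medium, coarse) with refinement ratio r and safety factor Fs.
    Returns (GCI in percent on the fine grid, observed order p)."""
    # Observed order of convergence from the three solutions
    p = math.log(abs((f_coarse - f_med) / (f_med - f_fine))) / math.log(r)
    # Relative error between the two finest solutions
    e21 = abs((f_med - f_fine) / f_fine)
    # GCI on the fine grid, as a percentage
    return 100.0 * Fs * e21 / (r ** p - 1.0), p

# Illustrative flow-rate values (e.g., GPM) from three meshes
gci, p = grid_convergence_index(6.10, 6.05, 5.90)
print(f"observed order p = {p:.2f}, fine-grid GCI = {gci:.2f}%")
```

A GCI of a few percent or less is the usual indication that the fine-grid solution is effectively mesh-independent.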
Procedia PDF Downloads 128
531 Effect of Chronic Exposure to Diazinon on Glucose Homeostasis and Oxidative Stress in Pancreas of Rats and the Potential Role of Mesna in Ameliorating This Effect
Authors: Azza El-Medany, Jamila El-Medany
Abstract:
Residential and agricultural pesticide use is widespread in the world. Their extensive and indiscriminate use, together with their ability to interact with biological systems other than their primary targets, constitutes a health hazard to both humans and animals. The toxic effects of pesticides include alterations in metabolism, yet little is known about the pancreatic toxicity of organophosphates. The primary goal of this work is to study the effects of chronic exposure to Diazinon, an organophosphate used in agriculture, on pancreatic tissues and to evaluate the ameliorating effect of Mesna, as an antioxidant, on Diazinon toxicity. Forty adult male rats weighing 300-350 g were classified into three groups. The control group (10 rats) received corn oil at a dose of 10 mg/kg/day by gavage once a day for 2 months. The Diazinon group (15 rats) received Diazinon at a dose of 10 mg/kg/day dissolved in corn oil by gavage once a day for 2 months. The treated group (15 rats) received Mesna (180 mg/kg) once a week by gavage 15 minutes before Diazinon administration for 2 months. At the end of the experiment, animals were anesthetized, blood samples were taken by cardiac puncture for glucose and insulin assays, and the pancreas was removed and divided into 3 portions: the first for histopathological study, the second for ultrastructural study, and the third for biochemical study using ELISA kits, including determination of malondialdehyde (MDA), tumor necrosis factor α (TNF-α), myeloperoxidase activity (MPO), and interleukin 1β (IL-1β). A significant increase in the levels of MDA, TNF-α, MPO activity, IL-1β, and serum glucose was observed in the group intoxicated with Diazinon, while a significant reduction was noticed in GSH and serum insulin levels. After treatment with Mesna, a significant reduction was observed in the previously mentioned parameters, except for a significant rise in GSH and insulin levels.
Histopathological and ultrastructural studies showed destruction of pancreatic tissues, and β cells were the most affected cells among the injured islets as compared with the control group. The current study tries to shed light on the effects of chronic exposure to pesticides on vital organs such as the pancreas, as well as the role of the oxidative stress they may induce in evoking their toxicity. This study shows the role of antioxidant drugs in ameliorating or preventing this toxicity, which appears to be a promising approach that may be considered as a complementary treatment of pesticide toxicity. Keywords: Diazinon, reduced glutathione, myeloperoxidase activity, tumor necrosis factor α, Mesna
Procedia PDF Downloads 242
530 High-Pressure Polymorphism of 4,4-Bipyridine Hydrobromide
Authors: Michalina Aniola, Andrzej Katrusiak
Abstract:
4,4-Bipyridine is an important compound often used in chemical practice and, more recently, frequently applied in designing new metal-organic frameworks (MOFs). Here we present a systematic high-pressure study of its hydrobromide salt. 4,4-Bipyridine hydrobromide monohydrate, 44biPyHBrH₂O, is orthorhombic at ambient pressure, space group P212121 (phase a). Its hydrostatic compression shows that it is stable to at least 1.32 GPa. However, recrystallization above 0.55 GPa reveals a new hidden b-phase (monoclinic, P21/c). Moreover, when 44biPyHBrH₂O is heated to high temperature, chemical reactions of this compound in methanol solution can be observed. High-pressure experiments were performed using a Merrill-Bassett diamond-anvil cell (DAC), modified by mounting the anvils directly on the steel supports, and X-ray diffraction measurements were carried out on KUMA and Excalibur diffractometers equipped with an EOS CCD detector. At elevated pressure, the crystal of 44biPyHBrH₂O exhibits several striking and unexpected features. No signs of instability of phase a were detected to 1.32 GPa, while phase b becomes stable above 0.55 GPa, as evidenced by its recrystallizations. Phases a and b of 44biPyHBrH₂O are partly isostructural: their unit-cell dimensions and the arrangement of ions and water molecules are similar. In phase b the HOH-Br- chains double the frequency of their zigzag motifs compared to phase a, and the 44biPyH+ cations change their conformation. As in all monosalts of 44biPy determined so far, in phase a the pyridine rings are twisted by about 30 degrees about the C4-C4 bond, while in phase b they assume an energy-unfavorable planar conformation. Another unusual feature of 44biPyHBrH₂O is that all unit-cell parameters become longer on the transition from phase a to phase b. Thus the volume drop on the transition to high-pressure phase b depends entirely on the shear strain of the lattice.
Higher temperature triggers chemical reactions of 44biPyHBrH₂O with methanol. When the compound was compressed in saturated methanol solution at 0.1 GPa, a temperature of 423 K was required to dissolve all the sample; the subsequent slow recrystallization at isochoric conditions resulted in the disalt 4,4-bipyridinium dibromide. For the 44biPyHBrH₂O sample sealed in the DAC at 0.35 GPa, then dissolved at isochoric conditions at 473 K and recrystallized by slow controlled cooling, a reaction of N,N-dimethylation took place. It is characteristic that in both high-pressure reactions of 44biPyHBrH₂O the unsolvated disalt products were formed and that the free base 44biPy and H₂O remained in the solution. The observed reactions indicate that high pressure destabilizes ambient-pressure salts and favors new products. Further studies on pressure-induced reactions are being carried out in order to better understand the structural preferences induced by pressure. Keywords: conformation, high-pressure, negative area compressibility, polymorphism
Procedia PDF Downloads 246
529 Marketization of Higher Education in the UK and Its Impacts on Teaching Practitioners
Authors: Hossein Rezaie
Abstract:
Academic institutions, especially universities, have been known as cradles of learning, teaching great thinkers while creating the type of knowledge that is supposed to be bereft of utilitarian motives. Nonetheless, it seems that such intellectual centers have entered into a competition with each other for attracting the attention of potential clients. The traditional values of (higher) education, such as nurturing criticality and fostering intellectuality in students, have been replaced with strategic planning, quality assurance, performance assessment, and academic audits. Not being immune to the whims and wishes of marketization, the system of higher education in the UK has been recalibrated by policy makers to address the demand and supply of student education, academic research, and other university activities on the basis of monetary factors. As an immediate example in this vein, the Russell Group in the UK, which comprises 24 leading UK research universities, has explicitly expressed its policy on its official website as follows: 'Russell Group universities are global businesses competing for staff, students and funding with the best in the world'. Furthermore, certain attempts have been made to corporatize the system of HE, manifested in the remodeling of university governing bodies on corporate lines and the development of measurement scales for indicating the performance of teaching practitioners. Nevertheless, it seems that such structural changes in policies toward the system of HE have a bearing on the practices of practitioners and educators as well as on the identity of students, who are the customers of educational services. The effects of marketization have been examined mainly in terms of students' perceptions and motivation, institutional policies, and university management.
However, the teaching practitioner side seems to be an under-studied area with regard to any changes in its expectations, satisfaction, and perception of professional identity in the aftermath of introducing market-wise values into HE in the UK. As a result, this research aims to investigate the possible outcomes of market-driven values on the practitioner side of HE in the UK and seeks to address the following research questions: (1) How is the change in the mission of HE in the UK reflected in institutional documents? (1a) How is the change of mission represented in job adverts? (1b) How is the change of mission represented in university prospectuses? (2) How are teaching practitioners represented regarding their roles and obligations in the prospectuses and job ads published by UK HE institutions? In order to address these questions, the researcher will analyze 30 prospectuses and job ads published by Russell Group universities, taking Critical Discourse Analysis as his point of departure and using the analytical methods of genre analysis and Systemic Functional Linguistics to probe into the generic features and the representation of participants, in this case teaching practitioners, in the selected corpus. Keywords: higher education, job advertisements, marketization of higher education, prospectuses
Procedia PDF Downloads 248
528 Nonlinear Response of Tall Reinforced Concrete Shear Wall Buildings under Wind Loads
Authors: Mahtab Abdollahi Sarvi, Siamak Epackachi, Ali Imanpour
Abstract:
Reinforced concrete shear walls are commonly used as the lateral load-resisting system of mid- to high-rise office or residential buildings around the world. The design of such systems is often governed by wind rather than seismic effects, in particular in low-to-moderate seismic regions. The current design philosophy in the majority of building codes requires elastic response of lateral load-resisting systems, including reinforced concrete shear walls, when subjected to the rare design wind load, resulting in significantly large wall sections needed to meet strength requirements and drift limits. The latter can strongly influence the design in upper stories due to the stringent drift limits specified by building codes, leading to substantial added costs in the construction of the wall. However, such walls may offer limited to moderate over-strength and ductility owing to their large reserve capacity, provided that they are designed and detailed to appropriately develop such over-strength and ductility under extreme wind loads. This would significantly contribute to reducing construction time and costs, while maintaining structural integrity under gravity and both frequently-occurring and less frequent wind events. This paper aims to investigate the over-strength and ductility capacity of several hypothetical office buildings located in Edmonton, Canada, with a glance at earthquake design philosophy. The selected models are 10- to 25-story buildings with three types of reinforced concrete shear wall configurations: rectangular, barbell, and flanged. The buildings are designed according to the National Building Code of Canada. Fiber-based numerical models of the walls are then developed in Perform 3D, and by conducting nonlinear static (pushover) analysis, the lateral nonlinear behavior of the walls is evaluated. Ductility and over-strength of the structures are obtained from the results of the pushover analyses.
The results confirmed the moderate nonlinear capacity of reinforced concrete shear walls under extreme wind loads, even as lateral displacements of the walls exceed the serviceability limit states defined in the ASCE Prestandard for Performance-Based Wind Design. The results indicate that the limited nonlinear response observed in reinforced concrete shear walls can be exploited to economize the design of such systems under wind loads. Keywords: concrete shear wall, high-rise buildings, nonlinear static analysis, response modification factor, wind load
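The over-strength and ductility quantities extracted from the pushover analyses above can be sketched with a simple bilinear idealization of a pushover curve. This is a minimal illustration under assumed definitions (yield displacement taken at the design force level on the initial elastic branch, ultimate displacement at peak force), not the paper's Perform 3D procedure, and the curve values are invented for the example.

```python
def overstrength_and_ductility(disp, force, v_design):
    """Estimate over-strength (Vmax / Vdesign) and displacement ductility
    (du / dy) from a pushover curve, assuming a bilinear idealization:
    dy at the design force level on the initial elastic branch,
    du at the displacement of peak force."""
    v_max = max(force)
    # Initial elastic stiffness from the first nonzero point of the curve
    k0 = force[1] / disp[1]
    d_y = v_design / k0             # idealized yield displacement
    d_u = disp[force.index(v_max)]  # displacement at peak base shear
    return v_max / v_design, d_u / d_y

# Illustrative pushover data (base shear in kN, roof displacement in m)
disp = [0.0, 0.05, 0.10, 0.20, 0.40]
force = [0.0, 500.0, 900.0, 1100.0, 1200.0]
omega, mu = overstrength_and_ductility(disp, force, v_design=800.0)
print(f"over-strength = {omega:.2f}, ductility = {mu:.2f}")
```

Values of omega above 1 and mu above roughly 2 would be the kind of reserve capacity the paper argues could be mobilized under extreme wind.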
Procedia PDF Downloads 107
527 Corrosion Protection and Failure Mechanism of ZrO₂ Coating on Zirconium Alloy Zry-4 under Varied LiOH Concentrations in Lithiated Water at 360°C and 18.5 MPa
Authors: Guanyu Jiang, Donghai Xu, Huanteng Liu
Abstract:
After the Fukushima-Daiichi accident, the development of accident tolerant fuel cladding materials to improve reactor safety has become a hot topic in the nuclear industry. ZrO₂ has a satisfactory neutron economy and can guarantee the fission chain reaction process, which makes it a promising coating for zirconium alloy cladding. Maintaining good corrosion resistance in the primary coolant loop during normal operation of Pressurized Water Reactors is a prerequisite for ZrO₂ as a protective coating on zirconium alloy cladding. Research on the corrosion performance of ZrO₂ coatings in nuclear water chemistry is relatively scarce, and existing reports have failed to provide an in-depth explanation of the causes of ZrO₂ coating failure. Herein, a detailed corrosion process of ZrO₂ coating in lithiated water at 360 °C and 18.5 MPa is proposed based on experimental research and molecular dynamics simulation. The lithiated water with different LiOH concentrations used in the present work was deaerated and had a dissolved oxygen concentration of < 10 ppb. The concentration of Li (as LiOH) was 2.3 ppm, 70 ppm, and 500 ppm, respectively. Corrosion tests were conducted in a static autoclave. Modeling and the corresponding calculations were performed in Materials Studio software, with adsorption energies and dynamics parameters calculated by the Energy and Dynamics tasks of the Forcite module, respectively. The protective effect and failure mechanism of the ZrO₂ coating on Zry-4 under varied LiOH concentrations were further revealed by comparison with the coating's corrosion performance in pure water (namely, 0 ppm Li). The ZrO₂ coating provided favorable corrosion protection at low LiOH concentrations, with some occurrence of localized corrosion. Factors influencing corrosion resistance mainly include pitting corrosion extension, enhanced Li⁺ permeation, short-circuit diffusion of O²⁻, and ZrO₂ phase transformation.
In highly concentrated LiOH solutions, intergranular corrosion, internal oxidation, and perforation resulted in coating failure. Zr ions were released to the coating surface to form flocculent ZrO₂ and ZrO₂ clusters due to the strong diffusion and dissolution tendency of α-Zr in the Zry-4 substrate. Considering that the primary water of Pressurized Water Reactors usually contains 2.3 ppm Li, the stability of ZrO₂ makes it a candidate fuel cladding coating material. Under unfavorable conditions with high Li concentrations, more boric acid should be added to alleviate caustic corrosion of the ZrO₂ coating once it is used. This work can provide references for understanding the service behavior of nuclear coatings under variable water chemistry conditions and promote the in-pile application of ZrO₂ coatings. Keywords: ZrO₂ coating, Zry-4, corrosion behavior, failure mechanism, LiOH concentration
Procedia PDF Downloads 85
526 The French Ekang Ethnographic Dictionary. The Quantum Approach
Authors: Henda Gnakate Biba, Ndassa Mouafon Issa
Abstract:
Dictionaries modeled on the Western model [tonic accent languages] are not suitable for and do not account phonologically for tonal languages, which is why the [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows a non-speaker of the language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing'. Each word in the French dictionary finds its corresponding word in the ekaη language, and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original, and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When you apply this theory to any text of a folk song in a world tone language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance but also recover the exact speech of that language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music.
The experimentation confirming the theorization led to a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect data extracted from his mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody. Keywords: music, language, entanglement, science, research
Procedia PDF Downloads 69
525 Modeling and Simulating Productivity Loss Due to Project Changes
Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier
Abstract:
The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications, which are potential causes for claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of productivity losses due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking mode in an attempt to respect the completion date. But the acceleration of project execution and the resulting rework can entail significant costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. Direct costs are easily quantifiable, as opposed to indirect costs, which are rarely taken into account during the calculation of the cost of an engineering change or contract modification, even though several research projects have addressed this subject. The proposed models, however, have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore the resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity that occurs when a project change arises.
Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run in order to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, the presence of a large number of activities leads to a much lower productivity loss than a small number of activities. Productivity declines about 25 percent faster for 30-job projects than for 120-job projects. The moment of occurrence of a change also shows a significant impact on productivity. Indeed, the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also impacts the productivity of a project when a change is implemented: there is a higher loss of productivity when the amount of resources is restricted. Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation
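The direction of the effects reported above (an earlier change lowers overall productivity because more remaining activities must be accelerated) can be illustrated with a toy model. This is a minimal sketch under invented assumptions, not the authors' simulation: activities after the change point carry a fixed rework share performed in overtime, and each overtime hour is assumed to deliver only 80% of a regular hour's output.

```python
import random

def average_productivity(n_jobs, change_time_frac, rework_share=0.25,
                         overtime_efficiency=0.8, trials=200, seed=1):
    """Toy model: jobs after the change point carry extra rework done in
    overtime, where an overtime hour delivers only `overtime_efficiency`
    of a regular hour. Returns mean (work content / hours charged)."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(trials):
        change_at = int(n_jobs * change_time_frac)
        total_work = total_hours = 0.0
        for job in range(n_jobs):
            work = rng.uniform(0.8, 1.2)  # nominal work content of the job
            total_work += work
            # Accelerated jobs need extra overtime hours for the rework
            extra = (rework_share * work / overtime_efficiency
                     if job >= change_at else 0.0)
            total_hours += work + extra
        ratios.append(total_work / total_hours)
    return sum(ratios) / trials

# An early change accelerates more jobs, so productivity drops further
early = average_productivity(120, change_time_frac=0.1)
late = average_productivity(120, change_time_frac=0.7)
print(f"early change: {early:.3f}, late change: {late:.3f}")
```

Even this crude model reproduces the qualitative finding that the timing of the change, via the number of activities it forces into accelerated execution, drives the productivity loss.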
Procedia PDF Downloads 238