Search results for: Large Eddy Simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11535


1335 Bond Strength of Nano Silica Concrete Subjected to Corrosive Environments

Authors: Muhammad S. El-Feky, Mohamed I. Serag, Ahmed M. Yasien, Hala Elkady

Abstract:

Reinforced concrete requires steel bars to provide the tensile strength needed in structural concrete. However, when steel bars corrode, bond between the concrete and the bars is lost due to the formation of rust on the bar surfaces. Permeability is a fundamental property with respect to the durability of concrete, as it represents the ease with which water and other fluids can move through the material, transporting corrosive agents. Nanotechnology is among the most active research areas and spans various disciplines, including construction materials. Its application to the corrosion protection of metals has lately gained momentum, as nano-scale particles possess distinctive physical, chemical and physicochemical properties that may enhance corrosion protection compared with larger-size materials. The presented research aims to study the bond performance of concrete containing a relatively high volume of nano silica (up to 4.5%) exposed to corrosive conditions. This was studied extensively through tensile and bond strengths as well as the permeability of nano silica concrete. In addition, microstructural analysis was performed to evaluate the effect of nano silica on the properties of concrete at both the micro and nano levels. The results revealed that the addition of nano silica decreased the permeability of the concrete mixes significantly, reaching about 50% of the control mix at 4.5% nano silica. As for corrosion resistance, the nano silica concrete showed comparatively higher resistance than ordinary concrete. Increasing the nano silica percentage significantly increased the critical time corresponding to a metal loss of 50 μm, which usually corresponds to the first concrete cracking due to reinforcement corrosion, reaching about 49 years instead of the 40 years of normal concrete.
Finally, increasing the nano silica percentage significantly increased the residual bond strength of concrete after exposure to the corrosive environment. After exposure, pullout behavior was observed for the bars embedded in all of the mixes instead of the splitting behavior observed before corrosion. Adding 4.5% nano silica increased the residual bond strength to 79%, compared with only 27% for the control mix (0%W), relative to the values before exposure to the corrosive environment. From the conducted study, we conclude that nano silica is a significant pore-blocking material.

Keywords: bond strength, concrete, corrosion resistance, nano silica, permeability

Procedia PDF Downloads 311
1334 The Effect of Finding and Development Costs and Gas Price on Basins in the Barnett Shale

Authors: Michael Kenomore, Mohamed Hassan, Amjad Shah, Hom Dhakal

Abstract:

Shale gas reservoirs have grown in importance relative to shale oil reservoirs since 2009, and given the current state of the oil market, understanding the technical and economic performance of shale gas reservoirs is important. Using the Barnett shale as a case study, an economic model was developed to quantify the effect of finding and development costs and gas prices on the basins in the Barnett shale, using net present value as the evaluation parameter. A rate of return of 20% and a payback period of 60 months or less were used as the investment hurdle in the model. The Barnett was split into four basins (Strawn Basin, Ouachita Folded Belt, Fort Worth Syncline and Bend Arch Basin), with analysis conducted on each basin to provide a holistic outlook. The dataset consisted only of horizontal wells that started production between 2008 and 2015, with 1,835 wells from the Strawn Basin, 137 wells from the Ouachita Folded Belt, 55 wells from the Bend Arch Basin and 724 wells from the Fort Worth Syncline. The data were analyzed initially in Microsoft Excel to determine the estimated ultimate recovery (EUR). The range of EUR for each basin was loaded into the Palisade Risk software, and a lognormal distribution, typical of Barnett shale wells, was fitted to the dataset. Monte Carlo simulation was then carried out over 1,000 iterations to obtain a cumulative distribution plot showing the probabilistic distribution of EUR for each basin. From this plot, the P10, P50 and P90 EUR values for each basin were used in the economic model. Gas production from an individual well with an EUR similar to the calculated EUR was chosen and rescaled to fit the calculated EUR values for each basin at the respective percentiles, i.e., P10, P50 and P90.
The rescaled production was entered into the economic model to determine the effect of finding and development cost and gas price on the net present value (10% discount rate per year), and to identify the scenarios that satisfied the proposed investment hurdle. The finding and development costs used in this paper (assumed to consist only of drilling and completion costs) were £1 million, £2 million and £4 million, while the gas price was varied from $2/MCF to $13/MCF based on Henry Hub spot prices from 2008 to 2015. Among the major findings were that wells in the Bend Arch Basin were the least economic, that higher gas prices are needed in basins containing non-core counties, and that 90% of the Barnett shale wells were not economic at any finding and development cost, irrespective of the gas price, in all the basins. This study helps to determine the percentage of wells that are economic over different ranges of costs and gas prices, the basins that are most economic, and the wells that satisfy the investment hurdle.
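The probabilistic EUR-to-NPV workflow described above can be sketched in a few lines. The lognormal parameters, decline profile, gas price and finding and development cost below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative lognormal EUR distribution (BCF); parameters are assumptions.
eur_samples = rng.lognormal(mean=np.log(1.5), sigma=0.8, size=1000)

# Oil & gas convention: P90 is exceeded 90% of the time (10th percentile).
p90, p50, p10 = np.percentile(eur_samples, [10, 50, 90])

def npv(eur_bcf, gas_price_per_mcf, fd_cost, discount_rate=0.10, years=20):
    """NPV of a single well: simple exponential decline scaled to the EUR."""
    t = np.arange(1, years + 1)
    decline = np.exp(-0.3 * t)
    production_mcf = eur_bcf * 1e6 * decline / decline.sum()  # BCF -> MCF
    cash_flows = production_mcf * gas_price_per_mcf
    discounted = cash_flows / (1 + discount_rate) ** t
    return discounted.sum() - fd_cost

# Example: P50 well at $4/MCF with an assumed $2.6M (~£2M) F&D cost.
print(round(npv(p50, 4.0, 2.6e6)))
```

Sweeping `gas_price_per_mcf` over $2-$13/MCF and `fd_cost` over the three cost levels then reproduces the kind of scenario grid used to test the investment hurdle.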

Keywords: shale gas, Barnett shale, unconventional gas, estimated ultimate recovery

Procedia PDF Downloads 306
1333 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System

Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal

Abstract:

The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage after an initial free-fall phase (where the microgravity effect is generated) using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and given the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the S1091 airfoil has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained from CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and obtain accurate approximations of Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle body to visualize the variation of the coefficients during the simulation process.
Employing response surface methodology as a statistical approximation, the case study is parametrized with the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), Design Points (DP) are generated so that Cd and Fd can be estimated at each DP. Applying a 2nd-degree polynomial approximation, the drag coefficients for every AoA were determined. Using these values, the terminal speed at each position is calculated for a specific Cd. Additionally, the distance required to reach terminal velocity at each AoA is calculated, so the minimum distance for the entire deceleration stage without compromising the payload can be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag is comparable to that generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it could be used for several missions, allowing repeatability of microgravity experiments.
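The terminal-speed step above follows from the force balance where drag equals weight. A minimal sketch, assuming an illustrative vehicle mass and reference area (the abstract does not report them), with the abstract's Cd max of 1.18:

```python
import math

def terminal_speed(mass_kg, cd, area_m2, rho=1.225, g=9.81):
    """Terminal speed where drag equals weight: v = sqrt(2*m*g / (rho*A*Cd))."""
    return math.sqrt(2 * mass_kg * g / (rho * area_m2 * cd))

# Illustrative values only: a 10 kg vehicle with a 0.5 m^2 reference area.
v_low = terminal_speed(10.0, 0.2, 0.5)    # low-AoA, efficient configuration
v_high = terminal_speed(10.0, 1.18, 0.5)  # Cd max reported in the abstract

print(round(v_low, 1), round(v_high, 1))
```

Increasing Cd by raising the AoA sharply lowers the terminal speed, which is the braking mechanism the abstract describes.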

Keywords: microgravity effect, response surface, terminal speed, unmanned system

Procedia PDF Downloads 177
1332 Regional Anesthesia in Carotid Surgery: A Single Center Experience

Authors: Daniel Thompson, Muhammad Peerbux, Sophie Cerutti, Hansraj Riteesh Bookun

Abstract:

Patients with carotid stenosis, which may be asymptomatic or symptomatic in the form of transient ischaemic attack (TIA), amaurosis fugax, or stroke, often require an endarterectomy to reduce stroke risk. Risks of this procedure include stroke, death, myocardial infarction, and cranial nerve damage. Carotid endarterectomy is most commonly performed under general anaesthesia; however, it can also be undertaken with a regional anaesthetic approach, which our major tertiary centre generally uses. We completed a cross-sectional analysis of all cases of carotid endarterectomy performed under regional anaesthesia across a 10-year period between January 2010 and March 2020 at our institution. 350 patients were included in this descriptive analysis, and patient demographics, indications for surgery, procedural details, length of surgery, and complications were collected. Data were cross-tabulated and presented in frequency tables to describe these categorical variables. 263 of the 350 patients were male, with a mean age of 71 ± 9. 172 patients had a history of ischaemic heart disease, 104 had diabetes mellitus, 318 had hypertension, and 17 had chronic kidney disease greater than stage 3. 13.1% (46 patients) were current smokers, and the majority (63%) were ex-smokers. Carotid endarterectomy was most commonly performed conventionally with patch arterioplasty (96%, 337 patients). The most common indication was TIA or stroke (64% of patients); 18.9% were classified as asymptomatic, and 13.7% had amaurosis fugax. There were few general complications: 9 wound complications/infections, 7 postoperative haematomas requiring return to theatre, 3 myocardial infarctions, 3 arrhythmias, 1 exacerbation of congestive heart failure, 1 chest infection, and 1 urinary tract infection.
Complications specific to carotid endarterectomy included 3 strokes, 1 postoperative TIA, and 1 cerebral bleed. There were no deaths in our cohort. This analysis of a large cohort of patients from a major tertiary centre who underwent carotid endarterectomy under regional anaesthesia indicates the safety of this approach. Regional anaesthesia holds the promise of fewer respiratory and cardiac events than general anaesthesia in this vulnerable patient group, and our findings call for comparative research between regional and general anaesthesia in carotid surgery.

Keywords: anaesthesia, carotid endarterectomy, stroke, carotid stenosis

Procedia PDF Downloads 124
1331 Durability of Functionally Graded Concrete

Authors: Prasanna Kumar Acharya, Mausam Kumari Yadav

Abstract:

Cement concrete has emerged as the most consumed construction material and has dominated all other construction materials because of its versatility. Apart from its numerous advantages, it has a disadvantage concerning durability. Large structures constructed with cement concrete, consuming huge quantities of natural materials, remain in serviceable condition for only 5-7 decades, while structures made with stone stand for many centuries. The short life span of structures affects not only the economy but also the ecology. As such, improving the durability of cement concrete is a global concern, and scientists around the globe are working toward this goal. Functionally graded concrete (FGC) is an exciting development. In contrast to conventional concrete, FGC exhibits different characteristics through its thickness, which enables it to conform to particular structural specifications. The purpose of FGC is to improve the performance and longevity of conventional concrete structures using advanced building materials. This gradation is produced by carefully distributing various kinds and amounts of reinforcements, additives, mix designs and/or aggregates throughout the concrete matrix. A key component of the performance of functionally graded concrete is its durability, which governs the material's capacity to tolerate aggressive environmental influences and load-bearing conditions. This paper reports on the durability of FGC made using Portland slag cement (PSC). For this purpose, control concretes (CC) of grades M20, M30 and M40 were designed, and single-layered samples were prepared from each grade. Further, using combinations of M20 + M30, M30 + M40 and M40 + M20, double-layered concrete samples in a depth ratio of 1:1 were prepared; these are herein called FGC samples.
The efficiency of the FGC samples was compared with that of the higher-grade parent concrete in terms of compressive strength, water absorption, sorptivity, acid resistance, sulphate resistance, chloride resistance and abrasion resistance. The properties were checked at the ages of 28 and 91 days. Apart from strength and durability parameters, the microstructures of the CC and FGC were studied by X-ray diffraction, scanning electron microscopy and energy-dispersive X-ray analysis. The results revealed an increase in the efficiency of concrete, evaluated in terms of strength and durability, when it is made functionally graded using a layered technology with different grades of concrete in the layers. The results may help to enhance the efficiency and durability of structural concrete.
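The sorptivity test mentioned above is usually reduced from cumulative absorption versus square-root-of-time data, with the slope of the linear fit taken as the sorptivity (in the style of ASTM C1585). A minimal sketch with assumed readings, not the paper's data:

```python
import numpy as np

# Assumed absorption readings: cumulative absorption i (mm) at elapsed times (s).
t_seconds = np.array([60, 300, 600, 1200, 1800, 3600])
i_mm = np.array([0.12, 0.26, 0.37, 0.52, 0.64, 0.90])

# Sorptivity is the slope of i against sqrt(t).
sqrt_t = np.sqrt(t_seconds)
sorptivity, intercept = np.polyfit(sqrt_t, i_mm, 1)
print(round(sorptivity, 4))  # units: mm/s^0.5
```

A lower fitted slope for the FGC samples relative to the control mix would indicate the reduced water ingress the study evaluates.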

Keywords: fresh on compacted, functionally graded concrete, acid, chloride, sulphate test, sorptivity, abrasion, water absorption test

Procedia PDF Downloads 26
1330 Identification of 332G>A Polymorphism in Exon 3 of the Leptin Gene and Its Partial Effects on Body Size and Tail Dimension in Sanjabi Sheep

Authors: Roya Bakhtiar, Alireza Abdolmohammadi, Hadi Hajarian, Zahra Nikousefat, Davood Kalantar-Neyestanaki

Abstract:

The objective of the present study was to determine polymorphism in the leptin gene (332G>A) and its association with biometric traits in Sanjabi sheep. For this purpose, blood samples were taken from 96 rams, and tail length, tail width, tail circumference, body length, body width, and height were recorded. PCR was performed using specific primers to amplify a 463 bp fragment including exon 3 of the leptin gene, and the PCR products were digested with the CaiI restriction enzyme. The 332G>A substitution (at the 332nd nucleotide of exon 3 of the leptin gene), which causes an amino acid change from Arg to Gln, was detected by the CaiI (CAGNNNCTG) endonuclease, as the endonuclease cannot cut this region if a G nucleotide is located at this position. Three genotypes, GG (463 bp), GA (463, 360 and 103 bp) and AA (360 and 103 bp), were identified after digestion. The estimated frequencies of the GG, GA, and AA genotypes at the 332G>A locus were 0.68, 0.29 and 0.03, and the allele frequencies were 0.18 and 0.82 for A and G, respectively. In the current study, a chi-square test indicated that the 332G>A position did not deviate from Hardy-Weinberg (HW) equilibrium. The most likely reason for the HW equilibrium is that the samples used in this study belong to three large local herds with a traditional breeding system, with random mating and without selection. The Shannon index was calculated and indicates an average level of genetic variation in Sanjabi rams. Heterozygosity estimated by Nei's index indicated that the genetic diversity of this mutation in the leptin gene is moderate. The leptin gene polymorphism at 332G>A had a significant effect on body length (P<0.05), and individuals with the GA genotype had significantly greater body length than other individuals. Although animals with the GA genotype also had greater body width, this difference was not statistically significant (P>0.05). This non-synonymous SNP results in an amino acid change at codon position 111 (R/Q).
As leptin activity is localized, at least in part, in domains between amino acid residues 106-140, it is speculated that the detected SNP at position 332 may affect the activity of leptin and may lead to different biological functions. Based on our results, given the significant effect of the leptin gene polymorphism on body size traits, this gene may be used as a candidate gene for improving these traits.
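The Hardy-Weinberg chi-square check reported above can be reproduced from the genotype frequencies in the abstract. The counts below are reconstructed approximations from the reported frequencies (GG 0.68, GA 0.29, AA 0.03 among 96 rams):

```python
# Reconstructed genotype counts among 96 rams (approximate).
n = 96
obs = {"GG": 65, "GA": 28, "AA": 3}

# Allele frequencies from genotype counts.
p_G = (2 * obs["GG"] + obs["GA"]) / (2 * n)
p_A = 1 - p_G

# Expected Hardy-Weinberg counts: p^2, 2pq, q^2 (times n).
exp = {"GG": p_G**2 * n, "GA": 2 * p_G * p_A * n, "AA": p_A**2 * n}

# Chi-square statistic, compared to the 3.84 critical value (df=1, alpha=0.05).
chi2 = sum((obs[g] - exp[g]) ** 2 / exp[g] for g in obs)
print(round(p_G, 2), round(chi2, 3))
```

The G allele frequency recovers the reported 0.82, and the small chi-square value is consistent with the stated lack of deviation from HW equilibrium.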

Keywords: body size, Leptin gene, PCR-RFLP, Sanjabi sheep

Procedia PDF Downloads 346
1329 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. It is therefore important to ensure the required quality of each individual component through qualification testing specified by standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost to product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to build margin into product reliability, and establishes controls to ensure that product reliability is sustained in mass production. This paper discusses a comprehensive development framework covering the SSD end to end, from design to assembly, in-line inspection and in-line testing, which is able to predict and validate product reliability at the early stages of new product development. During the design stage, the SSD goes through an intense reliability-margin investigation focused on assembly process attributes, process equipment control and in-process metrology, while also taking the forward-looking product roadmap into account. Once these pillars are complete, the next step is to perform process characterization and build a reliability prediction model. Next, for design validation, the reliability prediction, specifically a solder joint simulation, is established. The SSDs are stratified into non-operating and operating tests focused on solder joint reliability and connectivity/component latent failures, with prevention through design intervention and containment through Temperature Cycle Testing (TCT). Some of the SSDs are subjected to physical solder joint analysis, namely Dye and Pry (DP) and cross-section analysis.
The results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, it moves into the monitoring phase, in which the Design for Assembly (DFA) rules are updated. At this stage, the design changes and the process and equipment parameters are under control. Predictable product reliability early in product development enables on-time sample qualification delivery to the customer, optimizes product development validation and development resources, and avoids forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows focus on increasing the product margin, which increases customer confidence in product reliability.
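The TCT containment step above is often sized with a Coffin-Manson acceleration-factor estimate relating the test temperature swing to the field swing. The sketch below uses illustrative values; the temperature ranges and the fatigue exponent are assumptions, not figures from the paper.

```python
def coffin_manson_af(delta_t_test, delta_t_field, exponent=2.0):
    """Coffin-Manson acceleration factor for thermal-cycling solder fatigue:
    AF = (dT_test / dT_field) ** n, where n is an empirical fatigue exponent."""
    return (delta_t_test / delta_t_field) ** exponent

# Example: TCT at -40..+85 C (125 C swing) vs an assumed 40 C field swing, n=2.
af = coffin_manson_af(125, 40)
print(round(af, 2))  # field cycles represented by each test cycle
```

With an acceleration factor like this, a few hundred test cycles can stand in for the thousands of milder thermal cycles an SSD sees over its service life.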

Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control

Procedia PDF Downloads 181
1328 Internet of Things in Higher Education: Implications for Students with Disabilities

Authors: Scott Hollier, Ruchi Permvattana

Abstract:

The purpose of this abstract is to share the findings of a recently completed disability-related Internet of Things (IoT) project undertaken at Curtin University in Australia. The project focused on identifying how IoT could support people with disabilities in achieving their educational outcomes. To achieve this, the research consisted of an analysis of current literature and interviews conducted with students with vision, hearing, mobility and print disabilities. While the research acknowledged that collecting data with IoT is now fairly commonplace, its benefits and applicability still need to be grounded in real-world applications. Furthermore, it is important to consider whether there are sections of our society that may benefit from these developments, and whether those benefits are being fully realised amid the rush by large companies to achieve IoT dominance for their particular product or digital ecosystem. In this context, it is important to consider a group which, to our knowledge, has had little specific mainstream focus in the IoT area: people with disabilities. For people with disabilities, the ability of every device to interact with us and with each other has the potential to yield significant benefits. In terms of engagement, the arrival of smart appliances is already offering benefits such as the ability for a person in a wheelchair to give verbal commands to an IoT-enabled washing machine if the buttons are out of reach, or for a blind person to receive a notification on a smartphone when dinner has finished cooking in an IoT-enabled microwave. With clear benefits of IoT identified for people with disabilities, it is important to also identify the implications for education.
With higher education being a critical pathway for many people with disabilities in finding employment, the question as to whether such technologies can support the educational outcomes of people with disabilities was what ultimately led to this research project. This research will discuss several significant findings that have emerged from the research in relation to how consumer-based IoT can be used in the classroom to support the learning needs of students with disabilities, how industrial-based IoT sensors and actuators can be used to monitor and improve the real-time learning outcomes for the delivery of lectures and student engagement, and a proposed method for students to gain more control over their learning environment. The findings shared in this presentation are likely to have significant implications for the use of IoT in the classroom through the implementation of affordable and accessible IoT solutions and will provide guidance as to how policies can be developed as the implications of both benefits and risks continue to be considered by educators.

Keywords: disability, higher education, internet of things, students

Procedia PDF Downloads 121
1327 A Cognitive Training Program in Learning Disability: A Program Evaluation and Follow-Up Study

Authors: Krisztina Bohacs, Klaudia Markus

Abstract:

To the authors' best knowledge, studies evaluating cognitive programs are scarce, and programs that demonstrate large effect sizes with strong retention results are certainly lacking. The purpose of our study was to investigate the effectiveness of a comprehensive cognitive training program, namely BrainRx. This cognitive rehabilitation program targets and remediates seven core cognitive skills and related systems of sub-skills through repeated engagement in game-like mental procedures delivered one-on-one by a clinician, supplemented by digital training. A large sample of children with learning disability were given pretest and post-test cognitive assessments. The experimental group completed a twenty-week cognitive training program in a BrainRx center. A matched control group received another twenty-week intervention with Feuerstein's Instrumental Enrichment programs; a second matched control group did not receive training. For the pre- and post-tests, we used a general intelligence test to assess IQ and a computer-based test battery for assessing cognition across the lifespan. Multiple regression analyses indicated that the experimental BrainRx treatment group had statistically significantly higher outcomes in attention, working memory, processing speed, logic and reasoning, auditory processing, visual processing and long-term memory than the non-treatment control group, with very large effect sizes. With the exception of logic and reasoning, the BrainRx treatment group realized significantly greater gains in six of these seven cognitive measures than the Feuerstein control group. Our one-year retention measures showed that all of the cognitive training gains were retained above ninety percent, with the greatest retention in visual processing, auditory processing, and logic and reasoning. The BrainRx program may be an effective tool to establish long-term cognitive changes in students with learning disabilities.
Recommendations are made for treatment centers and special education institutions on the cognitive training of students with special needs. The importance of our study is that a targeted, systematic, progressively loaded and intensive brain-training approach may significantly mitigate learning disabilities.
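A standard way to quantify the between-group effect sizes reported above is Cohen's d with a pooled standard deviation. The sketch below uses purely hypothetical gain scores, not the study's data:

```python
import statistics

def cohens_d(treatment, control):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled

# Hypothetical post-minus-pre gain scores for two groups.
gains_treatment = [12, 15, 11, 18, 14, 16]
gains_control = [3, 5, 2, 6, 4, 4]
print(round(cohens_d(gains_treatment, gains_control), 2))
```

By the usual convention, d around 0.8 or above is considered a large effect, which is the threshold the abstract's "very large effect sizes" claim exceeds.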

Keywords: cognitive rehabilitation training, cognitive skills, learning disability, permanent structural cognitive changes

Procedia PDF Downloads 204
1326 An Assessment of Nodulation and Nitrogen Fixation of Lessertia Frutescens Plants Inoculated with Rhizobial Isolates from the Cape Fynbos

Authors: Mokgadi Miranda Hlongwane, Ntebogeng Sharon Mokgalaka, Felix Dapare Dakora

Abstract:

Lessertia (L.) frutescens (syn. Sutherlandia frutescens) is a leguminous medicinal plant indigenous to South Africa. Traditionally, L. frutescens has been used to treat cancer, diabetes, epilepsy, fever, HIV, stomach problems, wounds and other ailments. This legume is endemic to the Cape fynbos, with large populations occurring wild and cultivated in the Cape Floristic Region. Its widespread distribution in the Western Cape, Northern Cape, Eastern Cape and KwaZulu-Natal is linked to its increased use as a phytomedicine in the treatment of various diseases by traditional healers. The frequent harvesting of field plants for use as medicine has made it necessary to undertake studies toward the conservation of Lessertia frutescens. As a legume, this species can form root nodules and fix atmospheric N₂ in symbiosis with soil bacteria called rhizobia. So far, however, few studies (if any) have examined the efficacy and diversity of native bacterial symbionts nodulating L. frutescens in South Africa. The aim of this project was to isolate and characterize L. frutescens-nodulating bacteria from five different locations in the Western Cape Province. This was done by trapping soil rhizobia, using rhizosphere soil suspensions to inoculate L. frutescens seedlings growing in sterilized sand and receiving sterile N-free Hoagland nutrient solution under glasshouse conditions. At 60 days after planting, root nodules were harvested from the L. frutescens plants, surface-sterilized, macerated, streaked on yeast mannitol agar (YMA) plates and incubated at 28 °C for observation of bacterial growth. The majority of isolates were slow growers that took 6-14 days to appear on YMA plates; seven isolates were fast growers, taking 2-4 days. Single-colony cultures of the isolates were assessed for their ability to nodulate L. frutescens as a homologous host under glasshouse conditions.
Of the 92 bacterial isolates tested, 63 elicited nodule formation on L. frutescens. Symbiotic effectiveness varied markedly between and among test isolates. There were also significant (p≤0.005) differences in nodulation, shoot biomass, photosynthetic rates, leaf transpiration and stomatal conductance of L. frutescens plants inoculated with the test isolates, which is an indication of their functional diversity.

Keywords: lessertia frutescens, nodulating, rhizobia, symbiotic effectiveness

Procedia PDF Downloads 198
1325 Toxin-Producing Algae of Nigerian Coast, Gulf of Guinea

Authors: Medina O. Kadiri, Jeffrey U. Ogbebor

Abstract:

Toxin-producing algae are algal species that produce potent toxins, which accumulate in food chains and cause various gastrointestinal and neurological illnesses in humans and other animals. They result in shellfish toxicity and ecosystem alteration, cause fish kills and the mortality of other animals and humans, and compromise product quality and consumer confidence. Animals, including humans, are directly exposed to toxins by absorbing them from the water while swimming, by drinking water containing toxins, or by feeding on contaminated seafood. These algal toxins undergo bioaccumulation, biotransformation, biotransference, and biomagnification through natural food chains and food webs, thereby endangering animals and humans. The Nigerian coast is situated on the Atlantic Ocean in the Gulf of Guinea, one of Africa's five Large Marine Ecosystems (LME), and studies on toxic algae in this ecosystem are generally lacking. Algal samples were collected from eight coastal states and ten locations spanning the Bight of Bonny and the Bight of Benin. A total of 70 species of toxin-producing algae, of great variety, were found in the coastal waters of Nigeria. They included domoic acid-producing forms (ASP), along with saxitoxin-, gonyautoxin- and yessotoxin-producing forms (PSP). Others were okadaic acid-, dinophysistoxin- and palytoxin-producing forms, representatives of DSP; CFP was represented by ciguatoxin-producing forms and NSP by brevetoxin-producing species. Emerging or new toxins comprise gymnodimine-, spirolide-, palytoxin- and prorocentrolide-producing algae. CyanoToxin Poisoning (CTP) was represented by anatoxin-, microcystin-, cylindrospermopsin-, lyngbyatoxin-, nodularin-, aplysiatoxin- and debromoaplysiatoxin-producing species.
The largest group was the saxitoxin-producing species, followed by the microcystin-producing and then the anatoxin-producing species. Gonyautoxin- (PSP), palytoxin- (DSP), emerging-toxin- and cylindrospermopsin-producing species also had substantial representation. Only ciguatoxin-, lyngbyatoxin-, nodularin-, aplysiatoxin- and debromoaplysiatoxin-producing species were represented by one taxon each. The presence of such overwhelming diversity of toxin-producing algae on the Nigerian coast is a source of concern for fisheries, aquaculture, human health, and ecosystem services. Therefore, routine monitoring of toxic and harmful algae is strongly recommended.

Keywords: algal syndromes, Atlantic Ocean, harmful algae, Nigeria

Procedia PDF Downloads 210
1324 The Mediating Role of Positive Psychological Capital in the Relationship between Self-Leadership and Career Maturity among Korean University Students

Authors: Lihyo Sung

Abstract:

Background: Children and teens in Korea experience extreme levels of academic stress. To perform better on the college entrance exam and gain admission to Korea’s most prestigious universities, they devote a significant portion of their early lives to studying. Because of this excessive preparation for entrance exams, students have become accustomed to passive and involuntary engagement. Any student starting university, however, faces new challenges that require more active involvement and self-regulated practice. To tackle this issue, the study investigates the mediating effects of positive psychological capital on the relationship between self-leadership and career maturity among Korean university students. Objectives and Hypotheses: The long-term goal of this study is to offer insights that promote the use of positive psychological interventions in the development and adaptation of career maturity. The current objective is to assess the role of positive psychological capital as a mediator between self-leadership and career maturity among Korean university students. Based on previous research, the hypotheses are: (a) self-leadership will be positively associated with indices of career maturity, and (b) positive psychological capital will partially or fully mediate the relationship between self-leadership and career maturity. Sample Characteristics and Sample Size: Participants in the current study were undergraduate students enrolled in various courses at five large universities in Korea. A total of 181 students participated in the study. Methodology: A quantitative research design was adopted to test the hypotheses proposed in the current study. Using a cross-sectional approach, a self-administered questionnaire was used to collect data on indices of positive psychological capital, self-leadership, and career maturity.
The data were analyzed using SPSS for Windows version 22.0, employing descriptive statistics, Cronbach's alpha, Pearson correlations, multiple regression, and path analysis. Results: Findings showed that positive psychological capital fully mediated the relationship between self-leadership and career maturity. Self-leadership significantly impacted positive psychological capital and career maturity, respectively. Scientific Contribution: The results of the current study provide useful insights into the role of psychological strengths such as positive psychological capital in improving self-leadership and career maturity. Institutions can assist in increasing positive psychological capital by creating positive experiences for undergraduate students, such as opportunities for coaching and mentoring.
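The mediation analysis described above can be illustrated with a minimal ordinary-least-squares sketch. The variable names and scores below are hypothetical stand-ins for the study's self-leadership (X), psychological capital (M), and career maturity (Y) measures; real mediation testing would add significance tests or bootstrapping.

```python
def mean(v):
    return sum(v) / len(v)

def slope(x, y):
    # OLS slope of y ~ x (with intercept)
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

def two_predictor_ols(x, m, y):
    # coefficients of y ~ x + m, solved from the 2x2 normal equations on centred data
    mx, mm, my = mean(x), mean(m), mean(y)
    xc = [v - mx for v in x]
    mc = [v - mm for v in m]
    yc = [v - my for v in y]
    sxx = sum(v * v for v in xc)
    smm = sum(v * v for v in mc)
    sxm = sum(a * b for a, b in zip(xc, mc))
    sxy = sum(a * b for a, b in zip(xc, yc))
    smy = sum(a * b for a, b in zip(mc, yc))
    det = sxx * smm - sxm * sxm
    return (smm * sxy - sxm * smy) / det, (sxx * smy - sxm * sxy) / det

# hypothetical scores: X = self-leadership, M = psychological capital, Y = career maturity
X = list(range(12))
noise_m = [0.3, -0.5, 0.2, 0.1, -0.4, 0.6, -0.2, 0.0, 0.5, -0.3, 0.4, -0.1]
noise_y = [-0.2, 0.4, 0.1, -0.3, 0.2, -0.1, 0.3, -0.4, 0.0, 0.2, -0.2, 0.1]
M = [2.0 * x + e for x, e in zip(X, noise_m)]
Y = [0.5 * x + 3.0 * m + e for x, m, e in zip(X, M, noise_y)]

c_total = slope(X, Y)                      # total effect of X on Y
a = slope(X, M)                            # effect of X on the mediator
c_direct, b = two_predictor_ols(X, M, Y)   # direct effect of X, and effect of M
indirect = a * b                           # mediated (indirect) effect
```

The OLS identity total = direct + indirect holds exactly on the same sample, which is the decomposition a mediation study reports.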

Keywords: career maturity, mediating role, positive psychological capital, self-leadership

Procedia PDF Downloads 131
1323 Innovating Assessment: Exploring AI-Driven Scoring for Language Tests in Pre-Service Education Admissions

Authors: Lucie Bartosova

Abstract:

The rapid advancements in generative artificial intelligence (AI) have introduced transformative possibilities in education, particularly in assessment methodologies. This work provides an overview of the current state of the literature on AI-scoring methodologies for evaluating student-written responses. The focus is on how these innovations can be leveraged within large-scale assessments to address resource constraints such as limited assessors, time, and budget. Drawing from an initiative tied to a language test used for admitting candidates into a pre-service education program in the Faculty of Education at an Ontario university, the review explores the practical and ethical implications of integrating AI-driven tools into assessment processes. These tools are designed to automate the evaluation of learners’ written compositions, provide performance feedback, and support grading procedures. By synthesizing findings from recent research, the review highlights the effectiveness, reliability, and potential biases of AI in scoring, alongside considerations for transparency and fairness. This work emphasizes the dual role of generative AI as both a practical solution for scaling assessments and a subject of critical scrutiny to ensure its responsible implementation. The proposed integration of AI-scoring methodologies in our language test underscores the need to balance innovation with accountability, ensuring that AI tools enhance, rather than compromise, educational equity and rigor. Objectives: To determine which generative AI model is most capable of evaluating written responses for university assessments based on specific criteria, and to investigate potential biases within AI models to ensure fair assessments. Methodology: Evaluating generative AI models to determine their performance in assessing written responses against specific criteria.
Collecting responses from previous assessments and annotating them with expert feedback to train and validate the AI models. Main contributions: Introducing a tailored AI model to assess written responses on language tests, and offering a scalable and replicable model that informs broader applications of AI in educational assessments, contributing to policy-making and institutional best practices.
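One common way to validate such an AI scorer against expert raters (not necessarily the metric used in this initiative) is quadratic weighted kappa, which penalizes disagreements by the squared distance between score bands. A minimal sketch:

```python
def quadratic_weighted_kappa(rater_a, rater_b, n_classes):
    # agreement between two integer score vectors in {0, ..., n_classes-1};
    # 1.0 means perfect agreement, 0.0 means chance-level agreement
    n = len(rater_a)
    observed = [[0] * n_classes for _ in range(n_classes)]
    for x, y in zip(rater_a, rater_b):
        observed[x][y] += 1
    hist_a = [rater_a.count(k) for k in range(n_classes)]
    hist_b = [rater_b.count(k) for k in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2   # quadratic disagreement weight
            expected = hist_a[i] * hist_b[j] / n      # chance agreement
            num += w * observed[i][j]
            den += w * expected
    return 1.0 - num / den

# hypothetical score bands (0-2) from an expert and an AI scorer
kappa = quadratic_weighted_kappa([0, 1, 2, 1, 0], [0, 1, 2, 1, 0], 3)
```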

Keywords: artificial intelligence, assessment practices, student written performance, automated essay scoring, language proficiency

Procedia PDF Downloads 12
1322 Optimizing Cell Culture Performance in an Ambr15 Microbioreactor Using Dynamic Flux Balance and Computational Fluid Dynamic Modelling

Authors: William Kelly, Sorelle Veigne, Xianhua Li, Zuyi Huang, Shyamsundar Subramanian, Eugene Schaefer

Abstract:

The ambr15™ bioreactor is a single-use microbioreactor for cell line development and process optimization. The ambr system offers fully automatic liquid handling with the possibility of fed-batch operation and automatic control of pH and oxygen delivery. With operating conditions for large-scale biopharmaceutical production properly scaled down, microbioreactors such as the ambr15™ can potentially be used to predict the effect of process changes such as modified media or different cell lines. In this study, gassing rates and dilution rates were varied for a semi-continuous cell culture system in the ambr15™ bioreactor. The corresponding changes in metabolite production and consumption, as well as cell growth rate and therapeutic protein production, were measured. Conditions were identified in the ambr15™ bioreactor that produced the metabolic shifts and the specific metabolic and protein production rates also seen in the corresponding larger (5 liter) scale perfusion process. A Dynamic Flux Balance (DFB) model was employed to understand and predict the metabolic changes observed. The DFB model predicted the trends observed experimentally, including lower specific glucose consumption when CO₂ was maintained at higher levels (i.e., 100 mmHg) in the broth. A Computational Fluid Dynamic (CFD) model of the ambr15™ was also developed to understand the transfer of O₂ and CO₂ to the liquid. This CFD model predicted gas-liquid flow in the bioreactor using the ANSYS software. The two-phase flow equations were solved via an Eulerian method, with population balance equations tracking the size of the gas bubbles resulting from breakage and coalescence. Reasonable results were obtained: the carbon dioxide mass transfer coefficient (kLa) and the gas holdup increased with higher gas flow rate. Volume-averaged kLa values at 500 RPM increased as the gas flow rate was doubled and matched experimentally determined values.
These results form a solid basis for optimizing the ambr15™, using both CFD and FBA modelling approaches together, for use in microscale simulations of larger scale cell culture processes.
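The dynamic flux balance idea (integrating biomass and substrate forward in time while an inner model supplies the uptake flux at each step) can be sketched as follows. The Monod uptake law and all parameter values here are illustrative assumptions standing in for the FBA sub-problem, not the authors' model:

```python
def toy_dfba(x0, s0, vmax=0.5, km=0.1, yield_x=0.4, dt=0.1, steps=200):
    # toy dynamic flux balance: at each step an "inner" uptake flux is chosen
    # (a simple Monod law stands in for the FBA sub-problem), then biomass x
    # and substrate s are stepped forward with explicit Euler integration
    x, s = x0, s0
    for _ in range(steps):
        v = vmax * s / (km + s)   # specific glucose uptake flux
        mu = yield_x * v          # growth rate implied by the chosen flux
        x += dt * mu * x          # biomass grows
        s = max(0.0, s - dt * v * x)  # substrate is consumed, clamped at zero
    return x, s

# hypothetical initial biomass (g/L) and glucose (mM) concentrations
x_final, s_final = toy_dfba(x0=0.1, s0=10.0)
```

A full DFB model would replace the Monod line with a linear-programming flux balance over the metabolic network at each time step.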

Keywords: cell culture, computational fluid dynamics, dynamic flux balance analysis, microbioreactor

Procedia PDF Downloads 286
1321 Measuring Human Perception and Negative Elements of Public Space Quality Using Deep Learning: A Case Study of Area within the Inner Road of Tianjin City

Authors: Jiaxin Shi, Kaifeng Hao, Qingfan An, Zeng Peng

Abstract:

Due to a lack of data sources and data processing techniques, it has always been difficult to quantify public space quality, which includes urban construction quality and how it is perceived by people, especially in large urban areas. This study proposes a quantitative research method based on considerations of the emotional and physical health effects of the built environment. It highlights the low quality of public areas in Tianjin, China, where there are many negative elements. Deep learning technology is then used to measure how people perceive urban areas. First, this work proposes a deep learning model that simulates how people perceive the quality of urban construction. Second, we perform semantic segmentation on street images to identify visual elements influencing scene perception. Finally, this study correlates the scene perception score with the proportion of visual elements to determine the environmental elements that influence scene perception. Using a small-scale labeled Tianjin street view dataset and transfer learning, this study trains five negative-space discriminant models in order to explore the distribution of negative space along urban streets and its potential for quality improvement. It then uses all Tianjin street-level imagery to make predictions and calculate the proportion of negative space. Visualizing the spatial distribution of negative space along the Tianjin Inner Ring Road reveals that the negative elements are mainly found close to the five key districts. The map of Tianjin was combined with the experimental data for visual analysis. Based on the emotional assessment, the distribution of negative elements, and the orientation of street guidelines, we propose guidance and design strategies for addressing negative phenomena in Tianjin street space along the two dimensions of perception and physical substance.
This work demonstrates how deep learning techniques can be used to understand how people appreciate high-quality urban construction, and it contributes to both theory and practice in urban planning. It illustrates the connection between human perception and the actual physical public space environment, informing urban interventions by researchers and planners.
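The final correlation step, relating per-image visual-element proportions to perception scores, might look like the following sketch; the element ratios and scores are hypothetical illustrations, not the study's data:

```python
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length samples
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical per-image data: proportion of pixels segmented as blank wall,
# paired with a model-predicted perception score for the same street image
wall_ratio = [0.05, 0.12, 0.20, 0.31, 0.44, 0.52]
perception = [0.80, 0.71, 0.60, 0.48, 0.35, 0.30]
r = pearson(wall_ratio, perception)  # strongly negative: more blank wall, worse perception
```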

Keywords: human perception, public space quality, deep learning, negative elements, street images

Procedia PDF Downloads 121
1320 Influence of Freeze-Thaw Cycles on Protein Integrity and Quality of Chicken Meat

Authors: Nafees Ahmed, Nur Izyani Kamaruzman, Saralla Nathan, Mohd Ezharul Hoque Chowdhury, Anuar Zaini Md Zain, Iekhsan Othman, Sharifah Binti Syed Hassan

Abstract:

Meat quality is always subject to consumer scrutiny when purchasing from retail markets, particularly over frozen meat mislabeled as fresh. Various physiological and biochemical changes influence the quality of meat. As a major component of muscle tissue, proteins play a major role in muscle foods. In the meat industry, freezing is the most common form of storage for meat products. Repeated cycles of freezing and thawing are common in restaurants, kitchens, and retail outlets and can also occur during transportation or storage. Temperature fluctuation is responsible for physical, chemical, and biochemical changes. Repeated freeze-thaw cycles degrade the quality of meat by stimulating lipid oxidation and surface discoloration. The shelf life of meat is usually determined by its appearance, texture, color, flavor, microbial activity, and nutritive value and is influenced by frozen storage and subsequent thawing. The main deterioration of frozen meat during storage is due to protein degradation. Due to the large price differences between fresh and frozen-thawed meat, it is of great interest to consumers to know whether a meat product is truly fresh or not. Researchers have mainly focused on the reduction of moisture loss due to freezing and thawing cycles of meat. The water holding capacity (WHC) of muscle proteins and reduced water content are key quality parameters of meat that ultimately change color and texture. However, there has been limited progress towards understanding the actual mechanisms behind the meat quality changes under freeze-thaw cycles. Furthermore, the effect of the freeze-thaw process on the integrity of proteins is often ignored. In this paper, we have studied the effect of freeze-thawing on physicochemical changes in chicken meat protein. We assessed the quality of meat by pH, spectroscopic measurements, and Western blot. Our results showed that an increase in freeze-thaw cycles causes changes in pH. Measurements of absorbance (UV-visible and IR) indicated the degradation of proteins.
The expression of various proteins (CREB, AKT, MAPK, GAPDH, and their phosphorylated forms) was examined using Western blot. These results indicate that repeated freeze-thaw cycles are responsible for the deterioration of protein, thus decreasing the nutritive value of the meat and raising concerns about the use of such products under Islamic Sharia.

Keywords: chicken meat, freeze-thaw, halal, protein, western blot

Procedia PDF Downloads 414
1319 Seasonal Variability of M₂ Internal Tides Energetics in the Western Bay of Bengal

Authors: A. D. Rao, Sachiko Mohanty

Abstract:

Internal Waves (IWs) are generated by the flow of the barotropic tide over rapidly varying and steep topographic features such as the continental shelf slope, subsurface ridges, and seamounts. IWs of tidal frequency are generally known as internal tides. These waves have a significant influence on the vertical density structure and hence cause mixing in the region. Such waves are also important for submarine acoustics, underwater navigation, offshore structures, ocean mixing, and biogeochemical processes over the shelf-slope region. The seasonal variability of internal tides in the Bay of Bengal, with special emphasis on their energetics, is examined using the three-dimensional MITgcm model. The numerical simulations are performed for different periods covering August-September 2013, November-December 2013, and March-April 2014, representing the monsoon, post-monsoon, and pre-monsoon seasons, respectively, during which high-temporal-resolution in-situ data sets are available. The model is initially validated through spectral estimates of density and the baroclinic velocities. From the estimates, it is inferred that internal tides associated with the semi-diurnal frequency are more dominant in both observations and model simulations for November-December and March-April. In August, however, the estimate is found to be maximum at the near-inertial frequency at all available depths. The observed vertical structure of the baroclinic velocities and their magnitude are well captured by the model. EOF analysis is performed to decompose the zonal and meridional baroclinic tidal currents into different vertical modes. The analysis suggests that about 70-80% of the total variance comes from the Mode-1 semi-diurnal internal tide in both the observations and the model simulations. The first three modes are sufficient to describe most of the variability of the semi-diurnal internal tides, as they represent 90-95% of the total variance in all seasons.
The phase speed, group speed, and wavelength are found to be maximum in the post-monsoon season compared to the other two seasons. The model simulations suggest that the internal tide is generated all along the shelf-slope regions and propagates away from the generation sites in all months. The model-simulated energy dissipation rate indicates that its maximum occurs at the generation sites, and hence local mixing due to the internal tide is greatest at these sites. The spatial distribution of available potential energy is found to be maximum in November (20 kg/m²) in the northern Bay of Bengal and minimum in August (14 kg/m²). Detailed energy budget calculations are made for all seasons and the results analysed.
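The EOF variance decomposition can be sketched for the simplest two-level case, where mode fractions follow from the eigenvalues of the 2x2 covariance matrix (the actual analysis uses full-depth velocity profiles); the record below is synthetic:

```python
import math

def eof_variance_fractions(u1, u2):
    # EOF of a two-level current record: eigenvalues of the 2x2 covariance
    # matrix [[a, b], [b, c]] give the variance captured by each mode
    n = len(u1)
    m1, m2 = sum(u1) / n, sum(u2) / n
    a = sum((v - m1) ** 2 for v in u1) / n              # variance at level 1
    c = sum((v - m2) ** 2 for v in u2) / n              # variance at level 2
    b = sum((x - m1) * (y - m2) for x, y in zip(u1, u2)) / n  # covariance
    disc = math.sqrt((a - c) ** 2 + 4 * b * b)
    lam1, lam2 = (a + c + disc) / 2, (a + c - disc) / 2  # mode variances
    total = lam1 + lam2
    return lam1 / total, lam2 / total

# synthetic record: the two levels move in perfect proportion, so one
# vertical mode should capture essentially all of the variance
u_top = [math.sin(0.2 * k) for k in range(200)]
u_bottom = [0.5 * v for v in u_top]
frac1, frac2 = eof_variance_fractions(u_top, u_bottom)
```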

Keywords: available potential energy, baroclinic energy flux, internal tides, Bay of Bengal

Procedia PDF Downloads 173
1318 High Throughput Virtual Screening against ns3 Helicase of Japanese Encephalitis Virus (JEV)

Authors: Soma Banerjee, Aamen Talukdar, Argha Mandal, Dipankar Chaudhuri

Abstract:

Japanese Encephalitis is a major infectious disease, with nearly half the world’s population living in areas where it is prevalent. Currently, there is no antiviral treatment; management involves only supportive care and symptom relief, while prevention relies on vaccination. Given the lack of antiviral drugs against Japanese Encephalitis Virus (JEV), the quest for such agents remains a priority. For these reasons, simulation studies of drug targets against JEV are important. Towards this purpose, docking experiments with kinase inhibitors were carried out against the chosen target, NS3 helicase, as it is a nucleoside-binding protein. Previous efforts at computational drug design against JEV revealed some lead molecules through virtual screening using public domain software. To be more specific and accurate in finding leads, this study used the proprietary software Schrödinger GLIDE. The druggability of the pockets in the NS3 helicase crystal structure was first calculated with SiteMap. The sites were then screened according to compatibility with ATP, and the site most compatible with ATP was selected as the target. Virtual screening was performed using GLIDE on ligands acquired from the KinaseSARfari, KinaseKnowledgebase, and Published Inhibitor Set databases. The 25 ligands with the best docking scores from each database were re-docked in XP mode. Protein structure alignment of NS3 was performed using VAST against MMDB, and similar human proteins were docked against all the best-scoring ligands. Ligands scoring low against the human proteins were retained for further study, while the high-scoring ligands were screened out. Seventy-three ligands were listed as the best-scoring ones after performing HTVS. Protein structure alignment of NS3 revealed three human proteins with RMSD values of less than 2 Å. Docking results with these three proteins revealed the inhibitors that could interfere with and inhibit human proteins; those inhibitors were screened out.
Among the remaining ligands, those with docking scores worse than a threshold value were also removed to obtain the final hits. Analysis of the docked complexes through 2D interaction diagrams revealed the amino acid residues essential for ligand binding within the active site. Interaction analysis will help to find a strongly interacting scaffold among the hits. This experiment yielded 21 hits with the best docking scores, which can be investigated further for their drug-like properties. Aside from providing suitable leads, specific NS3 helicase-inhibitor interactions were identified. The selection of target modification strategies to complement the docking methodology, which can result in better lead compounds, is in progress. Such enhanced leads can then proceed to in vitro testing.
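The final filtering step, removing ligands whose docking scores fall short of a threshold and ranking the remainder best-first, can be sketched as follows; the ligand names, scores, and cutoff are hypothetical (more negative GLIDE-style scores indicate tighter predicted binding):

```python
def screen_hits(docking_scores, cutoff, top_n):
    # keep ligands docking at or below the cutoff (more negative is better),
    # sort them best-first, and return the top_n names as the final hit list
    hits = sorted((score, name) for name, score in docking_scores.items()
                  if score <= cutoff)
    return [name for _, name in hits[:top_n]]

# hypothetical ligands with GLIDE-like scores (kcal/mol, arbitrary values)
scores = {"lig_a": -9.2, "lig_b": -6.1, "lig_c": -10.5, "lig_d": -7.8}
final_hits = screen_hits(scores, cutoff=-7.0, top_n=2)
```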

Keywords: antivirals, docking, glide, high-throughput virtual screening, Japanese encephalitis, ns3 helicase

Procedia PDF Downloads 237
1317 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design

Authors: Mohammad Bagher Anvari, Arman Shojaei

Abstract:

Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for engineering practice.
In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.
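The role of a lighter launching nose can be illustrated with elementary cantilever statics, a deliberate simplification of the paper's FE model; the weights and lengths below are hypothetical (uniform loads in kN/m, lengths in m, moments in kN*m):

```python
def launch_support_moment(w_deck, cantilever, w_nose=0.0, nose_len=0.0):
    # bending moment at the leading pier for a launched cantilever: the rear
    # (cantilever - nose_len) carries the full deck self-weight, while a
    # lighter nose of length nose_len projects ahead of it
    deck_len = cantilever - nose_len
    m_deck = w_deck * deck_len ** 2 / 2.0                      # deck portion
    m_nose = w_nose * nose_len * (deck_len + nose_len / 2.0)   # nose portion
    return m_deck + m_nose

m_plain = launch_support_moment(w_deck=150.0, cantilever=40.0)            # no nose
m_nosed = launch_support_moment(150.0, 40.0, w_nose=30.0, nose_len=15.0)  # light nose
```

Because the nose replaces the heaviest, farthest-out part of the cantilever with a much lighter element, the support moment drops sharply, which is the quantity the nose-length and nose-weight optimization trades off.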

Keywords: incremental launching, bridge construction, finite element model, optimization

Procedia PDF Downloads 110
1316 God, The Master Programmer: The Relationship Between God and Computers

Authors: Mohammad Sabbagh

Abstract:

Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands in words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything in six days, just as we can program a virtual world on the computer. GOD did mention in the Quran that one day, where GOD’s throne is, equals 1,000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6,000 years of what we count, and gave everything its functions, attributes, classes, methods, and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual, or by thought, entered by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out, with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator.
If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, then in 2022 you would require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that. In other words, the ability to perform fifty quadrillion (5 × 10¹⁶) floating-point operations per second; a number a human cannot even fathom. To put it in more perspective, GOD is calculating while the computer is going through those calculations each second, and HE is also calculating all the physics of every atom, and what is smaller than that, in the actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed can relate to other similar events. That may be why GOD said in The Quran that it is the people of knowledge, scholars, and scientists who fear GOD the most! One thing that is essential for us to keep up with what the computer is doing, and for us to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD in The Quran said that ‘WE used to copy what you used to do’. Essentially, as the world is running, think of it as an interactive movie being played out in front of you, in a fully immersive, non-virtual reality setting. GOD is recording it, from every angle to every thought, to every action. This brings up the idea of how scary the Day of Judgment will be, when one might realize that it is going to be a fully immersive video when we receive and read our book.

Keywords: programming, the Quran, object orientation, computers and humans, GOD

Procedia PDF Downloads 110
1315 Effect of Women`s Autonomy on Unmet Need for Contraception and Family Size in India

Authors: Anshita Sharma

Abstract:

India is one of the countries that initiated family planning with the intention of controlling its growing population by reducing fertility. To this end, India introduced the National Family Planning Programme in 1952. The level of unmet need in India shows a declining trend with the increasing effectiveness of family planning services: in NFHS-1 the unmet need for limiting, spacing, and in total was 46 percent, 14 percent, and 9 percent, respectively. The demand for spacing had reduced to 8 percent, with 8 percent for limiting and a total unmet need of 16 percent in NFHS-2. The total unmet need further reduced to 13 percent in NFHS-3 for all currently married women, with the demand for limiting and spacing at 7 percent and 6 percent, respectively. Despite this progress, there remains a section of women who are deprived of the means of preventing unintended and unwanted pregnancies. The present paper examines the socio-cultural, economic, and demographic correlates of unmet need for contraception in India. It also examines the effect of women’s autonomy and unmet need for contraception on family size among different socio-economic groups of the population. It uses data from the National Family Health Survey-3 (NFHS-3), carried out in 2005-06, and employs bivariate and multivariate techniques for analysis. Multiple regression analysis was done to determine the strength and direction of the relationships among various socio-economic and demographic factors. The results reveal that women with higher levels of education and economic status have a low level of unmet need for family planning. Women living in non-nuclear families have a high unmet need for spacing, women living in nuclear families have a high unmet need for limiting, and family size is slightly higher among women in nuclear families.
In India, the level of autonomy varies at different points in life; usually, older women enjoy higher autonomy than junior female members of the family. The findings show that women with higher autonomy have larger family sizes, while women with low autonomy have smaller family sizes. Unmet need for family planning decreases with women’s increasing exposure to mass media. Demographic factors such as the experience of child loss are directly related to family size: women who have experienced greater child loss have a low unmet need for spacing and limiting. Thus, it is established that women’s autonomy plays a substantial role in fulfilling the demand for contraception for limiting and spacing, which in turn affects family size.

Keywords: family size, socio-economic correlates, unmet need for limiting, unmet need for spacing, women`s autonomy

Procedia PDF Downloads 271
1314 The Development and Change of Settlement in Tainan County (1904-2015) Using Historical Geographic Information System

Authors: Wei Ting Han, Shiann-Far Kung

Abstract:

In early times, most of the arable land in Tainan County was dry-farmed, relying on rainfall for irrigation. After the Chia-nan Irrigation System (CIS) was completed in 1930, the Chia-nan Plain achieved a more efficient allocation of limited water resources for irrigation, benefiting from irrigation systems, drainage systems, and land improvement projects. The long-standing problems of drought, flooding, and salt damage were also alleviated by the CIS. The canal greatly expanded the paddy field area and agricultural output, and Tainan County became one of the important agricultural producing areas in Taiwan. With the development of water conservancy facilities, and influenced by national policies and other factors, many agricultural communities and settlements formed indirectly, which also promoted changes in settlement patterns and internal structures. With the development of historical geographic information systems (HGIS), Academia Sinica developed a WebGIS theme based on century-old maps of Taiwan, the most complete historical map database in Taiwan. It can be used to overlay historical maps of different periods, present the timeline of settlement change, and capture changes in the natural environment as well as in the social sciences and humanities, visualizing settlement change by area. This study explores the historical development and spatial characteristics of the settlements in various areas of Tainan County, using large-scale areas to examine settlement changes and spatial patterns of the entire county through their dynamic spatio-temporal evolution from the period of Japanese rule to the present day. Settlements of different periods were digitized to perform overlay analysis using Taiwan historical topographic maps from 1904, 1921, 1956, and 1989. Moreover, document analysis was used to analyze the temporal and spatial changes of the regional environment and settlement structure.
In addition, comparative analysis is used to classify the spatial characteristics and differences between the settlements, exploring the influence of external environments in different temporal and spatial contexts, such as government policies, major construction, and industrial development. This paper helps to understand the evolution of settlement space and the internal structural changes in Tainan County.
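After digitizing settlement footprints from the four map years, comparing periods reduces to computing and ratioing polygon areas (the overlay itself would be done in GIS software); a minimal sketch with hypothetical coordinates in map units:

```python
def polygon_area(coords):
    # shoelace formula for a simple polygon given as (x, y) vertex pairs
    n = len(coords)
    s = 0.0
    for i in range(n):
        x1, y1 = coords[i]
        x2, y2 = coords[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# hypothetical digitized footprints of one settlement in two map years
footprint_1904 = [(0, 0), (2, 0), (2, 2), (0, 2)]
footprint_1989 = [(0, 0), (5, 0), (5, 4), (0, 4)]
growth_ratio = polygon_area(footprint_1989) / polygon_area(footprint_1904)
```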

Keywords: historical geographic information system, overlay analysis, settlement change, Tainan County

Procedia PDF Downloads 132
1313 An Analysis of Employee Attitudes to Organisational Change Management Practices When Adopting New Technologies Within the Architectural, Engineering, and Construction Industry: A Case Study

Authors: Hannah O'Sullivan, Esther Quinn

Abstract:

Purpose: The Architectural, Engineering, and Construction (AEC) industry has historically struggled to adapt to change. Although the ability to innovate and successfully implement organizational change has been demonstrated to be critical in achieving a sustainable competitive advantage in the industry, many AEC organizations continue to struggle when effecting organizational change. One prominent area of organizational change that presents many challenges in the industry is the adoption of new forms of technology, for example, Building Information Modelling (BIM). Certain Organisational Change Management (OCM) practices have been proven to be effective in supporting organizations through change, but little research has been carried out on diverging employee attitudes to change relative to their roles within the organization. The purpose of this research study is to examine how OCM practices influence employee attitudes to change when adopting new forms of technology and to analyze the diverging employee perspectives within an organization on the importance of different OCM strategies. Methodology: Adopting an interview-based approach, a case study was carried out on a large, prominent Irish construction organization that is currently adopting a new technology platform for its projects. Qualitative methods were used to gain insight into differing perspectives on the utilization of various OCM practices and their efficacy when adopting a new form of technology on projects. The change agents implementing the organizational change gave insight into their intentions for the technology rollout strategy, while other employees were interviewed to understand how this rollout strategy was received and what challenges were encountered. Findings: The results of this research study are currently being finalized. However, it is expected that employees in different roles will value different OCM practices above others.
Findings and conclusions will be determined within the coming weeks. Value: This study will contribute to the body of knowledge relating to the introduction of new technologies, including BIM, to AEC organizations. It will also contribute to the field of organizational change management, providing insight into methods of introducing change that will be most effective for different employees based on their roles and levels of experience within the industry. The focus of this study steers away from traditional studies of the barriers to adopting BIM in its first instance at an organizational level and centers on the direct effect on employees when a company changes the technology platform being used.

Keywords: architectural, engineering, and construction (AEC) industry, Building Information Modelling, case study, challenges, employee perspectives, organisational change management

Procedia PDF Downloads 75
1312 Current Concepts of Male Aesthetics: Facial Areas to Be Focused and Prioritized with Botulinum Toxin and Hyaluronic Acid Dermal Fillers Combination Therapies, Recommendations on Asian Patients

Authors: Sadhana Deshmukh

Abstract:

Objective: Men represent only a fraction of the medical aesthetic practice, but they are becoming increasingly cosmetically inclined. The primary objective is to harmonize facial proportion by prioritizing and focusing on the forehead, nose, cheek, and chin complex. Introduction: Despite the tremendous variability within the diverse population of the Indian subcontinent, the male skull is unique in its overall larger size and shape. Men tend to have a large forehead with prominent supraorbital ridges, a wide glabella, square orbits, and a prominent, protruding mandible. Men have greater skeletal muscle mass and less facial subcutaneous fat. Facial aesthetics is evolving rapidly. Commonly published canons of facial proportion usually represent feminine standards and are not applicable to males; strict adherence to these norms is therefore not necessary to obtain satisfying results in male patients. Materials and Methods: Male patients aged 30-60 years were enrolled. Botulinum toxin and hyaluronic acid fillers were used to update consensus recommendations for facial rejuvenation using these two types of products alone and in combination. Results: Specific recommendations are given by facial area, focusing on relaxing musculature, restoring volume, and recontouring using toxin and dermal fillers alone and in combination. For the upper face, although botulinum toxin remains the cornerstone of treatment, fillers in the temples and forehead are recommended for optimal results. In the midface, fillers are placed more laterally to maintain the masculine look. In the lower face, botulinum toxin and fillers in combination can improve outcomes, with chin augmentation remaining the centerpiece. Conclusions: Males are more likely to have shorter doctor visits, are less likely to ask questions, and pay less attention to bodily changes. The physician must patiently gauge male patients' aging and cosmetic goals. 
Clinicians can also benefit from ongoing guidance on products, tailoring treatments, treating multiple facial areas, and using combinations of products. It helps to appreciate that rejuvenation is a three-dimensional process involving muscle control, volume restoration, and recontouring.

Keywords: male aesthetics, botulinum toxin, hyaluronic acid dermal fillers, Asian patients

Procedia PDF Downloads 161
1311 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap

Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui

Abstract:

As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool for reaching energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usage and users' behavior, plays an important role in the reliability of simulations, but it is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study discusses results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at the building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior (thermal and electrical), the indoor environment, inhabitants' comfort, occupancy, occupants' behavior and energy uses, and local weather. Building energy simulations are performed using physics-based building energy modeling software (Pleiades), where the buildings' features are implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end use. These features are then compared with the collected post-occupancy data. 
Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. The results of this study provide an analysis of the energy performance gap for an existing residential case study under deep retrofit actions. They highlight the impact of different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, and the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights into the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
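The step-by-step substitution procedure described in this abstract can be sketched in a few lines. The surrogate model, feature names, observed values, and metered consumption below are all illustrative assumptions, standing in for the Pleiades simulation and the project's sensor data, not actual study inputs.

```python
def annual_heating_kwh(p):
    """Toy surrogate for the building model: heating demand rises with the
    envelope U-value and setpoint, falls with internal heat gains."""
    return (120.0 * p["envelope_u"]
            + 15.0 * (p["setpoint_c"] - 19.0)
            - 4.0 * p["occupancy"]
            - 2.0 * p["appliance_kw"])

# Standardized-scenario inputs (as in a code compliance study) ...
standard = {"envelope_u": 1.2, "setpoint_c": 21.0,
            "occupancy": 2.0, "appliance_kw": 3.0}
# ... versus values observed by the sensor network (invented numbers).
measured_inputs = {"setpoint_c": 22.5, "occupancy": 1.4, "appliance_kw": 4.1}
measured_kwh = 170.0  # metered annual consumption (also invented)

params = dict(standard)
gap_history = [abs(annual_heating_kwh(params) - measured_kwh)]
# Step-by-step calibration: substitute one energy-driving feature at a time
# with field data and track the remaining performance gap.
for key, value in measured_inputs.items():
    params[key] = value
    gap_history.append(abs(annual_heating_kwh(params) - measured_kwh))

print([round(g, 1) for g in gap_history])
```

Note that the gap need not shrink monotonically: with standardized scenarios, errors in individual inputs can compensate each other, so replacing a single feature with field data may temporarily widen the apparent gap, which is one reason the substitutions are done progressively rather than all at once.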

Keywords: calibration, building energy modeling, performance gap, sensor network

Procedia PDF Downloads 166
1310 Na Doped ZnO UV Filters with Reduced Photocatalytic Activity for Sunscreen Application

Authors: Rafid Mueen, Konstantin Konstantinov, Micheal Lerch, Zhenxiang Cheng

Abstract:

In the past two decades, concern about protecting skin from ultraviolet (UV) radiation has attracted considerable attention due to the increased intensity of UV rays reaching the Earth's surface as a result of the breakdown of the ozone layer. Recently, UVA has also attracted attention since, in comparison to UVB, it can penetrate deeply into the skin, which can result in significant health concerns. Sunscreen agents are one of the most important tools for protecting the skin from UV irradiation, and they are either organic or inorganic. Developing inorganic UV blockers is essential because they provide efficient UV protection over a wider spectrum than organic filters. Furthermore, inorganic UV blockers offer good comfort and high safety when applied to human skin. Inorganic materials can absorb, reflect, or scatter ultraviolet radiation, depending on their particle size, unlike organic blockers, which only absorb UV irradiation. Nowadays, most inorganic UV-blocking filters are based on titanium dioxide (TiO2) and zinc oxide (ZnO). ZnO can provide protection in the UVA range. Indeed, ZnO is attractive for sunscreen formulation thanks to many advantages, such as its modest refractive index (2.0), its absorption of a small fraction of solar radiation in the UV range (wavelengths equal to or less than 385 nm), its high probability of recombination of photogenerated carriers (electrons and holes), its large direct band gap, high exciton binding energy, non-hazardous nature, and high chemical and physical stability, which make it transparent in the visible region while providing UV-protective activity. A significant issue for ZnO use in sunscreens is that it can generate reactive oxygen species (ROS) in the presence of UV light because of its photocatalytic activity. It is therefore essential to render the material non-photocatalytic through modification with other metals. Several efforts have been made to deactivate the photocatalytic activity of ZnO by using inorganic surface modifiers. 
Doping ZnO with different metals is another way to modify its photocatalytic activity. Recently, successful doping of ZnO with metals such as Ce, La, Co, Mn, Al, Li, Na, K, and Cr by various procedures, such as a simple and facile one-pot water bath, co-precipitation, hydrothermal, solvothermal, combustion, and sol-gel methods, has been reported. Those doped materials exhibit greater photocatalytic activity than undoped ZnO in visible light, showing that metal doping can be an effective technique for modifying the photocatalytic activity of ZnO. In the current work, by contrast, we successfully reduce the photocatalytic activity of ZnO through Na doping, with samples fabricated via sol-gel and hydrothermal methods.

Keywords: photocatalytic, ROS, UVA, ZnO

Procedia PDF Downloads 147
1309 A Close Study on the Nitrate Fertilizer Use and Environmental Pollution for Human Health in Iran

Authors: Saeed Rezaeian, M. Rezaee Boroon

Abstract:

Nitrogen accumulates in soils during fertilizer addition to promote plant growth. When the organic matter decomposes, the available nitrogen produced takes the form of nitrate, which is highly mobile. The most significant health effect of nitrate ingestion is methemoglobinemia in infants under six months of age (blue baby syndrome). Mobile nutrients such as nitrate nitrogen are not stored in the soil in available forms for long periods or in large amounts; their availability depends on the needs of crops such as vegetables. Vegetables, in turn, compete actively for nitrate nitrogen and water, and mobile nutrients must be shared: the fewer the plants, the larger the share for each plant. This nitrate nitrogen is also poisonous for the people who consume these vegetables. Nitrate is converted to nitrite by bacteria in the stomach and the gastrointestinal (GI) tract. When nitrite enters the blood cells, it converts hemoglobin to methemoglobin, which causes anoxemia and cyanosis. The increasing use of pesticides and chemical fertilizers, especially fertilizers containing nitrate compounds, which have become common for increasing the production of agricultural crops, has caused nitrate pollution in soil, water, and the environment and has done considerable damage to humans and animals. In this research, nitrate accumulation in different kinds of vegetables, including green pepper, tomato, eggplant, watermelon, cucumber, and red pepper, was observed in the suburbs of the cities of Mashhad, Neisabour, and Sabzevar. In some of these cities, information forms on agronomic practices, covering fertilizer recommendations for different vegetable crops, varieties, pesticides, irrigation schedules, etc., were filled out by some of our colleagues in the research areas mentioned above. 
The samples were sent for analysis to the soil and water laboratory of our department in Mashhad. The final results of the chemical analysis showed that the mean nitrate levels in the fruit crop samples from the cities mentioned above were all lower than the critical levels: 35.91, 8.47, 24.81, 6.03, 46.43, and 2.06 mg/kg dry matter for tomato, cucumber, eggplant, watermelon, green pepper, and red pepper, respectively. Although this study was conducted with limited samples, judging by the mean levels, consumption of these crops will not, from a nutritional point of view, cause poisoning in humans.

Keywords: environmental pollution, human health, nitrate accumulations, nitrate fertilizers

Procedia PDF Downloads 251
1308 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize abstract complexity measures based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating Grover's algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost over Hamiltonian cycles. N log₂ K qubits are put into a uniform superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, a node index calculator, a uniqueness checker, and a comparator, all created using only quantum Toffoli gates, including their special forms, the Feynman (CNOT) and Pauli X gates. 
The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle takes the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer is generated with very high probability. The oracle is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be reduced further. The algorithm and circuit design have been verified on several datasets to generate correct outputs.
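The encoding and the threshold-lowering loop described above can be illustrated with a purely classical sketch. The graph, its weights, and the brute-force enumeration standing in for Grover amplification below are illustrative assumptions, not the paper's Q# circuit: each node stores its incident edges, so a candidate tour assigns each of the N nodes a log₂ K-bit index into its own edge list.

```python
import math
from itertools import product

# Hypothetical 4-node graph with degree bound K = 2.
# edges[node] is a list of (neighbor, weight) pairs, one per incident edge.
edges = {0: [(1, 3), (3, 2)], 1: [(0, 3), (2, 1)],
         2: [(1, 1), (3, 4)], 3: [(2, 4), (0, 2)]}
N, K = 4, 2
n_qubits = N * int(math.log2(K))  # the N log2 K register of the abstract

def walk(choice):
    """Follow the chosen edge out of each node; return (visited, cost)."""
    node, visited, cost = 0, [0], 0
    for _ in range(N):
        nxt, w = edges[node][choice[node]]
        cost += w
        node = nxt
        visited.append(node)
    return visited, cost

def oracle(choice, threshold):
    """Mark a candidate iff its edges form a Hamiltonian cycle (uniqueness
    checker + closure) whose total weight beats the threshold (comparator)."""
    visited, cost = walk(choice)
    is_cycle = visited[-1] == 0 and len(set(visited[:-1])) == N
    return is_cycle and cost < threshold

# Threshold-lowering loop: rerun the search with the best cost found so far
# until no marked state remains. (Grover amplification is replaced here by
# exhaustive enumeration of the 2^n_qubits basis states.)
threshold = math.inf
while True:
    marked = [c for c in product(range(K), repeat=N) if oracle(c, threshold)]
    if not marked:
        break
    threshold = min(walk(c)[1] for c in marked)

print("qubits:", n_qubits, "minimum cycle cost:", threshold)
```

On this toy graph the only Hamiltonian cycle is 0-1-2-3-0 (traversed in either direction), so the loop terminates with the threshold at that cycle's weight; in the quantum version each pass costs O(√(Kᴺ)) oracle calls instead of the Kᴺ evaluations enumerated here.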

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 195
1307 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice

Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer

Abstract:

The ice accretion of salt water on cold substrates creates brine-spongy ice, a mixture of pure ice and liquid brine. A real case of the creation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between the brine pockets and the pure ice. Salt rejection during transient heat conduction increases the salinity of the brine pockets until they reach a local equilibrium state. In this process, heat passing through the medium does more than change the sensible heat of the ice and brine pockets; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. The model considers heat transfer together with partial solidification and melting. The properties of brine-spongy ice are obtained from the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium into a set of ordinary differential equations. Boundary conditions are chosen from one of the practical cases for this type of ice: one side is treated as a thermally insulated surface, and the other side is assumed to be suddenly exposed to a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted for salinities from 5 to 60 ppt. Time steps and space intervals are chosen to maintain the most stable and fastest solution. The variation of temperature, brine volume fraction, and brine salinity over time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine pocket salinities, from the initial salinity up to 180 ppt. 
The rate of temperature variation is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation and decreases as time passes. Brine pockets are smaller in regions closer to the colder side than near the warmer side. At the start of the solution, the numerical scheme tends to develop instabilities because of the sharp variation of temperature at the beginning of the process; refining the intervals resolves this. The analytical model, solved with the numerical scheme, is capable of predicting the thermal behavior of brine-spongy ice. The model and its numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures.
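A minimal Method of Lines sketch of the boundary-value setup described above (one insulated face, the other suddenly held at a constant temperature) looks like the following. The diffusivity, grid, and step count are illustrative assumptions, and the latent-heat/brine-salinity coupling that is central to the paper's model is omitted for brevity, leaving pure transient conduction.

```python
import numpy as np

alpha = 1.2e-6   # thermal diffusivity, m^2/s (assumed, ice-like)
L_slab = 0.05    # slab thickness, m
n = 50           # number of spatial nodes
dx = L_slab / (n - 1)
dt = 0.4 * dx**2 / alpha  # explicit stability requires dt <= dx^2 / (2 alpha)

T = np.full(n, -5.0)  # uniform initial temperature, deg C
T_cold = -20.0        # suddenly applied constant boundary temperature

def rhs(T):
    """Semi-discrete heat equation dT/dt = alpha * d2T/dx2: the Method of
    Lines turns the PDE into one ODE per spatial node."""
    dTdt = np.zeros_like(T)
    dTdt[1:-1] = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    dTdt[-1] = 2 * alpha * (T[-2] - T[-1]) / dx**2  # insulated (zero-flux) face
    return dTdt  # node 0 is held at T_cold below (Dirichlet condition)

for _ in range(2000):  # forward-Euler time marching, ~12 min of physical time
    T = T + dt * rhs(T)
    T[0] = T_cold

print(round(T[0], 1), round(T[n // 2], 1), round(T[-1], 1))
```

The sharp initial temperature jump at the cold face is exactly where the abstract reports early instabilities; the conservative 0.4 factor in the time step (below the 0.5 explicit-Euler limit) plays the same role as the interval refinement the authors describe.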

Keywords: method of lines, brine-spongy ice, heat conduction, salt water

Procedia PDF Downloads 220
1306 Small Community’s Proactive Thinking to Move from Zero to 100 Percent Water Reuse

Authors: Raj Chavan

Abstract:

The City of Jal serves a population of approximately 3,500 people, including 2,100 permanent inhabitants and 1,400 oil and gas sector workers and RV park occupants. Over the past three years, Jal's population has increased by about 70 percent, mostly due to the oil and gas industry. The City anticipates that the population will exceed 4,200 by 2020, necessitating the construction of a new wastewater treatment plant (WWTP), because the old plant (an aerated lagoon system) cannot accommodate such rapid population expansion without major renovations or replacement. Adhering to discharge permit restrictions has been challenging due to aging infrastructure and equipment replacement needs, as well as the increased nutrient loading on the wastewater collection system from the additional oil and gas residents' recreational vehicles. In recent years, the WWTP has not been able to meet the permit discharge standard for total nitrogen of less than 20 mg N/L, among other parameters. Based on discussions with the state's environmental department, the future permit renewal is likely to impose stricter conditions. Given its location in the dry, western part of the country, the City must rely on its meager groundwater supplies and scant annual precipitation. The city's groundwater supplies will be depleted sooner than predicted due to rising demand from the growing population for drinking, leisure, and industrial uses such as hydraulic fracturing. The sole form of reuse the city was practicing (recreational reuse for a golf course) had to be put on hold because of an effluent compliance issue, and at present all treated effluent is evaporated. The city's long-term goal is to become a zero-discharge community that sends all of its treated wastewater effluent to the golf course, Jal Lake, or the oil and gas industry for reuse. Hydraulic fracturing uses a great deal of water, and if the oil and gas industry can use recycled water, it can reduce its impact on freshwater supplies. 
The City's goal of 100 percent reuse has been delayed by the difficulty of meeting the limits of its regular discharge permit, given the large rise in influent loads and the aging infrastructure. The City of Jal therefore plans to build a new WWTP that can keep up with the rapid population increase driven by the oil and gas industry. Several treatment methods were considered in light of the City's needs and long-term goals, and a membrane bioreactor (MBR) was ultimately recommended since it meets all of the permit's requirements while enabling 100 percent beneficial reuse. This talk will lay out the city's plan for reaching its goal of 100 percent reuse, as well as the various avenues considered for funding this small community.

Keywords: membrane bioreactor, nitrogen, reuse, small community

Procedia PDF Downloads 95