Search results for: size control
2567 Adaptive Responses of Carum copticum to in vitro Salt Stress
Authors: R. Razavizadeh, F. Adabavazeh, M. Rezaee Chermahini
Abstract:
Salinity is one of the most widespread agricultural problems in arid and semi-arid areas, limiting plant growth and crop productivity. In this study, the effects of salt stress on the protein, reducing sugar, and proline contents and on the antioxidant enzyme activities of Carum copticum L. were studied under in vitro conditions. Seeds of C. copticum were cultured in Murashige and Skoog (MS) medium containing 0, 25, 50, 100 and 150 mM NaCl, and calli were cultured in MS medium containing 1 μM 2,4-dichlorophenoxyacetic acid, 4 μM benzyl amino purine and the same levels of NaCl (0, 25, 50, 100 and 150 mM). After NaCl treatment for 28 days, the proline and reducing sugar contents of shoots, roots and calli increased significantly in relation to the severity of the salt stress. The highest amounts of proline and carbohydrate were observed at 150 and 100 mM NaCl, respectively. Reducing sugar accumulation was higher in shoots than in roots, whereas proline contents did not differ significantly between roots and shoots under salt stress. The results showed a significant reduction of protein contents in seedlings and calli; proteins extracted from the shoots, roots and calli of C. copticum treated with 150 mM NaCl showed the lowest contents. Positive relationships were observed between antioxidant enzyme activity and stress level: catalase, ascorbate peroxidase and superoxide dismutase activities increased significantly under salt treatment in comparison to the control. These results suggest that the accumulation of proline and sugars and the activation of antioxidant enzymes play adaptive roles in the acclimation of seedlings and calli of C. copticum to saline conditions.
Keywords: antioxidant enzymes, Carum copticum, organic solutes, salt stress
Procedia PDF Downloads 282
2566 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem
Authors: Nan Xu
Abstract:
In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with off days, training and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the rostering objective consists of two major components. The first is to minimize the number of unassigned pairings, and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly hours are as close to the expected average as possible. Deviations from the expected average are penalized in the objective function. Since several small deviations are preferred to one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partitioning problem in which exactly one roster is picked for each crew member such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration. When no column with negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem is to generate feasible crew rosters for each crew member.
A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem on this graph, solved with a labeling algorithm. Since the penalization is quadratic, a method to handle the resulting non-additive shortest path problem with a labeling algorithm is proposed, and the corresponding domination condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows some soft rules to be relaxed, which can improve the coverage rate; 3) multi-thread techniques are used to improve the efficiency of the algorithm when generating Lines-of-Work for crew members. In summary, a column generation based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm proposed in this paper has been put into production at a major airline in China, and numerical experiments show that it performs well.
Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC
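The quadratic fairness penalty in the rostering objective can be sketched in a few lines. The weights, pairing counts, and expected averages below are illustrative assumptions, not values from the paper:

```python
def fairness_penalty(overnights, fly_hours, avg_overnights, avg_fly_hours,
                     w_on=1.0, w_fh=1.0):
    """Quadratic penalty for deviation from the expected averages.

    Several small deviations are preferred to one large deviation,
    so deviations are squared before being summed.
    """
    return (w_on * (overnights - avg_overnights) ** 2
            + w_fh * (fly_hours - avg_fly_hours) ** 2)


def roster_cost(unassigned_pairings, crew_stats, avg_overnights, avg_fly_hours,
                w_unassigned=1000.0):
    """Objective value: unassigned pairings plus per-crew fairness penalties.

    crew_stats is a list of (overnight_duties, fly_hours) tuples,
    one per crew member.
    """
    cost = w_unassigned * unassigned_pairings
    for overnights, fly_hours in crew_stats:
        cost += fairness_penalty(overnights, fly_hours,
                                 avg_overnights, avg_fly_hours)
    return cost
```

Because the penalty is quadratic rather than linear, path costs in the subproblem graph are not additive along arcs, which is exactly why the paper needs a modified domination condition in the labeling algorithm.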
Procedia PDF Downloads 146
2565 Place of Radiotherapy in the Treatment of Intracranial Meningiomas: Experience of the Cancer Center Emir Abdelkader of Oran Algeria
Authors: Taleb L., Benarbia M., Boutira F. M., Allam H., Boukerche A.
Abstract:
Introduction and purpose of the study: Meningiomas are the most common non-glial intracranial tumors in adults, accounting for approximately 30% of all central nervous system tumors. The aim of our study is to determine the epidemiological, clinical, therapeutic, and evolutionary characteristics of a cohort of patients with intracranial meningioma treated with radiotherapy at the Emir Abdelkader Cancer Center in Oran. Material and methods: This is a retrospective study of 44 patients treated during the period from 2014 to 2020. Overall survival and relapse-free survival curves were calculated using the Kaplan-Meier method. Results and statistical analysis: The median age of the patients was 49 years [21-76 years], with a clear female predominance (sex ratio = 2.4). The average diagnostic delay was seven months [2 to 24 months], and presentation was dominated by headaches in 54.5% of cases (n=24), visual disturbances in 40.9% (n=18), and motor disorders in 15.9% (n=7). The tumor was located mainly at the base of the skull in 52.3% of patients (n=23), including 29.5% (n=13) at the cavernous sinus, 27.3% (n=12) at the parasagittal level, and 20.5% (n=9) at the convexity. The diagnosis was confirmed surgically in 36 patients (81.8%), with histopathology showing grade I, II, and III tumors in 40.9%, 29.5%, and 11.4% of cases, respectively. Radiotherapy was indicated postoperatively in 45.5% of patients (n=20), exclusively in 27.3% (n=12), and after tumor recurrence in 27.3% of cases (n=18). The irradiation doses delivered were 50 Gy (20.5%), 54 Gy (65.9%), and 60 Gy (13.6%). With a median follow-up of 69 months, the probabilities of relapse-free survival and overall survival at three years were 93.2% and 95.4%, respectively, versus 71.2% and 80.7% at five years. Conclusion: Meningiomas are common primary brain tumors. They are most often benign but can also behave aggressively. Their treatment is primarily surgical, but radiotherapy retains its place in specific situations, allowing good tumor control and overall survival.
Keywords: diagnosis, meningioma, surgery, radiotherapy, survival
Procedia PDF Downloads 100
2564 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete
Authors: Farzad Danaei, Yilmaz Akkaya
Abstract:
In many civil engineering applications, especially in the construction of large concrete structures, the early age behavior of concrete has proven to be a crucial problem. The uneven rise in temperature within the concrete in these constructions is the fundamental issue for quality control, so developing accurate and fast temperature prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but taking all of these fluctuations into account makes numerical models more complex, and experimental measurement of the thermal properties under laboratory conditions cannot accurately predict their variation at site conditions. The specific heat capacity and the heat conductivity coefficient are therefore treated as constants in many previously recommended models. The proposed equations demonstrate that these two quantities decrease linearly as the cement hydrates, and their values are related to the degree of hydration. The effects of changing the thermal conductivity and specific heat capacity values on the maximum temperature, and on the time it takes for the concrete to reach that temperature, are examined in this study using numerical sensitivity analysis, and the results are compared to models that take fixed values for these two thermal properties. The study covers 7 different concrete mix designs with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag). It is concluded that assuming a constant conductivity coefficient does not change the predicted maximum temperature, but a variable specific heat capacity must be taken into account; the time at which the concrete's central node reaches its maximum temperature is also considerably affected by a variable specific heat capacity. The use of GGBFS has more influence than fly ash.
Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient
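The linearly decreasing thermal properties described above can be written as simple functions of the degree of hydration. The initial values and reduction fractions below are illustrative assumptions, not the paper's fitted coefficients:

```python
def thermal_conductivity(alpha, k_initial=2.8, reduction=0.33):
    """Thermal conductivity (W/m.K) decreasing linearly with the degree
    of hydration alpha in [0, 1]; fully hydrated concrete retains
    (1 - reduction) of the initial value. Defaults are illustrative."""
    return k_initial * (1.0 - reduction * alpha)


def specific_heat(alpha, c_initial=1000.0, reduction=0.15):
    """Specific heat capacity (J/kg.K) decreasing linearly with the
    degree of hydration alpha. Defaults are illustrative."""
    return c_initial * (1.0 - reduction * alpha)
```

In a finite-element or finite-difference temperature model, these functions would be evaluated at each time step from the current degree of hydration, instead of using a single constant value throughout the simulation.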
Procedia PDF Downloads 77
2563 Radon-222 Concentration and Potential Risk to Workers of Al-Jalamid Phosphate Mines, North Province, Saudi Arabia
Authors: El-Said. I. Shabana, Mohammad S. Tayeb, Maher M. T. Qutub, Abdulraheem A. Kinsara
Abstract:
Usually, phosphate deposits contain 238U and 232Th in addition to their decay products. Due to their different pathways in the environment, the 238U/232Th activity concentration ratio in phosphate sediments is usually greater than unity. The presence of these radionuclides creates a need to control the exposure of workers in the mining and processing of phosphate minerals in accordance with IAEA safety standards. The greatest dose to workers comes from exposure to radon, especially 222Rn from the uranium series, and has to be controlled. In this regard, radon (222Rn) was measured in the atmosphere (indoor and outdoor) of the Al-Jalamid phosphate-mines working area using a portable radon-measurement instrument (RAD7), for radiation protection purposes. Radon was measured at 61 sites inside the open phosphate mines, in the phosphate upgrading facility (offices and rooms of the workers, and some open-air sites), and in the dwellings of the workers' residence village, which lies about 3 km from the mines working area. The results indicated that the average indoor radon concentration was about 48.4 Bq/m3. Inside the upgrading facility, the average outdoor concentrations were 10.8 and 9.7 Bq/m3 in the concentrate piles and crushing areas, respectively, and 12.3 Bq/m3 in the atmosphere of the open mines. These values are comparable with global average values. Based on the average values, the annual effective dose due to radon inhalation was calculated and risk estimates were made. The average annual effective dose to workers due to radon inhalation was estimated at 1.32 mSv. The potential excess risk of lung cancer mortality attributable to radon, considering lifetime exposure, was estimated at 53.0×10⁻⁴. The results are discussed in detail.
Keywords: dosimetry, environmental monitoring, phosphate deposits, radiation protection, radon
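The annual effective dose from radon inhalation is typically computed from the measured activity concentration using the UNSCEAR dose coefficient. The sketch below uses that standard formula; the equilibrium factor and occupancy hours are assumed defaults, not necessarily the parameters used in the study:

```python
def annual_effective_dose_mSv(radon_bq_m3, equilibrium_factor=0.4,
                              occupancy_hours=7000,
                              dose_coefficient=9e-6):
    """Annual effective dose (mSv) from radon inhalation.

    dose_coefficient is the UNSCEAR value of 9 nSv per (Bq h m^-3)
    of equilibrium-equivalent radon exposure, expressed in mSv;
    equilibrium_factor and occupancy_hours are assumed values.
    """
    return (radon_bq_m3 * equilibrium_factor * occupancy_hours
            * dose_coefficient)
```

With the average indoor concentration of 48.4 Bq/m3 reported above, these assumed parameters give a dose of the same order as the study's 1.32 mSv estimate.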
Procedia PDF Downloads 272
2562 Hands-off Parking: Deep Learning Gesture-based System for Individuals with Mobility Needs
Authors: Javier Romera, Alberto Justo, Ignacio Fidalgo, Joshue Perez, Javier Araluce
Abstract:
Nowadays, individuals with mobility needs face a significant challenge when parking vehicles. In many cases, after parking, they encounter insufficient space to exit, leading to two undesired outcomes: either avoiding parking in that spot or settling for an improperly placed vehicle. To address this issue, this paper presents a parking control system employing gestural teleoperation. The system comprises three main phases: capturing body markers, interpreting gestures, and transmitting orders to the vehicle. The initial phase is centered around the MediaPipe framework, a versatile tool optimized for real-time gesture recognition. MediaPipe excels at detecting and tracing body markers, with a special emphasis on hand gestures; hand detection generates 21 reference points for each hand. Subsequently, after data capture, the project employs the MultiPerceptron Layer (MPL) for in-depth gesture classification. This tandem of MediaPipe's extraction prowess and MPL's analytical capability ensures that human gestures are translated into actionable commands with high precision. Furthermore, the system has been trained and validated on an in-house dataset. To demonstrate domain adaptation, a framework based on the Robot Operating System (ROS) as a communication backbone, alongside the CARLA simulator, is used. Following successful simulations, the system was transitioned to a real-world platform, marking a significant milestone in the project. This real-vehicle implementation verifies the practicality and efficiency of the system beyond theoretical constructs.
Keywords: gesture detection, mediapipe, multiperceptron layer, robot operating system
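Between MediaPipe's 21 hand landmarks and a gesture classifier, a normalization step is commonly applied so the features are invariant to where the hand appears in the frame and how large it is. The sketch below is an assumed preprocessing step of that kind, not the authors' exact pipeline:

```python
def normalize_landmarks(landmarks):
    """Normalize (x, y) hand landmarks for gesture classification.

    Translates the landmarks so the wrist (landmark 0, in MediaPipe's
    ordering) becomes the origin, then scales by the largest coordinate
    magnitude so the resulting features are invariant to hand position
    and size. The flattened result can feed a small MLP classifier.
    """
    wrist_x, wrist_y = landmarks[0]
    relative = [(x - wrist_x, y - wrist_y) for x, y in landmarks]
    scale = max(max(abs(x), abs(y)) for x, y in relative) or 1.0
    return [(x / scale, y / scale) for x, y in relative]
```

After normalization, each hand becomes a fixed-length feature vector (42 values for 21 two-dimensional points), which is the natural input shape for the dense classification network.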
Procedia PDF Downloads 100
2561 Development of an Autonomous Automated Guided Vehicle with Robot Manipulator under Robot Operation System Architecture
Authors: Jinsiang Shaw, Sheng-Xiang Xu
Abstract:
This paper presents the development of an autonomous automated guided vehicle (AGV) with a robot arm attached on top of it within the framework of the Robot Operating System (ROS). ROS provides libraries and tools including hardware abstraction, device drivers, visualizers, message-passing, and package management. Thanks to this, the AGV can provide automatic navigation, parts transportation, and pick-and-place tasks using the robot arm for typical industrial production-line use. More specifically, the AGV is controlled by an on-board host computer running ROS software. Command signals for vehicle and robot arm control and measurement signals from various sensors are transferred to the respective microcontrollers. Users can operate the AGV remotely through the TCP/IP protocol and perform SLAM (Simultaneous Localization and Mapping). An RGB-D camera and LIDAR sensors are installed on the AGV, and their data are used to perceive the environment. For SLAM, Gmapping is used to construct the environment map via a Rao-Blackwellized particle filter, and Adaptive Monte Carlo Localization (AMCL) is employed for mobile robot localization. In addition, the current AGV position and orientation can be visualized with the ROS toolkit. For robot navigation and obstacle avoidance, A* is implemented for global path planning and the dynamic window approach for local planning. The developed ROS AGV with an on-board robot arm has been tested in the university factory. 2-D and 3-D maps of the factory were successfully constructed by the SLAM method. Based on these maps, robot navigation through the factory with and without dynamic obstacles is shown to perform well. Finally, pick-and-place of parts using the robot arm and subsequent delivery in the factory by the mobile robot are also accomplished.
Keywords: automated guided vehicle, navigation, robot operation system, Simultaneous Localization and Mapping
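The A* global planner mentioned above can be sketched on a 2-D occupancy grid. This is a generic textbook implementation with a Manhattan heuristic, not the project's ROS planner:

```python
import heapq


def astar(grid, start, goal):
    """A* global path planning on a 4-connected occupancy grid.

    grid: list of rows, 0 = free cell, 1 = obstacle.
    Returns the list of (row, col) cells from start to goal, or None
    if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda r, c: abs(r - goal[0]) + abs(c - goal[1])  # Manhattan heuristic
    open_set = [(h(*start), start)]   # priority queue ordered by f = g + h
    g_cost = {start: 0}
    came_from = {}
    visited = set()
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell in visited:
            continue
        visited.add(cell)
        if cell == goal:
            path = [cell]
            while cell in came_from:   # walk parents back to the start
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        r, c = cell
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g_cost[cell] + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    came_from[nb] = cell
                    heapq.heappush(open_set, (ng + h(nr, nc), nb))
    return None
```

In the ROS navigation stack, a planner of this kind produces the global path, which the dynamic window approach then tracks locally while avoiding dynamic obstacles.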
Procedia PDF Downloads 149
2560 Stem Cell Differentiation Toward Secretory Progenitors after Intestinal Ischemia-Reperfusion in a Rat is Accompanied by Inhibited Notch Signaling Cascade
Authors: Igor Sukhotnik
Abstract:
Objectives: Notch signaling is thought to drive cell diversification in the lining of the small intestine. When Notch signaling is blocked, proliferation ceases and epithelial cells become secretory. The purpose of the present study was to evaluate the role of the Notch signaling pathway in stem cell differentiation in a rat model of intestinal ischemia-reperfusion (IR). Methods: Male Sprague-Dawley rats were randomly divided into four experimental groups: Sham-24 and Sham-48 rats underwent laparotomy and were killed 24 or 48 h later, respectively; IR-24 and IR-48 rats underwent occlusion of the SMA and portal vein for 30 min followed by 24 or 48 h of reperfusion, respectively. Notch-related gene and protein expression was determined using real-time PCR, Western blotting, and immunohistochemistry. Wax histology and immunohistochemistry were used to determine cell differentiation toward absorptive progenitors (enterocytes) or secretory progenitors (goblet cells, enteroendocrine cells, or Paneth cells). Results: IR-48 rats exhibited a significant decrease in Notch-1 protein expression (Western blot) that coincided with a significant decrease in the number of Notch-1 positive cells (immunohistochemistry), as well as Hes-1 positive cells, in the jejunum and ileum compared to Sham-48 rats. The significant down-regulation of Notch signaling related genes and proteins in IR animals was accompanied by a significant increase in the number of goblet and Paneth cells and a decreased number of absorptive cells compared to control rats. Conclusions: Forty-eight hours following intestinal IR in rats, inhibition of the Notch signaling pathway was accompanied by intestinal stem cell differentiation toward secretory progenitors.
Keywords: intestine, notch, ischemia-reperfusion, cell differentiation, secretory
Procedia PDF Downloads 58
2559 The Impact of Sedimentary Heterogeneity on Oil Recovery in Basin-plain Turbidite: An Outcrop Analogue Simulation Case Study
Authors: Bayonle Abiola Omoniyi
Abstract:
In turbidite reservoirs with volumetrically significant thin-bedded turbidites (TBTs), thin-pay intervals may be underestimated during calculation of reserve volume due to the poor vertical resolution of conventional well logs. This paper demonstrates the strong control of bed-scale sedimentary heterogeneity on oil recovery using six facies distribution scenarios generated from outcrop data from the Eocene Itzurun Formation, Basque Basin (northern Spain). The variable net sand volume in these scenarios serves as a primary source of sedimentary heterogeneity, impacting sandstone-mudstone ratio, sand and shale geometry and dimensions, lateral and vertical variations in bed thickness, and attribute indices. These attributes provided the input parameters for modeling the scenarios. The models are 20 m (65.6 ft) thick. Simulation of the scenarios reveals that oil production is markedly enhanced where the degree of sedimentary heterogeneity and the resultant permeability contrast are low, as exemplified by Scenarios 1, 2, and 3. In these scenarios, bed architecture encourages better apparent vertical connectivity across intervals of laterally continuous beds. By contrast, the low net-to-gross Scenarios 4, 5, and 6 have rapidly declining oil production rates and higher water cut, with more oil effectively trapped in low-permeability layers. These scenarios may possess enough lateral connectivity to enable injected water to sweep oil to the production well, but such sweep is achieved at the cost of high water production. It is therefore imperative to consider not only a net-to-gross threshold but also the facies stack pattern and related attribute indices to better understand how to manage water production effectively for optimum oil recovery from basin-plain reservoirs.
Keywords: architecture, connectivity, modeling, turbidites
Procedia PDF Downloads 24
2558 Alternative (In)Security: Using Photovoice Research Methodology to Explore Refugee Anxieties in Lebanon
Authors: Jessy Abouarab
Abstract:
For more than half a century, international norms related to refugee security and protection have proliferated, yet their role in alleviating war's negative impacts on human life remains limited. The impact of refugee-security processes often manifests asymmetrically within populations. Many issues and people are silenced by narrow security policies that focus either on abstract threat containment and refugee control or on refugee protection and humanitarian aid. (In)security practices are gendered and experienced. Examining the case study of Syrian refugees in Lebanon, this study explores the gendered impact of refugee security mechanisms on local realities. A transnational feminist approach is used to position this research in relation to existing studies in the field of security and the refugee-protection regime, highlighting the social, cultural, legal, and political barriers to gender equality in the areas of violence, rights, and social inclusion. Through Photovoice methodology, the Syrian refugees' (in)securities in Lebanon were given visibility by enabling local volunteers to record and reflect their realities through pictures, while voicing the participants' anxieties and recommendations to effect normative policy change. This Participatory Action Research approach helped participants observe the structural barriers and the lack of culturally inclusive refugee services that hinder security and increase discrimination, stigma, and poverty. The findings have implications for a shift of refugee protection mechanisms toward a community-based approach, in ways that extend beyond narrow security policies that hinder women's empowerment and raise vulnerabilities such as gendered exploitation, abuse, and neglect.
Keywords: gender, (in)security, Lebanon, refugee, Syrian refugees, women
Procedia PDF Downloads 143
2557 Applications of Forensics/DNA Tools in Combating Gender-Based Violence: A Case Study in Nigeria
Authors: Edeaghe Ehikhamenor, Jennifer Nnamdi
Abstract:
Introduction: Gender-based violence (GBV) was a well-known global crisis even before the COVID-19 pandemic; the pandemic only intensified it. With prevailing lockdowns, increased poverty due to high unemployment (especially affecting females), and mobility restrictions that left many women trapped with their abusers and isolated from social contact and support networks, GBV cases spiraled out of control. The prevalence of economic and cultural disparity, strongly manifested in Nigeria, is a major contributory factor to GBV. This is made worse by religious practices in which women are virtually relegated to the background. Societal approaches to investigation and the sanctioning of culprits have not sufficiently applied forensic/DNA tools in combating these vices. Violence against women, or in rare cases against men, can prevent victims from carrying out their duties regardless of the position they hold. Objective: The main objective of this research is to highlight the origin of GBV, its victims, types, and contributing factors, and the applications of forensic/DNA tools and remedies to minimize GBV in our society. Methods: Descriptive information was obtained through searches of daily newspapers, electronic media, and Google Scholar, other authors' observations, personal experiences, and anecdotal reports. Results: Findings from our exploratory searches revealed a high incidence of GBV with very limited or no application of forensic/DNA tools as an intervening mechanism to reduce GBV in Nigeria. Conclusion: Nigeria needs to develop clear-cut policies on forensic/DNA tools, in terms of an institutional framework and a training curriculum for all stakeholders, to fast-track justice for victims of GBV and deter other culprits.
Keywords: gender-based violence, forensics, DNA, justice
Procedia PDF Downloads 84
2556 Designing an Exhaust Gas Energy Recovery Module Following Measurements Performed under Real Operating Conditions
Authors: Jerzy Merkisz, Pawel Fuc, Piotr Lijewski, Andrzej Ziolkowski, Pawel Czarkowski
Abstract:
The paper presents preliminary results of the development of an automotive exhaust gas energy recovery module. The aim of the analyses was to select a heat exchanger geometry that ensures the highest possible heat transfer at minimum heat flow losses. The starting point for the analyses was a straight portion of pipe from which the exhaust system of the tested vehicle was made. The heat exchanger had a cylindrical cross-section, was 300 mm long, and was fitted with a diffuser and a confusor. The modeling was performed for this geometry using the finite volume method in Ansys CFX v12.1 and v14. The method divides the system into small control volumes for which the exhaust gas velocity and pressure are calculated from the Navier-Stokes equations. Heat exchange in the system was modeled based on an enthalpy balance; temperature rise due to viscous heating was not taken into account. Heat transfer at the fluid/solid boundary in the turbulent wall layer was modeled using an arbitrarily adopted dimensionless temperature. The boundary conditions adopted in the analyses included a convective heat transfer condition on the outer surface of the heat exchanger and the mass flow and temperature of the exhaust gas at the inlet. The exhaust gas mass flow and temperature were taken from measurements performed in actual traffic using portable PEMS analyzers. The research object was a passenger vehicle fitted with a 1.9 dm3, 85 kW diesel engine. The tests were performed in city traffic conditions.
Keywords: waste heat recovery, heat exchanger, CFD simulation, PEMS
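For a bulk estimate, the enthalpy balance underlying the heat-exchanger model reduces to Q = ṁ·cp·ΔT across the exhaust stream. The sketch below uses that relation; the default specific heat is an assumed mean value for exhaust gas, not a figure from the paper:

```python
def recovered_heat_kW(mass_flow_kg_s, t_in_C, t_out_C, cp_J_kgK=1090.0):
    """Heat recovered from the exhaust stream via an enthalpy balance,
    Q = m_dot * cp * (T_in - T_out), returned in kW.

    cp is an assumed mean specific heat of exhaust gas; real values
    vary with temperature and gas composition.
    """
    return mass_flow_kg_s * cp_J_kgK * (t_in_C - t_out_C) / 1000.0
```

Feeding in the mass flow and inlet temperature traces recorded by the PEMS analyzers would give a first-order estimate of the recoverable power over a real driving cycle, before any CFD refinement of the geometry.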
Procedia PDF Downloads 574
2555 N-Heterocyclic Carbene Based Dearomatized Iridium Complex as an Efficient Catalyst towards Carbon-Carbon Bond Formation via Hydrogen Borrowing Strategy
Authors: Mandeep Kaur, Jitendra K. Bera
Abstract:
The search for atom-economical and green synthetic methods for the synthesis of functionalized molecules has attracted much attention. Metal-ligand cooperation (MLC) plays a pivotal role in organometallic catalysis, activating C−H, H−H, O−H, N−H and B−H bonds through reversible bond-breaking and bond-making processes. Towards this goal, a bifunctional N-heterocyclic carbene (NHC) based, pyridyl-functionalized amide ligand precursor and the corresponding dearomatized iridium complex were synthesized. NMR and UV/Vis acid titration studies were carried out to establish the proton-responsive nature of the iridium complex. The dearomatized iridium complex was then explored as a catalyst, operating through MLC via a dearomatization/aromatization mode of action, for the atom-economical α- and β-alkylation of ketones and secondary alcohols using primary alcohols through the hydrogen borrowing methodology. The key features of the catalysis are high turnover frequency (TOF) values, low catalyst loading, low base loading, and no waste product. Greener syntheses of quinoline and lactone derivatives and the selective alkylation of drug molecules such as pregnenolone and testosterone were also achieved. Another structurally similar iridium complex was synthesized from a modified ligand precursor lacking the pendant amide unit. The inactivity of this analogous complex in catalysis confirmed the participation of the proton-responsive imido sidearm of the ligand in accelerating the catalytic reaction. The mechanistic investigation, through control experiments, NMR, and deuterium labeling studies, authenticates the borrowing hydrogen strategy.
Keywords: C-C bond formation, hydrogen borrowing, metal ligand cooperation (MLC), n-heterocyclic carbene
Procedia PDF Downloads 181
2554 Biophysical Study of the Interaction of Harmalol with Nucleic Acids of Different Motifs: Spectroscopic and Calorimetric Approaches
Authors: Kakali Bhadra
Abstract:
Binding of small molecules to DNA, and more recently to RNA, continues to attract considerable attention for developing effective therapeutic agents for the control of gene expression. This work focuses on understanding the interaction of harmalol, a dihydro beta-carboline alkaloid, with nucleic acids of different motifs, viz. double-stranded CT DNA, single-stranded A-form poly(A), double-stranded A-form poly(C)·poly(G), and clover leaf tRNAphe, by spectroscopic, calorimetric, and molecular modeling techniques. The results of this study converge to suggest: (i) the binding constant varied in the order CT DNA > poly(C)·poly(G) > tRNAphe > poly(A); (ii) binding of harmalol was non-cooperative with poly(C)·poly(G) and poly(A) and cooperative with CT DNA and tRNAphe; (iii) significant structural changes of CT DNA, poly(C)·poly(G), and tRNAphe occurred, with concomitant induction of optical activity in the bound achiral alkaloid molecules, while no intrinsic CD perturbation was observed with poly(A); (iv) the binding was predominantly exothermic, enthalpy-driven, and entropy-favored with CT DNA and poly(C)·poly(G), while it was entropy-driven with tRNAphe and poly(A); (v) there was a hydrophobic contribution and a comparatively large role of non-polyelectrolytic forces in the Gibbs energy changes with CT DNA, poly(C)·poly(G), and tRNAphe; and (vi) harmalol was intercalated into the CT DNA and poly(C)·poly(G) structures, as revealed by molecular docking and supported by the viscometric data. Furthermore, a competition dialysis assay showed that harmalol prefers hetero GC sequences. All these findings unequivocally point out that harmalol prefers binding to ds CT DNA, followed by ds poly(C)·poly(G), clover leaf tRNAphe, and least to ss poly(A). The results highlight the importance of structural elements in these natural beta-carboline alkaloids in stabilizing DNA and RNA of various motifs, informing the development of better nucleic acid based therapeutic agents.
Keywords: calorimetry, docking, DNA/RNA-alkaloid interaction, harmalol, spectroscopy
Procedia PDF Downloads 228
2553 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is finetuned for one epoch with a batch size of one, attempting to create a scenario similar to human memorability experiments where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. 
The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate strong correlations between both the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacity are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error achieved relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory. Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
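The analysis pipeline described above can be sketched in miniature. The following is a toy illustration, not the study's actual VGG-based model: a fixed random linear encoder/decoder stands in for the autoencoder, and the images and memorability scores are synthetic. It shows the three quantities the abstract correlates: per-image reconstruction error, nearest-neighbor distinctiveness in latent space, and their Pearson correlation with memorability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): 100 "images" as flat vectors, a fixed linear
# encoder/decoder in place of the VGG-based autoencoder, synthetic scores.
images = rng.normal(size=(100, 64))
scores = rng.uniform(0.4, 1.0, size=100)     # hypothetical memorability scores
W_enc = rng.normal(size=(64, 16)) / 8.0      # encoder weights (stand-in)
W_dec = rng.normal(size=(16, 64)) / 4.0      # decoder weights (stand-in)

latents = images @ W_enc                     # latent representations
recons = latents @ W_dec                     # reconstructions

# Reconstruction error: per-image mean squared error.
rec_err = ((images - recons) ** 2).mean(axis=1)

# Distinctiveness: Euclidean distance to the nearest other latent vector.
d = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
distinctiveness = d.min(axis=1)

# Pearson correlation of each measure with the memorability scores.
r_err = np.corrcoef(rec_err, scores)[0, 1]
r_dist = np.corrcoef(distinctiveness, scores)[0, 1]
```

With real data, `rec_err` and `distinctiveness` would come from the fine-tuned autoencoder and `scores` from the MemCat annotations.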
Procedia PDF Downloads 91
2552 Creating a Child Friendly Environment as a Curriculum Model for Early Years Teaching
Authors: Undiyaundeye Florence Atube, Ugar Innocent A.
Abstract:
Young children are active learners who use all their senses to build concepts and ideas from their experiences. The process of learning, together with its content and outcomes, is vital for young children, and they need time to explore and to be satisfied with what is learnt. Of all levels of education, early childhood education is considered the most critical for social, emotional, cognitive and physical development. For this reason, teachers of the early years need to play a significant role in the teaching and learning process through the provision of a friendly environment in the school. A case study approach was used in this study. Information was gathered through various methods such as class observation, field notes, document analysis, group processes, and semi-structured interviews. The group-process participants and interviewees were drawn from stakeholders such as parents, students, teachers, and head teachers from public schools to allow a broad and comprehensive analysis; informal interaction with different stakeholders and self-reflection were used to clarify aspects of the varying issues and findings. The teachers' role in developing a child-friendly environment in a personal capacity was found to improve pupils' learning ability. Prior learning experiences and pedagogical content knowledge played a vital role in engaging teachers in developing their thinking and teaching practice. Children can be helped to develop independent self-control and self-reliance through careful planning and development of the child's experience, with sensitive and appropriate interaction by the educator to propel their eagerness to learn through the provision of a friendly environment. Keywords: child friendly environment, early childhood, education and development, teaching, learning and the curriculum
Procedia PDF Downloads 374
2551 Cosmic Muon Tomography at the Wylfa Reactor Site Using an Anti-Neutrino Detector
Authors: Ronald Collins, Jonathon Coleman, Joel Dasari, George Holt, Carl Metelko, Matthew Murdoch, Alexander Morgan, Yan-Jie Schnellbach, Robert Mills, Gareth Edwards, Alexander Roberts
Abstract:
At the Wylfa Magnox Power Plant between 2014 and 2016, the VIDARR prototype anti-neutrino detector was deployed. It comprises extruded plastic scintillating bars measuring 4 cm × 1 cm × 152 cm and utilises wavelength-shifting fibres (WLS) and multi-pixel photon counters (MPPCs) to detect and quantify radiation. During deployment, it took cosmic muon data in accidental coincidence with the anti-neutrino measurements, with the power plant site buildings obscuring the muon sky. Cosmic muons have a significantly higher probability of being attenuated and/or absorbed by denser objects, so one-sided cosmic muon tomography was utilised to image the reactor site buildings. In order to obtain clear building outlines, a control data set was taken at the University of Liverpool from 2016 to 2018 with minimal occlusion of the cosmic muon flux by dense objects. By taking the ratio of these two data sets and using GEANT4 simulations, it is possible to perform a one-sided cosmic muon tomography analysis. This analysis can be used to discern specific buildings, building heights, and features at the Wylfa reactor site, including the reactor core and its shielding, using roughly 3 hours of cosmic-ray detector live time. This result demonstrates the feasibility of using cosmic muon analysis to determine a segmented detector's location with respect to surrounding buildings, assisted by aerial photography or satellite imagery. Keywords: anti-neutrino, GEANT4, muon, tomography, occlusion
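The data-set ratio at the heart of this analysis can be illustrated with a minimal sketch. The fluxes below are toy Poisson counts, not real VIDARR data: angular bins where the site-to-control transmission ratio drops well below unity mark directions occluded by dense structures.

```python
import numpy as np

rng = np.random.default_rng(1)

# Angular binning in (azimuth, zenith); counts are Poisson samples.
# Assumption: a uniform toy flux of ~1000 muons per bin for the control set.
open_sky = rng.poisson(1000, size=(18, 9)).astype(float)  # control data set

# Site data: same flux, but a "building" absorbs ~40% of muons in some bins.
site_expected = open_sky.copy()
site_expected[4:8, 2:5] *= 0.6
site = rng.poisson(site_expected).astype(float)

# Transmission ratio: site counts over control counts, bin by bin.
transmission = site / open_sky

# Bins with significantly reduced transmission indicate dense structures.
occluded = transmission < 0.8
```

In the real analysis the control histogram came from the Liverpool deployment and GEANT4 simulations supplied the expected attenuation for candidate building geometries.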
Procedia PDF Downloads 186
2550 Metagenomics-Based Molecular Epidemiology of Viral Diseases
Authors: Vyacheslav Furtak, Merja Roivainen, Olga Mirochnichenko, Majid Laassri, Bella Bidzhieva, Tatiana Zagorodnyaya, Vladimir Chizhikov, Konstantin Chumakov
Abstract:
Molecular epidemiology and environmental surveillance are parts of a rational strategy to control infectious diseases. They have been widely used in the worldwide campaign to eradicate poliomyelitis, which otherwise would be complicated by the inability to rapidly respond to outbreaks and determine sources of the infection. The conventional scheme involves isolation of viruses from patients and the environment, followed by their identification by nucleotide sequence analysis to determine phylogenetic relationships. This is a tedious and time-consuming process that yields definitive results only when it may be too late to implement countermeasures. Because of the difficulty of high-throughput full-genome sequencing, most such studies are conducted by sequencing only capsid genes or parts of them. Therefore, important information about the contribution of other parts of the genome, and of inter- and intra-species recombination, to viral evolution is not captured. Here we propose a new approach based on rapid concentration of sewage samples with tangential flow filtration, followed by deep sequencing and reconstruction of the nucleotide sequences of viruses present in the samples. The entire nucleic acid content of each sample is sequenced, thus preserving in digital format the complete spectrum of viruses. A set of rapid algorithms was developed to separate deep-sequencing reads into discrete populations corresponding to each virus and assemble them into full-length consensus contigs, as well as to generate a complete profile of sequence heterogeneities in each of them. This provides an effective approach to study the molecular epidemiology and evolution of natural viral populations. Keywords: poliovirus, eradication, environmental surveillance, laboratory diagnosis
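The two outputs named above, a consensus sequence and a per-position heterogeneity profile, can be sketched for the simplest possible case. This toy example assumes equal-length, already-aligned reads (real pipelines must first bin and align reads, which this sketch omits) and takes a majority vote per column.

```python
from collections import Counter

def consensus_and_heterogeneity(reads):
    """Majority-vote consensus over equal-length aligned reads, plus the
    per-position fraction of reads disagreeing with the consensus base."""
    length = len(reads[0])
    consensus, het = [], []
    for i in range(length):
        column = [r[i] for r in reads]
        base, count = Counter(column).most_common(1)[0]
        consensus.append(base)
        het.append(1 - count / len(column))
    return "".join(consensus), het

# Four toy aligned reads; one read carries a variant at position 3.
reads = ["ACGTAC", "ACGTAC", "ACGAAC", "ACGTAC"]
seq, het = consensus_and_heterogeneity(reads)
# seq == "ACGTAC"; het[3] == 0.25 (one of four reads disagrees)
```

The heterogeneity profile is what lets such an approach flag mixed viral populations in a single sewage sample rather than reporting only a single dominant sequence.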
Procedia PDF Downloads 281
2549 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas
Authors: Sahithi Yarlagadda
Abstract:
The design of an antenna is constrained by mathematical and geometrical parameters. Although there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried that cannot be fitted into predefined computational methods. Antenna design and optimization are well suited to an evolutionary algorithmic approach, since the antenna parameter weights depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered between iterations, but the antenna parameters and geometries are too varied to fit into a single function. So, weight coefficients are obtained for all possible antenna electrical parameters and geometries, and their variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to obtaining the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters such as gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain the maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities incurred during simulations. HFSS is chosen for simulations and results. 
MATLAB is used to generate the computations and combinations and to log the data. MATLAB is also used to apply machine learning algorithms and to plot the data in order to design the algorithm. Since the number of combinations is too large to test manually, the HFSS API is used to call HFSS functions from MATLAB itself, and the MATLAB Parallel Computing Toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software such as HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as the slot line characteristic impedance, stripline impedance, slot line width, flare aperture size, and dielectric constant; K-means clustering and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying an evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach for automated antenna optimization for the Vivaldi antenna. Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm
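The selection, recombination, and mutation loop described above can be sketched generically. This is a minimal sketch, not the paper's MATLAB/HFSS implementation: the fitness function below is a hypothetical smooth surrogate for simulated antenna gain (peaking at an assumed slot width of 0.5 and aperture of 2.0), standing in for an HFSS evaluation.

```python
import random

random.seed(42)

def fitness(genome):
    # Hypothetical surrogate for simulated antenna gain; in the real workflow
    # this would be an HFSS simulation called via its API.
    slot_width, aperture = genome
    return -((slot_width - 0.5) ** 2 + (aperture - 2.0) ** 2)

def evolve(pop_size=30, generations=40):
    # Random initial candidates within assumed geometric bounds.
    pop = [(random.uniform(0, 1), random.uniform(0, 4)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # recombination
            children.append((mid[0] + random.gauss(0, 0.05),  # mutation
                             mid[1] + random.gauss(0, 0.05)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()   # converges toward the surrogate optimum (0.5, 2.0)
```

Because each fitness evaluation is an expensive electromagnetic simulation in practice, this is exactly where the paper's parallel simulation runs and logged surrogate datasets pay off.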
Procedia PDF Downloads 110
2548 Application of Industrial Ergonomics in Vehicle Service System Design
Authors: Zhao Yu, Zhi-Nan Zhang
Abstract:
More and more interactive devices are used in transportation service systems. Mobile phones, on-board computers, and head-up displays (HUDs) can all serve as tools of the in-car service system. People can access smart systems with different terminals such as mobile phones, computers, pads, and even their cars and watches. Different forms of terminal provide different qualities of interaction through their various human-computer interaction modes. New interactive devices therefore require good ergonomics design at each stage of the design process. Drawing on the theory of human factors and ergonomics, this paper compared three types of interactive device across four driving tasks. Forty-eight drivers were chosen to use the three interactive devices (mobile phones, on-board computers, and HUDs) in a simulated driving process. The subjects evaluated ergonomics performance and subjective workload after the process and were encouraged to offer suggestions for improving the interactive devices. The results show that different interactive devices have different advantages in driving tasks, especially in non-driving tasks such as information and entertainment. Compared with the mobile phone and on-board computer groups, the HUD group had shorter response times in most tasks. The slow-up and emergency-braking tasks were performed less accurately than in the control group, which may be because the haptic feedback in these two tasks is harder to distinguish than visual information. Simulated driving is also helpful in improving the design of in-vehicle interactive devices. The paper summarizes the ergonomics characteristics of the three in-vehicle interactive devices. 
The research thus provides a reference for the future design of in-vehicle interactive devices through an ergonomic approach, ensuring a good interaction relationship between the driver and the in-vehicle service system. Keywords: human factors, industrial ergonomics, transportation system, usability, vehicle user interface
Procedia PDF Downloads 139
2547 Numerical Investigation of the Influence on Buckling Behaviour Due to Different Launching Bearings
Authors: Nadine Maier, Martin Mensinger, Enea Tallushi
Abstract:
In general, two types of launching bearings are used today in the construction of large steel and steel-concrete composite bridges: sliding rockers and systems with hydraulic bearings. The advantages and disadvantages of the respective systems are under discussion. During incremental launching, the center of the webs of the superstructure is not perfectly in line with the center of the launching bearings due to unavoidable tolerances, which may influence the buckling behavior of the web plates. These imperfections are not considered in the current design against plate buckling according to DIN EN 1993-1-5. It is therefore investigated whether the design rules have to take into account eccentricities which occur during incremental launching, and whether this depends on the respective launching bearing. To this end, large-scale buckling tests were carried out at the Technical University of Munich on longitudinally stiffened plates under biaxial stresses with the two different types of launching bearings and eccentric load introduction. Based on the experimental results, a numerical model was validated. We are currently evaluating different parameters for both types of launching bearings, such as the load introduction length, load eccentricity, the distance between longitudinal stiffeners, the position of the rotation point of the spherical bearing used within the hydraulic bearings, web and flange thickness, and imperfections. The imperfection depends on the geometry of the buckling field and on whether local or global buckling occurs. This, and also the size of the meshing, is taken into account in the numerical calculations of the parametric study. As a geometric imperfection, the scaled first buckling mode is applied. A bilinear material curve is used, so a GMNIA analysis is performed to determine the load capacity. 
Stresses and displacements are evaluated in different directions, and specific stress ratios are determined at the critical points of the plate at the converged load step. To evaluate the introduction of the transverse load, the transverse stress concentration is plotted on a defined longitudinal section of the web. In the same way, the rotation of the flange is evaluated in order to show the influence of the different degrees of freedom of the launching bearings under eccentric load introduction and to allow an assessment of the case relevant in practice. The input and output are automated and depend on the given parameters; thus we are able to adapt our model to different geometric dimensions and load conditions. The programming is done with the help of APDL and a Python code, which allows us to evaluate and compare more parameters faster while avoiding input and output errors. It is therefore possible to evaluate a large spectrum of parameters in a short time, which allows a practical evaluation of the different parameters for buckling behavior. This paper presents the results of the tests as well as the validation and parameterization of the numerical model, and shows the first influences on buckling behavior under eccentric and multi-axial load introduction. Keywords: buckling behavior, eccentric load introduction, incremental launching, large scale buckling tests, multi axial stress states, parametric numerical modelling
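The automated parametric study described above follows a common pattern: enumerate all parameter combinations, drive one analysis per combination, and log the result. The following is a minimal Python sketch of such a harness; the parameter names and values are illustrative assumptions, and `run_case` is a dummy stand-in for writing an APDL input file and running the GMNIA analysis.

```python
from itertools import product

# Illustrative parameter grid (assumed values, not the study's actual ranges).
params = {
    "load_eccentricity_mm": [0, 5, 10],
    "stiffener_spacing_mm": [400, 600],
    "web_thickness_mm": [10, 12],
}

def run_case(case):
    """Stand-in for generating APDL input and running the solver; returns a
    dummy buckling load factor so the harness is runnable as-is."""
    return 1.0 - 0.01 * case["load_eccentricity_mm"]

results = []
for values in product(*params.values()):
    case = dict(zip(params.keys(), values))      # one parameter combination
    case["load_factor"] = run_case(case)         # run and log the analysis
    results.append(case)

# 3 eccentricities x 2 spacings x 2 thicknesses = 12 logged cases
```

Centralizing the grid definition like this is what makes it cheap to add a new parameter or range without touching the solver-side scripting.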
Procedia PDF Downloads 107
2546 A Knowledge-Based Development of Risk Management Approaches for Construction Projects
Authors: Masoud Ghahvechi Pour
Abstract:
Risk management is a systematic and regular process of identifying, analyzing, and responding to risks throughout the project's life cycle in order to achieve the optimal level of elimination, reduction, or control of risk. The purpose of project risk management is to increase the probability and effect of positive events and to reduce the probability and effect of unpleasant events on the project. Risk management is one of the most fundamental parts of project management; unmanaged or untransferred risks can be a primary factor in the failure of a project. Effective risk management does not mean simply avoiding risk, which is apparently the cheapest option. The main problem with that option is economic, because what is potentially profitable is by definition risky, while what poses no risk is economically uninteresting and brings no tangible benefits. Therefore, effective risk management for the implemented project means finding a middle ground. On the one hand, it protects against risk from the negative direction by means of accurate identification and classification of risk, which leads to a comprehensive analysis. On the other hand, management using all mathematical and analytical tools should be based on checking the maximum benefit of these decisions. A detailed analysis, taking into account all aspects of the company, including stakeholder analysis, allows us to add to effective risk management what will become tangible benefits for the project in the future. Identifying project risk is based on determining which types of risk may affect the project, and also refers to specific parameters and to estimating the probability of their occurrence in the project. 
These conditions can be divided into three groups: certainty, uncertainty, and risk, which in turn support three investor attitudes: risk preference, risk neutrality, and risk aversion, together with their measurement. The result of risk identification and project analysis is a list of events indicating the cause and probability of each event, together with a final assessment of its impact on the environment. Keywords: risk, management, knowledge, risk management
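The final step described above, a list of events with probabilities and impact assessments, is often operationalized as a scored risk register. A minimal sketch, with entirely hypothetical construction risks and probability-times-impact scoring as the assumed ranking rule:

```python
# Hypothetical risk register: probability p in [0, 1], impact on a 1-5 scale.
risks = [
    {"event": "ground conditions differ from survey", "p": 0.3, "impact": 4},
    {"event": "key subcontractor insolvency",         "p": 0.1, "impact": 5},
    {"event": "material price escalation",            "p": 0.5, "impact": 2},
]

# Expected-impact score for each identified event.
for r in risks:
    r["score"] = r["p"] * r["impact"]

# Rank risks so response planning targets the largest expected impact first.
ranked = sorted(risks, key=lambda r: r["score"], reverse=True)
```

A real register would also record the cause of each event and the chosen response (avoid, transfer, mitigate, or accept), as the abstract's process implies.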
Procedia PDF Downloads 66
2545 Biocontrol Effectiveness of Indigenous Trichoderma Species against Meloidogyne javanica and Fusarium oxysporum f. sp. radicis lycopersici on Tomato
Authors: Hajji Lobna, Chattaoui Mayssa, Regaieg Hajer, M'Hamdi-Boughalleb Naima, Rhouma Ali, Horrigue-Raouani Najet
Abstract:
In this study, three local isolates of Trichoderma (Tr1: T. viride, Tr2: T. harzianum and Tr3: T. asperellum) were isolated and evaluated for their biocontrol effectiveness under in vitro conditions and in the greenhouse. The in vitro bioassay revealed biocontrol potential against Fusarium oxysporum f. sp. radicis-lycopersici (FORL) and Meloidogyne javanica (root-knot nematode, RKN) separately. All Trichoderma species exhibited biocontrol performance, and Tr1 (T. viride) was the most efficient: growth inhibition of FORL reached 75.5% with Tr1, and the parasitism rate on the root-knot nematode was 60% for juveniles and 75% for eggs with the same isolate. Pot experiments showed that Tr1 and Tr2, compared to the chemical treatment, enhanced plant growth and exhibited better antagonism against the root-knot nematode and root-rot fungus, whether applied separately or combined. All Trichoderma isolates revealed bioprotection potential against FORL. When the pathogenic fungus was inoculated alone, the Fusarium wilt index and vascular browning rate were reduced significantly with Tr1 (0.91 and 2.38%) and Tr2 (1.5 and 5.5%), respectively. In the case of combined infection with Fusarium and the nematode, the same isolates Tr1 and Tr2 decreased the Fusarium wilt index to 1.1 and 0.83 and reduced the vascular browning rate to 6.5% and 6%, respectively. Similarly, isolates Tr1 and Tr2 caused the maximum inhibition of nematode multiplication: the multiplication rate declined to 4% with both isolates, whether tomato was infected by the nematode separately or concomitantly with Fusarium. The chemical treatment showed only moderate activity against Meloidogyne javanica and FORL, alone or combined. Keywords: Trichoderma spp., Meloidogyne javanica, Fusarium oxysporum f. sp. radicis-lycopersici, biocontrol
Procedia PDF Downloads 278
2544 Determination of Viscosity and Degree of Hydrogenation of Liquid Organic Hydrogen Carriers by Cavity Based Permittivity Measurement
Authors: I. Wiemann, N. Weiß, E. Schlücker, M. Wensing
Abstract:
A very promising alternative to compression or cryogenic storage is the chemical storage of hydrogen in liquid organic hydrogen carriers (LOHCs). These carriers enable high energy density and at the same time allow efficient and safe storage under ambient conditions without leakage losses. Another benefit of this storage medium is the possibility of transporting it using the infrastructure already available for the transport of fossil fuels. Efficient use of LOHCs requires precise process control, which in turn requires a number of sensors to measure all relevant process parameters, for example the level of hydrogen loading of the carrier. The degree of loading determines the energy content of the storage carrier and simultaneously reflects the modification in the chemical structure of the carrier molecules. This variation can be detected in different physical properties such as permittivity, viscosity, or density; each degree of loading corresponds, for instance, to a different viscosity value. Conventional approaches currently use invasive viscosity measurements or near-line measurements to obtain quantitative information. This study investigates the permittivity changes resulting from changes in the degree of hydrogenation (chemical structure) and in temperature. Based on calibration measurements, the degree of loading and the temperature of the LOHC can thus be determined by comparatively simple permittivity measurements in a cavity resonator; subsequently, viscosity and density can be calculated. An experimental setup with a heating device and a flow test bench was designed. By varying the temperature in the range of 293.15 K to 393.15 K and the flow velocity up to 140 mm/s, corresponding changes in the resonance frequency were determined in the hundredths-of-a-GHz range. This approach allows inline process monitoring of the hydrogenation of the liquid organic hydrogen carrier (LOHC). Keywords: hydrogen loading, LOHC, measurement, permittivity, viscosity
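Determining the degree of loading from a calibration curve, as described above, amounts to inverting a measured frequency-versus-loading relation. A minimal sketch at a single fixed temperature, with entirely assumed calibration values (real calibrations would span the temperature range and likely a 2-D grid):

```python
import numpy as np

# Hypothetical calibration at one temperature: degree of hydrogenation
# (0 = unloaded carrier, 1 = fully loaded) vs. cavity resonance frequency.
loading_cal = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
freq_cal = np.array([2.4500, 2.4527, 2.4551, 2.4578, 2.4600])  # GHz (assumed)

def degree_of_loading(f_measured_ghz):
    """Invert the calibration curve. The assumed frequencies rise
    monotonically with loading, so 1-D interpolation recovers the loading."""
    return float(np.interp(f_measured_ghz, freq_cal, loading_cal))

x = degree_of_loading(2.4551)   # recovers 0.5 at this calibration point
```

From the recovered loading and temperature, viscosity and density would then be looked up from their own calibration data, enabling the inline monitoring the abstract describes.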
Procedia PDF Downloads 81
2543 Reconstruction of the 'Bakla' as an Identity
Authors: Oscar H. Malaco Jr.
Abstract:
Homosexuality has been adopted as the universal concept that defines deviations from the heteronormative parameters of society. Sexual orientation and gender identities have been used in a concretely separate manner, in the same way as the dynamics between man and woman, male and female, gender and sex operate. These terms are all products of human beings' utilization of language. Language has proven its power to define and determine the status and the categories of subjects in society. This tool developed by human beings provides a definition of their own specific cultural community and of their individual selves, who either claim or oppugn their space in the social hierarchy. The label 'bakla' is read as an identity that is a reaction to the spectral disposition of gender and sexuality in Philippine society. Exposing the Filipino constructions of the bakla is the major aim of this study. Through the methods of Sikolohiyang Pilipino (Filipino Psychology), namely Pagtatanung-tanong (asking questions) and Pakikipagkuwentuhan (story-telling), the utterances of the bakla were gathered and analyzed rhetorically and ideologically. Furthermore, the Dramatistic Pentad of Kenneth Burke was adopted as a methodology and also utilized as a perspective of analysis. The results suggest that the bakla as an identity carries the hurdles of class. The performativity of the bakla is shown to be a cycle propelled by their guilt at being identified and recognized as subjects in a society where heteronormative power treats their gender and sexual expressions as aberrational relative to binary gender and sexual roles. The labels, hence, are potent structures that control the disposition of the bakla in society, reflecting an aspect of the disposition of Filipino identities. 
After all, performing kabaklaan in Philippine society is an interplay between resistance and conformity to hegemonic dominions, a result of imperial attempts to universalize the concept of homosexuality between and among distant cultural communities. Keywords: gender identity, sexual orientation, rhetoric, performativity
Procedia PDF Downloads 444
2542 Numerical Studies on Thrust Vectoring Using Shock-Induced Self Impinging Secondary Jets
Authors: S. Vignesh, N. Vishnu, S. Vigneshwaran, M. Vishnu Anand, Dinesh Kumar Babu, V. R. Sanal Kumar
Abstract:
The study of the mixing between the primary flow and self-impinging secondary jets is important from both the fundamental research and the application point of view. Real industrial configurations are more complex than the simple shear layers present in idealized numerical thrust-vectoring models, due to the presence of combustion, swirl, and confinement. Predicting the flow features of self-impinging secondary jets in a supersonic primary flow is complex owing to the large number of parameters involved. Earlier studies have highlighted several key features of self-impinging jets, but an extensive characterization of the interaction between a supersonic flow and self-impinging secondary sonic jets is still an active research topic. In this paper, numerical studies have been carried out using a validated two-dimensional standard k-omega turbulence model for the design optimization of a thrust vector control (TVC) system using shock-induced self-impinging secondary sonic jets in non-reacting flows. Efforts have been made to examine the flow features of the TVC system with various secondary jets at different divergent locations and jet impinging angles, with the same inlet jet pressure and mass flow ratio. The results of the parametric studies reveal that, in addition to the primary-to-secondary mass flow ratio, the characteristics of the self-impinging secondary jets have a bearing on efficient thrust vectoring. We conclude that self-impinging secondary jet nozzles are better than a single-jet nozzle with the same secondary mass flow rate, owing to the fact that fixing the self-impinging secondary jet nozzles at a proper jet angle could facilitate better thrust vectoring for any supersonic aerospace vehicle. Keywords: fluidic thrust vectoring, rocket steering, supersonic to sonic jet interaction, TVC in aerospace vehicles
Procedia PDF Downloads 589
2541 Prebiotics and Essential Oils-Enriched Diet Can Increase the Efficiency of Vaccine against Furunculosis in Rainbow Trout (Oncorhynchus Mykiss)
Authors: Niki Hayatgheib, SéGolèNe Calvez, Catherine Fournel, Lionel Pineau, Herve Pouliquen, Emmanuelle Moreau
Abstract:
Furunculosis, caused by infection with Aeromonas salmonicida subsp. salmonicida, is a well-known disease found principally in salmonid aquaculture. Vaccination has been partly successful in preventing this disease, but outbreaks still occur. The application of functional feed additives has been found to be a promising way to improve fish health against diseases. In this study, we tested the efficacy of a diet enriched with prebiotics and plant essential oils on the immune response and disease resistance of vaccinated and non-vaccinated rainbow trout (Oncorhynchus mykiss) against furunculosis. A total of 600 fish were fed the basal diet or the supplemented diet. In the fourth week of feeding, fish were vaccinated with an autovaccine. Eight weeks later, fish were challenged with Aeromonas salmonicida subsp. salmonicida, and mortalities were recorded for 3 weeks. Lysozyme activity and antibody titer in serum were measured in the different groups. The results of this study showed that lysozyme activity and circulating antibody titer were elevated significantly in vaccinated fish fed the additive. The best growth rate and relative percentage survival (62%) were observed in fish fed the supplement, compared with 15% in the control fish. Overall, the combination of prebiotics and essential oils can be considered a potential component for enhancing vaccine efficacy against furunculosis by increasing growth performance, immune responses, and disease resistance in rainbow trout. Keywords: Aeromonas salmonicida subsp. salmonicida, aquaculture, disease resistance, fish, immune response, prebiotics-essential oils feed additive, rainbow trout, vaccination
Procedia PDF Downloads 120
2540 Effects of Climate Change and Land Use, Land Cover Change on Atmospheric Mercury
Authors: Shiliang Wu, Huanxin Zhang
Abstract:
Mercury is well known for its negative effects on wildlife, public health, and the ecosystem. Once emitted into the atmosphere, mercury can be transformed into different forms or enter the ecosystem through dry or wet deposition. Some fraction of this mercury will be re-emitted back into the atmosphere and be subject to the same cycle. In addition, the relatively long lifetime of elemental mercury in the atmosphere enables it to be transported long distances from source regions to receptor regions. Global change, including climate change and land use/land cover change, poses significant challenges for mercury pollution control beyond the efforts to regulate anthropogenic mercury emissions. In this study, we use a global chemical transport model (GEOS-Chem) to examine the potential impacts of changes in climate and land use/land cover on the global budget of mercury as well as on its atmospheric transport, chemical transformation, and deposition. We carry out a suite of sensitivity model simulations to separate the impacts on atmospheric mercury associated with changes in climate from those associated with changes in land use/land cover. Both are found to have significant impacts on the global mercury budget, but through different pathways. Land use/land cover change primarily increases mercury dry deposition over continental regions in the northern mid-latitudes and in central Africa. Climate change enhances the mobilization of mercury from the soil and ocean reservoirs to the atmosphere. Dry deposition is also enhanced over most continental areas, while the change in future precipitation dominates the change in mercury wet deposition. We find that 2000-2050 climate change could increase the global atmospheric burden of mercury by 5% and mercury deposition by up to 40% in some regions. Changes in land use and land cover also increase mercury deposition over some continental regions, by up to 40%. 
The change in the lifetime of atmospheric mercury has important implications for the long-range transport of mercury. Our case study shows that changes in climate and in land use and land cover could significantly affect the source-receptor relationships for mercury. Keywords: mercury, toxic pollutant, atmospheric transport, deposition, climate change
Procedia PDF Downloads 489
2539 An Experimental Study on the Temperature Reduction of Exhaust Gas at a Snorkeling of Submarine
Authors: Seok-Tae Yoon, Jae-Yeong Choi, Gyu-Mok Jeon, Yong-Jin Cho, Jong-Chun Park
Abstract:
Conventional submarines obtain propulsive force from an electric propulsion system consisting of a diesel generator, battery, motor, and propeller. While submerged, the submarine runs on the electric power stored in the battery. When a certain amount of that power has been consumed, the submarine rises to near the sea surface and recharges the battery using the diesel generator. The voyage carried out while charging is called snorkeling, and the high-temperature exhaust gas from the diesel generator forms a heat distribution on the sea surface. This heat distribution can be detected by weapon systems equipped with thermal detectors and is a main cause of reduced submarine survivability. In this paper, an experimental study was carried out to establish optimal operating conditions of a submarine for reducing the infrared signature radiated from the sea surface. For this purpose, a hot-gas generating system and a round acrylic water tank with an adjustable water level were built. The control variables of the experiment were the mass flow rate, the temperature difference between the water and the hot gas in the water tank, and the water-level difference between the air outlet and the water surface. The instrumentation consisted of T-type thermocouples to measure the released air temperature at the water surface and a thermography system to measure the thermal energy distribution on the water surface. From the experimental results, we analyzed the correlation between the final released temperature at the exhaust pipe exit of a submarine and the depth of the snorkel, and present reasonable operating conditions for reducing the infrared signature of a submarine.
Keywords: experimental study, flow rate, infrared signature, snorkeling, thermography
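The correlation analysis mentioned above, between the released surface temperature and the snorkel depth, can be sketched with a plain Pearson correlation; the sample values below are invented for illustration and are not measurements from the described experiment.

```python
# Minimal Pearson-correlation sketch for the depth-vs-released-temperature
# relationship. The data points are hypothetical, not experimental values.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative data: a deeper outlet gives the exhaust more contact with
# the water, hence a lower released temperature at the surface.
depth_cm = [5, 10, 15, 20, 25]
released_temp_c = [48.1, 42.5, 38.0, 34.2, 31.0]
r = pearson_r(depth_cm, released_temp_c)  # strongly negative here
```

A strongly negative r over the real thermocouple data would quantify how much extra snorkel depth buys in infrared-signature reduction.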
Procedia PDF Downloads 352
2538 Factors Influencing the General Public Intention to Be Vaccinated: A Case of Botswana
Authors: Meng Qing Feng, Otsile Morake
Abstract:
Background: Successful implementation of COVID-19 vaccination ensures the prevention of virus infection. Postponement and refusal of vaccination, now common among the general public across the world, threaten public health. Acceptance of the COVID-19 vaccine thus appears to be a decisive factor in controlling the COVID-19 pandemic. Purpose: The objective of this study is to explore the factors influencing the public intention to be vaccinated (ITBV). Design/methodology/approach: The web-based survey included socio-demographic questions and questions related to the theory of planned behavior (TPB) and the health belief model (HBM). An online survey was administered using Google Forms to collect data from participants in Botswana. The sample included 339 participants, about half of whom were female. Data were analyzed using the Statistical Package for the Social Sciences (SPSS). Findings: The results highlight that perceived severity, perceived barriers, health motivation, and attitude have a positive and significant effect on ITBV, while perceived susceptibility, perceived benefits, subjective norms, and perceived behavioral control do not affect ITBV. Among all of the predictors, perceived barriers have the most significant influence on ITBV. Conclusion: Theoretically, this research shows that both the HBM and the TPB are effective in predicting and explaining the general public's ITBV. Practically, this study offers insights to the government and health departments for arranging and launching health awareness programs and providing better guidance on vaccination, so that doubts about vaccine confidence and the level of uncertainty can be reduced.
Keywords: COVID-19, Omicron, intention to be vaccinated, health belief model, theory of planned behavior, Botswana
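The abstract reports regression-style effects of HBM/TPB predictors on ITBV, run in SPSS. The shape of such an analysis can be sketched with a one-predictor ordinary-least-squares fit; the Likert-scale data below are invented for illustration and are not the Botswana sample.

```python
# Hypothetical mini-example of the kind of analysis described above:
# ordinary least squares of ITBV scores on a single predictor (attitude).
# All data are made up; the real study used SPSS on 339 responses.

def ols_slope_intercept(xs, ys):
    """Fit y = a + b*x by ordinary least squares; return (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Invented 5-point Likert responses for attitude and ITBV.
attitude = [2, 3, 3, 4, 4, 5, 5]
itbv = [2, 2, 3, 4, 4, 4, 5]
a, b = ols_slope_intercept(attitude, itbv)
# A positive slope b mirrors the reported positive effect of attitude on ITBV.
```

A full replication would regress ITBV on all eight HBM/TPB predictors jointly and compare standardized coefficients, which is how "perceived barriers have the most significant influence" would be established.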
Procedia PDF Downloads 94