Search results for: optimal tracking
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3921

651 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics

Authors: Titus A. Beu

Abstract:

Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out for its high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints on efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In a second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In a third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects which are crucial for the realistic modeling of DNA-PEI polyplexes, such as options for treating electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism and provide time scales for DNA polyplex formation as a function of PEI chain size and protonation pattern. The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles, rather than on changes in DNA-strand curvature. The gained insights are expected to be of significant help in designing effective gene-delivery applications.
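The Boltzmann inversion step mentioned above maps a bead-based probability distribution from the AA trajectories onto a CG potential via U(r) = -kBT ln p(r). The sketch below illustrates the idea on a hypothetical, hand-picked bond-length distribution; the numbers are not from the study.

```python
import math

def boltzmann_invert(distribution, kBT=2.494):
    """Map a normalized bead-bead distance distribution p(r), e.g. from AA
    trajectories, to a CG potential U(r) = -kBT * ln p(r).
    kBT defaults to ~2.494 kJ/mol (300 K)."""
    return {r: -kBT * math.log(p) for r, p in distribution.items() if p > 0}

# Hypothetical (not simulated) bond-length distribution for one bead pair:
p = {0.30: 0.05, 0.35: 0.60, 0.40: 0.30, 0.45: 0.05}
U = boltzmann_invert(p)
r_min = min(U, key=U.get)  # the most probable distance minimizes the potential
```

In practice the inverted potential is tabulated, smoothed, and iteratively refined until the CG simulation reproduces the AA distributions.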

Keywords: DNA condensation, gene delivery, polyethyleneimine, molecular dynamics.

Procedia PDF Downloads 119
650 Reinforcement Learning for Agile CNC Manufacturing: Optimizing Configurations and Sequencing

Authors: Huan Ting Liao

Abstract:

In a typical manufacturing environment, computer numerical control (CNC) machining is essential for automating production through precise computer-controlled tool operations, significantly enhancing efficiency and ensuring consistent product quality. However, traditional CNC production lines often rely on manual loading and unloading, limiting operational efficiency and scalability. Although automated loading systems have been developed, they frequently lack sufficient intelligence and configuration efficiency, requiring extensive setup adjustments for different products and impacting overall productivity. This research addresses the job shop scheduling problem (JSSP) in CNC machining environments, aiming to minimize total completion time (makespan) and maximize CNC machine utilization. We propose a novel approach using reinforcement learning (RL), specifically the Q-learning algorithm, to optimize scheduling decisions. The study simulates the JSSP, incorporating robotic arm operations, machine processing times, and work order demand allocation to determine optimal processing sequences. The Q-learning algorithm enhances machine utilization by dynamically balancing workloads across CNC machines, adapting to varying job demands and machine states. This approach offers robust solutions for complex manufacturing environments by automating decision-making processes for job assignments. Additionally, we evaluate various layout configurations to identify the most efficient setup. By integrating RL-based scheduling optimization with layout analysis, this research aims to provide a comprehensive solution for improving manufacturing efficiency and productivity in CNC-based job shops. The proposed method's adaptability and automation potential promise significant advancements in tackling dynamic manufacturing challenges.
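As a rough illustration of how Q-learning applies to such assignment decisions, the toy sketch below learns to place three jobs on two machines so that the final makespan is minimized. The state encoding, reward, and hyperparameters are invented for the example and are not the paper's formulation.

```python
import random

random.seed(0)

# Toy version of the scheduling idea: assign three jobs (processing times
# below) one at a time to two CNC machines; the reward favors a small final
# makespan. All parameters are invented for illustration.
jobs = [4, 3, 2]
alpha, gamma, eps, episodes = 0.5, 0.9, 0.2, 2000
Q = {}  # Q[((job_index, machine_loads), action)] -> value

def get_q(state, a):
    return Q.get((state, a), 0.0)

for _ in range(episodes):
    loads = (0, 0)
    for i, t in enumerate(jobs):
        state = (i, loads)
        if random.random() < eps:
            a = random.choice([0, 1])                      # explore
        else:
            a = max([0, 1], key=lambda x: get_q(state, x))  # exploit
        new_loads = tuple(l + t if m == a else l for m, l in enumerate(loads))
        done = i == len(jobs) - 1
        reward = -max(new_loads) if done else 0.0  # negative makespan at the end
        best_next = 0.0 if done else max(get_q((i + 1, new_loads), x) for x in (0, 1))
        Q[(state, a)] = get_q(state, a) + alpha * (reward + gamma * best_next - get_q(state, a))
        loads = new_loads

# Greedy rollout of the learned policy:
loads = (0, 0)
for i, t in enumerate(jobs):
    a = max([0, 1], key=lambda x: get_q((i, loads), x))
    loads = tuple(l + t if m == a else l for m, l in enumerate(loads))
makespan = max(loads)  # best possible split of {4, 3, 2} over two machines is 5
```

A full JSSP formulation would extend the state with robotic-arm availability and work-order demand, but the update rule stays the same.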

Keywords: job shop scheduling problem, reinforcement learning, operations sequence, layout optimization, q-learning

Procedia PDF Downloads 24
649 Examining Statistical Monitoring Approach against Traditional Monitoring Techniques in Detecting Data Anomalies during Conduct of Clinical Trials

Authors: Sheikh Omar Sillah

Abstract:

Introduction: Monitoring is an important means of ensuring the smooth implementation and quality of clinical trials. For many years, traditional site monitoring approaches have been critical in detecting data errors but not optimal in identifying fabricated and implanted data, as well as non-random data distributions, that may significantly invalidate study results. The objective of this paper was to provide recommendations, based on best statistical monitoring practices, for detecting data-integrity issues suggestive of fabrication and implantation early in study conduct, to allow the implementation of meaningful corrective and preventive actions. Methodology: Electronic bibliographic databases (Medline, Embase, PubMed, Scopus, and Web of Science) were used for the literature search, and both qualitative and quantitative studies were sought. Search results were uploaded into the EPPI-Reviewer software, and only publications written in English from 2012 onward were included in the review. Gray literature not considered to present reproducible methods was excluded. Results: A total of 18 peer-reviewed publications were included in the review. The publications demonstrated that traditional site monitoring techniques are not efficient in detecting data anomalies. By specifying project-specific parameters such as laboratory reference range values, visit schedules, etc., together with appropriate interactive data monitoring, statistical monitoring can offer study teams early signals of data anomalies. The review further revealed that statistical monitoring is useful for identifying unusual data patterns that could compromise data integrity or study participants' safety. However, subjective measures may not be good candidates for statistical monitoring.
Conclusion: The statistical monitoring approach requires a combination of education, training, and experience sufficient to implement its principles in detecting data anomalies for the statistical aspects of a clinical trial.
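One illustrative statistical-monitoring check, hypothetical and not drawn from the reviewed publications, flags sites whose data are implausibly uniform, since fabricated or implanted values often show too little variability relative to the pooled data:

```python
import statistics

def flag_underdispersed_sites(site_values, ratio_threshold=0.25):
    """Flag sites whose within-site standard deviation falls far below the
    pooled SD -- one possible signal of implanted data. The threshold is an
    illustrative choice, not a validated cut-off."""
    pooled = [v for vals in site_values.values() for v in vals]
    pooled_sd = statistics.stdev(pooled)
    return [site for site, vals in site_values.items()
            if statistics.stdev(vals) < ratio_threshold * pooled_sd]

# Invented systolic-pressure-like readings from three hypothetical sites:
sites = {
    "site_A": [118, 131, 125, 140, 122, 135],
    "site_B": [124, 138, 119, 133, 128, 141],
    "site_C": [128, 128, 129, 128, 128, 129],  # suspiciously uniform
}
flagged = flag_underdispersed_sites(sites)
```

In a real trial such a signal would only prompt targeted review, not a conclusion of fabrication.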

Keywords: statistical monitoring, data anomalies, clinical trials, traditional monitoring

Procedia PDF Downloads 75
648 Optical-Based Lane-Assist System for Rowing Boats

Authors: Stephen Tullis, M. David DiDonato, Hong Sung Park

Abstract:

Rowing boats (shells) are often steered by a small rudder operated by one of the backward-facing rowers; the attention this demands slightly decreases the power that athlete can provide. Reducing the steering distraction would therefore increase the overall boat speed. Races are straight 2000 m courses with each boat in a 13.5 m wide lane marked by small (~15 cm), widely spaced (~10 m) buoys, and the boat trajectory is affected by both cross-currents and winds. An optical buoy recognition and tracking system has been developed that provides the boat’s location and orientation with respect to the lane edges. This information is either presented to the steering athlete as a simple overlay on a video display, or fed to a simplified autopilot system that gives steering directions to the athlete or directly controls the rudder. The system is then effectively a “lane-assist” device, but with small, widely spaced lane markers viewed from a very shallow angle due to constraints on camera height. The image is captured with a lightweight 1080p webcam, and most of the image analysis is done in OpenCV. The RGB image is converted to grayscale using the difference of the red and blue channels, which provides good contrast between the red/yellow buoys and the water, sky, and land background, as well as white reflections and noise. Buoy detection is done with thresholding within a tight mask applied to the image. Robust linear regression using Tukey’s biweight estimator of the previously detected buoy locations is used to develop the mask; this avoids the false detection of noise such as waves (reflections) and, in particular, buoys in other lanes. The robust regression also provides the current lane edges in the camera frame, which are used to calculate the displacement of the boat from the lane centre (lane location) and its yaw angle.
The intersection of the detected lane edges provides a lane vanishing point, and the yaw angle can be calculated simply from the displacement of this vanishing point from the camera axis and the image plane distance. Lane location is simply based on the lateral displacement of the vanishing point from any horizontal cut through the lane edges. The boat lane position and yaw are currently fed to what is essentially a stripped-down marine auto-pilot system. Currently, only the lane location is used, in a PID controller of a rudder actuator with integrator anti-windup to deal with saturation of the rudder angle. Low Kp and Kd values avoid unnecessarily fast returns to the lane centreline and reduce the response to noise, and limiters can be used to avoid lane departure and disqualification. Yaw is not used as a control input, as cross-winds and currents can cause a straight course with considerable yaw or crab angle. Mapping of the controller against rudder angle “overall effectiveness” has not been finalized: very large rudder angles stall and have decreased turning moments, but at less extreme angles the increased rudder drag slows the boat and upsets its balance. The full system has many features similar to automotive lane-assist systems, but the lane markers, camera positioning, control response, and noise add to the challenge.
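The robust line fit described above can be sketched as iteratively reweighted least squares with Tukey's biweight. The fragment below is a minimal pure-Python version on synthetic buoy coordinates; the tuning constant c = 4.685 is the conventional 95%-efficiency choice, and the data are invented.

```python
def tukey_biweight_fit(xs, ys, c=4.685, iters=20):
    """Robust line fit y = a + b*x by iteratively reweighted least squares
    with Tukey's biweight: points whose residual exceeds c * scale get
    weight zero, so waves and other lanes' buoys stop influencing the fit."""
    n = len(xs)
    w = [1.0] * n
    a = b = 0.0
    for _ in range(iters):
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        denom = sw * sxx - sx * sx
        if denom == 0:
            break
        b = (sw * sxy - sx * sy) / denom          # weighted least-squares slope
        a = (sy - b * sx) / sw                    # weighted intercept
        r = [y - (a + b * x) for x, y in zip(xs, ys)]
        mad = sorted(abs(ri) for ri in r)[n // 2] or 1e-9  # robust scale (MAD)
        s = mad / 0.6745
        w = [(1 - (ri / (c * s)) ** 2) ** 2 if abs(ri) < c * s else 0.0 for ri in r]
    return a, b

# Nine buoy centroids exactly on y = 0.5*x + 10, plus one wave reflection:
xs = list(range(0, 100, 10))
ys = [10, 15, 20, 25, 30, 35, 40, 45, 50, 150]  # last point is an outlier
a, b = tukey_biweight_fit(xs, ys)
```

After a few reweighting passes the outlier's weight drops to zero and the fit snaps onto the buoy line, which is exactly the behavior that rejects reflections and neighboring-lane buoys.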

Keywords: auto-pilot, lane-assist, marine, optical, rowing

Procedia PDF Downloads 132
647 Performance Analysis of Microelectromechanical Systems-Based Piezoelectric Energy Harvester

Authors: Sanket S. Jugade, Swapneel U. Naphade, Satyabodh M. Kulkarni

Abstract:

Microscale energy harvesters can be used to convert ambient mechanical vibrations to electrical energy. Such devices have great applications in low-powered electronics in remote environments, such as powering wireless sensor nodes of the Internet of Things, lighting on highways or in ships, etc. In this paper, a microelectromechanical systems (MEMS) based energy harvester has been modeled using analytical and finite element methods (FEM). The device consists of a microcantilever with a proof mass attached to its free end and a polyvinylidene fluoride (PVDF) piezoelectric thin film deposited on the surface of the microcantilever in a unimorph or bimorph configuration. For the analytical method, the energy harvester was modeled as an equivalent electrical system in SIMULINK. The finite element model was developed and analyzed using the commercial package COMSOL Multiphysics. Modal analysis was performed first to find the fundamental natural frequency and its variation with the geometrical parameters of the system. Harmonic analysis was then performed to find the input mechanical power and the output electrical voltage and power for a range of excitation frequencies and base acceleration values. The variation of output power with load resistance, PVDF film thickness, and damping values was also determined. The results from FEM were then validated against those of the analytical model. Finally, the performance of the device was optimized with respect to various electromechanical parameters. For a unimorph configuration consisting of a single-crystal silicon microcantilever of dimensions 8 mm × 2 mm × 80 µm and a proof mass of 9.32 mg, with optimal values of PVDF film thickness and load resistance of 225 µm and 20 MΩ respectively, the maximum electrical power generated for a base excitation of 0.2g at 630 Hz is 0.9 µW.
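As a back-of-envelope check of the reported operating point, the sketch below estimates the fundamental frequency of the bare silicon cantilever using the abstract's dimensions and proof mass, an assumed Young's modulus, and a standard end-loaded-beam approximation; the PVDF layer's stiffness and mass are ignored, so the estimate lands somewhat below the reported 630 Hz but in the right range.

```python
import math

# Bare-beam estimate for the cantilever from the abstract
# (8 mm x 2 mm x 80 um silicon beam, 9.32 mg tip mass).
E = 169e9         # Young's modulus of single-crystal silicon, Pa (assumed value)
rho = 2330.0      # silicon density, kg/m^3 (assumed value)
L, w, t = 8e-3, 2e-3, 80e-6
m_tip = 9.32e-6   # proof mass, kg

k = E * w * t**3 / (4 * L**3)            # tip stiffness of a cantilever, 3EI/L^3
m_eff = m_tip + 0.24 * rho * L * w * t   # effective mass (Rayleigh approximation)
f0 = math.sqrt(k / m_eff) / (2 * math.pi)  # fundamental frequency, Hz (~460 Hz)
```

The gap to 630 Hz is consistent with the stiffening contribution of the thick PVDF layer, which this single-layer sketch leaves out.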

Keywords: bimorph, energy harvester, FEM, harmonic analysis, MEMS, PVDF, unimorph

Procedia PDF Downloads 190
646 Long-Term Results of Coronary Bifurcation Stenting with Drug Eluting Stents

Authors: Piotr Muzyk, Beata Morawiec, Mariusz Opara, Andrzej Tomasik, Brygida Przywara-Chowaniec, Wojciech Jachec, Ewa Nowalany-Kozielska, Damian Kawecki

Abstract:

Background: Coronary bifurcation is one of the most complex lesions in patients with coronary artery disease. Provisional T-stenting is currently one of the recommended techniques. The aim was to assess optimal methods of treatment in the era of drug-eluting stents (DES). Methods: The registry consisted of data from 1916 patients treated with percutaneous coronary interventions (PCI) using either first- or second-generation DES. Patients with bifurcation lesions entered the analysis. Major adverse cardiac and cerebrovascular events (MACCE) were assessed at one year of follow-up and comprised death, acute myocardial infarction (AMI), repeated PCI (re-PCI) of the target vessel, and stroke. Results: Of the 1916 registry patients, 204 (11%) were diagnosed with a bifurcation lesion >50% and entered the analysis. The most commonly used technique was provisional T-stenting (141 patients, 69%). Optimization with the kissing-balloon technique was performed in 45 patients (22%). In 59 patients (29%) second-generation DES were implanted, while in 112 patients (55%) first-generation DES were used. In 33 patients (16%) both types of DES were used. Procedural success (TIMI 3 flow) was achieved in 98% of patients. At one-year follow-up, there were 39 MACCE (19%): 9 deaths, 17 AMI, 16 re-PCI, and 5 strokes. Provisional T-stenting resulted in a rate of MACCE similar to other techniques (16% vs. 5%, p=0.27) and a similar occurrence of re-PCI (6% vs. 2%, p=0.78). The post-PCI kissing-balloon technique gave outcomes equal to those in patients in whom no optimization technique was used (3% vs. 16% MACCE, p=0.39). The type of implanted DES (second- vs. first-generation) had no influence on MACCE (4% vs. 14%, respectively, p=0.12) or re-PCI (1.7% vs. 51% of patients, respectively, p=0.28). Conclusions: The treatment of bifurcation lesions with PCI represents a high-risk procedure with a high rate of MACCE. Stenting technique, optimization of PCI, and the generation of the implanted stent should be personalized for each case to balance the risk of the procedure. In this setting, operator experience might be a factor in better outcomes, which should be further investigated.

Keywords: coronary bifurcation, drug eluting stents, long-term follow-up, percutaneous coronary interventions

Procedia PDF Downloads 204
645 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks

Authors: Ahmed Abdullah Ahmed

Abstract:

The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination, and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although great efforts have been made in previous studies to devise various methods, their performance, especially in terms of accuracy, falls short, and room for improvement is still wide open. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most classical codebook-based approaches, which segment the writing into graphemes, this study fragments particular areas of the writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to split the beginning and ending zones into small fragments. Similar fragments of beginning strokes are grouped together to create the beginning cluster, and similarly, the ending strokes are grouped to create the ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of each group of similar fragments. Writings under study are then represented by computing the probability of occurrence of the codebook patterns, and this probability distribution is used to characterize each writer. Two writings are compared by computing the distance between their respective probability distributions. Evaluations were carried out on the standard ICFHR dataset of 206 writers using the beginning and ending codebooks separately. The ending codebook achieved the highest identification rate, 98.23%, the best result so far on the ICFHR dataset.
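The occurrence-probability representation and the distance comparison can be sketched as follows. The pattern ids are hypothetical (fragments already mapped to codebook entries), and the chi-square distance is an illustrative choice, since the abstract does not name the distance measure used.

```python
from collections import Counter

def codebook_histogram(fragment_labels, codebook_size):
    """Represent a writing sample by the occurrence probability of each
    codebook pattern (fragments are assumed already mapped to pattern ids)."""
    counts = Counter(fragment_labels)
    total = len(fragment_labels)
    return [counts[i] / total for i in range(codebook_size)]

def chi2_distance(p, q, eps=1e-12):
    """Chi-square distance between two occurrence histograms; a smaller
    distance suggests the same writer."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))

# Hypothetical pattern ids from three samples (codebook of 4 patterns);
# the first two samples imitate the same writer's habits:
writer1_a = [0, 0, 1, 2, 0, 1, 3, 0]
writer1_b = [0, 1, 0, 2, 0, 1, 1, 3]
writer2   = [3, 3, 2, 3, 1, 3, 2, 3]

h1a, h1b, h2 = (codebook_histogram(s, 4) for s in (writer1_a, writer1_b, writer2))
```

With this representation, identification reduces to a nearest-neighbor search over the enrolled writers' distributions.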

Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments

Procedia PDF Downloads 512
644 The Control of Wall Thickness Tolerance during Pipe Purchase Stage Based on Reliability Approach

Authors: Weichao Yu, Kai Wen, Weihe Huang, Yang Yang, Jing Gong

Abstract:

Metal-loss corrosion is a major threat to the safety and integrity of gas pipelines, as it may result in burst failures whose consequences can include enormous economic losses as well as personnel casualties. Therefore, it is important to ensure corroding pipeline integrity and efficiency, considering the value of the wall thickness, which plays an important role in the failure probability of a corroding pipeline. In practice, the wall thickness is controlled during the pipe purchase stage. For example, the API SPEC 5L standard regulates the allowable tolerance of the wall thickness from the specified value during pipe purchase. The allowable wall thickness tolerance determines the wall thickness distribution characteristics, such as the mean value, standard deviation, and distribution type. Taking the uncertainties of the input variables in the burst limit-state function into account, a reliability approach, rather than a deterministic approach, is used to evaluate the failure probability. Moreover, the cost of pipe purchase is influenced by the allowable wall thickness tolerance: stricter control of the wall thickness usually corresponds to a higher pipe purchase cost. Therefore, changing the wall thickness tolerance varies both the probability of a burst failure and the cost of the pipe. This paper describes an approach to optimizing the wall thickness tolerance considering both the safety and economy of corroding pipelines. The corrosion burst limit-state function in Annex O of CSA Z662-07 is employed to evaluate the failure probability using the Monte Carlo simulation technique. By changing the allowable wall thickness tolerance, the parameters of the wall thickness distribution in the limit-state function are changed, and the reliability approach yields the corresponding variations in the burst failure probability. On the other hand, changing the wall thickness tolerance leads to a change in pipe purchase cost. Using the variation of failure probability and pipe cost caused by changing the wall thickness tolerance specification, the optimal allowable tolerance can be obtained and used to define pipe purchase specifications.
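The Monte Carlo evaluation can be sketched as below. A simplified Barlow-type burst expression stands in for the CSA Z662 Annex O limit state, and all distribution parameters (diameter, strength, pressure, defect-depth statistics) are illustrative, not from the paper.

```python
import random

random.seed(42)

def failure_probability(wt_mean, wt_sd, n=200_000):
    """Monte Carlo estimate of burst probability for a corroded pipe.
    A Barlow-type burst pressure reduced by the defect depth ratio is a
    simplified stand-in for the Annex O limit state; all parameters below
    are illustrative."""
    D = 0.610        # outside diameter, m
    sigma_u = 550e6  # tensile strength, Pa (X70-class, assumed)
    p_op = 8.0e6     # operating pressure, Pa
    failures = 0
    for _ in range(n):
        t = random.gauss(wt_mean, wt_sd)      # sampled wall thickness, m
        d = random.gauss(0.25, 0.05) * t      # defect depth as fraction of t
        p_burst = (2 * t * sigma_u / D) * (1 - max(d, 0.0) / t)
        if p_burst < p_op:                    # limit state violated
            failures += 1
    return failures / n

# Tighter purchase tolerance -> smaller thickness scatter -> lower Pf:
pf_loose = failure_probability(wt_mean=7.1e-3, wt_sd=0.5e-3)
pf_tight = failure_probability(wt_mean=7.1e-3, wt_sd=0.2e-3)
```

Sweeping the tolerance (here, the standard deviation) and attaching a purchase-cost model to each point is what yields the safety-versus-cost trade-off the paper optimizes.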

Keywords: allowable tolerance, corroding pipeline segment, operation cost, production cost, reliability approach

Procedia PDF Downloads 396
643 Sleep Health Management in Residential Aged Care Facilities

Authors: Elissar Mansour, Emily Chen, Tracee Fernandez, Mariam Basheti, Christopher Gordon, Bandana Saini

Abstract:

Sleep is an essential process for the maintenance of several neurobiological functions such as memory consolidation, mood, and metabolic processes. Sleep patterns are known to vary with age and are affected by multiple factors. While non-pharmacological strategies are generally considered first-line, sedatives are excessively used in the older population. This study aimed to explore the management of sleep in residential aged care facilities (RACFs) by nursing professionals and to identify the key factors that impact the provision of optimal sleep health care. An inductive thematic qualitative research method was employed to analyse the data collected from semi-structured interviews with registered nurses working in RACFs. Seventeen interviews were conducted, and the data yielded three themes: 1) the nurses’ observations and knowledge of sleep health, 2) the strategies employed in RACFs for the management of sleep disturbances, and 3) the organizational barriers to evidence-based sleep health management. Nurse participants reported the use of both non-pharmacological and pharmacological interventions. Sedatives were commonly prescribed due to their fast action and accessibility, despite guidelines reserving them for later stages of management. Although benzodiazepines are known for their many side effects, such as drowsiness and oversedation, temazepam was the most commonly administered drug. Sleep in RACFs was affected by several factors such as aging and comorbidities (e.g., dementia, pain, anxiety). However, there were also many modifiable factors that negatively impacted sleep management in RACFs, including staffing ratios, nursing duties, medication side effects, and a lack of training and involvement of allied health professionals. This study highlighted the importance of involving a multidisciplinary team and the urgent need to develop guidelines and training programs for healthcare professionals to improve sleep health management in RACFs.

Keywords: registered nurses, residential aged care facilities, sedative use, sleep

Procedia PDF Downloads 106
642 Effect of Light Spectra, Light Intensity, and HRT on the Co-Production of Phycoerythrin and Exopolysaccharides from Porphyridium Marinum

Authors: Rosaria Tizzani, Tomas Morosinotto, Fabrizio Bezzo, Eleonora Sforza

Abstract:

Red microalga Porphyridium marinum CCAP 13807/10 has the potential to produce a broad range of commercially valuable chemicals, such as phycoerythrin (PE) and sulphated exopolysaccharides (EPS). Multiple abiotic factors influence the growth of Porphyridium sp., e.g., the wavelength of the light source and different cultivation strategies (one or two steps; batch, semi-continuous, and continuous regimes). The microalga of interest is cultivated in a two-step system. First, the culture grows photoautotrophically in a controlled bioreactor with pH-dependent CO2 injection, temperature monitoring, and remote control of light intensity and LED wavelength, in semi-continuous mode. In the second step, the harvested biomass is subjected to mixotrophic conditions to enhance further growth. Preliminary tests have been performed to define the suitable media, salinity, pH, and organic carbon substrate to obtain the highest biomass productivity. Dynamic light and operational conditions (e.g., HRT) are evaluated to achieve high biomass production, high PE accumulation in the biomass, and high EPS release into the medium. Porphyridium marinum is able to chromatically adapt its photosynthetic apparatus to efficiently exploit the full spectral composition of light. The effects of specific narrow LED wavelengths (white W, red R, green G, blue B) and of LED combinations (WR, WB, WG, BR, BG, RG) are examined to understand the phenomenon of chromatic adaptation under photoautotrophic conditions. The effects of light intensity, residence time, and light quality are investigated to define optimal operational strategies for full-scale commercial applications. Production of biomass, phycobiliproteins, PE, EPS, EPS sulfate content, EPS composition, chlorophyll-a, and pigment content are monitored to determine the effect of LED wavelength on the cultivation of Porphyridium marinum, in order to optimize the production of these multiple, highly valuable bioproducts of commercial interest.

Keywords: red microalgae, LED, exopolysaccharide, phycoerythrin

Procedia PDF Downloads 108
641 Introducing, Testing, and Evaluating a Unified JavaScript Framework for Professional Online Studies

Authors: Caspar Goeke, Holger Finger, Dorena Diekamp, Peter König

Abstract:

Online-based research has recently gained increasing attention from various fields of research in the cognitive sciences. Technological advances in the form of online crowdsourcing (Amazon Mechanical Turk), open data repositories (Open Science Framework), and online analysis (IPython notebook) offer rich possibilities to improve, validate, and speed up research. However, until today there has been no cross-platform integration of these subsystems. Furthermore, implementation of online studies still suffers from implementation complexity (server infrastructure, database programming, security considerations, etc.). Here we propose and test a new JavaScript framework that enables researchers to conduct any kind of behavioral research in the browser without the need to program a single line of code. In particular, our framework offers the possibility to manipulate and combine the experimental stimuli via a graphical editor, directly in the browser. Moreover, we included an action-event system that can be used to handle user interactions, interactively change stimulus properties, or store participants’ responses. Besides traditional recordings such as reaction time and mouse and keyboard presses, the tool offers webcam-based eye- and face-tracking. On top of these features, our framework also takes care of participant recruitment, via crowdsourcing platforms such as Amazon Mechanical Turk. Furthermore, built-in Google Translate functionality ensures automatic text translation of the experimental content. Thereby, thousands of participants from different cultures and nationalities can be recruited literally within hours. Finally, the recorded data can be visualized and cleaned online and then exported into the desired formats (csv, xls, sav, mat) for statistical analysis. Alternatively, the data can be analyzed online within our framework using the integrated IPython notebook. The framework was designed such that studies can be exchanged between researchers. This supports not only the idea of open data repositories but also makes it possible to share and reuse experimental designs and analyses, improving the validity of the paradigms. In particular, sharing and integrating experimental designs and analyses will lead to increased consistency of experimental paradigms. To demonstrate the functionality of the framework, we present the results of a pilot study in the field of spatial navigation that was conducted using it. Specifically, we recruited over 2000 subjects with various cultural backgrounds and analyzed performance differences as a function of culture, gender, and age. Overall, our results demonstrate a strong influence of cultural factors on spatial cognition. Such an influence has not been reported before and would not have been possible to show without the massive amount of data collected via our framework. These findings shed new light on cultural differences in spatial navigation. As a consequence, we conclude that our new framework offers a wide range of advantages for online research and constitutes a methodological innovation by which new insights can be revealed on the basis of massive data collection.

Keywords: cultural differences, crowdsourcing, JavaScript framework, methodological innovation, online data collection, online study, spatial cognition

Procedia PDF Downloads 257
640 Applying Semi-Automatic Digital Aerial Survey Technology and Canopy Characters Classification for Surface Vegetation Interpretation of Archaeological Sites

Authors: Yung-Chung Chuang

Abstract:

The cultural layers of archaeological sites are mainly affected by surface land use, land cover, and the root systems of surface vegetation. For this reason, continuous monitoring of land use and land cover change is important for the protection and management of archaeological sites. However, in actual operation, on-site investigation and orthogonal photograph interpretation require a lot of time and manpower, so a good alternative for surface vegetation survey in an automated or semi-automated manner is necessary. In this study, we applied semi-automatic digital aerial survey technology and canopy character classification with very high-resolution aerial photographs for surface vegetation interpretation of archaeological sites. The main idea is that different landscape or forest types can easily be distinguished by canopy characters (e.g., specific texture distributions, shadow effects, and gap characters) extracted by semi-automatic image classification. A novel methodology to classify the shape of canopy characters using landscape indices and multivariate statistics was also proposed. Non-hierarchical cluster analysis was used to assess the optimal number of canopy character clusters, and canonical discriminant analysis was used to generate the discriminant functions for canopy character classification (seven categories). People can therefore easily predict the forest type and vegetation land cover from the corresponding canopy character category. The results showed that the semi-automatic classification could effectively extract the canopy characters of forest and vegetation land cover. As for forest type and vegetation type prediction, the average prediction accuracy reached 80.3% to 91.7% with different sizes of test frame. This demonstrates that the technology is useful for archaeological site surveys and can improve classification efficiency and the data update rate.
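The non-hierarchical cluster analysis of canopy characters can be illustrated with a plain k-means pass over 2-D stand-in features; the metric names and all values below are invented for the example.

```python
import math

def kmeans(points, centers, iters=50):
    """Plain k-means, standing in for the non-hierarchical cluster analysis
    used to group canopy characters into a chosen number of clusters."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # assign each point to its nearest center
            j = min(range(len(centers)), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        # move each center to the mean of its cluster (keep it if empty)
        centers = [tuple(sum(d) / len(cl) for d in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

# Invented 2-D stand-ins for canopy metrics (gap fraction, shadow strength):
points = [(0.10, 0.80), (0.15, 0.75), (0.12, 0.82),   # closed-canopy stands
          (0.70, 0.20), (0.72, 0.25), (0.68, 0.18)]   # open-canopy stands
centers, clusters = kmeans(points, centers=[points[0], points[3]])
```

Running this for several values of k and comparing within-cluster variance is one common way to pick the number of clusters, matching the study's step of assessing the optimal number of canopy character categories before the discriminant analysis.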

Keywords: digital aerial survey, canopy characters classification, archaeological sites, multivariate statistics

Procedia PDF Downloads 142
639 A Study of the Effects of Temperatures and Optimum pH on the Specific Methane Production of Perennial Ryegrass during Anaerobic Digestion Process under a Discontinuous Daily Feeding Condition

Authors: Uchenna Egwu, Paul Jonathan Sallis

Abstract:

Perennial ryegrass is an abundant renewable lignocellulosic biofuel feedstock for biomethane production through anaerobic digestion (AD). In this study, six anaerobic continuously stirred tank reactors (CSTRs) were set up in three pairs. Each pair of CSTRs was used to study the effects of operating temperature (psychrophilic, mesophilic, and thermophilic) and optimum pH on the specific methane production (SMP) of the ryegrass during AD under discontinuous daily feeding conditions. The reactors were fed at an organic loading rate (OLR) of 1-1.5 kgVS.L⁻¹d⁻¹ and a hydraulic residence time (HRT) of 20 days for 140 days. The pH of the digesters was maintained in the range of 6.8-7.2 using 1 M NH₄HCO₃ solution, but this was replaced with biomass ash extracts from day 105 to 140. The results showed that the mean SMP of ryegrass measured between HRT 3 and 4 was 318.4, 425.4, and 335 N L CH₄ kg⁻¹VS.d⁻¹ for the psychrophilic (25 ± 2°C), mesophilic (40 ± 1°C), and thermophilic (60 ± 1°C) temperatures, respectively. It was also observed that the buffering capacity of the reactors increased with operating temperature, probably due to the increase in the solubility of ammonium bicarbonate (NH₄HCO₃) with temperature. The reactors achieved a mean VS destruction of 61.9, 68.5, and 63.5%, respectively; the mesophilic reactors thus achieved the highest SMP and VS destruction, while the psychrophilic reactors achieved the lowest. None of the reactors attained steady-state conditions due to the discontinuous daily feeding, and therefore such a feeding practice may not be the most effective for maximum biogas production over long periods of time. The addition of NH₄HCO₃ as a supplement provided good buffering in these AD digesters, but the digesters failed in the long run due to inhibition from the accumulation of free ammonia, which later led to a decrease in pH, acidification, and souring of the digesters. However, the addition of biomass ash extracts was shown to potentially revive failed AD reactors by providing adequate buffering and the essential trace nutrients necessary for optimal bacterial growth.

Keywords: anaerobic digestion, discontinuous feeding, perennial ryegrass, specific methane production, supplements, temperature

Procedia PDF Downloads 127
638 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology

Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey

Abstract:

In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This calls for software capable of quickly processing and reliably visualizing diffusion data and equipped with tools for analyzing it for different tasks. We are developing the «MRDiffusionImaging» software in standard C++. The subject-domain part has been moved to separate class libraries and can be used on various platforms. The user interface is built with Windows WPF (Windows Presentation Foundation), a technology for managing Windows applications with access to all components of the .NET 5 or .NET Framework platform ecosystem. One of its important features is the declarative markup language XAML (eXtensible Application Markup Language), with which one can conveniently create, initialize, and set the properties of objects with hierarchical relationships. Graphics are generated using the DirectX environment. The MRDiffusionImaging software package processes diffusion magnetic resonance imaging (dMRI) data and allows images to be loaded and viewed sorted by series. An algorithm for "masking" dMRI series based on T2-weighted images was developed using a deformable surface model to exclude tissues unrelated to the area of interest from the analysis. A distortion-correction algorithm using deformable image registration based on autocorrelation of local structure was also developed. The maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary volume of the brain, the diffusion tensor is interpreted geometrically as an ellipsoid, an isosurface of the probability density of a molecule's diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in a single algorithm for segmenting white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF).
A tool for calculating the average diffusion coefficient (mean diffusivity) and fractional anisotropy has been created, from which quantitative maps can be built for solving various clinical problems. Functionality has also been created for clustering and segmenting images to individualize the clinical target volume for radiation treatment and to assess the response (median Dice score = 0.963 ± 0.137). White matter tracts of the brain were visualized using two algorithms: a deterministic one (fiber assignment by continuous tracking) and a probabilistic one based on the Hough transform. The latter tests candidate curves in each voxel, assigning each a score computed from the diffusion data, and then selects the curves with the highest scores as potential anatomical connections. In the context of functional radiosurgery, the proposed tools make it possible to reduce the volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. «MRDiffusionImaging» will improve the efficiency and accuracy of diagnostics and stereotactic radiotherapy of intracranial pathology. We are developing software with integrated, intuitive support for processing and analysis, and for inclusion in radiotherapy planning and the evaluation of its results.
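The mean diffusivity and fractional anisotropy maps mentioned above follow directly from the eigenvalues of each voxel's diffusion tensor. A minimal sketch of the standard formulas (the eigenvalues below are illustrative, not values produced by the software):

```python
import math

def diffusion_scalars(eigvals):
    """Mean diffusivity (MD) and fractional anisotropy (FA) from the
    three eigenvalues of a voxel's diffusion tensor."""
    l1, l2, l3 = eigvals
    md = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    fa = math.sqrt(1.5) * num / den if den > 0 else 0.0
    return md, fa

# Illustrative eigenvalues in mm^2/s:
md_iso, fa_iso = diffusion_scalars((3.0e-3, 3.0e-3, 3.0e-3))  # CSF-like, isotropic
md_wm, fa_wm = diffusion_scalars((1.7e-3, 0.3e-3, 0.3e-3))    # fiber-like, anisotropic
```

FA is near 0 for isotropic, CSF-like diffusion and approaches 1 along coherent white matter fibers, which is what makes it useful for the tractography described above.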

Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography

Procedia PDF Downloads 85
637 Effect of Geometric Imperfections on the Vibration Response of Hexagonal Lattices

Authors: P. Caimmi, E. Bele, A. Abolfathi

Abstract:

Lattice materials are cellular structures composed of a periodic network of beams. They offer high weight-specific mechanical properties and lend themselves to numerous weight-sensitive applications. The periodic internal structure responds to external vibrations through characteristic frequency bandgaps, making these materials suitable for the reduction of noise and vibration. However, deviation from architectural homogeneity, due to, e.g., manufacturing imperfections, has a strong influence on the mechanical properties and vibration response of these materials. In this work, we present results on the influence of geometric imperfections on the vibration response of hexagonal lattices. Three classes of geometrical variables are used: architecture characteristics (relative density, ligament length/cell size ratio), imperfection type (degree of non-periodicity, cracks, hard inclusions), and defect morphology (size, distribution). Test specimens with controlled size and distribution of imperfections are manufactured through selective laser sintering. The Frequency Response Functions (FRFs) in the form of accelerance are measured, and the modal shapes are captured with a high-speed camera. The finite element method is used to provide insight into the extension of these results to semi-infinite lattices. A model-updating procedure is conducted to increase the reliability of the numerical simulation results compared to experimental measurements; this is achieved by updating the boundary conditions and material stiffness. Variations in the FRFs of periodic structures due to changes in the relative density of the constituent unit cell are analysed, and the effects of geometric imperfections on the dynamic response of periodic structures are investigated. The findings open up the opportunity to tailor these lattice materials for optimal amplitude attenuation in specific frequency ranges.

Keywords: lattice architectures, geometric imperfections, vibration attenuation, experimental modal analysis

Procedia PDF Downloads 122
636 Biosurfactants Production by Bacillus Strain from an Environmental Sample in Egypt

Authors: Mervat Kassem, Nourhan Fanaki, F. Dabbous, Hamida Abou-Shleib, Y. R. Abdel-Fattah

Abstract:

With increasing environmental awareness and emphasis on a sustainable society in harmony with the global environment, biosurfactants are gaining prominence and have already taken over a number of important industrial uses. They are produced by living organisms: for example, Pseudomonas aeruginosa produces rhamnolipids; Candida (formerly Torulopsis) bombicola produces high yields of sophorolipids from vegetable oils and sugars; and Bacillus subtilis produces a lipopeptide called surfactin. The main goal of this work was to optimize biosurfactant production by an environmental Gram-positive isolate for large-scale production with maximum yield and low cost. After molecular characterization, a phylogenetic tree was constructed, in which the isolate was found to be B. subtilis, closely matching B. subtilis subsp. subtilis strain CICC 10260. To optimize its biosurfactant production, a sequential statistical design using Plackett-Burman and response surface methodology was applied, in which 11 variables were screened. When analyzing the regression coefficients for the 11 variables, pH, glucose, glycerol, yeast extract, ammonium chloride, and ammonium nitrate were found to have a positive effect on biosurfactant production. Ammonium nitrate, pH, and glucose were further studied as significant independent variables in a Box-Behnken design, and their optimal levels were estimated as a pH value of 7.328, 3 g% glucose, and 0.21 g% ammonium nitrate, yielding a biosurfactant concentration that reduced the surface tension of the culture medium from 72 to 18.16 mN/m. Next, the kinetics of cell growth and biosurfactant production by the tested B. subtilis isolate in a bioreactor were compared with those in shake flasks: the maximum growth and specific growth rate (µ) in the bioreactor were higher by about 25 and 53%, respectively, than in the shake-flask experiment, while the biosurfactant production kinetics were almost the same in both.
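The 11-variable screening step described above is conventionally run as a 12-run Plackett-Burman design. A sketch of how such a design matrix is built from the standard cyclic generator (illustrative only; the authors' actual design software is not specified):

```python
def plackett_burman_12():
    """Construct the classical 12-run Plackett-Burman screening design for
    up to 11 two-level factors: cyclic shifts of a standard generator row,
    plus a final run with every factor at its low (-1) level."""
    generator = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]
    rows = [generator[-i:] + generator[:-i] if i else generator[:]
            for i in range(11)]  # 11 cyclic right-shifts of the generator
    rows.append([-1] * 11)       # final all-low run
    return rows

design = plackett_burman_12()  # 12 runs x 11 factors
```

Each factor column is balanced (six high, six low levels) and any two columns are orthogonal, which is what lets the main effects of all 11 variables be estimated from only 12 runs.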

Keywords: biosurfactants, B. subtilis, molecular identification, phylogenetic trees, Plackett-Burman design, Box-Behnken design, 16S rRNA

Procedia PDF Downloads 410
635 Optimization of SOL-Gel Copper Oxide Layers for Field-Effect Transistors

Authors: Tomas Vincze, Michal Micjan, Milan Pavuk, Martin Weis

Abstract:

In recent years, alternative materials have been gaining attention as replacements for polycrystalline and amorphous silicon, the standard for low-requirement devices, where silicon is unnecessary and costly. For that reason, metal oxides are envisioned as the new materials for low-requirement applications such as sensors, solar cells, energy storage devices, and field-effect transistors. The most common way of growing such layers is sputtering; however, this is a high-cost fabrication method, and a more industry-suitable alternative is the sol-gel method. In this group of materials, many oxides exhibit semiconductor-like behavior with sufficiently high mobility to be applied in transistors. The sol-gel method is a cost-effective deposition technique for semiconductor-based devices. Copper oxides, as p-type semiconductors with free-charge mobility up to 1 cm²/V·s, are suitable replacements for poly-Si or a-Si:H devices. However, to reach the potential of silicon devices, fine-tuning of the material properties is needed. Here we focus on optimizing the electrical parameters of copper oxide-based field-effect transistors by modifying the precursor solvent (usually 2-methoxyethanol). To achieve solubility and high-quality films, a better solvent is required. Since almost no solvent has both a high dielectric constant and a high boiling point, an alternative approach using blended solvents was proposed. By mixing isopropyl alcohol (IPA) and 2-methoxyethanol (2ME), the precursor reached better solubility. The quality of the layers fabricated using the mixed solutions was evaluated in terms of surface morphology and electrical properties. The IPA:2ME solvent mixture reached optimum results at a weight ratio of 1:3, for which the cupric oxide layers had the highest crystallinity and highest effective charge mobility.

Keywords: copper oxide, field-effect transistor, semiconductor, sol-gel method

Procedia PDF Downloads 135
634 Scheduling Building Projects: The Chronographical Modeling Concept

Authors: Adel Francis

Abstract:

Most scheduling methods and software apply the critical path logic. This logic schedules activities, applies constraints between them, and tries to optimize and level the allocated resources. The extensive use of this logic produces a complex and error-prone network that is hard to present, follow, and update. Planning and managing building projects should tackle the coordination of works and the management of limited spaces, traffic, and supplies. Activities cannot be performed without the required resources, and resources cannot be used beyond the capacity of workplaces; otherwise, workspace congestion will negatively affect the flow of works. The objective of space planning is to link the spatial and temporal aspects, promote efficient use of the site, define optimal site occupancy rates, and ensure suitable rotation of the workforce among the different spaces. Chronographic scheduling modelling belongs to this category and models construction operations as well as their processes, logical constraints, association and organizational models, which help to better illustrate schedule information using multiple flexible approaches. The model defines three categories of areas (punctual, surface, and linear) and several layers (space creation, systems, closing off space, finishing, and reduction of space). Chronographical modelling is a more complete communication method, able to alternate from one visual approach to another by manipulating graphics via a set of parameters and their associated values. Each individual approach can help to schedule a certain project type or specialty. Visual communication can also be improved through layering, sheeting, juxtaposition, alterations, and permutations, allowing for groupings, hierarchies, and classification of project information.
In this way, the graphic representation becomes a living, transformable image, showing valuable information clearly and comprehensibly, simplifying site management while using the visual space as efficiently as possible.
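For contrast with the chronographical approach, the critical path logic the abstract refers to can be sketched in a few lines: a forward pass computes earliest start/finish times, a backward pass computes latest times, and activities with zero total float form the critical path. The activity network below is hypothetical:

```python
def critical_path(durations, predecessors):
    """Forward/backward pass of the Critical Path Method (CPM).
    Returns the total project duration and the set of critical
    activities (those with zero total float). Assumes the dict keys
    are listed in a valid topological order."""
    order = list(durations)
    es, ef = {}, {}
    for a in order:                       # forward pass: earliest times
        es[a] = max((ef[p] for p in predecessors[a]), default=0)
        ef[a] = es[a] + durations[a]
    project = max(ef.values())
    successors = {a: [b for b in order if a in predecessors[b]] for a in order}
    lf, ls = {}, {}
    for a in reversed(order):             # backward pass: latest times
        lf[a] = min((ls[s] for s in successors[a]), default=project)
        ls[a] = lf[a] - durations[a]
    critical = {a for a in order if ls[a] == es[a]}  # zero total float
    return project, critical

# Hypothetical mini-project: excavation -> foundations -> {frame, utilities} -> finishes
durs = {"exc": 3, "found": 5, "frame": 8, "util": 4, "finish": 6}
preds = {"exc": [], "found": ["exc"], "frame": ["found"],
         "util": ["found"], "finish": ["frame", "util"]}
duration, critical = critical_path(durs, preds)
```

Note what this logic does not capture, which is exactly the abstract's point: nothing in the network above knows about workspace capacity, congestion, or crew rotation among site areas.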

Keywords: building projects, chronographic modelling, CPM, critical path, precedence diagram, scheduling

Procedia PDF Downloads 155
633 Thermal Regulation of Channel Flows Using Phase Change Material

Authors: Kira Toxopeus, Kamran Siddiqui

Abstract:

Channel flows are common in a wide range of engineering applications. In some types of channel flows, particularly those involving chemical or biological processes, control of the flow temperature is crucial to maintain optimal conditions for a chemical reaction or to control the growth of biological species. This often becomes an issue when the flow experiences temperature fluctuations due to external conditions. While active heating and cooling could regulate the channel temperature, it may not be feasible logistically or economically and is also regarded as a non-sustainable option. Thermal energy storage utilizing a phase change material (PCM) could provide the required thermal regulation sustainably by storing excess heat from the channel and releasing it back as required, thus regulating the channel temperature within a range near the PCM melting temperature. However, in designing such systems, the configuration of the PCM storage within the channel is critical, as it could influence the channel flow dynamics, which would in turn affect the heat exchange between the channel fluid and the PCM. The present research investigates the flow dynamics in the channel during heat transfer from the channel flow to the PCM thermal energy storage. Offset vertical columns containing the PCM were placed in a narrow channel. Two column shapes, square and circular, were considered. Water was used as the channel fluid and entered the channel at a temperature higher than the PCM melting temperature. Hence, as the water passed through the channel, heat was transferred from the water to the PCM, causing the PCM to store the heat through a phase transition from solid to liquid. Particle image velocimetry (PIV) was used to measure the two-dimensional velocity field of the channel flow as it passed between the PCM columns.
Thermocouples were also attached to the PCM columns to measure the PCM temperature at three different heights. Three different water flow rates (0.5, 0.75 and 1.2 liters/min) were considered. At each flow rate, experiments were conducted at three different inlet water temperatures (28°C, 33°C and 38°C). The results show that the flow rate and the inlet temperature influenced the flow behavior inside the channel.
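The heat-storage mechanism described above (sensible heating of the PCM up to its melting point, then latent storage during the solid-liquid transition) can be sketched with a lumped energy balance. All parameter values below are illustrative, not measurements from this experiment:

```python
def pcm_charging(t_water, t_pcm0, mass, cp, latent, t_melt, UA, dt, steps):
    """Lumped-capacitance sketch of a PCM column absorbing heat from warmer
    channel water: sensible heating up to the melting point, latent storage
    at a (nearly) constant temperature, then liquid sensible heating."""
    t, melted = t_pcm0, 0.0  # PCM temperature [degC], melt fraction 0..1
    for _ in range(steps):
        q = UA * (t_water - t) * dt  # heat absorbed this time step [J]
        if t < t_melt:
            t = min(t_melt, t + q / (mass * cp))            # solid sensible heating
        elif melted < 1.0:
            melted = min(1.0, melted + q / (mass * latent))  # melting at t_melt
        else:
            t += q / (mass * cp)                             # liquid sensible heating
    return t, melted

# Hypothetical paraffin-like PCM in 33 degC water (one of the inlet temperatures above)
t_end, melt_end = pcm_charging(t_water=33.0, t_pcm0=20.0, mass=0.2, cp=2000.0,
                               latent=2.0e5, t_melt=25.0, UA=5.0, dt=1.0, steps=3600)
```

The UA term is where the flow dynamics measured by PIV enter: the convective exchange between water and column depends on the local velocity field around the square or circular columns.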

Keywords: channel flow, phase change material, thermal energy storage, thermal regulation

Procedia PDF Downloads 140
632 High Level Expression of Fluorinase in Escherichia Coli and Pichia Pastoris

Authors: Lee A. Browne, K. Rumbold

Abstract:

The first fluorinating enzyme, 5'-fluoro-5'-deoxyadenosine synthase (fluorinase), was isolated from the soil bacterium Streptomyces cattleya. Such an enzyme, with the ability to catalyze C-F bond formation, presents great potential as a biocatalyst. Naturally fluorinated compounds are extremely rare in nature, and as a result, the number of fluorinases identified remains relatively small; fluorination chemistry is almost entirely synthetic. However, with the increasing demand for fluorinated organic compounds of commercial value in the agrochemical, pharmaceutical, and materials industries, it has become necessary to utilize biologically based methods such as biocatalysts. A key step is the large-scale production of the fluorinase enzyme in quantities sufficient for industrial applications. Thus, this study aimed to optimize expression of the fluorinase enzyme in both prokaryotic and eukaryotic expression systems in order to obtain high protein yields. The fluorinase gene was cloned into the pET 41b(+) and pPinkα-HC vectors and used to transform the expression hosts E. coli BL21(DE3) and Pichia pastoris (PichiaPink™ strains), respectively. Expression trials were conducted to select optimal conditions in both expression systems. Fluorinase catalyses a reaction between S-adenosyl-L-methionine (SAM) and fluoride ion to produce 5'-fluorodeoxyadenosine (5'-FDA) and L-methionine. The activity of the enzyme was determined using HPLC by measuring the reaction product 5'-FDA. A gradient mobile phase from 95:5 (v/v) 50 mM potassium phosphate buffer and acetonitrile to a final composition of 80:20 (v/v) was used. This resulted in the complete separation of SAM and 5'-FDA, which eluted at 1.3 minutes and 3.4 minutes, respectively, confirming that the fluorinase enzyme was active.
Optimising expression of the fluorinase enzyme was successful in both E. coli and PichiaPink™, with high expression levels achieved in both systems. Protein production will be scaled up in PichiaPink™ using fermentation to achieve large-scale protein production. High-level expression of the protein is essential for making the enzyme available for industrial biocatalysis.

Keywords: biocatalyst, expression, fluorinase, PichiaPink™

Procedia PDF Downloads 552
631 The Contribution of Experience Scapes to Building Resilience in Communities: A Comparative Case Study Approach in Germany and the Netherlands

Authors: Jorn Fricke, Frans Melissen

Abstract:

Citizens in urban areas are prone to increased levels of stress due to urbanization, inadequate and overburdened infrastructure and services, and environmental degradation. Moreover, communities are fragile and subject to shocks and stresses through various social and political processes. A loss of (a sense of) community is often seen as related to increasing political and civic disintegration. Feelings of community can manifest themselves in various ways but underlying all these manifestations is the need for trust between people. One of the main drivers of trust between individuals is (shared) experiences. It is these shared experiences that may play an important role in building resilience, i.e., the ability of a community and its members to adapt to and deal with stresses, as well as ensure the ongoing development of a community. So far, experience design, as a discipline and academic field, has mainly focused on designing products or services. However, people-to-people experiences are the ones that play a pivotal role in building inclusiveness, safety, and resilience in communities. These experiences represent challenging objects of design as they develop in an interactive space of spontaneity, serendipity, and uniqueness that is based on intuition, freedom of expression, and interaction. Therefore, there is a need for research to identify which elements are required in designing the social and physical environment (or ‘experience scape’) to increase the chance for people-to-people experiences to be successful and what elements are required for these experiences to help in building resilience in urban communities that can resist shocks and stresses. 
By means of a comparative case study approach in urban areas in Germany and the Netherlands, using a range of qualitative research methods such as in-depth interviews, focus groups, participant observation, storytelling techniques, and life stories, this research identifies relevant actors and their roles in creating the building blocks of optimal experience scapes for building resilience in communities.

Keywords: community development, experiences, experience scapes, resilience

Procedia PDF Downloads 182
630 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam

Authors: Sahand Golmohammadi, Sana Hosseini Shirazi

Abstract:

Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, rock engineering classification methods are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and determination of support systems for underground structures in rock, including tunnels. This method requires six main parameters of the rock mass, namely the rock quality designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water reduction factor (Jw), and stress reduction factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the parameters that govern the stability of such structures is one of the most important goals and necessary actions in rock engineering. It is therefore necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, with the whole system. This research attempts to determine the most effective (key) parameters among the six rock mass parameters of the Q-system using the rock engineering system (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES is a method by which the degree of cause and effect of a system's parameters can be determined by constructing an interaction matrix. In this research, geomechanical data collected from the water conveyor tunnel of Azad Dam were used to build the interaction matrix of the Q-system. For this purpose, instead of using conventional coding methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix was coded using a technique based on statistical analysis of the data and determination of the correlation coefficients between them.
In this way, the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the resulting interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters, SRF and Jr exert the maximum and minimum influence on the system (cause), respectively, while RQD and Jw are the most and least influenced by the system (effect), respectively. Therefore, by developing this method, a more accurate rock mass classification relation can be obtained by weighting the required parameters in the Q-system.
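For reference, the Q value combines the six parameters above as a product of three quotients representing relative block size, inter-block shear strength, and active stress. A sketch with hypothetical ratings (not data from the Azad Dam tunnel):

```python
def q_value(rqd, jn, jr, ja, jw, srf):
    """Barton's Q-system rock mass quality:
    Q = (RQD/Jn) * (Jr/Ja) * (Jw/SRF),
    i.e. relative block size x inter-block shear strength x active stress."""
    return (rqd / jn) * (jr / ja) * (jw / srf)

# Hypothetical tunnel-face ratings: RQD = 75%, three joint sets (Jn = 9),
# rough planar joints (Jr = 1.5), slightly altered walls (Ja = 2.0),
# dry excavation (Jw = 1.0), medium stress (SRF = 2.5)
q = q_value(rqd=75.0, jn=9.0, jr=1.5, ja=2.0, jw=1.0, srf=2.5)
```

Weighting these six inputs by their RES-derived interaction intensities, as the abstract proposes, would modify this unweighted product.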

Keywords: Q-system, rock engineering system, statistical analysis, rock mass, tunnel

Procedia PDF Downloads 73
629 Sliding Mode Power System Stabilizer for Synchronous Generator Stability Improvement

Authors: J. Ritonja, R. Brezovnik, M. Petrun, B. Polajžer

Abstract:

Many modern synchronous generators in power systems are extremely weakly damped. The reasons are cost optimization in machine building and the introduction of additional control equipment into power systems. Oscillations of synchronous generators and the related stability problems of power systems are harmful and can lead to operational failures and damage. The only practical way to increase the damping of these unwanted oscillations is the implementation of power system stabilizers, which generate an additional control signal that changes the synchronous generator field excitation voltage. Modern power system stabilizers are integrated into the static excitation systems of synchronous generators. Commercially available power system stabilizers are based on linear control theory. Due to the nonlinear dynamics of the synchronous generator, such stabilizers do not assure optimal damping of the generator's oscillations over the entire operating range. For that reason, the use of robust power system stabilizers that remain effective over the entire operating range is reasonable. Numerous robust techniques are applicable to power system stabilizers. In this paper, the use of sliding mode control for synchronous generator stability improvement is studied. On the basis of sliding mode theory, a robust power system stabilizer was developed. The main advantages of the sliding mode controller are the simple realization of the control algorithm, robustness to parameter variations, and elimination of disturbances. The advantage of the proposed sliding mode controller over a conventional linear controller was tested for damping of the synchronous generator oscillations over the entire operating range. The obtained results show improved damping over the entire operating range of the synchronous generator and an increase in power system stability.
The proposed study contributes to the progress in the development of the advanced stabilizer, which will replace conventional linear stabilizers and improve damping of the synchronous generators.
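The core idea of sliding mode control (a switching law that drives the state onto a sliding surface and keeps it there despite bounded disturbances) can be illustrated on a double-integrator toy plant. This is a generic sketch, not the stabilizer design of this paper:

```python
import math

def simulate_smc(x0, v0, c=2.0, k=3.0, dt=1e-3, steps=10_000):
    """Sliding mode regulation of a double-integrator plant x'' = u + d(t)
    with a bounded, unmeasured disturbance d. Sliding surface s = c*x + v;
    the switching law u = -c*v - k*sign(s) forces s -> 0 (reaching phase),
    after which the state slides along s = 0 and x decays as x' = -c*x."""
    x, v = x0, v0
    for i in range(steps):
        s = c * x + v
        sign_s = 1.0 if s > 0 else -1.0 if s < 0 else 0.0
        u = -c * v - k * sign_s        # switching control law
        d = math.sin(5.0 * i * dt)     # disturbance with |d| <= 1 < k
        x += v * dt                    # explicit Euler integration
        v += (u + d) * dt
    return x, v

# Regulate from x(0) = 1, v(0) = 0 despite the disturbance
x_end, v_end = simulate_smc(1.0, 0.0)
```

The switching gain k only needs to exceed the disturbance bound, which is the source of the robustness to parameter variations and disturbances highlighted in the abstract; the price is the high-frequency chattering visible in the sign term.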

Keywords: control theory, power system stabilizer, robust control, sliding mode control, stability, synchronous generator

Procedia PDF Downloads 224
628 Exploring Smartphone Applications for Enhancing Second Language Vocabulary Learning

Authors: Abdulmajeed Almansour

Abstract:

Learning a foreign language with the assistance of technological tools has become an interest of learners and educators. The increased use of smartphones among undergraduate students has made them popular not only for social communication but also for entertainment and educational purposes. Smartphones provide remarkable advantages in the language learning process, and learning vocabulary is an important part of learning a language. The use of smartphone applications for English vocabulary learning gives learners an opportunity to improve their vocabulary knowledge beyond the classroom walls, anytime and anywhere. Recently, various smartphone applications have been created specifically for vocabulary learning. This paper explores the use of Memrise, a smartphone application designed for vocabulary learning, to enhance academic vocabulary among undergraduate students. It examines whether a course designed around the Memrise smartphone application enhances academic vocabulary learning among ESL learners. The research paradigm followed a mixed-methods model combining quantitative and qualitative research. The study included two hundred undergraduate students randomly assigned to experimental and control groups during the first academic year at the Faculty of English Language, Imam University. The research instruments included an attitudinal questionnaire and an English vocabulary pre-test administered to students at the beginning of the semester, whereas a post-test and semi-structured interviews were administered at the end of the semester. The findings of the attitudinal questionnaire revealed a positive attitude towards using smartphones in learning vocabulary. The post-test scores showed a significant difference in the experimental group's performance. The results from the semi-structured interviews showed positive attitudes towards the Memrise smartphone application.
The students found the application an enjoyable, convenient, and efficient learning tool. From the study, the use of the Memrise application is seen to have long-term motivational benefits for students. For this reason, further research is needed to identify the optimal long-term effects of learning a language using smartphone applications.

Keywords: second language vocabulary learning, academic vocabulary, mobile learning technologies, smartphone applications

Procedia PDF Downloads 160
627 Carbon Sequestration Modeling in the Implementation of REDD+ Programmes in Nigeria

Authors: Oluwafemi Samuel Oyamakin

Abstract:

The forest in Nigeria is currently estimated to extend to around 9.6 million hectares, but it used to expand over central and southern Nigeria decades ago. The forest estate is shrinking due to long-term human exploitation for agricultural development, fuel wood demand, uncontrolled forest harvesting, and urbanization, amongst other factors, compounded by population growth in rural areas. Nigeria has lost more than 50% of its forest cover since 1990, and currently less than 10% of the country is forested. The current deforestation rate is estimated at 3.7%, which is one of the highest in the world. Reducing Emissions from Deforestation and forest Degradation, plus conservation, sustainable management of forests, and enhancement of forest carbon stocks, constitutes what is referred to as REDD+. This study evaluated some of the existing ways of computing carbon stocks using eight indigenous tree species: Mansonia, Shorea, Bombax, Terminalia superba, Khaya grandifoliola, Khaya senegalensis, pines, and Gmelina arborea. While these components are the essential elements of the REDD+ programme, they can be brought under a broader framework of systems analysis designed to arrive at optimal solutions for future predictions through the statistical distribution pattern of carbon sequestered by the various tree species. Available data on the height and diameter of trees in Ibadan were studied, their respective carbon sequestration potentials were assessed, and the data were subjected to tests to determine the statistical distribution that best describes the carbon sequestration pattern of the trees. The results of this study suggest a reasonable statistical distribution for the carbon sequestered in simulation studies, allowing planners and government to forecast resources for sustainable development, especially where experiments with real-life systems are infeasible.
Sustainable management of forests can then be achieved by projecting the future condition of forests under different management regimes, thereby supporting conservation and REDD+ programmes in Nigeria.
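Per-tree carbon stocks of the kind analyzed above are typically estimated from height and diameter via an allometric biomass equation and a carbon fraction. The sketch below uses generic pantropical coefficients (a Chave-type model) and an illustrative 0.47 carbon fraction; these are assumptions for illustration, not the species-specific fits or distributions of this study:

```python
def tree_carbon_kg(dbh_cm, height_m, wood_density=0.6, carbon_fraction=0.47):
    """Rough above-ground carbon estimate for one tree from diameter at
    breast height D [cm], height H [m], and wood density rho [g/cm^3],
    using a generic allometric form AGB = a * (rho * D^2 * H)^b.
    The coefficients, density, and carbon fraction are illustrative defaults."""
    a, b = 0.0673, 0.976  # generic pantropical coefficients
    agb_kg = a * (wood_density * dbh_cm ** 2 * height_m) ** b  # biomass [kg]
    return carbon_fraction * agb_kg

# A hypothetical 30 cm DBH, 20 m tall tree
carbon_kg = tree_carbon_kg(dbh_cm=30.0, height_m=20.0)
```

Applying such an equation tree by tree yields the sample of sequestered-carbon values to which candidate statistical distributions can then be fitted, as the abstract describes.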

Keywords: REDD+, carbon, climate change, height and diameter

Procedia PDF Downloads 167
626 DNA Methylation Changes in Response to Ocean Acidification at the Time of Larval Metamorphosis in the Edible Oyster, Crassostrea hongkongensis

Authors: Yong-Kian Lim, Khan Cheung, Xin Dang, Steven Roberts, Xiaotong Wang, Vengatesen Thiyagarajan

Abstract:

The unprecedented rate of CO₂ increase in the ocean and the subsequent changes in the carbonate system, including decreased pH, known as ocean acidification (OA), are predicted to disrupt not only the calcification process but also several other physiological and developmental processes in a variety of marine organisms, including edible oysters. Nonetheless, not all species are vulnerable to these OA threats; some may be able to cope with OA stress through environmentally induced modifications of gene and protein expression. For example, external environmental stressors, including OA, can influence the addition and removal of methyl groups through epigenetic modification processes (e.g., DNA methylation) that turn gene expression “on or off” as part of a rapid adaptive mechanism. In this study, this hypothesis was tested by examining the effect of OA, using decreased pH 7.4 as a proxy, on the DNA methylation pattern of an endemic and commercially important estuary oyster species, Crassostrea hongkongensis, at the time of larval habitat selection and metamorphosis. Larval growth rate did not differ between the control (pH 8.1) and the treatment (pH 7.4). The metamorphosis rate of the pediveliger larvae was higher at pH 7.4 than at pH 8.1; however, over one-third of the larvae raised at pH 7.4 failed to attach to an optimal substrate as defined by biofilm presence. During larval development, a total of 130 genes were differentially methylated across the two treatments. This differential methylation may have partially accounted for the higher metamorphosis success rate at pH 7.4 coupled with poor substratum selection ability. Differentially methylated loci were concentrated in exon regions and appear to be associated with cytoskeletal and signal transduction, oxidative stress, metabolic processes, and larval metamorphosis, which implies a high potential of C. hongkongensis larvae to acclimate and adapt to OA threats through non-genetic means within a single generation.
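As a minimal sketch of how a single differentially methylated locus could be called from read counts, the snippet below applies a one-sided Fisher's exact (hypergeometric) test to a 2x2 table of methylated/unmethylated reads. The counts and the test choice are illustrative assumptions; the abstract does not describe the study's actual methylation-calling pipeline.

```python
# One-sided Fisher's exact test on a hypothetical 2x2 methylation table.
# Rows: condition (pH 7.4 treatment, pH 8.1 control);
# columns: (methylated reads, unmethylated reads).
from math import comb

def fisher_one_sided(a, b, c, d):
    """P(methylated count in treatment >= a) with table margins fixed."""
    row1, col1, n = a + b, a + c, a + b + c + d
    total = comb(n, row1)
    return sum(comb(col1, k) * comb(n - col1, row1 - k)
               for k in range(a, min(row1, col1) + 1)) / total

# Hypothetical locus: 8/10 reads methylated at pH 7.4 vs 2/10 at pH 8.1.
p_value = fisher_one_sided(8, 2, 2, 8)
```

In a genome-wide screen like the one reported (130 genes), such per-locus p-values would additionally need multiple-testing correction before loci are declared differentially methylated.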

Keywords: adaptive plasticity, DNA methylation, larval metamorphosis, ocean acidification

Procedia PDF Downloads 139
625 Big Data Applications for the Transport Sector

Authors: Antonella Falanga, Armando Cartenì

Abstract:

Today, our lives are characterized by an unprecedented amount of data coming from several sources, including mobile devices, sensors, tracking systems, and online platforms. The term “big data” refers not only to the quantity of data but also to the variety and speed of data generation. These data hold valuable insights that, when extracted and analyzed, facilitate informed decision-making. The 4Vs of big data - velocity, volume, variety, and value - highlight its essential aspects: the rapid generation, vast quantities, diverse sources, and potential value of these kinds of data. This surge of information has revolutionized many sectors: business, for improving decision-making processes; healthcare, for clinical record analysis and medical research; education, for enhancing teaching methodologies; agriculture, for optimizing crop management; finance, for risk assessment and fraud detection; media and entertainment, for personalized content recommendations; emergency management, for real-time response during crises; and mobility, for urban planning and the design/management of public and private transport services. Big data's pervasive impact enhances societal aspects, elevating quality of life, service efficiency, and problem-solving capacity. However, this transformative era also raises new challenges, including data quality, privacy, data security, cybersecurity, interoperability, the need for advanced infrastructures, and staff training. Within the transportation sector (the one investigated in this research), applications span the planning, design, and management of systems and mobility services. Among the most common big data applications in the transport sector are real-time traffic monitoring, bus/freight vehicle route optimization, vehicle maintenance, road safety, and autonomous and connected vehicle applications. Benefits include reductions in travel times, road accidents, and pollutant emissions.
Among these issues, proper transport demand estimation is crucial for sustainable transportation planning. Evaluating the impact of sustainable mobility policies starts with a quantitative analysis of travel demand, and achieving transportation decarbonization goals hinges on precise estimation of the demand for individual transport modes. Emerging technologies, offering substantial big data at lower costs than traditional methods, play a pivotal role in this context. Starting from these considerations, this study explores the usefulness of big data for transport demand estimation. This research focuses on leveraging (big) data collected during the COVID-19 pandemic to estimate the evolution of mobility demand in Italy. Estimation results reveal, for the post-COVID-19 era, more than 96 million national daily trips, about 2.6 trips per capita, with a mobile population of more than 37.6 million Italian travelers per day. Overall, this research allows us to conclude that big data enhance rational decision-making for mobility demand estimation, which is imperative for adeptly planning and allocating investments in transportation infrastructures and services.
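The headline indicators quoted above reduce to simple aggregation over trip records. The sketch below shows that arithmetic on a few invented per-device records and then scales it to the abstract's national estimates; the device records and their format are hypothetical, not the study's data.

```python
# Back-of-the-envelope aggregation of daily trip records into the
# demand indicators cited in the abstract. The per-device records
# below are invented for illustration.
daily_records = [
    # (device_id, trips_that_day) -- hypothetical mobile-phone data
    ("dev-001", 3), ("dev-002", 2), ("dev-003", 0), ("dev-004", 4),
]

total_trips = sum(t for _, t in daily_records)
mobile_population = sum(1 for _, t in daily_records if t > 0)  # devices that moved
trips_per_traveler = total_trips / mobile_population

# The same arithmetic applied to the abstract's national estimates:
national_trips = 96e6        # daily trips, post-COVID-19 Italy
national_travelers = 37.6e6  # mobile population per day
print(round(national_trips / national_travelers, 1))  # prints 2.6
```

The 2.6 trips-per-capita figure in the abstract is thus consistent with dividing total daily trips by the estimated mobile population.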

Keywords: big data, cloud computing, decision-making, mobility demand, transportation

Procedia PDF Downloads 62
624 Evaluation of Stress Relief using Ultrasonic Peening in GTAW Welding and Stress Corrosion Cracking (SCC) in Stainless Steel, and Comparison with the Thermal Method

Authors: Hamidreza Mansouri

Abstract:

In the construction industry, the lifespan of a metal structure is directly related to the quality of its welds. In most metal structures, the welded area is considered critical and is one of the most important factors in design; to date, many fracture incidents have been caused by cracks in this zone. Various methods exist to increase the lifespan of welds and prevent failure in the welded area. Among these, ultrasonic peening, in addition to providing stress relief, can manually and more precisely adjust the geometry of the weld toe and prevent stress concentration in this region. This research examined Gas Tungsten Arc Welding (GTAW) on common structural steels and 316 stainless steel, which require precise welding, to identify the optimal treatment condition. The GTAW process was used to create residual stress; two samples then underwent ultrasonic stress relief, two underwent thermal stress relief for comparison, and two were left untreated. The residual stress of all six pieces was measured by the X-Ray Diffraction (XRD) method. The two ultrasonically stress-relieved samples and the two untreated samples were then exposed to a corrosive environment to initiate cracking and assess the effectiveness of the ultrasonic stress relief method. The residual stress caused by GTAW decreased by 3.42% with thermal treatment and by 7.69% with ultrasonic peening. Furthermore, the untreated sample developed cracks after 740 hours, while the ultrasonically stress-relieved piece showed no cracks. Given the high costs of welding and post-weld modification processes, finding an economical, effective, and comprehensive method with the fewest limitations and a broad spectrum of use is of great importance.
Therefore, studying the impact of the various ultrasonic peening parameters and selecting those that maximize the lifespan of the weld area is highly significant.
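XRD residual-stress measurements of the kind reported here are commonly evaluated with the sin²ψ method: lattice spacing d varies linearly with sin²ψ, and the slope yields the stress. The sketch below implements that evaluation under stated assumptions; the elastic constants, unstressed spacing, and synthetic data are illustrative, not the authors' measurements.

```python
# sin^2(psi) evaluation of an XRD residual-stress measurement (sketch).
# Assumed constants: E and nu typical of 316 stainless steel, d0 hypothetical.
E = 193e9    # Pa, Young's modulus (assumed)
NU = 0.3     # Poisson's ratio (assumed)
D0 = 1.170   # angstrom, unstressed lattice spacing (hypothetical)

def sigma_from_sin2psi(sin2psi, d_meas):
    """Least-squares slope of d vs sin^2(psi); sigma = slope * E / (d0 * (1 + nu))."""
    n = len(sin2psi)
    mx = sum(sin2psi) / n
    my = sum(d_meas) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(sin2psi, d_meas))
             / sum((x - mx) ** 2 for x in sin2psi))
    return slope * E / (D0 * (1 + NU))

# Synthetic d-spacings generated from a known 200 MPa tensile stress:
sigma_true = 200e6
xs = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
ds = [D0 + D0 * (1 + NU) / E * sigma_true * x for x in xs]
sigma_est = sigma_from_sin2psi(xs, ds)  # recovers ~200 MPa
```

Percentage reductions like the 3.42% and 7.69% figures then follow directly from comparing such stress values before and after treatment.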

Keywords: GTAW welding, stress corrosion cracking (SCC), thermal method, ultrasonic peening

Procedia PDF Downloads 50
623 The Use of Coronary Calcium Scanning for Cholesterol Assessment and Management

Authors: Eva Kirzner

Abstract:

Based on outcome studies published over the past two decades, in 2018 the ACC/AHA published new guidelines for the management of hypercholesterolemia that incorporate coronary artery calcium (CAC) scanning as a decision tool for ascertaining which patients may benefit from statin therapy. This use rests on the recognition that the absence of calcium on CAC scanning (i.e., a CAC score of zero) usually signifies the absence of significant atherosclerotic deposits in the coronary arteries. Specifically, in patients at high risk for atherosclerotic cardiovascular disease (ASCVD), initiation of statin therapy is generally recommended to decrease ASCVD risk; among patients at intermediate ASCVD risk, however, the need for statin therapy is less certain. There is therefore a need for new outcome studies providing evidence that management of hypercholesterolemia based on these new ACC/AHA recommendations is safe for patients. A Pub-Med and Google Scholar literature search identified four relevant population- or patient-based cohort studies, published between 2017 and 2021, that examined the relationship between CAC scanning, risk assessment or mortality, and statin therapy (see references). In each of these studies, patients' baseline ASCVD risk was assessed using the Pooled Cohort Equations (PCE), an ACC/AHA calculator that combines patient age, gender, ethnicity, and coronary artery disease risk factors. The combined findings of these four studies provide concordant evidence that a zero CAC score defines patients who remain at low clinical risk despite the non-use of statin therapy. Thus, these new studies confirm CAC scanning as a safe tool for reducing the potential overuse of statin therapy among patients with zero CAC scores.
These new data suggest the following best practice: (1) ascertain ASCVD risk according to the PCE in all patients; (2) following an initial trial of lowering ASCVD risk with an optimal diet, initiate statin therapy for patients who have a high ASCVD risk score; (3) if the ASCVD score is intermediate, refer patients for CAC scanning; and (4) if the CAC score is zero among intermediate-risk patients, statin therapy can be safely withheld despite an elevated serum cholesterol level.
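The four-step pathway described above is essentially a small decision tree, sketched below. The 7.5% and 20% ten-year-risk cut-points are the usual ACC/AHA thresholds, supplied here as assumptions; the abstract itself does not state numeric thresholds, and the return strings are illustrative labels, not clinical advice.

```python
def statin_recommendation(ascvd_10yr_risk, cac_score=None):
    """Sketch of the abstract's four-step pathway.
    ascvd_10yr_risk: PCE-estimated 10-year ASCVD risk as a fraction.
    cac_score: Agatston score if a CAC scan was done, else None.
    Thresholds (0.075, 0.20) are assumed ACC/AHA cut-points."""
    if ascvd_10yr_risk >= 0.20:        # high risk
        return "initiate statin after diet trial"
    if ascvd_10yr_risk >= 0.075:       # intermediate risk
        if cac_score is None:
            return "refer for CAC scanning"
        if cac_score == 0:
            return "withhold statin; re-assess later"
        return "initiate statin"
    return "lifestyle management"      # low risk
```

For example, an intermediate-risk patient with a zero CAC score falls through to the withhold branch, mirroring step (4) of the pathway.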

Keywords: cholesterol, cardiovascular disease, statin therapy, coronary calcium

Procedia PDF Downloads 115
622 Electron Beam Melting Process Parameter Optimization Using Multi Objective Reinforcement Learning

Authors: Michael A. Sprayberry, Vincent C. Paquit

Abstract:

Process parameter optimization in metal powder bed electron beam melting (MPBEBM) is crucial to ensure the technology's repeatability, control, and continued industry adoption. Despite continued efforts to address the challenges via traditional design-of-experiments and process-mapping techniques, an on-the-fly optimization framework that can be adapted to MPBEBM systems has remained elusive. Additionally, data-intensive physics-based modeling and simulation methods are difficult to sustain for each metal AM alloy or system due to cost restrictions. To mitigate the challenge of resource-intensive experiments and models, this paper introduces a Multi-Objective Reinforcement Learning (MORL) methodology that frames MPBEBM process parameter selection as an optimization problem. An off-policy MORL framework based on policy gradients is proposed to discover optimal sets of beam power (P) – beam velocity (v) combinations that maintain a steady-state melt pool depth and phase transformation. An experimentally validated Eagar-Tsai melt pool model is used to simulate the MPBEBM environment, in which the beam acts as the agent across the P – v space, maximizing returns for the uncertain powder bed environment by producing a melt pool and phase transformation closer to the optimum. The training process culminates in a set of process parameters {power, speed, hatch spacing, layer depth, and preheat}, where the state (P, v) with the highest returns corresponds to a refined process parameter mapping. The resultant objectives and mapping of returns onto the P – v space show convergence with experimental observations. The framework therefore provides a model-free multi-objective approach to discovery without the need for trial-and-error experiments.
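The core idea of a policy-gradient search over the P – v space can be sketched as a small REINFORCE loop. The melt-pool "simulator" below is a made-up line-energy surrogate, not the experimentally validated Eagar-Tsai model used by the authors, and the single scalarized reward stands in for their multi-objective returns; the parameter grids and target depth are invented for illustration.

```python
# Toy policy-gradient (REINFORCE) search over a discrete (P, v) grid,
# standing in for the paper's off-policy MORL framework.
import math
import random

random.seed(0)

POWERS = [300.0, 450.0, 600.0, 750.0]   # W   (hypothetical grid)
SPEEDS = [0.5, 1.0, 1.5, 2.0]           # m/s (hypothetical grid)
ACTIONS = [(p, v) for p in POWERS for v in SPEEDS]
TARGET_DEPTH = 90.0                     # um, assumed steady-state target

def melt_pool_depth(p, v):
    """Crude surrogate: depth grows with line energy P/v (NOT Eagar-Tsai)."""
    return 0.15 * p / v

def reward(p, v):
    """Scalarized objective: closeness of melt pool depth to the target."""
    return -abs(melt_pool_depth(p, v) - TARGET_DEPTH)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def train(episodes=4000, lr=0.05):
    logits = [0.0] * len(ACTIONS)       # one logit per (P, v) pair
    baseline = 0.0                      # running-mean baseline for variance reduction
    for _ in range(episodes):
        probs = softmax(logits)
        i = random.choices(range(len(ACTIONS)), weights=probs)[0]
        r = reward(*ACTIONS[i])
        baseline += 0.01 * (r - baseline)
        adv = r - baseline
        for j in range(len(ACTIONS)):   # grad of log pi(i) wrt logit j
            grad = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * adv * grad
    probs = softmax(logits)
    return ACTIONS[max(range(len(ACTIONS)), key=lambda k: probs[k])]

best_p, best_v = train()  # (P, v) pair the learned policy prefers
```

Swapping the surrogate for a calibrated melt-pool model and the scalar reward for a vector of objectives (depth, phase transformation) is what separates this sketch from the full MORL framework described above.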

Keywords: additive manufacturing, metal powder bed fusion, reinforcement learning, process parameter optimization

Procedia PDF Downloads 90