Search results for: free software
77 Economic Analysis of a Carbon Abatement Technology
Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emmanuele, Agbadede Roupa, Allison Isaiah
Abstract:
Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959 and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero emission power plant. The advanced zero emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process. The membrane separation process was first introduced in 1899 when Walter Hermann Nernst investigated electric current between metals and solutions. He found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In the bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low carbon cycle known as the advanced zero emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP cycle drew a lot of attention because of its ability to capture ~100% of CO2; it also boasts a cost reduction of about 30-50% compared to other carbon abatement technologies, an efficiency penalty smaller than that of its counterparts, and almost zero NOx emissions due to the very low nitrogen concentrations in the working fluid. The advanced zero emission power plant differs from a conventional gas turbine in that its combustor is substituted with the mixed conductive membrane reactor (MCM reactor). The MCM reactor is made up of the combustor, the low-temperature heat exchanger (LTHX, referred to by some authors as the air preheater), the mixed conductive membrane responsible for oxygen transfer, the high-temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the temperature is also increased to facilitate oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to the inlet of the LTHX. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air going through the membrane. The AZEP cycle was modelled in Fortran, and the economic analysis was conducted using Excel and MATLAB, followed by an optimization case study. Four layouts were considered: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating layout (sequential burning layout) – AZEP 85% (85% CO2 capture) and the pre-expansion reheating layout (sequential burning layout) with flue gas turbine – AZEP 85% (85% CO2 capture).
This paper discusses a Monte Carlo risk analysis of the four possible layouts of the AZEP cycle.
Keywords: gas turbine, global warming, greenhouse gas, fossil fuel power plants
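The risk analysis referred to in this abstract was carried out in Excel and MATLAB, and the code itself is not given. As a rough illustration of how such a Monte Carlo risk assessment can be set up, the following Python sketch samples uncertain economic inputs for a hypothetical AZEP-style plant and reports the spread of the resulting net present value; every distribution and parameter value here is an assumption made purely for illustration, not a figure from the paper.

```python
# Minimal Monte Carlo risk sketch in the spirit of the AZEP economic analysis.
# All numbers and parameter names are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                                      # number of Monte Carlo trials

# Uncertain inputs (assumed triangular/normal distributions)
fuel_price = rng.triangular(4, 6, 9, N)          # $/GJ
elec_price = rng.normal(75, 10, N)               # $/MWh
capex = rng.triangular(1100, 1300, 1600, N)      # $/kW installed
efficiency = rng.normal(0.49, 0.02, N)           # net cycle efficiency (fraction)
co2_price = rng.triangular(20, 40, 80, N)        # $/tCO2 avoided (credit)

# Simple annual cash-flow model: 400 MW plant, 8000 h/y, 25-year life, 8% discount rate
power_mw, hours, life, r = 400, 8000, 25, 0.08
energy_mwh = power_mw * hours
fuel_cost = energy_mwh * 3.6 / efficiency * fuel_price / 1e6      # M$/y (3.6 GJ per MWh)
revenue = energy_mwh * elec_price / 1e6                           # M$/y
co2_credit = energy_mwh * 0.35 * co2_price / 1e6                  # M$/y, 0.35 tCO2/MWh avoided (assumed)
annual_cash = revenue + co2_credit - fuel_cost
annuity = (1 - (1 + r) ** -life) / r
npv = -capex * power_mw * 1000 / 1e6 + annual_cash * annuity      # M$

print(f"P10 / P50 / P90 NPV (M$): {np.percentile(npv, [10, 50, 90]).round(0)}")
print(f"Probability of negative NPV: {(npv < 0).mean():.1%}")
```

The percentile spread and the probability of a negative NPV are the kind of risk indicators such an analysis compares across the four layouts.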
Procedia PDF Downloads 397
76 High Pressure Thermophysical Properties of Complex Mixtures Relevant to Liquefied Natural Gas (LNG) Processing
Authors: Saif Al Ghafri, Thomas Hughes, Armand Karimi, Kumarini Seneviratne, Jordan Oakley, Michael Johns, Eric F. May
Abstract:
Knowledge of the thermophysical properties of complex mixtures at extreme conditions of pressure and temperature has always been essential to the Liquefied Natural Gas (LNG) industry’s evolution because of the tremendous technical challenges present at all stages in the supply chain from production to liquefaction to transport. Each stage is designed using predictions of the mixture’s properties, such as density, viscosity, surface tension, heat capacity and phase behaviour as a function of temperature, pressure, and composition. Unfortunately, currently available models lead to equipment over-designs of 15% or more. To achieve better designs that work more effectively and/or over a wider range of conditions, new fundamental property data are essential, both to resolve discrepancies in our current predictive capabilities and to extend them to the higher-pressure conditions characteristic of many new gas fields. Furthermore, innovative experimental techniques are required to measure different thermophysical properties at high pressures and over a wide range of temperatures, including near the mixture’s critical points, where gas and liquid become indistinguishable and most existing predictive fluid property models break down. In this work, we present a wide range of experimental measurements made for different binary and ternary mixtures relevant to LNG processing, with a particular focus on viscosity, surface tension, heat capacity, bubble points and density. For this purpose, customized and specialized apparatus were designed and validated over the temperature range (200 to 423) K at pressures to 35 MPa. The mixtures studied were (CH4 + C3H8), (CH4 + C3H8 + CO2) and (CH4 + C3H8 + C7H16); in the last of these the heptane content was up to 10 mol%. Viscosity was measured using a vibrating wire apparatus, while mixture densities were obtained by means of a high-pressure magnetic-suspension densimeter and an isochoric cell apparatus; the latter was also used to determine bubble points. Surface tensions were measured using the capillary rise method in a visual cell, which also enabled the location of the mixture critical point to be determined from observations of critical opalescence. Mixture heat capacities were measured using a customised high-pressure differential scanning calorimeter (DSC). The combined standard relative uncertainties were less than 0.3% for density, 2% for viscosity, 3% for heat capacity and 3% for surface tension. The extensive experimental data gathered in this work were compared with a variety of different advanced engineering models frequently used for predicting thermophysical properties of mixtures relevant to LNG processing. In many cases the discrepancies between the predictions of different engineering models for these mixtures were large, and the high quality data allowed erroneous but often widely-used models to be identified. The data enable the development of new or improved models, to be implemented in process simulation software, so that the fluid properties needed for equipment and process design can be predicted reliably. This in turn will enable reduced capital and operational expenditure by the LNG industry. The current work also aided the community of scientists working to advance theoretical descriptions of fluid properties by allowing deficiencies in theoretical descriptions and calculations to be identified.
Keywords: LNG, thermophysical, viscosity, density, surface tension, heat capacity, bubble points, models
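The model comparison described above boils down to computing deviation statistics between measured values and each model's predictions. A hedged Python sketch of that step follows; the data points and the `model_density` function are placeholders standing in for the experimental results and whichever equation of state is being assessed, not values from this work.

```python
# Hedged sketch of comparing measured mixture densities with a model prediction.
# The data points and the toy correlation are placeholders, not results from this work.
import numpy as np

# (T [K], p [MPa], measured density [kg/m3]) -- illustrative points only
measured = np.array([
    (250.0,  5.0, 282.0),
    (300.0, 15.0, 276.0),
    (350.0, 30.0, 291.0),
])

def model_density(T, p):
    """Stand-in for an equation-of-state call (e.g. a GERG-type or cubic EoS)."""
    return 400.0 - 0.5 * T + 2.0 * p   # toy correlation for illustration only

T, p, rho_exp = measured.T
rho_calc = model_density(T, p)

dev = 100.0 * (rho_calc - rho_exp) / rho_exp     # relative deviation in %
aad = np.mean(np.abs(dev))                       # average absolute deviation
bias = np.mean(dev)                              # systematic offset
print(f"AAD = {aad:.2f}%, bias = {bias:+.2f}%, max |dev| = {np.max(np.abs(dev)):.2f}%")
```

Statistics of this kind (average absolute deviation, bias, maximum deviation) are how "erroneous but widely-used" models are flagged against high-quality reference data.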
Procedia PDF Downloads 274
75 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin’s Complex Thought
Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan
Abstract:
Can artificial intelligence (AI) be held accountable for its detrimental impacts? This question gains heightened relevance given AI's pervasive reach across various domains, magnifying its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility, and transparency. This encompasses discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, denoting the necessity to elucidate the effects it generates. Accountability comprises two integral aspects: adherence to legal and ethical standards and the imperative to elucidate the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this "accountability" in the face of the complexity of artificial intelligence systems and of their effects. The article then proposes to mobilize Edgar Morin's complex thought to encompass and face the challenges of this complexity. The first contribution is to point out the challenges posed by the complexity of AI, with accountability fragmented among a myriad of human and non-human actors, such as software and equipment, which ultimately contribute to the decisions taken and whose number is multiplied in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The challenge of the non-neutrality of algorithmic systems, as actors that are not ethically neutral, is put forward by a revealing ethics approach that calls for assigning responsibilities to these systems. The challenge of the dilution of responsibility is induced by the multiplicity of actors and the distance between them: decision-making is split between developers, who feel they fulfill their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. Finally, accountability is confronted with the challenge of the transparency of complex and scalable algorithmic systems, non-human actors that self-learn via big data. A second contribution involves leveraging E. Morin's principles, providing a framework to grasp the multifaceted ethical dilemmas and subsequently paving the way for establishing accountability in AI. When addressing the ethical challenge of biases, the "hologrammatic" principle underscores the imperative of acknowledging the non-ethical neutrality of algorithmic systems, inherently imbued with the values and biases of their creators and society. The "dialogic" principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements in solutions from the very inception of the design phase. The principle of organizational recursiveness, akin to the "transparency" of the system, promotes a systemic analysis that accounts for the induced effects and guides the incorporation of modifications into the system to rectify its drifts. In conclusion, this contribution serves as a starting point for contemplating the accountability of "artificial intelligence" systems despite the evident ethical implications and potential deviations.
Edgar Morin's principles, providing a lens to contemplate this complexity, offer valuable perspectives to address these challenges concerning accountability.
Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin
Procedia PDF Downloads 63
74 Design and Construction of a Home-Based, Patient-Led, Therapeutic, Post-Stroke Recovery System Using Iterative Learning Control
Authors: Marco Frieslaar, Bing Chu, Eric Rogers
Abstract:
Stroke is a devastating illness that is the second biggest cause of death in the world (after heart disease). Where it does not kill, it leaves survivors with debilitating sensory and physical impairments that not only seriously harm their quality of life, but also cause a high incidence of severe depression. It is widely accepted that early intervention is essential for recovery, but current rehabilitation techniques largely favor hospital-based therapies, which have restricted access, require expensive and specialist equipment, and tend to side-step the emotional challenges. In addition, there is insufficient funding available to provide the long-term assistance that is required. As a consequence, recovery rates are poor. The relatively unexplored solution is to develop therapies that can be harnessed in the home and are formulated from technologies that already exist in everyday life. This would empower individuals to take control of their own improvement and provide choice in terms of when and where they feel best able to undertake their own healing. This research seeks to identify how effective post-stroke rehabilitation therapy can be applied to upper limb mobility within the physical context of a home rather than a hospital. This is being achieved through the design and construction of an automation scheme, based on iterative learning control and the Riener muscle model, that has the ability to adapt to the user, react to their level of fatigue and provide tangible physical recovery. It utilizes a SMART phone and laptop to construct an iterative learning control (ILC) system that monitors upper arm movement in three dimensions as a series of exercises is undertaken. The equipment generates functional electrical stimulation to assist in muscle activation and thus improve directional accuracy. In addition, it monitors speed, accuracy, areas of motion weakness and similar parameters to create a performance index that can be compared over time and extrapolated to establish an independent and objective assessment scheme, plus an approximate estimation of the predicted final outcome. To further extend its assessment capabilities, nerve conduction velocity readings are taken by the software between the shoulder and hand muscles. These are used to measure the speed of neuron signal transfer along the arm so that, over time, an online indication of regeneration levels can be obtained. This will show whether or not sufficient training intensity is being achieved even before perceivable movement dexterity is observed. The device also provides the option to connect to other users via the internet, so that the patient can avoid feelings of isolation and can undertake movement exercises together with others in a similar position. This should create benefits not only for the encouragement of rehabilitation participation, but also for a potential emotional support network. It is intended that this approach will extend the availability of stroke recovery options, enable ease of access at a low cost, reduce susceptibility to depression and, through these endeavors, enhance the overall recovery success rate.
Keywords: home-based therapy, iterative learning control, Riener muscle model, SMART phone, stroke rehabilitation
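The core of the scheme above is the trial-to-trial update of iterative learning control: the stimulation profile applied during one exercise repetition is corrected using the tracking error recorded on the previous repetition. The authors' implementation is not given in the abstract; the following Python sketch shows a minimal P-type ILC loop on a toy first-order arm model, with all dynamics and gains chosen purely for illustration.

```python
# Minimal P-type iterative learning control (ILC) sketch.
# The "arm" model and the gains are illustrative assumptions, not the system from the paper.
import numpy as np

T = 100                                            # samples per exercise repetition
reference = np.sin(np.linspace(0, np.pi, T))       # desired reaching trajectory (normalized)

def arm_response(u):
    """Toy first-order response of the assisted limb to the stimulation profile u."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = 0.9 * y[k - 1] + 0.1 * u[k - 1]
    return y

u = np.zeros(T)                                    # stimulation profile, refined trial by trial
L_gain = 2.0                                       # learning gain (P-type ILC)

for trial in range(20):
    y = arm_response(u)
    e = reference - y                              # tracking error for this repetition
    # P-type ILC update: feed the time-shifted error forward into the next trial's input
    u[:-1] = u[:-1] + L_gain * e[1:]
    print(f"trial {trial:2d}  RMS error = {np.sqrt(np.mean(e**2)):.4f}")
```

With the chosen gain the RMS tracking error shrinks from one repetition to the next, which is the same mechanism that lets the real system refine the stimulation as the patient repeats an exercise.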
Procedia PDF Downloads 264
73 Integrating the Modbus SCADA Communication Protocol with Elliptic Curve Cryptography
Authors: Despoina Chochtoula, Aristidis Ilias, Yannis Stamatiou
Abstract:
Modbus is a protocol that enables communication among devices which are connected to the same network. This protocol is often deployed to connect sensor and monitoring units to central supervisory servers in Supervisory Control and Data Acquisition, or SCADA, systems. These systems monitor critical infrastructures, such as factories, power generation stations, nuclear power reactors etc. in order to detect malfunctions and trigger alerts and corrective actions. However, due to their criticality, SCADA systems are vulnerable to attacks that range from simple eavesdropping on operation parameters, exchanged messages, and valuable infrastructure information to malicious modification of vital infrastructure data towards infliction of damage. Thus, the SCADA research community has been active in strengthening SCADA systems with suitable data protection mechanisms based, to a large extent, on cryptographic methods for data encryption, device authentication, and message integrity protection. However, due to the limited computation power of many SCADA sensor and embedded devices, the usual public key cryptographic methods are not appropriate due to their high computational requirements. As an alternative, Elliptic Curve Cryptography has been proposed, which requires smaller key sizes and, thus, less demanding cryptographic operations. Until now, however, no such implementation has been proposed in the SCADA literature, to the best of our knowledge. In order to fill this gap, our methodology was focused on integrating Modbus, a frequently used SCADA communication protocol, with Elliptic Curve based cryptography and developing a server/client application to demonstrate a proof of concept. For the implementation we deployed two C language libraries, which were suitably modified in order to be successfully integrated: libmodbus (https://github.com/stephane/libmodbus) and ecc-lib (https://www.ceid.upatras.gr/webpages/faculty/zaro/software/ecc-lib/). The first library provides a C implementation of the Modbus/TCP protocol, while the second one offers the functionality to develop cryptographic protocols based on Elliptic Curve Cryptography. These two libraries were combined, after suitable modifications and enhancements, in order to give a modified version of the Modbus/TCP protocol focusing on the security of the data exchanged among the devices and the supervisory servers. The mechanisms we implemented include key generation, key exchange/sharing, message authentication, data integrity check, and encryption/decryption of data. The key generation and key exchange protocols were implemented with the use of Elliptic Curve Cryptography primitives. The keys established by each device are saved in its local memory, retained during the whole communication session, and used to encrypt and decrypt exchanged messages as well as to certify entities and the integrity of the messages. Finally, the modified library was compiled for the Android environment in order to run the server application as an Android app. The client program runs on a regular computer. The communication between these two entities is an example of the successful establishment of an Elliptic Curve Cryptography based, secure Modbus wireless communication session between a portable device acting as a supervisor station and a monitoring computer.
Our first performance measurements are also very promising and demonstrate the feasibility of embedding Elliptic Curve Cryptography into SCADA systems, filling in a gap in the relevant scientific literature.
Keywords: elliptic curve cryptography, ICT security, Modbus protocol, SCADA, TCP/IP protocol
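The authors' implementation is in C on top of libmodbus and ecc-lib and is not reproduced in the abstract. As a rough illustration of the class of mechanism described (elliptic-curve key agreement followed by authenticated symmetric protection of a Modbus payload), the following Python sketch uses the `cryptography` package; the curve choice, key-derivation parameters and example frame are assumptions made for illustration, not the paper's protocol.

```python
# Hedged sketch: ECDH key agreement + authenticated encryption of a Modbus-style payload.
# Curve, KDF parameters and the example frame are illustrative, not the paper's C implementation.
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each side generates an ephemeral EC key pair and exchanges the public halves
server_priv = ec.generate_private_key(ec.SECP256R1())
client_priv = ec.generate_private_key(ec.SECP256R1())

# Both sides derive the same shared secret from their private key and the peer's public key
server_secret = server_priv.exchange(ec.ECDH(), client_priv.public_key())
client_secret = client_priv.exchange(ec.ECDH(), server_priv.public_key())
assert server_secret == client_secret

# Derive a 256-bit session key with HKDF
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"modbus-session").derive(client_secret)

# Protect a Modbus-style request (function code 0x03: read holding registers) with AES-GCM,
# which gives confidentiality plus an integrity/authentication tag in one pass
aesgcm = AESGCM(session_key)
nonce = os.urandom(12)
modbus_pdu = bytes([0x03, 0x00, 0x10, 0x00, 0x02])          # fn code, start address, register count
ciphertext = aesgcm.encrypt(nonce, modbus_pdu, b"unit-1")    # associated data binds the unit id
plaintext = aesgcm.decrypt(nonce, ciphertext, b"unit-1")
assert plaintext == modbus_pdu
```

The appeal of the elliptic-curve step for constrained SCADA devices is that a 256-bit curve gives security comparable to much larger RSA keys, keeping the key-agreement cost within reach of embedded hardware.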
Procedia PDF Downloads 272
72 Microplastic Concentrations and Fluxes in Urban Compartments: A Systemic Approach at the Scale of the Paris Megacity
Authors: Rachid Dris, Robin Treilles, Max Beaurepaire, Minh Trang Nguyen, Sam Azimi, Vincent Rocher, Johnny Gasperi, Bruno Tassin
Abstract:
Microplastic sources and fluxes in urban catchments are only poorly studied. Most often, the approaches taken focus on a single source and only carry out a description of the contamination levels and types (shape, size, polymers). In order to gain an improved knowledge of microplastic inputs at urban scales, estimating and comparing various fluxes is necessary. The Laboratoire Eau, Environnement et Systèmes Urbains (LEESU), the Laboratoire Eau Environnement (LEE) and the SIAAP (Service public de l’assainissement francilien) initiated several projects to investigate different urban sources and flows of microplastics. A systemic approach is undertaken at the scale of the Paris Megacity, and several compartments are considered, including atmospheric fallout, wastewater treatment plants, runoff and combined sewer overflows. These investigations are carried out within the Limnoplast and OPUR projects. Atmospheric fallout was sampled during consecutive periods ranging from 2 to 3 weeks with a stainless-steel funnel. Both wet and dry periods were considered. Different treatment steps were sampled in 2 wastewater treatment plants (Seine-Amont for activated sludge and Seine-Centre for biofiltration) of the SIAAP, including sludge samples. Microplastics were also investigated in combined sewer overflows as well as in stormwater at the outlet of a suburban catchment (Sucy-en-Brie, France) during four rain events. Samples are treated using hydrogen peroxide digestion (30% H₂O₂) in order to reduce organic material. Microplastics are then extracted from the samples with a density separation step using NaI (d=1.6 g.cm⁻³). Samples are filtered on metallic filters with a porosity of 14 µm between steps to separate them from the solutions (H₂O₂ and NaI). The last filtration was carried out on alumina filters. Infrared mapping analysis (using a micro-FTIR with an MCT detector) is performed on each alumina filter. The resulting maps are analyzed using the microplastic analysis software siMPle, developed by Aalborg University, Denmark and the Alfred Wegener Institute, Germany. Blanks were systematically carried out to account for sample contamination. This presentation aims at synthesizing the data found in the various projects. In order to carry out a systemic approach and compare the various inputs, all the data were converted into annual microplastic fluxes (number of microplastics per year) and extrapolated to the Parisian agglomeration. PP, PE and alkyd are the most prevalent polymers found in stormwater samples. Rain intensity and microplastic concentrations did not show any clear correlation. Considering the runoff volumes and the impervious surface area of the studied catchment, a flux of 4×10⁷–9×10⁷ MP yr⁻¹ ha⁻¹ was estimated. Samples of wastewater treatment plants and atmospheric fallout are currently being analyzed in order to finalize this assessment. The representativeness of such samplings and the uncertainties related to the extrapolations will be discussed, and gaps in knowledge will be identified. The data provided by such an approach will help to prioritize future research as well as policy efforts.
Keywords: microplastics, atmosphere, wastewater, urban runoff, Paris megacity, urban waters
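The comparison across compartments relies on converting per-sample concentrations or deposition rates into annual fluxes per hectare and then scaling to the agglomeration. A hedged Python sketch of that bookkeeping follows; every input number is invented for illustration and is not a measured value from these projects.

```python
# Hedged sketch of converting microplastic concentrations into annual fluxes.
# All input numbers are illustrative, not the study's measured values.

# Runoff compartment: concentration (MP per litre) x annual runoff volume per hectare
concentration_mp_per_L = 30              # assumed mean stormwater concentration
annual_runoff_m3_per_ha = 2500           # assumed effective runoff on impervious area
runoff_flux = concentration_mp_per_L * annual_runoff_m3_per_ha * 1000   # MP / (yr*ha)

# Atmospheric fallout compartment: deposition rate (MP per m2 per day) -> annual flux per hectare
fallout_mp_per_m2_per_day = 10           # assumed mean total fallout
fallout_flux = fallout_mp_per_m2_per_day * 365 * 10_000                 # MP / (yr*ha)

# Scale both to the Paris agglomeration (area rounded to ~76,000 ha as an assumption)
area_ha = 76_000
print(f"Runoff flux:  {runoff_flux:.1e} MP/yr/ha -> {runoff_flux * area_ha:.1e} MP/yr city-wide")
print(f"Fallout flux: {fallout_flux:.1e} MP/yr/ha -> {fallout_flux * area_ha:.1e} MP/yr city-wide")
```

Expressing every compartment in the same unit (microplastics per year per hectare, then city-wide per year) is what makes the systemic comparison and the prioritization of sources possible.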
Procedia PDF Downloads 180
71 Utilization of Informatics to Transform Clinical Data into a Simplified Reporting System to Examine the Analgesic Prescribing Practices of a Single Urban Hospital’s Emergency Department
Authors: Rubaiat S. Ahmed, Jemer Garrido, Sergey M. Motov
Abstract:
Clinical informatics (CI) enables the transformation of data into a systematic organization that improves the quality of care and the generation of positive health outcomes. Innovative technology through informatics that compiles accurate data on analgesic utilization in the emergency department (ED) can enhance pain management in this important clinical setting. We aim to establish a simplified reporting system through CI to examine and assess the analgesic prescribing practices in the ED through executing a U.S. federal grant project on opioid reduction initiatives. Queried data points of interest from a level-one trauma ED’s electronic medical records were used to create data sets and develop informational/visual reporting dashboards (on Microsoft Excel and Google Sheets) concerning analgesic usage across several pre-defined parameters and performance metrics using CI. The data were then qualitatively analyzed to evaluate ED analgesic prescribing trends by departmental clinicians and leadership. During a 12-month reporting period (Dec. 1, 2020 – Nov. 30, 2021) for the ongoing project, about 41% of all ED patient visits (N = 91,747) were for pain conditions, of which 81.6% received analgesics in the ED and at discharge (D/C). Of those treated with analgesics, 24.3% received opioids compared to 75.7% receiving opioid alternatives in the ED and at D/C, including non-pharmacological modalities. Demographics showed that among patients receiving analgesics, 56.7% were aged between 18-64, 51.8% were male, 51.7% were white, and 66.2% had government-funded health insurance. Ninety-one percent of all opioids prescribed were in the ED, with intravenous (IV) morphine, IV fentanyl, and morphine sulfate immediate release (MSIR) tablets accounting for 88.0% of ED-dispensed opioids. Of the 9.3% of all opioids prescribed at D/C, MSIR was dispensed 72.1% of the time. Hydrocodone, oxycodone, and tramadol usage amounted to only 10-15% of the time, and hydromorphone to 0%. Of the opioid alternatives, non-steroidal anti-inflammatory drugs were utilized 60.3% of the time, local anesthetics and ultrasound-guided nerve blocks 23.5%, and acetaminophen 7.9%, as the primary non-opioid drug categories prescribed by ED providers. Non-pharmacological analgesia included virtual reality and other modalities. An average of 18.5 ED opioid orders and 1.9 opioid D/C prescriptions per 102.4 daily ED patient visits was observed for the period. Compared to other specialties within our institution, 2.0% of opioid D/C prescriptions are given by ED providers, compared to the national average of 4.8%. Opioid alternatives accounted for 69.7% and 30.3% of usage, versus 90.7% and 9.3% for opioids, in the ED and at D/C, respectively. There is a pressing need for concise, relevant, and reliable clinical data on analgesic utilization for ED providers and leadership to evaluate prescribing practices and make data-driven decisions. Basic computer software can be used to create effective visual reporting dashboards with indicators that convey relevant and timely information in an easy-to-digest manner. We accurately examined our ED's analgesic prescribing practices using CI through dashboard reporting. Such reporting tools can quickly identify key performance indicators and prioritize data to enhance pain management and promote safe prescribing practices in the emergency setting.
Keywords: clinical informatics, dashboards, emergency department, health informatics, healthcare informatics, medical informatics, opioids, pain management, technology
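The dashboards described above were built in Excel and Google Sheets, but the underlying aggregation is the same wherever it runs. The following hedged Python/pandas sketch, using hypothetical column names and made-up rows rather than the project's data, computes the kind of indicators quoted in the abstract (share of pain visits treated, opioid versus opioid-alternative split, breakdown by setting).

```python
# Hedged sketch of the dashboard aggregation step; column names and data are hypothetical.
import pandas as pd

visits = pd.DataFrame({
    "visit_id":   [1, 2, 3, 4, 5, 6],
    "pain_visit": [True, True, True, False, True, True],
    "analgesic":  ["opioid", "nsaid", None, None, "acetaminophen", "opioid"],
    "setting":    ["ED", "ED", None, None, "D/C", "D/C"],
})

pain = visits[visits["pain_visit"]]
treated = pain[pain["analgesic"].notna()]

pct_pain_visits = len(pain) / len(visits) * 100
pct_treated = len(treated) / len(pain) * 100
pct_opioid = (treated["analgesic"] == "opioid").mean() * 100

print(f"Pain-related visits:          {pct_pain_visits:.1f}% of all visits")
print(f"Received analgesics:          {pct_treated:.1f}% of pain visits")
print(f"Opioids among treated visits: {pct_opioid:.1f}% (rest are opioid alternatives)")
print(treated.groupby(["setting", "analgesic"]).size())   # breakdown feeding the dashboard
```

In practice the same grouped counts are simply refreshed against the latest EMR extract and rendered as the visual indicators the clinicians and leadership review.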
Procedia PDF Downloads 144
70 Surface Acoustic Wave (SAW)-Induced Mixing Enhances Biomolecules Kinetics in a Novel Phase-Interrogation Surface Plasmon Resonance (SPR) Microfluidic Biosensor
Authors: M. Agostini, A. Sonato, G. Greco, M. Travagliati, G. Ruffato, E. Gazzola, D. Liuni, F. Romanato, M. Cecchini
Abstract:
Since their first demonstration in the early 1980s, surface plasmon resonance (SPR) sensors have been widely recognized as useful tools for detecting chemical and biological species, and the interest of the scientific community in this technology has grown rapidly over the past two decades owing to its high sensitivity, label-free operation and possibility of real-time detection. Recent works have suggested that a turning point in SPR sensor research would be the combination of SPR strategies with other technologies in order to reduce human handling of samples and improve integration and plasmonic sensitivity. In this light, microfluidics has been attracting growing interest. By properly designing microfluidic biochips it is possible to miniaturize the analyte-sensitive areas with an overall reduction of the chip dimension, reduce the liquid reagent and sample volumes, improve automation, and increase the number of experiments in a single biochip by multiplexing approaches. However, as the fluidic channel dimensions approach the micron scale, laminar flows become dominant owing to the low Reynolds numbers that typically characterize microfluidics. In these environments mixing times are usually dominated by diffusion, which can be prohibitively long and lead to long-lasting biochemistry experiments. An elegant method to overcome these issues is to actively perturb the liquid laminar flow by exploiting surface acoustic waves (SAWs). With this work, we demonstrate a new approach for SPR biosensing based on the combination of microfluidics, SAW-induced mixing and real-time phase-interrogation grating-coupling SPR technology. On a single lithium niobate (LN) substrate, the nanostructured SPR sensing areas, the interdigital transducer (IDT) for SAW generation and the polydimethylsiloxane (PDMS) microfluidic chambers were fabricated. SAWs, impinging on the microfluidic chamber, generate acoustic streaming inside the fluid, leading to chaotic advection and thus improved fluid mixing, whilst analyte binding is detected via the SPR method, based on SPP excitation on a gold metallic grating under azimuthal orientation and phase interrogation. Our device has been fully characterized in order to separate, for the very first time, the unwanted SAW heating effect from the fluid stirring inside the microchamber, both of which affect the molecule binding dynamics. An avidin/biotin assay and thiol-polyethylene glycol (bPEG-SH) were exploited as the model biological interaction and the non-fouling layer, respectively. The reduction of biosensing kinetics time with SAW-enhanced mixing resulted in a ≈ 82% improvement for bPEG-SH adsorption onto gold and ≈ 24% for avidin/biotin binding—≈ 50% and 18%, respectively, compared to the heating-only condition. These results demonstrate that our biochip can significantly reduce the duration of bioreactions that usually require long times (e.g., PEG-based sensing layers, low-concentration analyte detection). The sensing architecture proposed here represents a new promising technology satisfying the major biosensing requirements: scalability and high-throughput capabilities. The detection system size and biochip dimension could be further reduced and integrated; in addition, the possibility of reducing biological experiment duration via SAW-driven active mixing could easily be combined with the development of multiplexing platforms for parallel real-time sensing.
In general, the technology reported in this study can be straightforwardly adapted to a great number of biological systems and sensing geometries.
Keywords: biosensor, microfluidics, surface acoustic wave, surface plasmon resonance
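The percentage improvements reported above come from comparing how quickly the binding signal saturates with and without SAW mixing. A hedged sketch of that comparison is given below: synthetic phase-shift curves are fitted with a single-exponential (pseudo-first-order) association model using SciPy, and the characteristic times are compared. The data and rate constants are invented for illustration only.

```python
# Hedged sketch: estimating binding time constants from SPR-like association curves.
# Synthetic data only; the real signals are the phase-interrogation measurements.
import numpy as np
from scipy.optimize import curve_fit

def association(t, amplitude, k_obs):
    """Pseudo-first-order association: signal(t) = A * (1 - exp(-k_obs * t))."""
    return amplitude * (1.0 - np.exp(-k_obs * t))

t = np.linspace(0, 1800, 200)                    # seconds
rng = np.random.default_rng(0)
static_signal = association(t, 1.0, 1 / 600) + rng.normal(0, 0.01, t.size)   # no mixing
mixed_signal = association(t, 1.0, 1 / 300) + rng.normal(0, 0.01, t.size)    # SAW mixing

popt_static, _ = curve_fit(association, t, static_signal, p0=[1.0, 1e-3])
popt_mixed, _ = curve_fit(association, t, mixed_signal, p0=[1.0, 1e-3])

tau_static, tau_mixed = 1 / popt_static[1], 1 / popt_mixed[1]
print(f"tau without mixing: {tau_static:5.0f} s, with SAW mixing: {tau_mixed:5.0f} s")
print(f"kinetics time reduction: {(1 - tau_mixed / tau_static) * 100:.0f} %")
```

Running the same fit on the heating-only control is what allows the stirring contribution to be separated from the purely thermal acceleration of the binding.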
Procedia PDF Downloads 281
69 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability of mean signals, extracted from ICA components corresponding to 15 well-known networks, to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals together with 6 motion parameters were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox, using the Infomax approach with the number of components set to 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in the network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (rfe) for the SVM, to obtain a rank of the most predictive variables. Thus, we built two new classifiers only on the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and rfe-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest value of lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
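The original analysis was run in R; an equivalent pipeline is sketched below in Python with scikit-learn, using synthetic data standing in for the 37 × 15 network-signal matrix. One adaptation is needed and labeled as such: scikit-learn's RFE requires a ranking criterion, so a linear-kernel SVM is used for the elimination step before the RBF-SVM is retrained on the selected feature.

```python
# Hedged sketch of the RF / SVM + feature-selection workflow (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(37, 15))          # 37 subjects x 15 network mean signals (synthetic)
y = np.array([0] * 19 + [1] * 18)      # 19 controls, 18 early-MS patients
X[y == 1, 0] += 1.5                    # pretend network 0 ("sensori-motor I") is discriminant

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=1)

# Random forest: intrinsic (Gini-based) feature importance
rf = RandomForestClassifier(n_estimators=500, random_state=1).fit(X_tr, y_tr)
top_rf = np.argsort(rf.feature_importances_)[::-1][:1]

# SVM: recursive feature elimination, using a linear kernel for the ranking step (adaptation)
rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
top_svm = np.where(rfe.support_)[0]

# Retrain both classifiers on the single most predictive network and evaluate on the test set
rf_top = RandomForestClassifier(n_estimators=500, random_state=1).fit(X_tr[:, top_rf], y_tr)
svm_top = SVC(kernel="rbf").fit(X_tr[:, top_svm], y_tr)
print("RF  top feature:", top_rf, " test accuracy:", accuracy_score(y_te, rf_top.predict(X_te[:, top_rf])))
print("SVM top feature:", top_svm, "test accuracy:", accuracy_score(y_te, svm_top.predict(X_te[:, top_svm])))
```

The structure mirrors the study: rank features with each method, keep the top-ranked network, retrain, and compare test-set accuracy with and without selection.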
Procedia PDF Downloads 240
68 High Cycle Fatigue Analysis of a Lower Hopper Knuckle Connection of a Large Bulk Carrier under Dynamic Loading
Authors: Vaso K. Kapnopoulou, Piero Caridis
Abstract:
The fatigue of ship structural details is of major concern in the maritime industry as it can generate fracture issues that may compromise structural integrity. In the present study, a fatigue analysis of the lower hopper knuckle connection of a bulk carrier was conducted using the Finite Element Method by means of ABAQUS/CAE software. The fatigue life was calculated using Miner's rule, with the long-term distribution of stress range represented by the two-parameter Weibull distribution. The cumulative damage ratio was estimated using the fatigue damage resulting from the stress range occurring at each load condition. For this purpose, a cargo hold model was first generated, which extends over the length of two holds (the mid-hold and half of each of the adjacent holds) and transversely over the full breadth of the hull girder. Following that, a submodel of the area of interest was extracted in order to calculate the hot spot stress of the connection and to estimate the fatigue life of the structural detail. Two hot spot locations were identified: one at the top layer of the inner bottom plate and one at the top layer of the hopper plate. The IACS Common Structural Rules (CSR) require that specific dynamic load cases for each loading condition are assessed. Following this, the dynamic load case that causes the highest stress range at each loading condition should be used in the fatigue analysis for the calculation of the cumulative fatigue damage ratio. Each load case has a different effect on the ship hull response. Of main concern when assessing the fatigue strength of the lower hopper knuckle connection was the determination of the maximum, i.e. the critical, value of the stress range, which acts in a direction normal to the weld toe line. This is the transverse direction, that is, perpendicular to the ship's centerline axis. The load cases were explored both theoretically and numerically in order to establish the one that causes the highest damage to the location examined. The most severe one was identified to be the load case induced by the beam sea condition where the encountered wave comes from starboard. At the level of the cargo hold model, the model was assumed to be simply supported at its ends. A coarse mesh was generated in order to represent the overall stiffness of the structure. The elements employed were quadrilateral shell elements, each having four integration points. A linear elastic analysis was performed because linear elastic material behavior can be presumed, since only localized yielding is allowed by most design codes. At the submodel level, the displacements from the cargo hold model analysis were applied to the outer-region nodes of the submodel as boundary conditions and loading. In order to calculate the hot spot stress at the hot spot locations, a very fine mesh zone was generated and used. The fatigue life of the detail was found to be 16.4 years, which is lower than the design fatigue life of the structure (25 years), making this location vulnerable to fatigue fracture issues. Moreover, the loading conditions that induce the most damage to the location were found to be the various ballasting conditions.
Keywords: dynamic load cases, finite element method, high cycle fatigue, lower hopper knuckle
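When the long-term stress-range distribution is a two-parameter Weibull and the S-N curve has a single slope, the Miner sum has a closed form, D = (n0 / K) · q^m · Γ(1 + m/h), with n0 the number of cycles in the design life, q and h the Weibull scale and shape parameters, and N = K·S^(-m) the S-N curve. The Python sketch below evaluates it with illustrative parameter values that are assumptions, not the values of the analysed detail.

```python
# Hedged sketch of Miner damage with a Weibull long-term stress-range distribution.
# S-N and Weibull parameters below are illustrative, not taken from the analysed detail.
from math import gamma

# S-N curve N = K * S**(-m)  (S in MPa)
m, K = 3.0, 1.46e12            # typical single-slope values for a welded detail (assumed)

# Long-term Weibull distribution of the stress range
h = 1.0                        # shape parameter (assumed)
q = 15.5                       # scale parameter in MPa (assumed)

# Number of wave-induced cycles over the 25-year design life (assumed mean period ~8 s)
design_life_years = 25.0
n0 = design_life_years * 365.25 * 24 * 3600 / 8.0

# Closed-form Miner sum: D = n0/K * q**m * Gamma(1 + m/h)
D = n0 / K * q**m * gamma(1.0 + m / h)
fatigue_life = design_life_years / D
print(f"cycles n0 = {n0:.2e}, damage D = {D:.3f}, fatigue life = {fatigue_life:.1f} years")
```

A damage ratio D above 1 over the design life corresponds to a fatigue life shorter than the design life, which is the situation reported for the hopper knuckle detail (16.4 years against a 25-year target).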
Procedia PDF Downloads 419
67 Future Research on the Resilience of Tehran’s Urban Areas Against Pandemic Crises Horizon 2050
Authors: Farzaneh Sasanpour, Saeed Amini Varaki
Abstract:
Resilience is an important goal for cities, as urban areas face an increasing range of challenges in the 21st century; given the characteristics of these risks, an approach that responds to sensitive conditions in the risk management process is needed, namely urban resilience. Meanwhile, most resilience assessments have dealt with natural hazards, and less attention has been paid to pandemics. In the COVID-19 pandemic, Iran, and especially the metropolis of Tehran, was not immune from the crisis caused by its effects and consequences and faced many challenges. One of the methods that can increase the resilience of the Tehran metropolis against possible future crises is futures studies. This research is applied in type. Its general pattern is descriptive-analytical and, since it seeks to relate the components and indicators of urban resilience to pandemic crises and to explain scenarios, its futures-studies method is exploratory. In order to extract and determine the key factors and driving forces affecting the resilience of Tehran's urban areas against pandemic crises (COVID-19), structural analysis of mutual effects and the MICMAC software were used. The primary factors and variables affecting the resilience of Tehran's urban areas were grouped into 5 main factors, including the physical-infrastructural factor (transportation, spatial and physical organization, streets and roads, multi-purpose development), with 39 variables in total, based on the mutual effects analysis. Finally, the key factors and variables were categorized in five main areas: managerial-institutional with 5 variables; technology (smartness) with 3 variables; economic with 2 variables; socio-cultural with 3 variables; and physical-infrastructural with 7 variables. These factors and variables have been used as key factors and effective driving forces on the resilience of Tehran's urban areas against pandemic crises (COVID-19) in explaining and developing the scenarios. In order to develop the scenarios for the resilience of Tehran's urban areas against pandemic crises (COVID-19), intuitive logic, scenario planning as one of the futures research methods, and the Global Business Network (GBN) model were used. Finally, four scenarios were drawn up and selected with a creative method using the metaphor of weather conditions, which indicates the general outline of the conditions of the metropolis of Tehran in each situation. The scenarios for the Tehran metropolis were thus obtained as follows: 1) the solar scenario (optimal governance and management, leading in smart technology); 2) the cloud scenario (optimal governance and management, following in smart technology); 3) the dark scenario (unfavorable governance and management, leading in smart technology); 4) the storm scenario (unfavorable governance and management, following in smart technology). The solar scenario shows the best situation and the storm scenario the worst situation for the Tehran metropolis.
According to the findings of this research, city managers can, in order to achieve a better tomorrow for the metropolis of Tehran, use futures research methods to build a coherent picture of all the factors and components of urban resilience against pandemic crises over the long-term horizon of 2050, chart the path of the urban resilience movement, and provide platforms for upgrading and increasing the capacity to deal with crises, so as to create the conditions needed for the realization, development and evolution of Tehran's urban areas in a way that guarantees long-term balance and stability in all dimensions and at all levels.
Keywords: future research, resilience, crisis, pandemic, COVID-19, Tehran
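The structural analysis referred to above (MICMAC) ranks variables by their direct and indirect influence and dependence, obtained by summing the rows and columns of successive powers of the cross-impact matrix. A hedged Python sketch of that computation on an invented 5-variable matrix follows; the variable names and scores are placeholders, not the study's actual matrix.

```python
# Hedged sketch of MICMAC-style influence/dependence ranking from a cross-impact matrix.
# The matrix below is invented for illustration; rows influence columns (0 = none, 3 = strong).
import numpy as np

variables = ["governance", "smart tech", "economy", "socio-cultural", "infrastructure"]
M = np.array([
    [0, 3, 2, 2, 3],
    [1, 0, 2, 1, 2],
    [1, 1, 0, 1, 2],
    [1, 1, 1, 0, 1],
    [0, 1, 1, 1, 0],
], dtype=float)

# Indirect effects: raise the matrix to successive powers until the ranking stabilizes
indirect = np.linalg.matrix_power(M, 4)

influence = indirect.sum(axis=1)    # row sums: how strongly a variable drives the system
dependence = indirect.sum(axis=0)   # column sums: how strongly it is driven

for name, inf, dep in sorted(zip(variables, influence, dependence), key=lambda r: -r[1]):
    print(f"{name:15s} influence={inf:8.0f}  dependence={dep:8.0f}")
```

Variables with high influence and low dependence are the driving forces retained for scenario building, which is how the key factors behind the four weather-metaphor scenarios are identified.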
Procedia PDF Downloads 67
66 Household Water Practices in a Rapidly Urbanizing City and Its Implications for the Future of Potable Water: A Case Study of Abuja Nigeria
Authors: Emmanuel Maiyanga
Abstract:
Access to sufficiently good quality freshwater has been a global challenge, but more notably in low-income countries, particularly the Sub-Saharan countries, of which Nigeria is one. Urban populations are soaring, especially in many low-income countries; the existing centralised water supply infrastructures are ageing and inadequate; moreover, in households, people's lifestyles have become more water-demanding. So, people mostly devise coping strategies where municipal supply is perceived to have failed. This development threatens the future of groundwater and calls for a review of management strategy and research approach. The various issues associated with water demand management in low-income countries, and Nigeria in particular, are well documented in the literature. However, the way people use water daily in households, the reasons they do so, and how this situation is constructing demand among the middle-class population in Abuja, Nigeria, are poorly understood. This is what this research aims to unpack. This is achieved by using the social practices research approach (which is based on the Theory of Practices) to understand how this situation impacts on the shared groundwater resource. A qualitative method was used for data gathering. This involved audio-recorded interviews with householders and water professionals in the private and public sectors. It also involved observation, note-taking, and document study. The data were analysed thematically using NVivo software. The research reveals the major household practices that draw on water at the domestic scale, and they include water sourcing, body hygiene and sanitation, laundry, kitchen, and outdoor practices (car washing, domestic livestock farming, and gardening). Among all the practices, water sourcing, body hygiene, kitchen, and laundry practices are identified as impacting most on groundwater, with the scale of impact varying with household peculiarities. Water sourcing practices involve people sourcing mostly from personal boreholes because the municipal water supply is perceived as inadequate and unreliable in terms of service delivery and water quality, and people prefer the easier and unlimited access and control afforded by boreholes. Body hygiene practices reveal that every respondent prefers bucket bathing at least once daily, and the majority bathe twice or more every day. Frequency is determined by the feeling of heat and dirt on the skin. Thus, people bathe to cool down, stay clean, and satisfy perceived social, religious, and hygiene demands. Kitchen practice consumes water significantly as people run the tap for vegetable washing in daily food preparation and for dishwashing after each meal. Laundry practice reveals that most people wash clothes most frequently (twice a week) during hot and dusty weather, and washing by hand in basins and buckets is the most prevalent method and wastes water due to overdosing of soap. The research also reveals poor water governance as a major cause of the current inadequate municipal water delivery. The implication of poor governance and the widespread use of boreholes is an uncontrolled abstraction of groundwater to satisfy desired household practices, thereby putting the future of the shared aquifer at great risk of total depletion, with attendant multiplying effects on people and the environment as the population continues to soar.
Keywords: boreholes, groundwater, household water practices, self-supply
Procedia PDF Downloads 123
65 Acute Severe Hyponatremia in Patient with Psychogenic Polydipsia, Learning Disability and Epilepsy
Authors: Anisa Suraya Ab Razak, Izza Hayat
Abstract:
Introduction: The diagnosis and management of severe hyponatremia in neuropsychiatric patients present a significant challenge to physicians. Several factors contribute, including diagnostic overshadowing and attributing abnormal behavior to intellectual disability or psychiatric conditions. Hyponatremia is the commonest electrolyte abnormality in the inpatient population, ranging from mild/asymptomatic and moderate to severe levels with life-threatening symptoms such as seizures, coma and death. There are several documented fatal case reports in the literature of severe hyponatremia secondary to psychogenic polydipsia, often diagnosed only at autopsy. This paper presents a case study of acute severe hyponatremia in a neuropsychiatric patient with early diagnosis and admission to intensive care. Case study: A 21-year-old Caucasian male with known epilepsy and learning disability was admitted from residential living with generalized tonic-clonic self-terminating seizures after refusing medications for several weeks. Evidence of superficial head injury was detected on physical examination. His laboratory data demonstrated mild hyponatremia (125 mmol/L). Computed tomography imaging of his brain demonstrated no acute bleed or space-occupying lesion. He exhibited abnormal behavior - restlessness, drinking water from bathroom taps, inability to engage, paranoia, and hypersexuality. No collateral history was available to establish his baseline behavior. He was loaded with intravenous sodium valproate and levetiracetam. Three hours later, he developed vomiting and a generalized tonic-clonic seizure lasting forty seconds. He remained drowsy for several hours and regained minimal consciousness. A repeat set of blood tests demonstrated profound hyponatremia (117 mmol/L). Outcomes: He was referred to intensive care for peripheral intravenous infusion of 2.7% sodium chloride solution with two-hourly laboratory monitoring of sodium concentration. Laboratory monitoring identified a dangerously rapid correction of serum sodium concentration, and hypertonic saline was switched to a 5% dextrose solution to reduce the risk of acute large-volume fluid shifts from the cerebral intracellular compartment to the extracellular compartment. He underwent urethral catheterization and produced 8 liters of urine over 24 hours. Serum sodium concentration remained stable after 24 hours of correction fluids. His GCS recovered to baseline after 48 hours, with improvement in behavior - he engaged with healthcare professionals, understood the importance of taking medications, and admitted to illicit drug use and drinking massive amounts of water. He was transferred from high-dependency care to ward level and was initiated on multiple trials of anti-epileptics before achieving seizure-free days two weeks after resolution of the acute hyponatremia. Conclusion: Psychogenic polydipsia is often found in young patients with intellectual disability or psychiatric disorders. Patients drink large volumes of water daily, ranging from ten to forty liters, resulting in acute severe hyponatremia with mortality rates as high as 20%. Poor outcomes are due to challenges faced by physicians in making an early diagnosis and treating acute hyponatremia safely. A high index of suspicion for water intoxication is required in this population, including patients with known epilepsy. Monitoring urine output proved to be clinically effective in aiding diagnosis.
Early referral and admission to intensive care should be considered for safe correction of sodium concentration while minimizing the risk of fatal complications, e.g. central pontine myelinolysis.
Keywords: epilepsy, psychogenic polydipsia, seizure, severe hyponatremia
Procedia PDF Downloads 122
64 Introducing Transport Engineering through Blended Learning Initiatives
Authors: Kasun P. Wijayaratna, Lauren Gardner, Taha Hossein Rashidi
Abstract:
Undergraduate students entering university across the last 2 to 3 years tend to have been born during the middle years of the 1990s. This generation of students has been exposed to the internet and a desire for and dependency on technology since childhood. Brains develop based on environmental influences, and technology has wired this generation of students to be attuned to sophisticated and complex visual imagery, indicating visual forms of learning may be more effective than the traditional lecture or discussion formats. Furthermore, post-millennials' perspectives on careers are not focused solely on stability and income but are strongly driven by interest, entrepreneurship and innovation. Accordingly, it is important for educators to acknowledge the generational shift and tailor the delivery of learning material to meet the expectations of the students and the needs of industry. In the context of transport engineering, effectively teaching undergraduate students the basic principles of transport planning, traffic engineering and highway design is fundamental to the progression of the profession from a practice and research perspective. Recent developments in technology have transformed the discipline as practitioners and researchers move away from the traditional “pen and paper” approach to methods involving the use of computer programs and simulation. Further, enhanced accessibility of technology for students has changed the way they understand and learn material being delivered at tertiary education institutions. As a consequence, blended learning approaches, which aim to integrate face-to-face teaching with flexible self-paced learning resources, have become prevalent to provide scalable education that satisfies the expectations of students. This research study involved the development of a series of ‘blended learning’ initiatives implemented within an introductory transport planning and geometric design course, CVEN2401: Sustainable Transport and Highway Engineering, taught at the University of New South Wales, Australia. CVEN2401 was modified by conducting interactive polling exercises during lectures, including weekly online quizzes, offering a series of supplementary learning videos, and implementing a realistic design project that students needed to complete using modelling software that is widely used in practice. These activities and resources were aimed at improving the learning environment for a large class size in excess of 450 students and at ensuring that practical, industry-valued skills were introduced. The case study compared the 2016 and 2017 student cohorts based on their performance across assessment tasks as well as their reception of the material, revealed through student feedback surveys. The initiatives were well received, with a number of students commenting on the ability to complete self-paced learning and an appreciation of the exposure to a realistic design project. From an educator's perspective, blending the course made it feasible to interact and engage with students. Personalised learning opportunities were made available whilst delivering a considerable volume of complex content essential for all undergraduate Civil and Environmental Engineering students. Overall, this case study highlights the value of blended learning initiatives, especially in the context of university courses with large class sizes.
Keywords: blended learning, highway design, teaching, transport planning
Procedia PDF Downloads 149
63 Establishment of Farmed Fish Welfare Biomarkers Using an Omics Approach
Authors: Pedro M. Rodrigues, Claudia Raposo, Denise Schrama, Marco Cerqueira
Abstract:
Farmed fish welfare is a very recent concept, widely discussed among the scientific community. Consumers' interest regarding farmed animal welfare standards has significantly increased in recent years, posing a huge challenge to producers to maintain an equilibrium between good welfare principles and productivity while simultaneously achieving public acceptance. A major bottleneck of standard aquaculture is that it considerably impairs fish welfare throughout the production cycle and, with this, the quality of fish protein. Welfare assessment in farmed fish is undertaken through the evaluation of fish stress responses. Primary and secondary stress responses include the release of cortisol, and of glucose and lactate, into the blood stream, respectively, which are currently the most commonly used indicators of stress exposure. However, the reliability of these indicators is highly dubious, due to the high variability of fish responses to an acute stress and the adaptation of the animal to a repetitive chronic stress. Our objective is to use comparative proteomics to identify and validate a fingerprint of proteins that can present a more reliable alternative to the already established welfare indicators. In this way, culture conditions will improve and there will be a better understanding of the mechanisms and metabolic pathways involved in the produced organism's welfare. Due to its high economic importance in Portuguese aquaculture, gilthead seabream is the species selected for this study. Protein extracts from the muscle, liver and plasma of gilthead seabream, reared for a 3-month period under optimized culture conditions (control) and induced stress conditions (handling, high densities, and hypoxia), are collected and used to identify a putative fish welfare protein marker fingerprint using a proteomics approach. Three tanks per condition and 3 biological replicates per tank are used for each analysis. Briefly, proteins from the target tissue/fluid are extracted using standard established protocols. Protein extracts are then separated using 2D-DIGE (difference gel electrophoresis). Proteins differentially expressed between control and induced stress conditions will be identified by mass spectrometry (LC-MS/MS) using the NCBInr database (taxonomic level: Actinopterygii) and the Mascot search engine. The statistical analysis is performed using the R software environment, with a one-tailed Mann-Whitney U-test (p < 0.05) used to assess which proteins were differentially expressed in a statistically significant way. Validation of these proteins will be done by comparing the RT-qPCR (quantitative reverse transcription polymerase chain reaction) gene expression patterns with the proteomic profile. Cortisol, glucose, and lactate are also measured in order to confirm or refute the reliability of these indicators. The liver proteins identified under handling- and high-density-induced stress conditions are involved in several metabolic pathways and functions such as primary metabolism (i.e. glycolysis, gluconeogenesis), ammonia metabolism, cytoskeletal structure, signalling, and lipid transport. Validation of these proteins, as well as identical analyses in muscle and plasma, is underway. Proteomics is a promising high-throughput technique that can be successfully applied to identify putative welfare protein biomarkers in farmed fish.
Keywords: aquaculture, fish welfare, proteomics, welfare biomarkers
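The per-protein significance test described above (a one-tailed Mann-Whitney U test at p < 0.05, originally run in R) is simple to express in code. A hedged Python/SciPy sketch on invented spot intensities for a single protein, standing in for the 2D-DIGE quantification of control versus stressed fish, is shown below.

```python
# Hedged sketch of the per-protein Mann-Whitney U test (synthetic spot intensities).
from scipy.stats import mannwhitneyu

control  = [1.02, 0.95, 1.10, 0.98, 1.05, 0.92, 1.00, 1.08, 0.97]   # 3 tanks x 3 replicates
stressed = [1.35, 1.28, 1.41, 1.22, 1.30, 1.45, 1.19, 1.33, 1.38]

# One-tailed test: is expression greater under induced stress than in controls?
stat, p_value = mannwhitneyu(stressed, control, alternative="greater")
print(f"U = {stat:.1f}, one-tailed p = {p_value:.4f}",
      "-> differentially expressed" if p_value < 0.05 else "-> not significant")
```

In the actual workflow this test is repeated for every matched spot across the 2D-DIGE gels, and only the significant proteins go forward to LC-MS/MS identification.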
Procedia PDF Downloads 156
62 Effects of Heart Rate Variability Biofeedback to Improve Autonomic Nerve Function, Inflammatory Response and Symptom Distress in Patients with Chronic Kidney Disease: A Randomized Control Trial
Authors: Chia-Pei Chen, Yu-Ju Chen, Yu-Juei Hsu
Abstract:
The prevalence and incidence of end-stage renal disease in Taiwan rank among the highest in the world. According to the statistical survey of the Ministry of Health and Welfare in 2019, kidney disease is the ninth leading cause of death in Taiwan. Chronic kidney disease (CKD) leads to autonomic dysfunction, inflammatory response and symptom distress, and further increases the damage to the structure and function of the kidneys, leading to increased demand for renal replacement therapy and risk of cardiovascular disease, which also imposes medical costs on society. Intervening in a feasible manner to effectively regulate the autonomic nerve function of CKD patients, reduce the inflammatory response and symptom distress, and slow the progression of the disease is therefore a main goal of caring for CKD patients. This study aims to test the effect of heart rate variability biofeedback (HRVBF) on improving autonomic nerve function (heart rate variability, HRV), the inflammatory response (interleukin-6 [IL-6], C-reactive protein [CRP]) and symptom distress (Piper Fatigue Scale, Pittsburgh Sleep Quality Index [PSQI], and Beck Depression Inventory-II [BDI-II]) in patients with chronic kidney disease. This study was experimental research with convenience sampling. Participants were recruited from the nephrology clinic at a medical center in northern Taiwan. With signed informed consent, participants were randomly assigned to the HRVBF or control group by using the Excel BINOMDIST function. The HRVBF group received four weekly hospital-based HRVBF training sessions, and 8 weeks of home-based self-practice were done with the StressEraser. The control group received usual care. We followed all participants for 3 months, repeatedly measuring their autonomic nerve function (HRV), inflammatory response (IL-6, CRP), and symptom distress (Piper Fatigue Scale, PSQI, and BDI-II) on their first day of study participation (baseline) and at 1 month and 3 months after the intervention to test the effects of HRVBF. The results were analyzed with SPSS version 23.0 statistical software. The data on demographics, HRV, IL-6, CRP, the Piper Fatigue Scale, PSQI, and BDI-II were analyzed by descriptive statistics. To test for differences between and within groups in all outcome variables, the paired-sample t-test, independent-sample t-test, Wilcoxon signed-rank test and Mann-Whitney U test were used. Results: Thirty-four patients with chronic kidney disease were enrolled, but three of them were lost to follow-up. The remaining 31 patients completed the study, including 15 in the HRVBF group and 16 in the control group. The characteristics of the two groups were not significantly different. The four-week hospital-based HRVBF training combined with eight weeks of home-based self-practice can effectively enhance parasympathetic nerve performance in patients with chronic kidney disease, which may counteract the disease-related parasympathetic inhibition. Regarding the inflammatory response, IL-6 and CRP in the HRVBF group did not improve significantly compared with the control group. Self-reported fatigue and depression significantly decreased in the HRVBF group, but still failed to reach a significant difference between the two groups. HRVBF had no significant effect on improving sleep quality in CKD patients.
Keywords: heart rate variability biofeedback, autonomic nerve function, inflammatory response, symptom distress, chronic kidney disease
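Parasympathetic performance of the kind assessed in this trial is typically summarized from the series of beat-to-beat (RR) intervals with time-domain indices such as SDNN and RMSSD, where RMSSD in particular reflects vagally mediated activity. A hedged sketch of those two computations on a short, invented RR series (not trial data) follows.

```python
# Hedged sketch of two standard time-domain HRV indices from RR intervals (ms).
import numpy as np

rr_ms = np.array([812, 798, 825, 840, 810, 795, 830, 845, 820, 805])   # invented RR series

sdnn = np.std(rr_ms, ddof=1)                       # overall variability of RR intervals
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))      # short-term (vagally mediated) variability

print(f"SDNN  = {sdnn:.1f} ms")
print(f"RMSSD = {rmssd:.1f} ms")
```

Comparing such indices at baseline, 1 month and 3 months in each arm is how a biofeedback effect on parasympathetic function is quantified.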
Procedia PDF Downloads 180
61 Monte Carlo Risk Analysis of a Carbon Abatement Technology
Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emanuele
Abstract:
Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature has risen almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959 and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero emission power plant. The advanced zero emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process. The membrane separation process was first introduced in 1899 when Walter Hermann Nernst investigated electric current between metals and solutions. He found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In the bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low carbon cycle known as the advanced zero emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway, and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP drew a lot of attention because of its ability to capture ~100% CO2; it also boasts about 30-50% cost reduction compared to other carbon abatement technologies, its efficiency penalty is not as large as that of its counterparts, and it offers almost zero NOx emissions due to very low nitrogen concentrations in the working fluid. The advanced zero emission power plant differs from a conventional gas turbine in that its combustor is substituted with the mixed conductive membrane reactor (MCM-reactor). The MCM-reactor is made up of the combustor, the low-temperature heat exchanger LTHX (referred to by some authors as the air pre-heater), the mixed conductive membrane responsible for oxygen transfer, the high-temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 Kelvin and a pressure of 2 Mega-Pascals. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; the temperature is also increased to facilitate oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to the inlet of the LTHX. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air going through the membrane. The AZEP cycle was developed using the Fortran software, and the economic analysis was conducted using Excel and Matlab, followed by an optimization case study. This paper discusses the techno-economic analysis of four possible layouts of the AZEP cycle: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating layout (sequential burning layout) – AZEP 85% (85% CO2 capture) and the pre-expansion reheating layout (sequential burning layout) with flue gas turbine – AZEP 85% (85% CO2 capture).
This paper discusses the Monte Carlo risk analysis of these four possible layouts of the AZEP cycle. Keywords: gas turbine, global warming, greenhouse gases, power plants
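The abstract names a Monte Carlo risk analysis of the AZEP layouts but does not give its formulation. The sketch below is a minimal illustration of the technique in Python/NumPy: uncertain cost and price inputs are sampled and propagated to a net present value (NPV) distribution. All distributions, numerical values and variable names are hypothetical placeholders, not figures from the study.

```python
# Minimal Monte Carlo sketch: propagate input uncertainty to an NPV distribution.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

capital_cost = rng.triangular(400e6, 450e6, 550e6, n)   # plant capital cost, USD (assumed)
fuel_price   = rng.normal(6.0, 1.0, n)                  # USD per GJ (assumed)
elec_price   = rng.normal(70.0, 10.0, n)                # USD per MWh (assumed)
annual_mwh     = 2.5e6                                  # assumed net output per year
annual_fuel_gj = 1.8e7                                  # assumed fuel input per year
life, rate = 25, 0.08                                   # plant life (years), discount rate

annuity = (1 - (1 + rate) ** -life) / rate              # present-value factor of a level cash flow
cash_flow = annual_mwh * elec_price - annual_fuel_gj * fuel_price
npv = -capital_cost + annuity * cash_flow

print("P(NPV < 0):", np.mean(npv < 0))
print("5th / 50th / 95th percentile NPV (USD):", np.percentile(npv, [5, 50, 95]))
```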
Procedia PDF Downloads 472
60 Finite Element Simulation of Four Point Bending of Laminated Veneer Lumber (LVL) Arch
Authors: Eliska Smidova, Petr Kabele
Abstract:
This paper describes the non-linear finite element simulation of laminated veneer lumber (LVL) under tensile and shear loads that induce cracking along the fibers. For this purpose, we use a 2D homogeneous orthotropic constitutive model of tensile and shear fracture in timber that has recently been developed and implemented into the ATENA® finite element software by the authors. The model captures (i) material orthotropy for small deformations in both the linear and non-linear range, (ii) elastic behavior until an anisotropic failure criterion is fulfilled, (iii) inelastic behavior after the failure criterion is satisfied, (iv) different post-failure response for cracks along and across the grain, and (v) unloading/reloading behavior. The post-cracking response is treated by a fixed smeared crack model in which the Reinhardt-Hordijk function is used. The model requires in total 14 input parameters that can be obtained from standard tests, off-axis test results and iterative numerical simulation of the compact tension (CT) or compact tension-shear (CTS) test. New engineered timber composites, such as laminated veneer lumber (LVL), offer improved structural parameters compared to sawn timber. LVL is manufactured by laminating 3 mm thick wood veneers aligned in one direction using water-resistant adhesives (e.g. polyurethane). Thus, 3 main grain directions, namely longitudinal (L), tangential (T), and radial (R), are observed within the layered LVL product. The core of this work consists of 3 numerical simulations of experiments involving Radiata Pine LVL and Yellow Poplar LVL. The first analysis deals with calibration and validation of the proposed model through off-axis tensile tests (at load-grain angles of 0°, 10°, 45°, and 90°) and CTS tests (at load-grain angles of 30°, 60°, and 90°), both of which were conducted for Radiata Pine LVL. The second finite element simulation reproduces the load-CMOD curve of a compact tension (CT) test of Yellow Poplar with the aim of obtaining cohesive law parameters to be used as input in the third finite element analysis, which is a four-point bending test of a small-size arch of 780 mm span made of Yellow Poplar LVL. The arch is designed with a through crack between the two middle layers in the crown. Curved laminated beams are exposed to high radial tensile stress, compared to the timber strength in radial tension, in the crown area. Let us note that in this case the latter parameter stands for the tensile strength in the direction perpendicular to the grain. Standard tests deliver most of the relevant input data, whereas the traction-separation law for a crack along the grain can be obtained partly by inverse analysis of the compact tension (CT) test or compact tension-shear (CTS) test. The initial crack was modeled as a narrow gap separating two layers in the middle of the arch crown. The calculated load-deflection curve is in good agreement with the experimental ones. Furthermore, the crack pattern given by the numerical simulation coincides with the most important observed crack paths. Keywords: compact tension (CT) test, compact tension shear (CTS) test, fixed smeared crack model, four point bending test, laminated arch, laminated veneer lumber LVL, off-axis test, orthotropic elasticity, orthotropic fracture criterion, Radiata Pine LVL, traction-separation law, yellow poplar LVL, 2D constitutive model
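As a rough illustration of the Reinhardt-Hordijk softening function used by the fixed smeared crack model mentioned above, the Python sketch below evaluates the crack-bridging stress as a function of crack opening. The tensile strength and fracture energy values are hypothetical placeholders, not the calibrated LVL parameters (which are among the model's 14 inputs).

```python
# Sketch of the Hordijk exponential crack-opening (softening) law.
import numpy as np

def hordijk_softening(w, ft, gf, c1=3.0, c2=6.93):
    """Crack-bridging stress vs. crack opening w (Hordijk-type exponential law)."""
    wc = 5.14 * gf / ft                      # critical opening at which stress reaches zero
    x = np.clip(w / wc, 0.0, 1.0)
    sigma = ft * ((1.0 + (c1 * x) ** 3) * np.exp(-c2 * x)
                  - x * (1.0 + c1 ** 3) * np.exp(-c2))
    return np.where(w >= wc, 0.0, sigma)     # no stress transfer beyond wc

w = np.linspace(0.0, 0.5e-3, 6)              # crack openings in metres (illustrative)
print(hordijk_softening(w, ft=4.0e6, gf=500.0))   # ft in Pa, Gf in N/m (assumed values)
```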
Procedia PDF Downloads 290
59 Oncolytic Efficacy of Thymidine Kinase-Deleted Vaccinia Virus Strain Tiantan (oncoVV-TT) in Glioma
Authors: Seyedeh Nasim Mirbahari, Taha Azad, Mehdi Totonchi
Abstract:
Oncolytic viruses, which only replicate in tumor cells, are being extensively studied for their use in cancer therapy. A particular virus known as the vaccinia virus, a member of the poxvirus family, has demonstrated oncolytic ability in glioma. Treating glioma with traditional methods such as chemotherapy and radiotherapy is quite challenging. Even though oncolytic viruses have shown immense potential in cancer treatment, their effectiveness in glioblastoma treatment is still low. Therefore, there is a need to improve and optimize immunotherapies for better results. In this study, we have designed oncoVV-TT, which can more effectively target tumor cells while minimizing replication in normal cells, by replacing the thymidine kinase gene with a luc-p2a-GFP gene expression cassette. The human glioblastoma cell line U251 MG, the rat glioblastoma cell line C6, and the non-tumor cell line HFF were plated at 10⁵ cells per well in 12-well plates in 2 mL of DMEM-F2 medium with 10% FBS added to each well, then incubated at 37°C. After 16 hours, the cells were treated with oncoVV-TT at an MOI of 0.01 or 0.1 and left in the incubator for a further 24, 48, 72 and 96 hours. A viral replication assay, fluorescence imaging and viability tests, including trypan blue and crystal violet, were conducted to evaluate the cytotoxic effect of oncoVV-TT. The findings show that oncoVV-TT had significantly higher cytotoxic activity and proliferation rates in tumor cells in a dose- and time-dependent manner, with the strongest effect observed in U251 MG. To conclude, oncoVV-TT has the potential to be a promising oncolytic virus for cancer treatment, with a stronger cytotoxic effect in human glioblastoma cells than in rat glioma cells. To assess the effectiveness of vaccinia virus-mediated viral therapy, we tested the U251 MG and C6 tumor cell lines, taken from human and rat gliomas, respectively. The study evaluated oncoVV-TT's ability to replicate and lyse cells and analyzed the survival rates of the tested cell lines when treated with different doses of oncoVV-TT. Additionally, we compared the sensitivity of the human and rat glioma cell lines to the oncolytic vaccinia virus. All experiments involving viruses were conducted under biosafety level 2. We engineered a vaccinia-based oncolytic virus called oncoVV-TT to replicate specifically in tumor cells. To propagate the oncoVV-TT virus, HeLa cells (5 × 10⁴/well) were plated in 24-well plates and incubated overnight to attach to the bottom of the wells. Subsequently, virus at an MOI of 10 was added. After 48 h, cells were harvested by scraping, and viruses were collected by 3 sequential freezing and thawing cycles followed by removal of cell debris by centrifugation (1500 rpm, 5 min). The supernatant was stored at −80 °C for the following experiments. To measure the replication of the virus in HeLa cells, cells (5 × 10⁴/well) were plated in 24-well plates and incubated overnight to attach to the bottom of the wells. Subsequently, virus at an MOI of 5 or an equal dilution of PBS was added. At treatment times of 0 h, 24 h, 48 h, 72 h and 96 h, the viral titers were determined under the fluorescence microscope (BZ-X700; Keyence, Osaka, Japan). Fluorescence intensity was quantified using the ImageJ software according to the manufacturer's protocol. For the isolation of single-virus clones, HeLa cells were seeded in six-well plates (5 × 10⁵ cells/well). After 24 h (100% confluent), the cells were infected with a 10-fold dilution series of TianTan green fluorescent protein (GFP) virus and incubated for 4 h.
To examine the cytotoxic effect of the oncoVV-TT virus on U251 MG and C6 cells, trypan blue and crystal violet assays were used. Keywords: oncolytic virus, immune therapy, glioma, vaccinia virus
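As a small illustration of the MOI bookkeeping behind the infections described above (MOI 0.01 to 10 on 10⁴-10⁵ cells per well), the Python sketch below converts a target MOI and cell count into the volume of virus stock to add. The stock titre used in the example is a hypothetical placeholder, not a value reported in the study.

```python
# Sketch: volume of virus stock needed to reach a target MOI.
def virus_volume_ml(cell_count, moi, titre_pfu_per_ml):
    """Volume of stock (mL) delivering `moi` infectious units per cell."""
    return cell_count * moi / titre_pfu_per_ml

# e.g. 1e5 cells per well at MOI 0.1 with an assumed stock titre of 1e8 PFU/mL
print(virus_volume_ml(1e5, 0.1, 1e8))   # -> 1e-4 mL (0.1 microlitre) per well
```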
Procedia PDF Downloads 79
58 Explanation of the Main Components of the Unsustainability of Cooperative Institutions in Cooperative Management Projects to Combat Desertification in South Khorasan Province
Authors: Yaser Ghasemi Aryan, Firoozeh Moghiminejad, Mohammadreza Shahraki
Abstract:
Background: The cooperative institution is considered the first and most essential pillar of strengthening social capital, and its sustainability is the main guarantee of the survival and continued participation of local communities in natural resource management projects. The Village Development Group and the Microcredit Fund are two important social and economic institutions in the implementation of the International Project for the Restoration of Degraded Forest Lands (RFLDL) in Sarayan City, South Khorasan Province, which has drawn positive lessons from the participation of the beneficiaries in its implementation and has delivered more effective projects to deal with desertification. However, the low activity or liquidation of some of these institutions has become one of the important challenges and concerns of the project's executive experts. The current research was carried out with the aim of explaining the main components of the instability of these institutions. Materials and Methods: This research is descriptive-analytical in terms of method and applied in terms of purpose, and information was collected using documentary and survey methods. The statistical population of the research included all the members of the village development groups and microcredit funds in the target villages of the RFLDL project of Sarayan city; the statistical sample was selected based on the Cochran formula and matched with the Krejcie and Morgan table. After the validity of the researcher-made questionnaire was confirmed by expert opinion, its reliability was calculated as 0.83, which shows appropriate reliability. Data analysis was done using SPSS software. Results: The results related to the extraction of obstacles to the stability of social and economic networks were classified and prioritized into 5 groups of socio-cultural, economic, administrative, educational-promotional and policy-management factors. Based on this, the items with the highest priority were: among the socio-cultural factors, 'not paying attention to the structural characteristics and composition of groups', 'lack of commitment and moral responsibility in some members of the group' and 'lack of a clear pattern for the preservation and survival of groups'; among the administrative factors, 'irregularity in holding group meetings' and 'irregularity of members in participating in meetings'; among the economic factors, 'small financial capital of the fund', 'the low amount of the fund's loans' and 'the fund's inability to conclude contracts and attract capital from other sources'; among the educational-promotional factors, 'non-simultaneity of job training with the granting of loans to create jobs' and 'insufficient training for the effective use of loans and job creation'; and among the policy-management factors, 'failure to provide government facilities to support the funds'. Conclusion: In general, the results of this research show that policy-management factors and social factors, especially the structure and composition of social and economic institutions, are the most important obstacles to their sustainability. Therefore, it is suggested to form cooperative institutions based on network analysis studies in order to achieve an appropriate composition of members. Keywords: cooperative institution, social capital, network analysis, participation, Sarayan
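The sampling procedure above cites the Cochran formula checked against the Krejcie and Morgan table. A minimal Python sketch of that calculation, with a finite-population correction (which is essentially what the Krejcie and Morgan table tabulates), is given below; the population size N in the example is a hypothetical placeholder, since the abstract does not report the actual population or sample size.

```python
# Sketch of the Cochran sample-size formula with finite-population correction.
import math

def cochran_n(N, p=0.5, e=0.05, z=1.96):
    """Sample size for population N, proportion p, margin e, confidence z."""
    n0 = z ** 2 * p * (1 - p) / e ** 2            # infinite-population sample size (~384)
    return math.ceil(n0 / (1 + (n0 - 1) / N))     # finite-population correction

print(cochran_n(N=500))   # -> about 218 respondents for a hypothetical population of 500
```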
Procedia PDF Downloads 55
57 Development of Advanced Virtual Radiation Detection and Measurement Laboratory (AVR-DML) for Nuclear Science and Engineering Students
Authors: Lily Ranjbar, Haori Yang
Abstract:
Online education has been around for several decades, but its importance became evident after the COVID-19 pandemic. Even though the online delivery approach works well for knowledge building through delivering content and oversight processes, it has limitations in developing hands-on laboratory skills, especially in the STEM field. During the pandemic, many educational institutions faced numerous challenges in delivering lab-based courses, especially in the STEM field. Also, many students worldwide were unable to practice working with lab equipment due to social distancing or the significant cost of highly specialized equipment. The laboratory plays a crucial role in nuclear science and engineering education. It can engage students and improve their learning outcomes. In addition, online education and virtual labs have gained substantial popularity in engineering and science education. Therefore, developing virtual labs is vital for institutions to deliver high-class education to their students, including their online students. The School of Nuclear Science and Engineering (NSE) at Oregon State University, in partnership with the SpectralLabs company, has developed an Advanced Virtual Radiation Detection and Measurement Lab (AVR-DML) to offer a fully online Master of Health Physics program. It was essential for us to use a system that could simulate nuclear modules that accurately replicate the underlying physics, the nature of radiation and radiation transport, and the mechanics of the instrumentation used in a real radiation detection lab. This was all accomplished using the Realistic, Adaptive, Interactive Learning System (RAILS). RAILS is a comprehensive software simulation-based learning system for use in training. It comprises a web-based learning management system located on a central server, as well as a 3D simulation package that is downloaded locally to user machines. Users will find that the graphics, animations, and sounds in RAILS create a realistic, immersive environment in which to practice detecting different radiation sources. These features allow students to coexist, interact and engage with a real STEM lab in all its dimensions, enabling them to feel as if they are in a real lab environment and to see the same system they would in a lab. Unique interactive interfaces were designed and developed by integrating all the tools and equipment needed to run each lab. These interfaces provide students with full functionality for changing the experimental setup and live data collection with real-time updates for each experiment. Students can manually perform all experimental setups and parameter changes in this lab. Experimental results can then be tracked and analyzed in an oscilloscope, a multi-channel analyzer, or a single-channel analyzer (SCA). The advanced virtual radiation detection and measurement laboratory developed in this study enabled the NSE school to offer a fully online MHP program. This flexibility of course modality helped us to attract more non-traditional students, including international students. It is a valuable educational tool, as students can walk around the virtual lab, make mistakes, and learn from them. They have an unlimited amount of time to repeat and engage in experiments.
This lab will also help us speed up training in nuclear science and engineering. Keywords: advanced radiation detection and measurement, virtual laboratory, realistic adaptive interactive learning system (RAILS), online education in STEM fields, student engagement, STEM online education, STEM laboratory, online engineering education
Procedia PDF Downloads 91
56 Numerical Analysis of Mandible Fracture Stabilization System
Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski
Abstract:
The aim of the presented work is to recognize the impact of the mini-plate application approach on the stress and displacement within the stabilization devices and the surrounding bones. The mini-plate osteosynthesis technique is widely used by craniofacial surgeons as an improved replacement for the wire connection approach. Many different types of metal plates and screws are used for the physical connection of fractured bones. The investigation below is based on a clinical observation of a patient hospitalized with a mini-plate stabilization system. The analysis was conducted on a solid mandible geometry, which was modeled on the basis of the computed tomography scan of the hospitalized patient. In order to achieve the most realistic behavior of the connected system, cortical and cancellous bone layers were assumed. The temporomandibular joint was simplified to an elastic element to allow physiological movement of the loaded bone. The muscles of the mastication system were reduced to three pairs, modeled as shell structures. The finite element grid was created in the ANSYS software, where hexahedral and tetrahedral variants of the SOLID185 element were used. A set of nonlinear contact conditions was applied on the common surfaces of the connecting devices and the bone. The properties of a particular contact pair depend on the screw–mini-plate connection type and possible gaps between the fractured bone parts around the osteosynthesis region. Some of the investigated cases contain prestress introduced to the mini-plate during application, which corresponds to the initial bending of the connecting device to fit the retromolar fossa region. The assumed bone fracture occurs within the mandible angle zone. Due to the significant deformation of the connecting plate in some of the assembly cases, an elastic-plastic model of the titanium alloy was assumed. The bone tissues were described by an orthotropic material. The loading was a gauge force of magnitude 100 N applied at three different locations. The conducted analysis shows a significant impact of the mini-plate application methodology on the stress distribution within the mini-plate. The prestress effect introduces additional loading, which leads to locally exceeding the titanium alloy yield limit. Stress in the surrounding bone increases rapidly around the screw application region, exceeding the assumed bone yield limit, which indicates local bone destruction. The approach with the doubled mini-plate shows increased stress within the connector due to the overly rigid connection, where the main load path leads through the mini-plates instead of through the plates and the connected bones. Clinical observations confirm more frequent plate destruction in stiffer connections. Some of these failures could be an effect of decreased low-cycle fatigue capability caused by the overloading. The executed analysis proves that the mini-plate system provides sufficient support for mandible fracture treatment; however, many applicable solutions shift the entire system towards the allowable material limits. The results show that connector application with initial loading needs to be established carefully due to the small material capability tolerances. Comparison with the clinical observations allows optimizing the entire connection to prevent future incidents. Keywords: mandible fracture, mini-plate connection, numerical analysis, osteosynthesis
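As a hedged illustration of the yield check implied above (stress in the mini-plate locally exceeding the titanium alloy yield limit), the Python sketch below computes the von Mises equivalent stress from a 3x3 stress tensor and compares it with a yield value. The stress state and the yield limit (typical of a Ti-6Al-4V-type alloy) are hypothetical placeholders, not results of the reported analysis.

```python
# Sketch: von Mises equivalent stress vs. an assumed titanium yield limit.
import numpy as np

def von_mises(stress):
    """Von Mises equivalent stress of a 3x3 Cauchy stress tensor (Pa)."""
    dev = stress - np.trace(stress) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.tensordot(dev, dev))        # sqrt(3/2 * s:s)

sigma = np.array([[600e6,  80e6,   0.0],
                  [ 80e6, -50e6,   0.0],
                  [  0.0,   0.0,  20e6]])   # hypothetical local stress near a screw hole, Pa
yield_ti = 880e6                            # assumed yield limit of the titanium alloy, Pa
print(von_mises(sigma), von_mises(sigma) >= yield_ti)
```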
Procedia PDF Downloads 275
55 The Use of Non-Parametric Bootstrap in Computing of Microbial Risk Assessment from Lettuce Consumption Irrigated with Contaminated Water by Sanitary Sewage in Infulene Valley
Authors: Mario Tauzene Afonso Matangue, Ivan Andres Sanchez Ortiz
Abstract:
The metropolitan area of Maputo (the capital city of Mozambique) is located in a semi-arid zone (800 mm annual rainfall) and has about 1,101,170 inhabitants. On the west side are the flatlands of Infulene, where the Mulauze River flows towards the Indian Ocean, receiving at this site the stormwater contaminated with sanitary sewage from Maputo, transported through an open concrete channel. In Infulene, local communities grow salad crops such as tomato, onion, garlic, lettuce, and cabbage, which are then commercialized and consumed in several markets in Maputo City. Lettuce is the salad crop most consumed daily in different meals, generally in fast food, breakfasts, lunches, and dinners. However, the risk of infection by several pathogens due to the consumption of lettuce, assessed using Quantitative Microbial Risk Assessment (QMRA) tools, is still unknown, since there are few studies or publications concerning this matter in Mozambique. This work is aimed at determining the annual risk arising from the consumption of lettuce grown in the Infulene valley, in Maputo, using QMRA tools. The exposure model was constructed upon the volume of contaminated water remaining on the lettuce leaves, the empirical relations between the number of pathogens and the indicator microorganism (E. coli), the consumption of lettuce (g) and the reduction of pathogens over time (days). The reference pathogens were Vibrio cholerae, Cryptosporidium, norovirus, and Ascaris. The water quality samples (E. coli) were collected in the stormwater channel from January 2016 to December 2018, comprising 65 samples, and the urban lettuce consumption data were collected through a survey in the Maputo metropolis covering 350 persons. A non-parametric bootstrap was performed involving 10,000 iterations over the collected dataset, namely, water quality (E. coli) and lettuce consumption. The dose-response models were: exponential for Cryptosporidium, the Kummer confluent hypergeometric function (1F1) for Vibrio and Ascaris, and the Gaussian hypergeometric function (2F1(a,b;c;z)) for norovirus. The annual infection risk estimates were computed using the R 3.6.0 (R Core Team) software by Monte Carlo simulation (Latin hypercube sampling) involving 10,000 iterations. The annual infection risk values, expressed by the median and the 95th percentile, per person per year (pppy), arising from the consumption of lettuce are as follows: Vibrio cholerae (1.00, 1.00), Cryptosporidium (3.91x10⁻³, 9.72x10⁻³), norovirus (5.22x10⁻¹, 9.99x10⁻¹) and Ascaris (2.59x10⁻¹, 9.65x10⁻¹). Thus, the consumption of the lettuce would result in risks greater than the tolerable levels (<10⁻³ pppy or 10⁻⁶ DALY) for all pathogens, and Vibrio cholerae is the most virulent pathogen according to the single-hit models, followed by Ascaris lumbricoides and norovirus. The sensitivity analysis carried out in this work pointed out that, in the whole QMRA, the most important input variable was the reduction of pathogens between harvest and consumption (Spearman rank value of 0.69), followed by water quality (Spearman rank value of 0.69). The decision-makers (Mozambique Government) must strengthen the prevention measures related to pathogen reduction in lettuce (i.e., washing) and engage in wastewater treatment engineering. Keywords: annual infection risk, lettuce, non-parametric bootstrapping, quantitative microbial risk assessment tools
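To make the bootstrap-plus-dose-response chain described above concrete, the following Python sketch (the study itself used R 3.6.0) resamples an indicator dataset, converts the mean concentration into a dose per lettuce serving, applies an exponential dose-response model and annualises the daily risk. The E. coli values, the water volume retained on the lettuce, the pathogen-to-indicator ratio and the dose-response parameter r are all hypothetical placeholders, and the hypergeometric (1F1/2F1) models used for the other pathogens are not reproduced here.

```python
# Sketch: non-parametric bootstrap feeding an exponential dose-response model.
import numpy as np

rng = np.random.default_rng(1)
ecoli = rng.lognormal(mean=8.0, sigma=1.5, size=65)   # stand-in for the 65 E. coli samples (CFU/mL)

n_boot, days = 10_000, 365
volume_ml = 0.108                 # assumed mL of water retained on lettuce per serving
pathogen_per_ecoli = 1e-5         # assumed pathogen-to-E. coli ratio
r = 0.0042                        # assumed exponential dose-response parameter

risks = np.empty(n_boot)
for b in range(n_boot):
    sample = rng.choice(ecoli, size=ecoli.size, replace=True)   # bootstrap resample
    dose = sample.mean() * volume_ml * pathogen_per_ecoli       # organisms per serving
    p_daily = 1.0 - np.exp(-r * dose)                           # exponential (single-hit) model
    risks[b] = 1.0 - (1.0 - p_daily) ** days                    # annual infection risk

print(np.median(risks), np.percentile(risks, 95))
```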
Procedia PDF Downloads 120
54 MANIFEST-2, a Global, Phase 3, Randomized, Double-Blind, Active-Control Study of Pelabresib (CPI-0610) and Ruxolitinib vs. Placebo and Ruxolitinib in JAK Inhibitor-Naïve Myelofibrosis Patients
Authors: Claire Harrison, Raajit K. Rampal, Vikas Gupta, Srdan Verstovsek, Moshe Talpaz, Jean-Jacques Kiladjian, Ruben Mesa, Andrew Kuykendall, Alessandro Vannucchi, Francesca Palandri, Sebastian Grosicki, Timothy Devos, Eric Jourdan, Marielle J. Wondergem, Haifa Kathrin Al-Ali, Veronika Buxhofer-Ausch, Alberto Alvarez-Larrán, Sanjay Akhani, Rafael Muñoz-Carerras, Yury Sheykin, Gozde Colak, Morgan Harris, John Mascarenhas
Abstract:
Myelofibrosis (MF) is characterized by bone marrow fibrosis, anemia, splenomegaly and constitutional symptoms. Progressive bone marrow fibrosis results from aberrant megakaryopoiesis and expression of proinflammatory cytokines, both of which are heavily influenced by bromodomain and extraterminal domain (BET)-mediated gene regulation and lead to myeloproliferation and cytopenias. Pelabresib (CPI-0610) is an oral small-molecule investigational inhibitor of BET protein bromodomains currently being developed for the treatment of patients with MF. It is designed to downregulate BET target genes and modify nuclear factor kappa B (NF-κB) signaling. MANIFEST-2 was initiated based on data from Arm 3 of the ongoing Phase 2 MANIFEST study (NCT02158858), which is evaluating the combination of pelabresib and ruxolitinib in Janus kinase inhibitor (JAKi) treatment-naïve patients with MF. Primary endpoint analyses showed splenic and symptom responses in 68% and 56% of 84 enrolled patients, respectively. MANIFEST-2 (NCT04603495) is a global, Phase 3, randomized, double-blind, active-control study of pelabresib and ruxolitinib versus placebo and ruxolitinib in JAKi treatment-naïve patients with primary MF, post-polycythemia vera MF or post-essential thrombocythemia MF. The aim of this study is to evaluate the efficacy and safety of pelabresib in combination with ruxolitinib. Here we report updates from a recent protocol amendment. The MANIFEST-2 study schema is shown in Figure 1. Key eligibility criteria include a Dynamic International Prognostic Scoring System (DIPSS) score of Intermediate-1 or higher, platelet count ≥100 × 10^9/L, spleen volume ≥450 cc by computerized tomography or magnetic resonance imaging, ≥2 symptoms with an average score ≥3 or a Total Symptom Score (TSS) of ≥10 using the Myelofibrosis Symptom Assessment Form v4.0, peripheral blast count <5% and Eastern Cooperative Oncology Group performance status ≤2. Patient randomization will be stratified by DIPSS risk category (Intermediate-1 vs Intermediate-2 vs High), platelet count (>200 × 10^9/L vs 100–200 × 10^9/L) and spleen volume (≥1800 cm^3 vs <1800 cm^3). Double-blind treatment (pelabresib or matching placebo) will be administered once daily for 14 consecutive days, followed by a 7-day break, which is considered one cycle of treatment. Ruxolitinib will be administered twice daily for all 21 days of the cycle. The primary endpoint is SVR35 response (≥35% reduction in spleen volume from baseline) at Week 24, and the key secondary endpoint is TSS50 response (≥50% reduction in TSS from baseline) at Week 24. Other secondary endpoints include safety, pharmacokinetics, changes in bone marrow fibrosis, duration of SVR35 response, duration of TSS50 response, progression-free survival, overall survival, conversion from transfusion dependence to independence and rate of red blood cell transfusion for the first 24 weeks. Study recruitment is ongoing; 400 patients (200 per arm) from North America, Europe, Asia and Australia will be enrolled. The study opened for enrollment in November 2020. MANIFEST-2 was initiated based on data from the ongoing Phase 2 MANIFEST study with the aim of assessing the efficacy and safety of pelabresib and ruxolitinib in JAKi treatment-naïve patients with MF. MANIFEST-2 is currently open for enrollment. Keywords: CPI-0610, JAKi treatment-naïve, MANIFEST-2, myelofibrosis, pelabresib
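As a small illustration of how the Week 24 endpoints defined above would be scored for an individual patient, the Python sketch below implements the SVR35 and TSS50 criteria exactly as stated (≥35% spleen volume reduction and ≥50% TSS reduction from baseline); the example numbers are hypothetical.

```python
# Sketch: scoring the SVR35 and TSS50 response criteria for one patient.
def svr35(spleen_baseline_cc, spleen_week24_cc):
    """True if spleen volume fell by at least 35% from baseline."""
    return (spleen_baseline_cc - spleen_week24_cc) / spleen_baseline_cc >= 0.35

def tss50(tss_baseline, tss_week24):
    """True if Total Symptom Score fell by at least 50% from baseline."""
    return (tss_baseline - tss_week24) / tss_baseline >= 0.50

print(svr35(1800, 1100), tss50(22, 10))   # -> True True for these hypothetical values
```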
Procedia PDF Downloads 201
53 Extracellular Polymeric Substances (EPS) Attribute to Biofouling of Anaerobic Membrane Bioreactor: Adhesion and Viscoelastic Properties
Authors: Kbrom Mearg Haile
Abstract:
Introduction: Membrane fouling is the bottleneck for robust continuous operation of the anaerobic membrane bioreactor (AnMBR). It is primarily caused by the characteristics of the mixed liquor suspended solids (MLSS), which are formed by aggregated flocs and a scaffold of microbial self-produced extracellular polymeric substances (EPS) that dictate floc integrity. Accordingly, the adhesion of EPS to the membrane surface, versus their role in forming firm, elastic, and mechanically stable flocs under the reactor's hydraulic shear, is critical for minimizing interactions between the membrane and the EPS and colloids originating from the MLSS flocs. This study aims to gain insight into the relations between MLSS floc properties, EPS adhesion and viscoelasticity, the viscoelastic properties of the sludge, and membrane fouling propensity. Experimental: As a working hypothesis, the aforementioned floc and EPS properties were altered by adding either a coagulant or a surfactant during AnMBR operation. In the AnMBR, two flat-sheet 300 kDa pore-size polyethersulfone (PES) membranes with a total filtration area of 352 cm² were immersed in the system treating municipal wastewater of Midreshet Ben-Gurion village in the Negev highlands, Israel. The system temperature, pH, biogas recirculation, and hydraulic retention time were regulated. TMP fluctuations during a 30-day experiment were recorded under three operating conditions: baseline (without the addition of a coagulating or dispersing agent), coagulant addition (FeCl3), and surfactant addition (sodium dodecyl sulfate). At the end of each experiment, EPS were extracted from the MLSS and from the fouled membrane, characterized for their protein, polysaccharide, and DOC contents, and correlated with the fouling tendency of the submerged UF membrane. The EPS adherence and viscoelastic properties were revealed using QCM-D with a PES-coated gold sensor as a membrane-mimicking surface, providing detailed real-time EPS adhesion data. The associated shifts in resonance frequency and dissipation at different overtones were further modeled using the Voigt-based viscoelastic model (Dfind software, Q-Sense, Biolin Scientific), from which the thickness, shear modulus, and shear viscosity of the EPS layers adsorbed on the PES-coated sensor were calculated. Results and discussion: The observations obtained from the QCM-D analysis indicate a greater decrease in the frequency shift for the elevated membrane fouling scenarios, likely due to an observed decrease in the calculated shear viscosity and shear modulus of the adsorbed EPS layer, coupled with an increase in the hydrated thickness and fluidity (ΔD/Δf slopes) of the EPS layer. Further analysis is being conducted for the three major operating conditions, analyzing their effects on sludge rheology, dewaterability (capillary suction time, CST) and settleability (sludge volume index, SVI). The biofouling layer is further characterized microscopically using confocal laser scanning microscopy (CLSM) and scanning electron microscopy (SEM) to analyze the consistency of the biofouling layer development with the sludge characteristics, i.e., a thicker biofouling layer on the membrane surface when operated with surfactant addition, due to flocs with reduced integrity and greater availability of EPS/colloids to the membrane, and, conversely, a thinner layer when operated with coagulant compared to the baseline experiment, due to increased floc integrity. Keywords: viscoelasticity, biofouling, viscoelastic, AnMBR, EPS, floc integrity
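The study models the QCM-D data with a Voigt-based viscoelastic fit (Dfind), which is not reproduced here. As a simpler, hedged illustration of two first-pass quantities implied above, the Python sketch below computes the Sauerbrey areal mass from the overtone-normalised frequency shift and the ΔD/Δ(f/n) slope used as a softness/fluidity indicator. The frequency and dissipation shifts are hypothetical placeholders, and for soft, hydrated EPS layers the Sauerbrey estimate underestimates the true mass, which is precisely why a viscoelastic model is preferred.

```python
# Sketch: Sauerbrey mass estimate and dissipation-to-frequency slope from QCM-D shifts.
import numpy as np

C_SAUERBREY = 17.7          # ng cm^-2 Hz^-1 for a 5 MHz AT-cut quartz sensor

def sauerbrey_mass(delta_f_hz, overtone):
    """Areal mass (ng/cm^2) from the frequency shift of a given overtone."""
    return -C_SAUERBREY * delta_f_hz / overtone

delta_f = np.array([-35.0, -110.0, -190.0])     # Hz at overtones n = 3, 5, 7 (hypothetical)
delta_d = np.array([3.1e-6, 9.5e-6, 16.0e-6])   # dissipation shifts (hypothetical)
n = np.array([3, 5, 7])

print(sauerbrey_mass(delta_f, n))               # ng/cm^2 per overtone
print(-delta_d / (delta_f / n))                 # dD/d(f/n) slope, larger ~ softer layer
```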
Procedia PDF Downloads 23
52 Prevalence and Diagnostic Evaluation of Schistosomiasis in School-Going Children in Nelson Mandela Bay Municipality: Insights from Urinalysis and Point-of-Care Testing
Authors: Maryline Vere, Wilma ten Ham-Baloyi, Lucy Ochola, Opeoluwa Oyedele, Lindsey Beyleveld, Siphokazi Tili, Takafira Mduluza, Paula E. Melariri
Abstract:
Schistosomiasis, caused by the Schistosoma (S.) haematobium and Schistosoma (S.) mansoni parasites, poses a significant public health challenge in low-income regions. Diagnosis typically relies on identifying specific urine biomarkers such as haematuria, protein, and leukocytes for S. haematobium, while the Point-of-Care Circulating Cathodic Antigen (POC-CCA) assay is employed for detecting S. mansoni. Urinalysis and the POC-CCA assay are favoured for their rapid, non-invasive nature and cost-effectiveness. However, traditional diagnostic methods such as Kato-Katz and urine filtration lack sensitivity in low-transmission areas, which can lead to underreporting of cases and hinder effective disease control efforts. Therefore, in this study, urinalysis and the POC-CCA assay were utilised to diagnose schistosomiasis effectively among school-going children in Nelson Mandela Bay Municipality. This was a cross-sectional study with a total of 759 children, aged 5 to 14 years, who provided urine samples. Urinalysis was performed using urinary dipstick tests, which measure multiple parameters, including haematuria, protein, leukocytes, bilirubin, urobilinogen, ketones, pH, specific gravity and other biomarkers. Urinalysis was performed by dipping the strip into the urine sample and observing colour changes on specific reagent pads. The POC-CCA test was conducted by applying a drop of urine onto a cassette containing CCA-specific antibodies, and the presence of a visible test line indicated a positive result for S. mansoni infection. Descriptive statistics were used to summarize urine parameters, and Pearson correlation coefficients (r) were calculated to analyze associations among urine parameters using R software (version 4.3.1). Among the 759 children, the prevalence of S. haematobium using haematuria as a diagnostic marker was 33.6%. Additionally, leukocytes were detected in 21.3% of the samples, and protein was present in 15%. The prevalence of positive POC-CCA test results for S. mansoni was 3.7%. Urine parameters exhibited low to moderate associations, suggesting complex interrelationships. For instance, specific gravity and pH showed a negative correlation (r = -0.37), indicating that higher specific gravity was associated with lower pH. Weak correlations were observed between haematuria and pH (r = -0.10), bilirubin and ketones (r = 0.14), protein and bilirubin (r = 0.13), and urobilinogen and pH (r = 0.12). A mild positive correlation was found between leukocytes and blood (r = 0.23), reflecting some association between these inflammation markers. In conclusion, the study identified a significant prevalence of schistosomiasis among school-going children in Nelson Mandela Bay Municipality, with S. haematobium detected through haematuria and S. mansoni identified using the POC-CCA assay. The detection of leukocytes and protein in urine samples serves as a critical biomarker for schistosomiasis infection, reinforcing the presence of schistosomiasis in the study area when considered alongside haematuria. These urine parameters are indicative of inflammatory responses associated with schistosomiasis, underscoring the necessity for effective diagnostic methodologies. Such findings highlight the importance of comprehensive diagnostic assessments to accurately identify and monitor schistosomiasis prevalence and its associated health impacts.
The significant burden of schistosomiasis in this population highlights the urgent need to develop targeted control interventions to effectively reduce its prevalence in the study area. Keywords: schistosomiasis, urinalysis, haematuria, POC-CCA
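A minimal sketch of the pairwise Pearson correlation computation reported above (the study used R 4.3.1; Python/SciPy is used here only for illustration). The dipstick readings below are synthetic stand-ins, so only the sign of the association is meant to mirror the reported r = -0.37 between specific gravity and pH.

```python
# Sketch: Pearson correlation between two urine parameters on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
specific_gravity = rng.normal(1.020, 0.005, 759)                    # 759 children (synthetic)
ph = 7.0 - 30.0 * (specific_gravity - 1.020) + rng.normal(0, 0.4, 759)

r, p_value = stats.pearsonr(specific_gravity, ph)
print(round(r, 2), p_value)   # expect a negative r, analogous to the reported -0.37
```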
Procedia PDF Downloads 23
51 Identifying Effective Strategies to Promote Vietnamese Fashion Brands in an Internationally Dominated Market
Authors: Lam Hong Lan, Gabor Sarlos
Abstract:
It is hard to search for best practices in promotion for local fashion brands in Vietnam as the industry is still very young. Local fashion start-ups have grown quickly in the last five years, thanks in part to the internet and social media. However, local designer/owners can face a huge challenge when competing with international brands in the Vietnamese market – and few local case studies are available for guidance. In response, this paper studied how local small- to medium-sized enterprises (SMEs) promote to their target customers in order to compete with international brands. Knowledge of both successful and unsuccessful approaches generated by this study is intended both to contribute to the academic literature on local fashion in Vietnam and to help local designers to learn from and improve their brand-building strategy. The primary study featured qualitative data collection via semi-structured depth interviews. Transcription and data analysis were conducted manually in order to identify success factors that local brands should consider as part of their promotion strategy. Purposive sampling of SMEs identified five designers in Ho Chi Minh City (the biggest city in Vietnam) and three designers in Hanoi (the second biggest) as interviewees. Participant attributes included: born in the 1980s or 1990s; familiar with the internet and social media; designer/owner of a successful local fashion brand in the key middle-market and/or mass-market segments (which are crucial to the growth of local brands). A secondary study was conducted using social listening software to gather further qualitative data on what were considered to be successful or unsuccessful approaches to local fashion brand promotion on social media. Both the primary and secondary studies indicated that local designers had maximized their promotion budget by using owned media and earned media instead of paid media. Findings from the qualitative interviews indicate that the internet and social media have been used as effective promotion platforms by local fashion start-ups. Facebook and Instagram were the most popular social networks used by the SMEs interviewed, and these social platforms were believed to offer a more affordable promotional strategy than traditional media such as TV and/or print advertising. Online stores were considered an important factor in helping the SMEs to reach customers beyond the physical store. Furthermore, a successful online store allowed some SMEs to reduce their business rental costs by maintaining their physical store in a cheaper, less central city area as opposed to a more traditional city center store location. In addition, the small comparative size of the SMEs allowed them to be more attentive to their customers, leading to higher customer satisfaction and rates of return. In conclusion, this study found that these kinds of cost savings helped the SMEs interviewed to focus their scarce resources on producing unique, high-quality collections in order to differentiate themselves from international brands. Facebook and Instagram were the main platforms used for promotion and brand-building. The main challenge to this promotion strategy identified by the SMEs interviewed was to continue to find innovative ways to maximize the impact of a limited marketing budget. Keywords: Vietnam, SMEs, fashion brands, promotion, marketing, social listening
Procedia PDF Downloads 125
50 Numerical Prediction of Width Crack of Concrete Dapped-End Beams
Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo
Abstract:
Several methods have been used to study the prediction of cracking in concrete structures under loading. Finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of the design of dapped ends; it has been observed that cracks that exceed the allowable widths are unacceptable in an environment that is aggressive for reinforcing steel. For simulating the crack width, the discrete crack approach was considered by means of a Cohesive Zone Model (CZM) using a function to represent the crack opening. Two dapped-end cases were constructed and tested in the Structures and Materials laboratory of the Engineering Institute of UNAM. The first case considers reinforcement based on hangers as well as vertical and horizontal rings; in the second case, 50% of the vertical stirrups connecting the dapped end to the main part of the beam were replaced by an equivalent (vertically projected) area of diagonal bars. The loading protocol consisted of applying symmetrical loading up to the service load. The models were built in the software package ANSYS v. 16.2. The concrete structure was modeled using three-dimensional solid elements (SOLID65) capable of cracking in tension and crushing in compression. A Drucker-Prager yield surface was used to include the plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods such as the nodal release technique, adopting softening relationships between tractions and separations, which in turn introduce a critical fracture energy that is also the energy required to break apart the interface surfaces. This technique is called CZM. The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The Mode-I-dominated bilinear CZM model assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. Furthermore, the crack opening was taken into consideration through the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed in the re-entrant corner of the crack. To validate the proposed approach, the results obtained with this procedure were compared with the experimental tests. A good correlation between the experimental and numerical load-displacement curves was obtained, and the numerical models also allowed the load-crack width curves to be obtained. In these two cases, the proposed model confirms its capability of predicting the maximum crack width, with an error of ±30%. Finally, the orientation of the crack is fundamental for the prediction of crack width. The results regarding the crack width can be considered good from the practical point of view, and the load-displacement curve of the test and the location of the crack were reproduced with favorable results. Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis
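As a hedged illustration of the Mode-I-dominated bilinear cohesive law described above, the Python sketch below returns the normal traction as a function of the contact gap: it rises linearly to the maximum normal contact stress and then softens linearly to zero at the gap at completion of debonding. Both parameters are hypothetical placeholders, not the calibrated values of the study.

```python
# Sketch: bilinear Mode I traction-separation (cohesive) law.
import numpy as np

def bilinear_czm(gap, sigma_max, delta_n_max, delta_c):
    """Normal traction vs. opening gap: peak sigma_max at delta_n_max,
    zero traction at the complete-debonding gap delta_c."""
    gap = np.asarray(gap, dtype=float)
    rising = sigma_max * gap / delta_n_max
    softening = sigma_max * (delta_c - gap) / (delta_c - delta_n_max)
    traction = np.where(gap <= delta_n_max, rising, softening)
    return np.clip(traction, 0.0, None)      # no negative traction beyond delta_c

gaps = np.linspace(0.0, 0.12e-3, 7)          # opening gaps in metres (illustrative)
print(bilinear_czm(gaps, sigma_max=3.0e6, delta_n_max=0.01e-3, delta_c=0.1e-3))
```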
Procedia PDF Downloads 168
49 Experimental Characterisation of Composite Panels for Railway Flooring
Authors: F. Pedro, S. Dias, A. Tadeu, J. António, Ó. López, A. Coelho
Abstract:
Railway transportation is considered the most economical and sustainable way to travel. However, future mobility brings important challenges to railway operators. The main target is to develop solutions that stimulate sustainable mobility. The research and innovation goals for this domain are efficient solutions, ensuring an increased level of safety and reliability, improved resource efficiency, high availability of the means (train), and passengers satisfied with the level of travel comfort. These requirements are in line with the European Strategic Agenda for the 2020 rail sector, promoted by the European Rail Research Advisory Council (ERRAC). All these aspects involve redesigning current equipment and, in particular, the interior of the carriages. Recent studies have shown that two of the most important requirements for passengers are reasonable ticket prices and comfortable interiors. Passengers tend to use their travel time to rest or to work, so train interiors and their systems need to incorporate features that meet these requirements. Among the various systems that make up train interiors, the flooring system is one of those with the greatest impact on passenger safety and comfort. It is also one of the systems that takes the most time to install on the train and contributes significantly to the weight (mass) of all interior systems. Additionally, it has a strong impact on manufacturing costs. The design of railway flooring, in the development phase, usually relies on design software that allows several solutions to be drawn and calculated in a short period of time. After the best solution is obtained, considering the goals previously defined, experimental data are always necessary and required. This experimental phase is so significant that its outcome can provoke a revision of the designed solution. This paper presents the methodology and some of the results of an experimental characterisation of composite panels for railway application. The mechanical tests were made on unaged specimens and on specimens that underwent some type of aging, i.e. heat, cold and humidity cycles or freezing/thawing cycles. This conditioning aims to simulate not only the effect of time but also the impact of severe environmental conditions. Both full solutions and separate components/materials were tested. For the full solution (panel), these tests were: four-point bending tests, tensile shear strength, tensile strength perpendicular to the plane, determination of the spreading of water, and impact tests. For the individual characterisation of the components, more specifically the covering, the following tests were made: determination of the tensile stress-strain properties, determination of flexibility, determination of tear strength, peel test, tensile shear strength test, adhesion resistance test and dimensional stability. The main conclusion was that experimental characterisation makes a major contribution to understanding the behaviour of the materials, both individually and assembled. This knowledge contributes to increasing the quality and improvement of premium solutions. This research work was framed within the POCI-01-0247-FEDER-003474 (coMMUTe) Project funded by Portugal 2020 through COMPETE 2020. Keywords: durability, experimental characterization, mechanical tests, railway flooring system
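As a small illustration of how a four-point bending test on a flat panel is reduced to a flexural stress, the Python sketch below uses simple beam theory for a symmetric setup in which each inner roller sits a distance a from its nearer support. The geometry and load are hypothetical placeholders, and since the actual panels are layered composites, this homogeneous-beam formula is only a first approximation.

```python
# Sketch: peak flexural stress in symmetric four-point bending of a homogeneous panel.
def four_point_bending_stress(total_load_n, a_m, width_m, thickness_m):
    """Max bending stress (Pa): M = (P/2)*a between the rollers, sigma = 6*M/(b*h^2)."""
    moment = 0.5 * total_load_n * a_m            # constant moment between the inner rollers
    return 6.0 * moment / (width_m * thickness_m ** 2)

print(four_point_bending_stress(total_load_n=2000.0, a_m=0.15,
                                width_m=0.10, thickness_m=0.025))   # ~1.44e7 Pa
```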
Procedia PDF Downloads 155
48 A Study of Seismic Design Approaches for Steel Sheet Piles: Hydrodynamic Pressures and Reduction Factors Using CFD and Dynamic Calculations
Authors: Helena Pera, Arcadi Sanmartin, Albert Falques, Rafael Rebolo, Xavier Ametller, Heiko Zillgen, Cecile Prum, Boris Even, Eric Kapornyai
Abstract:
Sheet pile systems can be an interesting solution for harbor or quay designs. However, current design methods lead to conservative approaches due to the lack of a specific basis of design. For instance, some design features are still treated with pseudo-static approaches, although the problem is dynamic. Under this concern, the study focuses particularly on the definition of hydrodynamic water pressure and on the stability analysis of sheet pile systems under seismic loads. During a seismic event, seawater produces hydrodynamic pressures on structures. Current design methods introduce hydrodynamic forces by means of the Westergaard formulation and Eurocode recommendations. They apply a constant hydrodynamic pressure on the front sheet pile during the entire earthquake. As a result, the hydrodynamic load may represent 20% of the total forces produced on the sheet pile. Nonetheless, some studies question that approach. Hence, this study assesses the soil-structure-fluid interaction of sheet piles under seismic action in order to evaluate whether current design strategies overestimate hydrodynamic pressures. For that purpose, this study performs various simulations with Plaxis 2D, a well-known geotechnical software, and CFD models, which treat fluid dynamic behaviour. Since neither Plaxis nor CFD can solve a coupled soil-fluid problem, the investigation imposes sheet pile displacements from Plaxis as input data for the CFD model. The CFD model then provides hydrodynamic pressures under seismic action, which fit the theoretical Westergaard pressures if these are calculated using the acceleration at each moment of the earthquake. Thus, hydrodynamic pressures fluctuate during the seismic action instead of remaining constant, as design recommendations propose. Additionally, these findings show that hydrodynamic pressure contributes about 5% of the total load applied on the sheet pile, due to its instantaneous nature. These results are in line with other studies that use added-mass methods for hydrodynamic pressures. Another important feature in sheet pile design is the assessment of the overall geotechnical stability. This uses pseudo-static analysis, since dynamic analysis cannot provide a safety calculation, and it therefore requires an estimate of the seismic action. One of the relevant factors is the selection of the seismic reduction factor. Many studies discuss its importance as well as its uncertainties. Moreover, current European standards do not make a clear statement on this and recommend using a reduction factor equal to 1. This leads to conservative requirements when compared with more advanced methods. Under this situation, the study calibrates the seismic reduction factor by fitting results from pseudo-static to dynamic analyses. The investigation concludes that pseudo-static analyses could reduce the seismic action by 40-50%. These results are in line with some studies from Japanese and European working groups. In addition, it seems suitable to account for the flexibility of the sheet pile-soil system. Nevertheless, the calibrated reduction factor is subject to the particular conditions of each design case. Further research would contribute to specifying recommendations for selecting reduction factor values in the early stages of the design. In conclusion, sheet pile design still has room for improvement in its design methodologies and approaches.
Consequently, designers could propose better seismic solutions thanks to advanced methods such as those presented in this study. Keywords: computational fluid dynamics, hydrodynamic pressures, pseudo-static analysis, quays, seismic design, steel sheet pile
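The design discussion above refers to the Westergaard formulation for hydrodynamic pressure. The Python sketch below evaluates the classical approximate Westergaard pressure distribution on a rigid vertical wall retaining water of depth H under a horizontal seismic coefficient kh, together with its resultant per unit length of wall; the water depth and seismic coefficient in the example are hypothetical placeholders.

```python
# Sketch: Westergaard hydrodynamic pressure and its resultant on a vertical wall.
import numpy as np

GAMMA_W = 9.81e3        # unit weight of water, N/m^3

def westergaard_pressure(z, H, kh):
    """p(z) = (7/8) * kh * gamma_w * sqrt(H * z), with z the depth below the surface [Pa]."""
    return 0.875 * kh * GAMMA_W * np.sqrt(H * np.asarray(z, dtype=float))

def westergaard_force(H, kh):
    """Resultant of p(z) over the full depth: (7/12) * kh * gamma_w * H^2 [N per m of wall]."""
    return 7.0 / 12.0 * kh * GAMMA_W * H ** 2

H, kh = 12.0, 0.15      # assumed water depth (m) and horizontal seismic coefficient
print(westergaard_pressure([0.0, 6.0, 12.0], H, kh))
print(westergaard_force(H, kh))
```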
Procedia PDF Downloads 142