Search results for: fast rising voltage
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3615

465 Study of Mixing Conditions for Different Endothelial Dysfunction in Arteriosclerosis

Authors: Sara Segura, Diego Nuñez, Miryam Villamil

Abstract:

In this work, we studied the microscale interaction of foreign substances with blood inside an artificial transparent artery system that represents medium and small muscular arteries. This artery system had channels ranging from 75 μm to 930 μm and was fabricated from glass and transparent polymer blends such as phenylbis(2,4,6-trimethylbenzoyl)phosphine oxide, poly(ethylene glycol), and PDMS so that it could be monitored in real time. The setup comprised a computer-controlled precision micropump and a high-resolution optical microscope capable of tracking fluids at high capture rates. Observation and analysis were performed with real-time software that reconstructs the fluid dynamics, determining flux velocity, injection dependency, turbulence, and rheology. All experiments were carried out with fully computer-controlled equipment. Interactions between substances such as water, serum (0.9% sodium chloride with electrolyte at a ratio of 4 ppm), and blood cells were studied at resolutions down to 400 nm, and the analysis was performed using frame-by-frame observation and HD video capture. These observations led us to understand the fluid and mixing behavior of the substance of interest in the bloodstream and to shed light on the use of implantable devices for drug delivery in arteries with different endothelial dysfunction. Several substances were tested using the artificial artery system. Initially, Milli-Q water was used as a control substance to study the basic fluid dynamics of the artificial artery system. Serum and other low-viscosity substances were then pumped into the system in the presence of other liquids to study mixing profiles and behaviors. Finally, mammal blood was used for the final test while serum was injected. Different flow conditions, pumping rates, and time rates were evaluated to determine the optimal mixing conditions. Our results suggest the use of very finely controlled microinjection, at an approximate rate of 135,000 μm³/s, for better mixing profiles in the administration of drugs inside arteries.

Keywords: artificial artery, drug delivery, microfluidics dynamics, arteriosclerosis

Procedia PDF Downloads 250
464 Production of Pig Iron by Smelting of Blended Pre-Reduced Titaniferous Magnetite Ore and Hematite Ore Using Lean Grade Coal

Authors: Bitan Kumar Sarkar, Akashdeep Agarwal, Rajib Dey, Gopes Chandra Das

Abstract:

The rapid depletion of high-grade iron ore (Fe2O3) has drawn attention to other sources of iron ore. Titaniferous magnetite ore (TMO) is a special type of magnetite ore with a high titania content (23.23% TiO2 in this case). Due to its high TiO2 content and high density, TMO cannot be treated by conventional smelting reduction. In the present work, the TMO was collected from the high-grade metamorphic terrain of the Precambrian Chotanagpur gneissic complex in the eastern part of India (Shaltora area, Bankura district, West Bengal), and the hematite ore was collected from Visakhapatnam Steel Plant (VSP), Visakhapatnam. At VSP, iron ore is received from the Bailadila mines, Chhattisgarh, of M/s National Mineral Development Corporation. The preliminary characterization of the TMO and hematite ore (HMO) was carried out by WDXRF, XRD, and FESEM analyses. Similarly, good-quality coal (mainly coking coal) is also being depleted fast. The basic purpose of this work is to find how lean-grade coal can be utilised along with TMO in smelting to produce pig iron. The lean-grade coal was characterised using TG/DTA, proximate, and ultimate analyses. The boiler-grade coal was found to contain 28.08% fixed carbon and 28.31% volatile matter. TMO fines (below 75 μm) and HMO fines (below 75 μm) were separately agglomerated with lean-grade coal fines (below 75 μm) in the form of briquettes using binders such as bentonite and molasses. These green briquettes were dried in an oven at 423 K for 30 min and then reduced isothermally in a tube furnace at 1323 K, 1373 K, and 1423 K for 30 min and 60 min. After reduction, the reduced briquettes were characterized by XRD and FESEM analyses. The best-reduced TMO and HMO samples were blended in three different TMO:HMO weight ratios of 1:4, 1:8, and 1:12. Chemical analysis of the three blended samples was carried out, and the degree of metallisation of iron was found to be 89.38%, 92.12%, and 93.12%, respectively. These three blended samples were briquetted using binders such as bentonite and lime. Thereafter, the blended briquettes were separately smelted in a raising hearth furnace at 1773 K for 30 min. The pig iron formed was characterized using XRD and microscopic analysis. It can be concluded that a 90% yield of pig iron can be achieved when the TMO:HMO blend ratio is 1:4.5; this means that, for a 90% yield, the maximum TMO that could be used in the blend is about 18%.
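The closing blend-ratio claim follows from simple arithmetic: in a 1:4.5 TMO:HMO blend, the TMO mass fraction is 1/(1 + 4.5). A minimal sketch (function name is illustrative):

```python
def tmo_fraction(tmo_parts, hmo_parts):
    """Mass fraction of TMO in a TMO:HMO blend of given parts."""
    return tmo_parts / (tmo_parts + hmo_parts)

round(100 * tmo_fraction(1, 4.5), 1)  # → 18.2, i.e. about 18% TMO in a 1:4.5 blend
```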

Keywords: briquetting reduction, lean grade coal, smelting reduction, TMO

Procedia PDF Downloads 291
463 Occurrence of Half-Metallicity by Sb-Substitution in Non-Magnetic Fe₂TiSn

Authors: S. Chaudhuri, P. A. Bhobe

Abstract:

Fe₂TiSn is a non-magnetic full Heusler alloy with a small gap (~0.07 eV) at the Fermi level. The electronic structure is highly symmetric in both spin bands, and a small percentage of hole or electron substitution can push the system towards spin polarization. A stable 100% spin polarization, or half-metallicity, is highly desirable in the field of spintronics, making Fe₂TiSn a very attractive material. However, this composition suffers from inherent anti-site disorder between the Fe and Ti sites. This paper reports on the method adopted to control the anti-site disorder and on the realization of a half-metallic ground state in Fe₂TiSn, achieved by chemical substitution. Here, Sb was substituted at the Sn site to obtain Fe₂TiSn₁₋ₓSbₓ compositions with x = 0, 0.1, 0.25, 0.5, and 0.6. All prepared compositions with x ≤ 0.6 exhibit long-range L2₁ ordering and a decrease in Fe–Ti anti-site disorder. The transport and magnetic properties of the Fe₂TiSn₁₋ₓSbₓ compositions were investigated as a function of temperature in the range 5 K to 400 K. Electrical resistivity, magnetization, and Hall voltage measurements were carried out. All the experimental results indicate the presence of a half-metallic ground state in the x ≥ 0.25 compositions. However, the value of the saturation magnetization is small, indicating the presence of compensated magnetic moments. The observed magnetic moment values are in close agreement with the Slater–Pauling rule for half-metallic systems. Magnetic interactions in Fe₂TiSn₁₋ₓSbₓ are understood from a local crystal structure perspective using extended X-ray absorption fine structure (EXAFS) spectroscopy. The changes in bond distances extracted from the EXAFS analysis can be correlated with the hybridization between constituent atoms and hence with the RKKY-type magnetic interactions that govern the magnetic ground state of these alloys. To complement the experimental findings, first-principles electronic structure calculations were also undertaken. The spin-polarized DOS complies with the experimental results for Fe₂TiSn₁₋ₓSbₓ. Substitution of Sb (an electron-excess element) at the Sn site shifts the majority spin band to the lower-energy side of the Fermi level, making the system 100% spin polarized and inducing long-range magnetic order in otherwise non-magnetic Fe₂TiSn. The present study concludes that a stable half-metallic system can be realized in Fe₂TiSn with ≥ 50% Sb substitution at the Sn site.
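The Slater–Pauling comparison can be made concrete: for half-metallic full Heusler alloys, the rule gives m = Zₜ − 24 μB per formula unit, where Zₜ is the total valence-electron count. A minimal sketch, assuming the usual valence counts Fe = 8, Ti = 4, Sn = 4, Sb = 5 (not taken from the abstract):

```python
def slater_pauling_moment(x):
    """Expected moment (Bohr magnetons per formula unit) for Fe2TiSn(1-x)Sb(x)
    from the full-Heusler Slater-Pauling rule m = Zt - 24.
    Assumed valence electrons: Fe = 8, Ti = 4, Sn = 4, Sb = 5."""
    z_total = 2 * 8 + 4 + (1 - x) * 4 + x * 5
    return z_total - 24

slater_pauling_moment(0.5)  # → 0.5: each Sb adds one electron to the Zt = 24 baseline
```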

Keywords: anti-site disorder, EXAFS, full Heusler alloy, half-metallic ferrimagnetism, RKKY interactions

Procedia PDF Downloads 104
462 Concept Mapping to Reach Consensus on an Antibiotic Smart Use Strategy Model to Promote and Support Appropriate Antibiotic Prescribing in a Hospital, Thailand

Authors: Phenphak Horadee, Rodchares Hanrinth, Saithip Suttiruksa

Abstract:

Inappropriate use of antibiotics occurs in several hospitals in Thailand. Drug use evaluation (DUE) is one strategy to overcome this difficulty. However, most community hospitals still carry out incomplete evaluations, resulting in the overuse of antibiotics at high cost. Consequently, drug-resistant bacteria have been on the rise due to inappropriate antibiotic use. The aim of this study was to involve stakeholders in conceptualizing, developing, and prioritizing a feasible intervention strategy to promote and support appropriate antibiotic prescribing in a community hospital in Thailand. Four antibiotics were studied: meropenem, piperacillin/tazobactam, amoxicillin/clavulanic acid, and vancomycin. The study was conducted over the one-year period between March 1, 2018, and March 31, 2019, in a community hospital in northeastern Thailand. Concept mapping was used with a purposive sample, including doctors (one of whom was an administrator), pharmacists, and nurses involved in the drug use evaluation of antibiotics. In-depth interviews with each participant and survey research were conducted to identify the causes of inappropriate antibiotic use under the drug use evaluation system. Seventy-seven percent of DUE records reported appropriate antibiotic prescribing, which still did not reach the goal of 80 percent appropriateness. Meropenem led the other antibiotics in inappropriate prescribing. The causes of the unsuccessful DUE program were classified into three themes: personnel; lack of public relations and communication; and unsupportive policy and impractical regulations. During the first meeting, stakeholders (n = 21) generated candidate interventions. During the second meeting, participants, who were almost the same group of people as in the first meeting (n = 21), were asked to independently rate the feasibility and importance of each idea and to categorize the ideas into relevant clusters to facilitate multidimensional scaling and hierarchical cluster analysis. The outputs of the analysis included the idea list, cluster list, point map, point rating map, cluster map, and cluster rating map. All of these were distributed to participants (n = 21) during the third meeting to reach consensus on an intervention model. The final proposed intervention strategy included 29 feasible and crucial interventions in seven clusters: development of an information technology system; establishing policy and translating it into an action plan; proactive public relations for the policy, action plan, and workflow; cooperation of multidisciplinary teams in drug use evaluation; work review and evaluation with performance reporting; promoting and developing professional and clinical skills for staff through training programs; and developing a practical drug use evaluation guideline for antibiotics. These interventions are relevant to, and fit, several intervention strategies for antibiotic stewardship programs in many international organizations, such as participation of a multidisciplinary team, developing information technology to support antibiotic smart use, and communication. The interventions were prioritized for implementation over a one-year period. Once the feasibility of each activity or plan is established, the proposed program could be applied and integrated into hospital policy after evaluation. The effective interventions could then be promoted to other community hospitals to promote and support antibiotic smart use.

Keywords: antibiotic, concept mapping, drug use evaluation, multidisciplinary teams

Procedia PDF Downloads 96
461 Case-Based Reasoning Application to Predict Geological Features at Site C Dam Construction Project

Authors: Shahnam Behnam Malekzadeh, Ian Kerr, Tyson Kaempffer, Teague Harper, Andrew Watson

Abstract:

The Site C hydroelectric dam is currently being constructed in north-eastern British Columbia on sub-horizontal sedimentary strata that dip approximately 15 meters from one bank of the Peace River to the other. More than 615 pressure sensors (vibrating wire piezometers) have been installed on bedding planes (BPs) since construction began, with over 80 more planned before project completion. These pressure measurements are essential for monitoring the stability of the rock foundation during and after construction and for dam safety purposes. BPs are identified by their clay gouge infilling, which varies in thickness from less than 1 mm to 20 mm and can be challenging to identify, as the core drilling process often disturbs or washes away the gouge material. Without depth predictions from nearby boreholes, stratigraphic markers, and downhole geophysical data, it is difficult to confidently identify BP targets for the sensors. In this paper, a case-based reasoning (CBR) method was used to develop an empirical model called the Bedding Plane Elevation Prediction (BPEP) to help geologists and geotechnical engineers predict geological features and bedding planes at new locations quickly and accurately. To develop the CBR model, a database was built from 64 pressure sensors already installed on key bedding planes BP25, BP28, and BP31 on the right bank, including bedding plane elevations and coordinates. Thirteen of the most recent cases (20%) were selected to validate and evaluate the accuracy of the developed model, with similarity defined as the distance between previous cases and recent cases used to predict the depth of significant BPs. The average difference between actual and predicted BP elevations for the above BPs was ±55 cm; 69% of predicted elevations were within ±79 cm of actual BP elevations, and 100% of predictions for new cases were within ±99 cm. Eventually, the actual results will be used to grow the database and improve BPEP so that it performs as a learning machine, predicting more accurate BP elevations for future sensor installations.
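The abstract defines similarity as the distance between cases but does not give BPEP's exact weighting scheme. As an illustration only, a nearest-case prediction with inverse-distance weighting might look like the sketch below; the function name and all coordinates are hypothetical:

```python
def predict_elevation(x, y, cases, k=3):
    """Case-based estimate of a bedding-plane elevation: inverse-distance-
    weighted mean of the k most similar (nearest) known cases.
    cases: list of (x, y, elevation) tuples."""
    ranked = sorted(cases, key=lambda c: ((c[0] - x) ** 2 + (c[1] - y) ** 2) ** 0.5)
    total_w = total = 0.0
    for cx, cy, elev in ranked[:k]:
        d = ((cx - x) ** 2 + (cy - y) ** 2) ** 0.5
        w = 1.0 / (d + 1e-9)          # closer cases count more
        total_w += w
        total += w * elev
    return total / total_w

# hypothetical sensor cases: (easting, northing, BP elevation in metres)
cases = [(0, 0, 410.0), (100, 0, 412.0), (0, 100, 411.0), (500, 500, 430.0)]
predict_elevation(10, 10, cases)  # dominated by the three nearby cases
```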

Keywords: case-based reasoning, geological feature, geology, piezometer, pressure sensor, core logging, dam construction

Procedia PDF Downloads 53
460 Limbic Involvement in Visual Processing

Authors: Deborah Zelinsky

Abstract:

The retina filters millions of incoming signals into a smaller number of exiting optic nerve fibers that travel to different portions of the brain. Most of the signals are for eyesight (called "image-forming" signals). However, there are other, faster signals that travel elsewhere and are not directly involved with eyesight (called "non-image-forming" signals). This article centers on the neurons of the optic nerve connecting to parts of the limbic system. Eye care providers currently look at parvocellular and magnocellular processing pathways without realizing that those are part of an enormous "galaxy" of all the body systems. Lenses modify both non-image-forming and image-forming pathways, taking A.M. Skeffington's seminal work one step further. Almost 100 years ago, he described the "Where am I?" (orientation), "Where is it?" (localization), and "What is it?" (identification) pathways. Now, among others, there is a "How am I?" (animation) and a "Who am I?" (inclination, motivation, imagination) pathway. Classic eye testing considers pupils and often assesses posture and motion awareness, but classical prescriptions often overlook limbic involvement in visual processing. The limbic system is composed of the hippocampus, amygdala, hypothalamus, and anterior nuclei of the thalamus. The optic nerve's limbic connections arise from the intrinsically photosensitive retinal ganglion cells (ipRGCs) through the retinohypothalamic tract (RHT). There are two main hypothalamic nuclei with direct photic inputs: the suprachiasmatic nucleus and the paraventricular nucleus. Other hypothalamic nuclei connected with retinal function, including mood regulation, appetite, and glucose regulation, are the supraoptic nucleus and the arcuate nucleus. The retinohypothalamic tract is often overlooked when we prescribe eyeglasses. Each person is different, but the lenses we choose influence this fast processing, which affects each patient's aiming and focusing abilities. These signals arise from the ipRGCs, which were discovered only about 20 years ago, and current practice does not yet address the campana retinal interneurons, discovered only 2 years ago. As eye care providers, we are unknowingly altering factors such as lymph flow, glucose metabolism, appetite, and sleep cycles in our patients. It is important to know what we are prescribing as visual processing evaluations expand beyond 20/20 central eyesight.

Keywords: neuromodulation, retinal processing, retinohypothalamic tract, limbic system, visual processing

Procedia PDF Downloads 42
459 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

We present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can then be done in a single, semi-analytic step thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, driven primarily by the number of factors rather than the number of obligors, as is the case in Monte Carlo simulation. The limitation of this method lies in the "curse of dimension" intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method cover a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even risk types other than credit risk.
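The abstract does not give the model-specific characteristic functions, but the core COS step, recovering a distribution function from its Fourier transform via a cosine expansion, can be sketched generically. Below it is applied to a standard normal characteristic function purely as an illustration; the truncation range [a, b] and series length N are assumptions, not values from the paper:

```python
import numpy as np
from math import erf, sqrt

def cos_cdf(x, cf, a=-10.0, b=10.0, N=256):
    """Recover F(x) from a characteristic function cf via the COS method:
    expand the density in a Fourier-cosine series on [a, b], then integrate
    each cosine term analytically from a to x."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    # Fourier-cosine coefficients of the density (first term gets half weight)
    A = (2.0 / (b - a)) * np.real(cf(u) * np.exp(-1j * u * a))
    A[0] *= 0.5
    # analytic integrals of each cosine term over [a, x]
    terms = np.empty(N)
    terms[0] = x - a
    terms[1:] = np.sin(u[1:] * (x - a)) / u[1:]
    return float(np.dot(A, terms))

phi_normal = lambda u: np.exp(-0.5 * u**2)   # standard normal characteristic function
```

For the standard normal case, `cos_cdf(0.0, phi_normal)` reproduces Φ(0) = 0.5 to high accuracy; in the paper's setting, `cf` would be the (conditional) characteristic function of the portfolio loss.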

Keywords: credit portfolio, risk allocation, factor-copula model, COS method, Fourier method

Procedia PDF Downloads 119
458 3D-Printing Compressible Macroporous Polymer Using Poly-Pickering-High Internal Phase Emulsions as Micromixer

Authors: Hande Barkan-Ozturk, Angelika Menner, Alexander Bismarck

Abstract:

Microfluidic mixing technology has grown rapidly in the past few years due to its many advantages over macro-scale mixing, especially the small internal volumes involved and the very high surface-to-volume ratio. The Reynolds number identifies whether mixing is governed by laminar or turbulent flow; mixing with very fast kinetics can therefore be achieved by shrinking the channel dimensions to decrease the Reynolds number so that laminar flow is established. Moreover, by using obstacles in the micromixer, the mixing length and the contact area between the species are increased. The channel geometry and its surface properties are therefore of great importance for achieving satisfactory mixing. Since polymerised high internal phase emulsions (polyHIPEs) have more than 74% porosity, with pores connected to each other by pore throats that give high permeability, they are ideal candidates for building a micromixer. HIPE precursors are commonly produced using an overhead stirrer to obtain relatively large amounts of emulsion in a batch process. However, we demonstrate that a desired amount of emulsion can be prepared continuously with a micromixer built from polyHIPE, and that such a HIPE can subsequently be employed as the ink in a 3D printing process. To produce the micromixer, a poly-Pickering(St-co-DVB)HIPE with 80% porosity was prepared with modified silica particles as the stabilizer and the surfactant Hypermer 2296 to obtain an open porous structure; after surface coating, three 1/16″ PTFE tubes were fitted, two to deliver the continuous phase (CP) and internal phase (IP) and one to collect the emulsion. The two phases were then injected at a CP:IP ratio of 1:3 with syringe dispensers, and a highly viscoelastic H(M)IPE, usable as an ink in a 3D printing process, was collected continuously. After polymerisation of the resultant emulsion, the polyH(M)IPE had an interconnected porous structure identical to that of the monolithic polyH(M)IPE, indicating that the emulsion can be prepared continuously with a poly-Pickering-HIPE micromixer and used to print a desired pattern with a 3D printer. Moreover, the morphological properties of the emulsion can be adjusted by changing the flow ratio, flow speed, and structure of the micromixer.
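The channel-scaling argument can be made concrete with the definition Re = ρvL/μ; the fluid properties and channel dimensions below are illustrative assumptions, not values from the study:

```python
def reynolds_number(rho, velocity, length, mu):
    """Re = rho * v * L / mu (dimensionless); L is the hydraulic diameter."""
    return rho * velocity * length / mu

# water (rho ≈ 1000 kg/m³, mu ≈ 1.0e-3 Pa·s) in a 500 µm channel at 1 cm/s
re = reynolds_number(1000.0, 0.01, 500e-6, 1.0e-3)  # → 5.0, far below the ~2300 turbulence threshold
```

Shrinking the channel by another order of magnitude lowers Re proportionally, which is why microscale mixers operate firmly in the laminar regime.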

Keywords: 3D-Printing, emulsification, macroporous polymer, micromixer, polyHIPE

Procedia PDF Downloads 135
457 Estimation of Small Hydropower Potential Using Remote Sensing and GIS Techniques in Pakistan

Authors: Malik Abid Hussain Khokhar, Muhammad Naveed Tahir, Muhammad Amin

Abstract:

Energy demand has increased manifold due to the growing population, urban sprawl, and rapid socio-economic improvements. Low water capacity in dams for sustained hydroelectric generation, land cover, and land use are key parameters that constrain further energy production. The overall installed hydropower capacity of Pakistan is more than 35,000 MW, whereas Pakistan produces up to 17,000 MW against a requirement of more than 22,000 MW, resulting in a shortfall of 5,000–7,000 MW. There is therefore a dire need to develop small hydropower to meet upcoming requirements. In this regard, heavy rainfall and snow-fed, fast-flowing perennial tributaries and streams in the northern mountain regions of Pakistan offer enormous hydropower potential throughout the year. Rivers flowing in KP (Khyber Pakhtunkhwa) province, GB (Gilgit-Baltistan), and AJK (Azad Jammu & Kashmir) possess sufficient water availability for rapid energy growth. Against this backdrop, small hydropower plants are considered very suitable as a greener, more sustainable power option for the development of such regions. The aim of this study is to identify potential sites for small hydropower plants and to map stream distribution according to the stream network available in the basins of the study area. The proposed methodology focuses on site selection for maximum hydropower potential for hydroelectric generation, using the well-established GIS-based hydrological run-off model SWAT on the Neelum, Kunhar, and Dor River basins. For validation of the results, the NDWI will be computed to show water concentration in the study area, overlaid on a geospatially enhanced DEM. This study presents an analysis of basins, watersheds, stream links, and flow directions with slope elevation for hydropower potential, with the goal of meeting the increasing demand for electricity by installing small hydropower stations. Later, this study can also benefit adjacent regions in selecting sites for the installation of similar small power plants.
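Once discharge Q and head H are estimated for a candidate site from the stream network and DEM, its power potential follows from the standard hydropower relation P = ηρgQH. A minimal sketch with purely illustrative values (efficiency and site figures are assumptions):

```python
def hydro_power_kw(q_m3s, head_m, efficiency=0.85, rho=1000.0, g=9.81):
    """P = eta * rho * g * Q * H, returned in kilowatts.
    q_m3s: discharge in m³/s; head_m: gross head in metres."""
    return efficiency * rho * g * q_m3s * head_m / 1000.0

hydro_power_kw(2.0, 25.0)  # ≈ 417 kW for a hypothetical 2 m³/s, 25 m head site
```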

Keywords: energy, stream network, basins, SWAT, evapotranspiration

Procedia PDF Downloads 189
456 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection

Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad

Abstract:

The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution involves electrical segmentation: the creation of coherence zones within which electrical disturbances mainly remain. By means of coherent electrical zones, it becomes possible to focus solely on a sub-zone, reducing the range of possibilities and helping to manage uncertainty. This allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can serve various applications, such as electrical control, minimizing electrical losses, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design zones that are robust under various circumstances. This problem can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, a resilient segmentation can be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that uses a unified representation to compute a flattening of all layers. This unified situation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation. Our experiments show when robust electrical segmentation is beneficial and in which contexts.
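The flatten-then-cluster idea can be sketched as follows. The edge-weight aggregation and threshold rule below are illustrative assumptions; the paper's penalized unified representation is more involved than this connected-components stand-in:

```python
from collections import defaultdict

def flatten_layers(layers):
    """Aggregate a multiplex graph by summing edge weights across layers.
    layers: list of dicts {(u, v): weight} sharing one vertex set."""
    flat = defaultdict(float)
    for layer in layers:
        for (u, v), w in layer.items():
            flat[frozenset((u, v))] += w
    return flat

def robust_zones(vertices, layers, threshold):
    """Keep only edges that stay strong across layers, then return the
    connected components as candidate coherent zones."""
    adj = defaultdict(set)
    for edge, w in flatten_layers(layers).items():
        if w >= threshold:
            u, v = tuple(edge)
            adj[u].add(v)
            adj[v].add(u)
    seen, zones = set(), []
    for start in vertices:               # BFS/DFS over the thresholded graph
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        zones.append(comp)
    return zones
```

For example, an edge that is strong in every layer survives the threshold and keeps its two buses in one zone, while an edge that weakens in some operating situations is dropped, splitting the zones robustly.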

Keywords: community detection, electrical segmentation, multiplex graph, power grid

Procedia PDF Downloads 45
455 Multi-Walled Carbon Nanotubes Doped Poly (3,4 Ethylenedioxythiophene) Composites Based Electrochemical Nano-Biosensor for Organophosphate Detection

Authors: Navpreet Kaur, Himkusha Thakur, Nirmal Prabhakar

Abstract:

One of the most publicized and controversial issues in crop production is the use of agrichemicals, also known as pesticides. Many reports show that organophosphate (OP) insecticides, among the broad range of pesticides, are mainly involved in acute and chronic poisoning cases. Therefore, detection of OPs is essential for health protection and for food and environmental safety. In our study, a nanocomposite of poly(3,4-ethylenedioxythiophene) (PEDOT) and multi-walled carbon nanotubes (MWCNTs) was deposited electrochemically onto the surface of fluorine-doped tin oxide (FTO) sheets for the analysis of the OP malathion. The MWCNTs were -COOH functionalized for covalent binding with the amino groups of the AChE enzyme. The PEDOT-MWCNT films exhibited excellent conductivity, enabled fast transfer kinetics, and provided a favourable, biocompatible microenvironment for AChE, allowing significant malathion detection. The prepared PEDOT-MWCNT/FTO and AChE/PEDOT-MWCNT/FTO nano-biosensors were characterized by Fourier transform infrared spectrometry (FTIR), field emission scanning electron microscopy (FE-SEM), and electrochemical studies. The electrochemical studies were carried out using cyclic voltammetry (CV), differential pulse voltammetry (DPV), and electrochemical impedance spectroscopy (EIS). Various parameters were optimized, including pH (7.5), AChE concentration (50 mU), substrate concentration (0.3 mM), and inhibition time (10 min). The detection limit for malathion was calculated to be 1 fM within the linear range of 1 fM to 1 µM. The activity of the inhibited AChE enzyme was restored to 98% of its original value by treatment with 2-pyridine aldoxime methiodide (2-PAM) (5 mM) for 11 min; the oxime 2-PAM is able to remove malathion from the active site of AChE by means of a trans-esterification reaction. The storage stability and reusability of the prepared nano-biosensor were observed to be 30 days and seven uses, respectively. The application of the developed nano-biosensor was also evaluated on a spiked lettuce sample; recoveries of malathion from the spiked sample ranged between 96% and 98%. The low detection limit makes the developed nano-biosensor a reliable, sensitive, and low-cost option.

Keywords: PEDOT-MWCNT, malathion, organophosphates, acetylcholinesterase, nano-biosensor, oxime (2-PAM)

Procedia PDF Downloads 407
454 Applications of Artificial Intelligence (AI) in Cardiac Imaging

Authors: Angelis P. Barlampas

Abstract:

The purpose of this study is to inform the reader about the various applications of artificial intelligence (AI) in cardiac imaging. AI is growing fast, and its role is crucial in medical specialties that use large amounts of digital data, which are very difficult or even impossible for human beings, and especially doctors, to manage. Artificial intelligence (AI) refers to the ability of computers to mimic human cognitive function, performing tasks such as learning, problem-solving, and autonomous decision-making based on digital data. Whereas AI describes the concept of using computers to mimic human cognitive tasks, machine learning (ML) describes the category of algorithms that enable most current applications described as AI. Some current applications of AI in cardiac imaging are as follows. Ultrasound: automated segmentation of cardiac chambers across five common views, with consequent quantification of chamber volumes/mass, ascertaining of ejection fraction, and determination of longitudinal strain through speckle tracking; determining the severity of mitral regurgitation (accuracy > 99% for every degree of severity); identifying myocardial infarction; distinguishing between athlete's heart and hypertrophic cardiomyopathy, as well as restrictive cardiomyopathy and constrictive pericarditis; and predicting all-cause mortality. CT: reducing radiation doses; calculating the calcium score; diagnosing coronary artery disease (CAD); predicting all-cause 5-year mortality; and predicting major cardiovascular events in patients with suspected CAD. MRI: segmenting cardiac structures and infarct tissue; calculating cardiac mass and function parameters; distinguishing between patients with myocardial infarction and control subjects, which could potentially reduce costs by precluding the need for gadolinium-enhanced CMR; and predicting 4-year survival in patients with pulmonary hypertension. Nuclear imaging: classifying normal and abnormal myocardium in CAD; detecting locations with abnormal myocardium; and predicting cardiac death. ML was comparable to or better than two experienced readers in predicting the need for revascularization. AI is emerging as a helpful tool in cardiac imaging and for doctors who cannot manage the ever-increasing demand for examinations such as ultrasound, computed tomography, MRI, and nuclear imaging studies.

Keywords: artificial intelligence, cardiac imaging, ultrasound, MRI, CT, nuclear medicine

Procedia PDF Downloads 44
453 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints on extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground plane projection, with or without terrain as a component, all to better view SAR data in an image domain comparable to what a human would view, to ease interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.), the data are then range-compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time-history data for the reflectivity values for each pulse, summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing have finally allowed this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D point cloud size. Backprojection processing algorithms are embarrassingly parallel, since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for accurate reflectivity representation of a scene.
Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes any interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow for SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
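The per-point accumulation described above can be sketched in a few lines. This is a toy, CPU-side illustration of the idea only; the array shapes, the nearest-bin interpolation, and all names are assumptions, not the authors' implementation:

```python
import numpy as np

def backproject(rc_data, antenna_pos, range_bins, points):
    """Toy time-domain backprojection: for every 3D point, sum the
    range-compressed reflectivity contributed by every pulse.

    rc_data:     (n_pulses, n_bins) complex range-compressed pulses
    antenna_pos: (n_pulses, 3) platform position at each pulse
    range_bins:  (n_bins,) range (m) represented by each bin
    points:      (n_points, 3) reference voxels or point cloud
    """
    image = np.zeros(len(points), dtype=complex)
    for pulse, pos in zip(rc_data, antenna_pos):
        # distance from this pulse's antenna position to every 3D point
        r = np.linalg.norm(points - pos, axis=1)
        # nearest-bin lookup of the reflectivity at that range; each point's
        # sum is independent of all others, hence the embarrassing parallelism
        idx = np.clip(np.searchsorted(range_bins, r), 0, len(range_bins) - 1)
        image += pulse[idx]
    return np.abs(image)
```

Because the accumulation for each 3D point is independent, a GPU version simply assigns one thread per point, which is the parallelism the abstract exploits.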

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 42
452 Treatment of Full-Thickness Rotator Cuff Tendon Tear Using Umbilical Cord Blood-Derived Mesenchymal Stem Cells and Polydeoxyribonucleotides in a Rabbit Model

Authors: Sang Chul Lee, Gi-Young Park, Dong Rak Kwon

Abstract:

Objective: The aim of this study was to investigate the regenerative effects of ultrasound (US)-guided injection of human umbilical cord blood-derived mesenchymal stem cells (UCB-MSCs) and/or polydeoxyribonucleotide (PDRN) in a chronic traumatic full-thickness rotator cuff tendon tear (FTRCTT) in a rabbit model. Material and Methods: Rabbits (n = 32) were allocated into 4 groups. After a 5-mm FTRCTT just proximal to the insertion site on the subscapularis tendon was created by excision, the wound was immediately covered by a silicone tube to prevent natural healing. After 6 weeks, 4 injections (0.2 mL normal saline, G1; 0.2 mL PDRN, G2; 0.2 mL UCB-MSCs, G3; and 0.2 mL UCB-MSCs with 0.2 mL PDRN, G4) were administered into the FTRCTT under US guidance. We evaluated gross morphologic changes in all rabbits after sacrifice. Masson's trichrome, anti-type 1 collagen antibody, bromodeoxyuridine, proliferating cell nuclear antigen, vascular endothelial growth factor and platelet endothelial cell adhesion molecule staining were performed to evaluate histological changes. Motion analysis was also performed. Results: The gross morphologic mean tendon tear size in G3 and G4 was significantly smaller than that in G1 and G2 (p < .05). However, there were no significant differences in tendon tear size between G3 and G4. In G4, newly regenerated collagen type 1 fibers, proliferating cell activity, angiogenesis, walking distance, fast walking time, and mean walking speed were greater than in the other three groups on histological examination and motion analysis. Conclusion: Co-injection of UCB-MSCs and PDRN was more effective than UCB-MSCs injection alone in histological and motion analysis in a rabbit model of chronic traumatic FTRCTT. However, there was no significant difference in gross morphologic change of tendon tear between UCB-MSCs with and without PDRN injection. The results of this study regarding the combination of UCB-MSCs and PDRN merit additional investigation.

Keywords: mesenchymal stem cell, umbilical cord, polydeoxyribonucleotides, shoulder, rotator cuff, ultrasonography, injections

Procedia PDF Downloads 164
451 Genetic Improvement Potential for Wood Production in Melaleuca cajuputi

Authors: Hong Nguyen Thi Hai, Ryota Konda, Dat Kieu Tuan, Cao Tran Thanh, Khang Phung Van, Hau Tran Tin, Harry Wu

Abstract:

Melaleuca cajuputi is a moderately fast-growing species and is considered a multi-purpose tree, as it provides fuelwood, piles and frame poles for construction, leaf essential oil and honey. It occurs in Australia, Papua New Guinea, and South-East Asia. M. cajuputi plantations can be harvested on 6-7-year rotations for wood products. Its timber can also be used for pulp and paper, fiber and particle board, producing quality charcoal and potentially sawn timber. However, most reported M. cajuputi breeding programs have focused on oil production rather than wood production. In this study, a breeding program for M. cajuputi aimed at improving wood production was examined by estimating genetic parameters for growth (tree height, diameter at breast height (DBH), and volume), stem form, stiffness (modulus of elasticity (MOE)), bark thickness and bark ratio in a half-sib family progeny trial including 80 families in the Mekong Delta of Vietnam. MOE is one of the key wood properties of interest to the wood industry. Wood stiffness was measured non-destructively and indirectly via acoustic velocity using a FAKOPP Microsecond Timer, a measurement notably unaffected by bark mass. Narrow-sense heritability for the seven traits ranged from 0.13 to 0.27 at age 7 years. MOE and stem form had positive genetic correlations with growth, while the negative correlation between bark ratio and growth was also favorable. Breeding for simultaneous improvement of multiple traits, i.e., faster growth with higher MOE and a reduced bark ratio, should therefore be possible in M. cajuputi. Index selection based on volume and MOE showed genetic gains of 31% in volume, 6% in MOE and 13% in stem form. In addition, heritability and age-age genetic correlations for growth traits increased with time, and the optimal early selection age for growth of M. cajuputi based on DBH alone was 4 years.
Selective thinning resulted in an increase in heritability due to a considerable reduction in phenotypic variation with little effect on genetic variation.
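The heritabilities and gains quoted above follow standard quantitative-genetics relationships for half-sib trials; a minimal sketch of those two relationships (the variance components and selection differential below are illustrative numbers, not the trial's estimates):

```python
def halfsib_heritability(var_family, var_within):
    """Narrow-sense h^2 from a half-sib progeny trial: additive variance
    is estimated as four times the between-family variance component."""
    v_additive = 4.0 * var_family
    v_phenotypic = var_family + var_within
    return v_additive / v_phenotypic

def expected_gain(h2, selection_differential):
    """Breeder's equation: response to selection R = h^2 * S."""
    return h2 * selection_differential
```

For example, a between-family component of 1 against a within-family component of 19 gives h² = 4/20 = 0.2, within the 0.13-0.27 range reported above.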

Keywords: acoustic velocity, age-age correlation, bark thickness, heritability, Melaleuca cajuputi, stiffness, thinning effect

Procedia PDF Downloads 144
450 Estimation of Hydrogen Production from PWR Spent Fuel Due to Alpha Radiolysis

Authors: Sivakumar Kottapalli, Abdesselam Abdelouas, Christoph Hartnack

Abstract:

Spent nuclear fuel generates a mixed field of ionizing radiation in the surrounding water. This radiation field is generally dominated by gamma rays and a limited flux of fast neutrons. The fuel cladding effectively attenuates beta and alpha particle radiation. A small fraction of spent nuclear fuel exhibits some degree of cladding penetration due to pitting corrosion and mechanical failure. Breaches in the fuel cladding allow the exposure of small volumes of water in the cask to alpha and beta ionizing radiation. The safety of the transport of radioactive material is assured by the package complying with the IAEA Requirements for the Safe Transport of Radioactive Material SSR-6. It is of high interest to avoid the generation of hydrogen inside the cavity, which may lead to an explosive mixture. The risk of hydrogen production, along with other radiolysis gases, should be analyzed for a typical spent fuel for safety reasons. This work aims to perform a realistic study of the production of hydrogen by radiolysis assuming the most penalizing initial conditions. It consists of calculating the radionuclide inventory of a pellet, taking into account burn-up and decay. Westinghouse 17X17 PWR fuel has been chosen, and data have been analyzed for different sets of enrichment, burn-up, cycles of irradiation and storage conditions. The inventory is calculated as the entry point for the simulation of hydrogen production using the radiolysis kinetics code MAKSIMA-CHEMIST. Dose rates decrease strongly within ~45 μm from the fuel surface towards the solution (water) in the case of alpha radiation, while the decrease is slower for beta and even slower for gamma radiation. Calculations are carried out to obtain spectra as a function of time. Radiation dose rate profiles are taken as the input data for the iterative calculations. The hydrogen yield has been found to be around 0.02 mol/L.
Calculations have also been performed for a realistic scenario considering a capsule containing the spent fuel rod, and the resulting hydrogen yield is discussed. Experiments are in progress to validate the hydrogen production rate using a cyclotron at > 5 MeV (at ARRONAX, Nantes).
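As a zeroth-order check of yields of this kind, radiolytic production scales with the radiolytic yield (G-value) and the absorbed dose. A hedged back-of-the-envelope sketch follows; the G-value, dose rate, and time are placeholders, not the paper's inputs:

```python
def h2_concentration(g_value, dose_rate, time_s, density=1.0):
    """Zeroth-order estimate of [H2] in mol/L as G * dose * density.

    g_value:   radiolytic yield in mol/J (for alpha radiolysis of water
               this is of order 1e-7 mol/J; treat the value as an assumption)
    dose_rate: absorbed dose rate in Gy/s (J/kg/s)
    time_s:    irradiation time in seconds
    density:   solution density in kg/L
    """
    dose = dose_rate * time_s          # total absorbed dose, J/kg
    return g_value * dose * density    # mol/L
```

With, say, 1e-7 mol/J, 0.2 Gy/s and 1e6 s, this returns 0.02 mol/L, matching the order of magnitude reported above; it ignores back-reactions, which is exactly what a kinetics code like MAKSIMA-CHEMIST accounts for.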

Keywords: radiolysis, spent fuel, hydrogen, cyclotron

Procedia PDF Downloads 488
449 Climate Change Adaptation Success in a Low Income Country Setting, Bangladesh

Authors: Tanveer Ahmed Choudhury

Abstract:

Background: Bangladesh is one of the largest deltas in the world, with high population density and high rates of poverty and illiteracy. 80% of the country lies on low-lying floodplains, leaving it one of the countries most vulnerable to the adverse effects of climate change: sea level rise, cyclones and storms, salinity intrusion, rising temperatures and heavy monsoon downpours. Such climatic events already limit economic development in the country. Although Bangladesh has had little responsibility in contributing to global climatic change, it is vulnerable to both its direct and indirect impacts. Real threats include reduced agricultural production, worsening food security, increased incidence of flooding and drought, spreading disease and an increased risk of conflict over scarce land and water resources. Currently, 8.3 million Bangladeshis live in cyclone high-risk areas. However, by 2050 this is expected to grow to 20.3 million people if proper adaptive actions are not taken. Under a high-emissions scenario, an additional 7.6 million people will be exposed to very high salinity by 2050 compared to current levels. It is also projected that an average of 7.2 million people will be affected by flooding due to sea level rise every year between 2070 and 2100, and if global emissions decrease rapidly and adaptation interventions are taken, the population affected by flooding could be limited to only about 14,000 people. To combat the adverse effects of climate change, the Bangladesh government has initiated many adaptive measures, especially in the infrastructure and renewable energy sectors. The government is investing substantial funds and has initiated many projects that have proved very successful. Objectives: The objective of this paper is to describe some successful measures initiated by the Bangladesh government in its effort to make the country climate resilient. Methodology: Review of the operational plans and activities of different relevant ministries of the Bangladesh government.
Result: The following projects, programs and activities are considered best practices for climate change adaptation success in Bangladesh: 1. The Infrastructure Development Company Limited (IDCOL); 2. The Climate Change and Health Promotion Unit (CCHPU); 3. The Climate Change Trust Fund (CCTF); 4. The Community Climate Change Project (CCCP); 5. The Health, Population, Nutrition Sector Development Program (HPNSDP, 2011-2016), "Climate Change and Environmental Issues"; 6. Ministry of Health and Family Welfare, Bangladesh and WHO collaboration: the National Adaptation Plan and "Building adaptation to climate change in health in least developed countries through resilient WASH"; 7. The COP-21 "Climate and health country profile 2015: Bangladesh". Conclusion: Due to a vast coastline, low-lying land and an abundance of rivers, Bangladesh is highly vulnerable to climate change. Having extensive experience of facing natural disasters, Bangladesh has developed a successful adaptation program, which has led to a significant reduction in casualties from extreme weather events. In a low-income country setting, Bangladesh has successfully implemented various projects and initiatives to combat future climate change challenges.

Keywords: climate, change, success, Bangladesh

Procedia PDF Downloads 215
448 High School Gain Analytics from National Assessment Program – Literacy and Numeracy and Australian Tertiary Admission Rank Linkage

Authors: Andrew Laming, John Hattie, Mark Wilson

Abstract:

Nine Queensland independent high schools provided deidentified student-matched ATAR and NAPLAN data for all 1217 ATAR graduates since 2020 who also sat NAPLAN at the school. Graduating cohorts from the nine schools contained a mean of 100 ATAR graduates with previous NAPLAN data from their school. Excluded were vocational students (mean = 27) and any ATAR graduates without NAPLAN data (mean = 20). Based on Index of Community Socio-Educational Advantage (ICSEA) prediction, all schools had larger than predicted proportions of their students graduating with ATARs. An additional 173 students (14%) did not release their ATARs to their school, requiring this data to be inferred by schools. Gain was established by first converting each student's strongest NAPLAN domain to a statewide percentile, then subtracting this result from the final ATAR. The resulting 'percentile shift' was corrected for plausible ATAR participation at each NAPLAN level. Strongest NAPLAN domain had the highest correlation with ATAR (R2 = 0.58). RESULTS: School mean NAPLAN scores fitted ICSEA closely (R2 = 0.97). Schools achieved a mean cohort gain of two ATAR rankings, but only 66% of students gained. This ranged from 46% of top-NAPLAN-decile students gaining, rising to 75% achieving gains outside the top decile. The 54% of top-decile students whose ATAR fell short of prediction lost a mean 4.0 percentiles (or 6.2 percentiles prior to correction for regression to the mean). 71% of students in smaller schools gained, compared to 63% in larger schools. NAPLAN variability in each of the 13 ICSEA-1100 cohorts was 17%, with both intra-school and inter-school variation of these values extremely low (0.3% to 1.8%). Mean ATAR change between years in each school was just 1.1 ATAR ranks. This suggests consecutive school cohorts and ICSEA-similar schools share very similar distributions and outcomes over time.
Quantile analysis of the NAPLAN/ATAR relationship revealed heteroscedasticity, but splines offered little additional benefit over simple linear regression. The NAPLAN/ATAR R2 was 0.33. DISCUSSION: Standardised data like NAPLAN and ATAR offer educators a simple no-cost progression metric to analyse performance in conjunction with their internal test results. Change is expressed in percentiles, or ATAR shift per student, which is intuitive to laypeople. Findings may also reduce ATAR/vocational stream mismatch, reveal the proportions of cohorts meeting or falling short of expectation, and demonstrate by how much. Finally, 'crashed' ATARs well below expectation are revealed, which schools can reasonably work to minimise. The percentile-shift method is neither value-add nor a growth percentile. In the absence of exit NAPLAN testing, this metric is unable to discriminate academic gain from legitimate ATAR-maximising strategies. But by controlling for ICSEA, ATAR proportion variation and student mobility, it uncovers progression-to-ATAR metrics which are not currently publicly available. However achieved, ATAR maximisation is a sought-after private good. So long as standardised nationwide data is available, this analysis offers useful analytics for educators and reasonable predictivity when counselling subsequent cohorts about their ATAR prospects.
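The uncorrected percentile-shift step can be sketched directly. This is a simplified reconstruction of the method described above with hypothetical data and names; the correction for plausible ATAR participation is omitted:

```python
import numpy as np

def percentile_shift(best_naplan, atars, statewide_scores):
    """Per student: ATAR minus the statewide percentile of the strongest
    NAPLAN domain score (before correcting for ATAR participation)."""
    percentiles = np.array([
        100.0 * np.mean(statewide_scores < s) for s in best_naplan
    ])
    return atars - percentiles
```

A student at the statewide 90th percentile on their strongest domain who graduates with an ATAR of 95 records a shift of +5; negative shifts flag the 'crashed' ATARs mentioned above.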

Keywords: NAPLAN, ATAR, analytics, measurement, gain, performance, data, percentile, value-added, high school, numeracy, reading comprehension, variability, regression to the mean

Procedia PDF Downloads 36
447 A Next-Generation Pin-On-Plate Tribometer for Use in Arthroplasty Material Performance Research

Authors: Lewis J. Woollin, Robert I. Davidson, Paul Watson, Philip J. Hyde

Abstract:

Introduction: In-vitro testing of arthroplasty materials is of paramount importance when ensuring that they can withstand the performance requirements encountered in-vivo. One common machine used for in-vitro testing is the pin-on-plate tribometer, an early-stage screening device that generates data on the wear characteristics of arthroplasty bearing materials. These devices test vertically loaded rotating cylindrical pins acting against reciprocating plates, representing the bearing surfaces. In this study, a pin-on-plate machine has been developed that provides several improvements over current technology, thereby progressing arthroplasty bearing research. Historically, pin-on-plate tribometers have been used to investigate the performance of arthroplasty bearing materials under conditions commonly encountered during a standard gait cycle; nominal operating pressures of 2-6 MPa and an operating frequency of 1 Hz are typical. There has been increased interest in using pin-on-plate machines to test more representative in-vivo conditions, due to the drive to test 'beyond compliance', as well as their testing speed and economic advantages over hip simulators. Current pin-on-plate machines do not accommodate the increased performance requirements associated with more extreme kinematic conditions; therefore, a next-generation pin-on-plate tribometer has been developed to bridge the gap between current technology and future research requirements. Methodology: The design was driven by several physiologically relevant requirements. Firstly, an increased loading capacity was essential to replicate the peak pressures that occur in the natural hip joint during running and chair-rising, as well as to increase the understanding of wear rates in obese patients. Secondly, the introduction of mid-cycle load variation was of paramount importance, as this allows an approximation of the loads present in a gait cycle to be applied and the fatigue properties of materials to be tested.
Finally, the rig must be validated against previous-generation pin-on-plate and arthroplasty wear data. Results: The resulting machine is a twelve-station device split into three sets of four stations, providing an increased testing capacity compared to most current pin-on-plate tribometers. The loading of the pins is generated by a pneumatic system, which can produce contact pressures of up to 201 MPa on a 3.2 mm² round pin face. This greatly exceeds the contact pressures currently reported in the literature and opens new research avenues, such as testing rim wear of mal-positioned hip implants. Additionally, the contact pressure of each set can be changed independently of the others, allowing multiple loading conditions to be tested simultaneously. Using pneumatics also allows the applied pressure to be switched ON/OFF mid-cycle, another feature not currently reported elsewhere, which allows for investigation into intermittent loading and material fatigue. The device is currently undergoing a series of validation tests using ultra-high-molecular-weight polyethylene pins and 316L stainless steel plates (polished to Ra < 0.05 µm). The operating pressures will be between 2 and 6 MPa at 1 Hz, allowing for validation of the machine against results previously reported in the literature. The successful production of this next-generation pin-on-plate tribometer will, following its validation, unlock multiple previously unavailable research avenues.
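As a sanity check of the quoted loading capacity, contact pressure is simply force over area, so the pneumatic load each station must deliver per pin follows directly (the pressure and area are taken from the abstract; the force is derived, not quoted):

```python
# values from the abstract; force per pin is derived from pressure = force / area
area_m2 = 3.2e-6        # 3.2 mm^2 round pin face
pressure_pa = 201e6     # 201 MPa peak contact pressure
force_n = pressure_pa * area_m2
print(round(force_n, 1))  # about 643 N per pin
```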

Keywords: arthroplasty, mechanical design, pin-on-plate, total joint replacement, wear testing

Procedia PDF Downloads 71
446 Product Separation of Green Processes and Catalyst Recycling of a Homogeneous Polyoxometalate Catalyst Using Nanofiltration Membranes

Authors: Dorothea Voß, Tobias Esser, Michael Huber, Jakob Albert

Abstract:

The growing world population and the associated increase in demand for energy and consumer goods, as well as increasing waste production, require the development of sustainable processes. In addition, the increasing environmental awareness of our society is a driving force behind the requirement that processes be as resource- and energy-efficient as possible. In this context, the use of polyoxometalate catalysts (POMs) has emerged as a promising approach for the development of green processes. POMs are bifunctional polynuclear metal-oxo-anion clusters characterized by strong Brønsted acidity, high proton mobility combined with fast multi-electron transfer, and a tunable redox potential. In addition, POMs are soluble in many common solvents and exhibit resistance to hydrolytic and oxidative degradation. Owing to their structure and excellent physicochemical properties, POMs are efficient acid and oxidation catalysts that have attracted much attention in recent years; oxidation processes with molecular oxygen are particularly noteworthy. However, the fact that POM catalysts are homogeneous poses a challenge for the downstream processing of product solutions and the recycling of the catalysts. In this regard, nanofiltration membranes have gained increasing interest in recent years, particularly due to their relative sustainability advantage over other technologies and their unique properties, such as increased selectivity towards multivalent ions. In order to establish an efficient downstream process for the highly selective separation of homogeneous POM catalysts from aqueous solutions using nanofiltration membranes, a laboratory-scale membrane system was designed and constructed. By varying various process parameters, a sensitivity analysis was performed on a model system to develop an optimized method for the recovery of POM catalysts. From this, process-relevant key figures, such as the rejection of various system components, were derived.
These results form the basis for further experiments on other systems to test the transferability to several separation tasks with different POMs and products, as well as for recycling experiments with the catalysts in laboratory-scale processes.
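One of the process-relevant key figures mentioned above, the rejection of a system component, has a standard textbook definition that can be stated compactly (the concentrations in the example are illustrative, not the study's measurements):

```python
def observed_rejection(c_permeate, c_feed):
    """Observed rejection R = 1 - Cp/Cf.

    R near 1 means the membrane retains the component (e.g. the bulky
    multivalent POM catalyst) almost completely; R near 0 means free
    passage (e.g. a small product molecule leaving with the permeate).
    """
    return 1.0 - c_permeate / c_feed
```

A separation suitable for catalyst recycling would show a high rejection for the POM alongside a low rejection for the product, which is exactly the selectivity the sensitivity analysis probes.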

Keywords: downstream processing, nanofiltration, polyoxometalates, homogeneous catalysis, green chemistry

Procedia PDF Downloads 49
445 Thermal Analysis of Adsorption Refrigeration System Using Silicagel–Methanol Pair

Authors: Palash Soni, Vivek Kumar Gaba, Shubhankar Bhowmick, Bidyut Mazumdar

Abstract:

Refrigeration technology is a fast-developing field at present, since it has very wide application in both domestic and industrial areas. It started with the use of simple ice coolers to store foodstuffs and has progressed to today's sophisticated cold storages along with other air-conditioning systems. A variety of techniques are used to bring the temperature down below ambient. Adsorption refrigeration technology is a novel, advanced and promising technique developed in the past few decades. It has gained attention due to its attractive ability to exploit unlimited natural sources like solar energy and geothermal energy, or even waste-heat recovery from plants or from the exhaust of locomotives, to fulfill its energy needs. This will reduce the exploitation of non-renewable resources and hence reduce pollution too. This work aims to develop a model for a solar adsorption refrigeration system and to simulate it for different operating conditions. In this system, the mechanical compressor is replaced by a thermal compressor. The thermal compressor uses renewable energy such as solar energy and geothermal energy, which makes it useful for areas where electricity is not available. Refrigerants normally in use, like chlorofluorocarbons/perfluorocarbons, have harmful effects like ozone depletion and greenhouse warming. It is another advantage of adsorption systems that they can replace these refrigerants with less harmful natural refrigerants like water, methanol, ammonia, etc. Thus, the double benefit of reduced energy consumption and reduced pollution can be achieved. A thermodynamic model was developed for the proposed adsorber, and a universal MATLAB code was used to simulate the model. Simulations were carried out for different operating conditions for the silicagel-methanol working pair.
Various graphs are plotted relating regeneration temperature, adsorption capacity, coefficient of performance, desorption rate, specific cooling power, adsorption/desorption times and mass. The results proved that an adsorption system can be installed successfully for refrigeration purposes, as it offers savings in power and a reduction in carbon emissions, even though its efficiency is comparatively lower than that of conventional systems. The model was tested for compliance in a cold-storage refrigeration application with a cooling load of 12 TR.
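The coefficient of performance plotted in the study has the usual form for a thermally driven cycle; a minimal sketch, with heat quantities as illustrative placeholders rather than simulation output:

```python
def adsorption_cop(q_evaporator, q_driving_heat):
    """COP of a thermal-compressor cycle: useful cooling extracted at the
    evaporator divided by the driving heat supplied for regeneration."""
    return q_evaporator / q_driving_heat
```

For instance, 120 units of cooling per 300 units of driving heat gives a COP of 0.4, illustrating why efficiency trails conventional vapour-compression systems even though the driving heat can be free solar or waste heat.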

Keywords: adsorption, refrigeration, renewable energy, silicagel-methanol

Procedia PDF Downloads 176
444 A Case Study on How Biomedical Engineering (BME) Outreach Programmes Serve as an Alternative Educational Approach to Form and Develop the BME Community in Hong Kong

Authors: Sum Lau, Wing Chung Cleo Lau, Wing Yan Chu, Long Ching Ip, Wan Yin Lo, Jo Long Sam Yau, Ka Ho Hui, Sze Yi Mak

Abstract:

Biomedical engineering (BME) is an interdisciplinary subject in which knowledge about biology and medicine is applied to novel applications, solving clinical problems. This subject is crucial for cities such as Hong Kong, where the burden on the medical system is rising for reasons such as the ageing population. Hong Kong, which has been actively boosting technological advancement in recent years, sets BME, or biotechnology, as a major category, as reflected in the 2018-19 Budget, where biotechnology was one of the four pillars for development. Over the years, while resources in terms of money and space have been provided, a lack of talent has been expressed by both academia and industry. While exogenous factors, such as COVID, may have hindered talent from outside Hong Kong from coming, endogenous factors should also be considered. In particular, since there are already a few local universities offering BME programmes, their curricula and styles of education need to be reviewed to intensify the network of the BME community and support post-academic career development. It was observed that while undergraduate (UG) studies focus on knowledge teaching with some technical training, and postgraduate (PG) programmes concentrate on upstream research, the programmes are generally confined to the academic sector and lack connections to the industry. In light of that, a "Biomedical Innovation and Outreach Programme 2022" ("B.I.O.2022") was held to connect students and professors from academia with clinicians and engineers from the industry, serving as a comparative approach to conventional education methods (UG and PG programmes from tertiary institutions). Over 100 participants, including undergraduates, postgraduates, secondary school students, researchers, engineers, and clinicians, took part in various outreach events such as a conference and site visits, all held from June to July 2022.
As a case study, this programme aimed to tackle the aforementioned problems with the theme of the "4Cs" (connection, communication, collaboration, and commercialisation). The effectiveness of the programme is investigated through its ability to serve as adult and continuing education and its effectiveness in driving social change to tackle current societal challenges, with a focus on the shortage of talent engaging in biomedical engineering. In this study, B.I.O.2022 is found to complement traditional educational methods, particularly in terms of knowledge exchange between academia and industry. With enhanced communication between participants at different career stages, some students followed up to visit or even work with the professionals after the programme. Furthermore, connections between academia and industry could foster the generation of new knowledge, which ultimately points to commercialisation, adding value to the BME industry while filling the gap in human resources. With the continuation of events like B.I.O.2022, it provides a promising starting point for the development and relationship-strengthening of a BME community in Hong Kong and shows potential as an alternative way of adult education with societal benefits.

Keywords: biomedical engineering, adult education for social change, comparative methods and principles, lifelong learning, faced problems, promises, challenges and pitfalls

Procedia PDF Downloads 94
443 Evaluation of Commercial Back-analysis Package in Condition Assessment of Railways

Authors: Shadi Fathi, Moura Mehravar, Mujib Rahman

Abstract:

Over the years, increased demands on railways, the emergence of high-speed trains and heavy axle loads, and the ageing and deterioration of existing tracks have been imposing costly maintenance actions on the railway sector. The need to develop a fast and cost-efficient non-destructive assessment method for the structural evaluation of railway tracks is therefore critically important. The layer modulus is the main parameter used in the structural design and evaluation of the railway track substructure (foundation). Among many recently developed NDTs, the Falling Weight Deflectometer (FWD) test, widely used in pavement evaluation, has shown promising results for railway track substructure monitoring. The surface deflection data collected by the FWD are used to estimate the moduli of substructure layers through the back-analysis technique. Although different commercially available back-analysis programs are used for pavement applications, only a limited number of research-based techniques have so far been developed for railway track evaluation. In this paper, the suitability, accuracy, and reliability of the BAKFAA software are investigated. The main rationale for selecting BAKFAA is that it has a relatively straightforward user interface, is freely available, and is widely used in highway and airport pavement evaluation. As part of the study, a finite element (FE) model of a railway track section near Leominster station, Herefordshire, UK, subjected to the FWD test, was developed and validated against available field data. Then, a virtual experimental database (including 218 sets of FWD testing data) was generated using the FE model and employed as the measured database for the BAKFAA software. This database was generated considering various layer moduli for each layer of the track substructure over a predefined range.
The BAKFAA predictions were compared against cone penetration test (CPT) data (available from the literature; collected near Leominster station on the same section where the FWD tests were performed). The results reveal that BAKFAA overestimates the moduli of each substructure layer. To reconcile BAKFAA with the CPT data, this study introduces a correlation model that makes BAKFAA applicable to railway applications.
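The form of the correlation model is not detailed in the abstract. As a hedged illustration only, a simple linear correction mapping BAKFAA-predicted moduli onto the CPT-derived scale could be fitted by least squares; all modulus values below are hypothetical placeholders, not data from the study:

```python
import numpy as np

# Hypothetical paired values: BAKFAA back-calculated layer moduli (MPa)
# and CPT-derived moduli for the same layers (illustrative only).
bakfaa_moduli = np.array([180.0, 220.0, 260.0, 310.0, 350.0])
cpt_moduli = np.array([150.0, 185.0, 210.0, 255.0, 290.0])

# Fit a linear correction E_cpt ~= a * E_bakfaa + b by least squares.
a, b = np.polyfit(bakfaa_moduli, cpt_moduli, 1)

def adjust(e_bakfaa):
    """Map a BAKFAA modulus estimate onto the CPT-consistent scale."""
    return a * e_bakfaa + b

print(adjust(240.0))
```

Because BAKFAA overestimates the moduli, the fitted slope in such a sketch would be below one, shrinking the predictions towards the CPT values.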

Keywords: back-analysis, bakfaa, railway track substructure, falling weight deflectometer (FWD), cone penetration test (CPT)

Procedia PDF Downloads 104
442 An Evaluation and Guidance for mHealth Apps

Authors: Tareq Aljaber

Abstract:

The number of mobile health apps is growing rapidly, having nearly doubled between 2015 and 2016. However, there is a lack of an effective evaluation framework to verify the usability and reliability of mobile phone health education applications, which would save time and effort for the numerous user groups. This abstract describes a framework for evaluating mobile applications, specifically mobile health education applications, along with a selection guidance tool to assist different users in choosing the most suitable mobile health education apps. The framework's outcome is intended to meet the requirements and needs of the different stakeholder groups, in addition to enhancing the development of mobile health education applications with software engineering approaches by producing new and more effective techniques to evaluate such software. This abstract highlights the significance and consequences of mobile health education apps before focusing on the need to create an effective evaluation framework for these apps. The evaluation framework is explained alongside some specific evaluation metrics: an efficient hybrid of selected heuristic evaluation (HE) and usability evaluation (UE) metrics that enables the determination of the usefulness and usability of health education mobile apps. Moreover, the qualitative and quantitative outcomes of the framework were examined using the Epocrates mobile phone app in addition to some other mobile phone apps. This proposed framework, An Evaluation Framework for Mobile Health Education Apps, consists of a hybrid of five metrics selected from a larger set of usability evaluation and heuristic evaluation metrics, identified through 15 unstructured interviews with software developers (SD), health professionals (HP), and patients (P).
These five metrics correspond to explicit facets of usability recognised through a requirements analysis of typical stakeholders of mobile health apps. The five hybrid metrics were distributed across 24 specific questionnaire questions, which are available on request from the first author. This questionnaire was sent to 81 participants distributed across three sets of stakeholders, software developers (SD), health professionals (HP), and patients/general users (P/GU), for the purpose of ranking three sets of mobile health education applications. Finally, the outcomes of the questionnaire data helped us achieve our aims: finding the profile of different stakeholders, finding the profile of different mobile health education application packages, ranking different mobile health education applications, and guiding the construction of the selection guidance tool, which complements the Evaluation Framework for Mobile Health Education Apps.
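The abstract does not specify how the hybrid metric scores are aggregated into a ranking. As an illustrative sketch only, per-metric Likert ratings from the stakeholder questionnaire could be averaged and combined with weights; all metric names, weights, and ratings below are hypothetical assumptions, not values from the framework:

```python
# Hypothetical metric weights (summing to 1.0) for a hybrid HE/UE score.
metrics = {
    "learnability": 0.25,
    "efficiency": 0.20,
    "error_prevention": 0.20,
    "content_quality": 0.20,
    "satisfaction": 0.15,
}

def app_score(responses):
    """responses: metric -> list of 1-5 Likert ratings from stakeholders.
    Returns the weighted mean score, still on the 1-5 scale."""
    return sum(
        weight * (sum(responses[m]) / len(responses[m]))
        for m, weight in metrics.items()
    )

# Hypothetical ratings for one app (e.g. three stakeholder groups).
example_app = {
    "learnability": [4, 5, 4],
    "efficiency": [3, 4, 4],
    "error_prevention": [4, 4, 3],
    "content_quality": [5, 4, 5],
    "satisfaction": [4, 3, 4],
}
print(round(app_score(example_app), 2))
```

Ranking a set of apps would then reduce to sorting them by this weighted score per stakeholder group.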

Keywords: evaluation framework, heuristic evaluation, usability evaluation, metrics

Procedia PDF Downloads 371
441 Social Licence to Operate Methodology to Secure Commercial, Community and Regulatory Approval for Small and Large Scale Fisheries

Authors: Kelly S. Parkinson, Katherine Y. Teh-White

Abstract:

Futureye has a bespoke social licence to operate (SLO) methodology that has successfully secured community approval and commercial return for fisheries facing regulatory and financial risk. This unique approach to fisheries management focuses on delivering improved social and environmental outcomes that help the fishing industry take steps towards achieving the United Nations SDGs. An SLO is the community’s implicit consent for a business or project to exist. An SLO must be earned and maintained alongside regulatory licences. In current and new operations, it helps operators anticipate and measure community concerns around their operations, leading to more predictable and sensible policy outcomes that will not jeopardise commercial returns. Rising societal expectations and increasing activist sophistication mean the international fishing industry needs to resolve community concerns at each stage of its supply chain. Futureye applied its tested SLO methodology to help Austral Fisheries, which was being attacked by activists concerned about the sustainability of Patagonian Toothfish. Austral was Marine Stewardship Council certified, but pirates were making the overall catch unsustainable. Austral also wanted to be carbon neutral. SLO provides a lens on risk that helps industries and companies act before regulatory and political risk escalates. The assessment methodology translates risk into a process for creating a strategy: 1) Audience: we understand the drivers of change and the transmission of those drivers across all audience segments. 2) Expectation: we understand the level of social norming of changing expectations. 3) Outrage: we understand the technical and perceptual aspects of risk and the opportunities to mitigate these. 4) Inter-relationships: we understand the political, regulatory, and reputational system so that we can understand the levers of change.
5) Strategy: we understand whether the strategy will achieve a social licence by bringing internal and external stakeholders on the journey. Futureye’s SLO methodologies helped Austral understand risks and opportunities to enhance its resilience. Futureye reviewed the issues, assessed outrage and materiality, and mapped SLO threats to the company. Austral was introduced to a new way of managing activism, climate action, and responsible consumption. As a result of Futureye’s work, Austral worked closely with Sea Shepherd, which was campaigning against pirates illegally fishing Patagonian Toothfish, as well as with international governments. In 2016, Austral launched the world’s first carbon neutral fish, which won Austral a thirteen percent premium at tender on the open market. In 2017, Austral received the prestigious Banksia Foundation Sustainability Leadership Award for seafood that is sustainable, healthy, and carbon neutral. Austral’s position as a leader in sustainable development has opened doors for retailers all over the world. Futureye’s SLO methodology can identify the societal, political, and regulatory risks facing fisheries and position them to proactively address the issues and become industry leaders in sustainability.

Keywords: carbon neutral, fisheries management, risk communication, social licence to operate, sustainable development

Procedia PDF Downloads 100
440 The Feminism of Data Privacy and Protection in Africa

Authors: Olayinka Adeniyi, Melissa Omino

Abstract:

The field of data privacy and data protection in Africa is still evolving, with many African countries yet to enact legislation on the subject. While African governments are bringing their legislation up to speed in this field, the way patriarchy pervades every sector of African thought and manifests in society needs to be considered. Moreover, the laws enacted ought to be inclusive, especially towards women. This, in a nutshell, is the essence of data feminism. Data feminism is a new way of thinking about data science and data ethics that is informed by the ideas of intersectional feminism. Feminising data privacy and protection involves centring women in the issues of data privacy and protection, particularly in legislation, as is the case in this paper. The argument for women's inclusion is not new: international and regional human rights instruments specific to women came long after the general human rights instruments, even though such protections should arguably have been included in the original general instruments in the first instance. Since legislation on data privacy is only arriving in this century, with the benefit of hindsight on the rights and shortcomings of earlier instruments, the cue should be taken to ensure inclusive, holistic legislation for data privacy and protection from the outset. Data feminism is arguably an area that has been scantily researched, albeit a needful one. With violence against women spiralling in the cyber world, compounded by COVID-19 and the necessary responses of governments, and the effects of these on women and their rights, research on the feminism of data privacy and protection in Africa becomes inevitable.
This paper seeks to answer the following questions: what is data feminism in the African context, and why is it important in data privacy and protection legislation; what laws, if any, exist on data privacy and protection in Africa; are they women-inclusive, and if not, why; what measures are in place for the privacy and protection of women in Africa, and how can this be made possible? The paper aims to investigate the issue of data privacy and protection in Africa, the legal framework, and the protection or provision it offers women, if any. It further aims to examine the importance and necessity of feminising data privacy and protection, the effects of its absence, the challenges and bottlenecks in attaining this feat, and the possibilities of accessing data privacy and protection for African women. The paper also examines emerging practices for the data privacy and protection of women in other jurisdictions. It approaches the research through a review of papers and an analysis of laws and reports. It seeks to contribute to the existing literature in the field and is exploratory in its suggestions. It suggests draft clauses to make any data privacy and protection legislation women-inclusive. It would be useful for policymaking, academia, and public enlightenment.

Keywords: feminism, women, law, data, Africa

Procedia PDF Downloads 159
439 Rabies Free Pakistan - Eliminating Rabies Through One Health Approach

Authors: Anzal Abbas Jaffari, Wajiha Javed, Naseem Salahuddin

Abstract:

Rationale: Rabies, a vaccine-preventable disease, continues to be a critical public health issue, killing around 2,000-5,000 people annually in Pakistan. Along with the spread of the disease among animals, the dog population remains a victim of brutal culling practices by local authorities, which adversely affect both the ecosystem (poison sinking into the soil, affecting vegetation and contaminating water) and the spread of the disease. The dog population has been rising exponentially, primarily because of the lack of a consolidated nationwide Animal Birth Control (ABC) program and of awareness among local communities in general and children in particular. This is reflected in Pakistan's low SARE score of 1.5, which leaves the country trailing behind other developing countries like Bangladesh (2.5) and the Philippines (3.5). According to an estimate, the province of Sindh alone is home to almost 2.5 million dogs. The clustering of dogs in peri-urban areas and inner-city localities leads to an increase in reported dog bite cases in these areas specifically. Objective: Rabies Free Pakistan (RFP), a joint venture of Getz Pharma Private Limited and Indus Hospital & Health Network (IHHN), was established in 2018 to eliminate rabies from Pakistan by 2030 using the One Health approach. Methodology: The RFP team is actively working on the advocacy and policy front with both the federal and provincial governments to ensure that all stakeholders currently involved in dog culling in Pakistan make a paradigm shift towards humane methods of vaccination and ABC. With the federal government, RFP aims to have rabies declared a notifiable disease.
RFP also works closely with the provincial government of Sindh to initiate a province-wide rabies control program. RFP follows international standards and WHO-approved protocols for this program in Pakistan. The RFP team has achieved various milestones in the fight against rabies after successfully scaling up project operations, and has vaccinated more than 30,000 dogs and neutered around 7,000 dogs since 2018. Recommendations: Effective implementation of a rabies program (MDV and ABC) requires a concentrated effort to address a variety of structural and policy challenges. This essentially demands a massive shift in individuals' attitudes towards rabies. The most significant challenges in implementing a standard policy at the structural level are the lack of institutional capacity, the shortage of vaccine, and the absence of inter-departmental coordination among major stakeholders: the federal government, the provincial ministries of health and livestock, and local bodies (including local councils). The lack of capacity among health care workers to treat dog bite cases emerges as a critical challenge at the clinical level. Conclusion: Pakistan can learn from the successful international models of Sri Lanka and Mexico, which adopted the One Health approach to eliminate rabies, as RFP does. The WHO-advised One Health approach provides policymakers with an interactive, cross-sectoral guide that involves all the essential elements of the ecosystem (including animals, humans, and other components).

Keywords: animal birth control, dog population, mass dog vaccination, one health, rabies elimination

Procedia PDF Downloads 148
438 Official Seals on the Russian-Qing Treaties: Material Manifestations and Visual Enunciations

Authors: Ning Chia

Abstract:

Each of the three different language texts (Manchu, Russian, and Latin) of the 1689 Treaty of Nerchinsk bore official seals from Imperial Russia and Qing China. These seals have received no academic attention, yet they can reveal a layered and shared material, cultural, political, and diplomatic world of the time in Eastern Eurasia. The very different seal selections made by both empires when ratifying the Treaty of Beijing in 1860 have received no scholarly attention either; they, too, can explicate a tremendously changed relationship through visual and material manifestation. Exploring primary sources in the Manchu, Russian, and Chinese languages as well as images of the seals, this study investigates the reasons for and purposes of utilizing official seals in treaty agreements. A refreshed understanding of Russian-Qing diplomacy will be developed by pursuing the following aspects: (i) analyzing the iconographic meanings of each seal insignia and unearthing a competitive, symbol-delivered and seal-generated 'dialogue' between the two empires; (ii) contextualizing treaty seals within the historical seal cultures, and discovering how the domestic seal system in each empire's political institutions extended into treaty-defined bilateral relations; (iii) expounding the reliance on seals in each empire's daily governing routines, and annotating the trust in the seal as a pledged promise from the opposing negotiator to fulfill the treaty terms; (iv) contrasting the two seal traditions along two civilizational lines, Eastern vs.
Western, and dissecting how the two styles of seal emblems affected cross-cultural understanding or misunderstanding between the two empires; (v) comprehending history-making events through substantial resources such as the treaty seals, and grasping why the seals for the two treaties, so different in both visual design and symbolic value, were chosen in the two eras of the relationship; (vi) correlating the materialized seal 'expression' with the imperial worldviews based on each empire's national and power identity, and probing China's seal-represented 'rule under the Heaven' assumption and Russia's rising role in 'European-American imperialism … centered on East Asia' (Victor Shmagin, 2020). In conclusion, the impact of official seals on diplomatic treaties can only be comprehended with profound knowledge of seal history, insignia culture, and emblem belief. The official seals of both Imperial Russia and Qing China belonged to a particular statecraft art in a specific material and visual form. Once utilized in diplomatic treaties, the meticulously decorated and politically institutionalized seals were transformed from the determinant means of domestic administration and social control into markers of an empire's sovereign authority. Overlooked in historical scholarship, the insignia seal created a thread of 'visual contest' between the two rival powers. Through this material lens, scholarly knowledge of the Russian-Qing diplomatic relationship will be significantly upgraded. Connecting Russian studies, Qing/Chinese studies, and Eurasian studies, this study also ties material culture, political culture, and diplomatic culture together. It promotes the study of official seals and emblem symbols in worldwide diplomatic history.

Keywords: Russia-Qing diplomatic relation, Treaty of Beijing (1860), Treaty of Nerchinsk (1689), Treaty seals

Procedia PDF Downloads 185
437 Design of Identification Based Adaptive Control for Fermentation Process in Bioreactor

Authors: J. Ritonja

Abstract:

Biochemical technology has been developing extremely fast since the middle of the last century. The main reason for such development is the requirement for large-scale production of high-quality biologically manufactured products such as pharmaceuticals, foods, and beverages. The impact of the biochemical industry on the world economy is enormous. The great importance of this industry also results in intensive development in the scientific disciplines relevant to biochemical technology. In addition to developments in the fields of biology and chemistry, which make it possible to understand complex biochemical processes, development in the field of control theory and applications is also very important. In this paper, control of a biochemical reactor for milk fermentation was studied. During the fermentation process, the biophysical quantities must be precisely controlled to obtain a high-quality product. To control these quantities, the bioreactor's stirring drive and/or heating system can be used. Available commercial biochemical reactors are equipped with open-loop or conventional linear closed-loop control systems. Due to significant parameter variations and the partial nonlinearity of the biochemical process, the results obtained with these control systems are not satisfactory. To improve the fermentation process, a self-tuning adaptive control system was proposed. The use of self-tuning adaptive control is suggested because the parameter variations of the studied biochemical process are very slow in most cases. To determine the linearized mathematical model of the fermentation process, the recursive least squares identification method was used. Based on the obtained mathematical model, the linear quadratic regulator was tuned. The parameter identification and the controller synthesis are executed on-line and adapt the controller's parameters to the fermentation process dynamics during operation.
The proposed combination represents an original solution for the control of the milk fermentation process. The purpose of the paper is to contribute to the progress of control systems for biochemical reactors. The proposed adaptive control system was tested thoroughly. The obtained results show that the proposed adaptive control system assures much better tracking of the reference signal than a conventional linear control system with fixed control parameters.
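The abstract does not give the model order or the identification details. As a minimal sketch of the recursive least squares (RLS) step of such a scheme, assuming a first-order discrete model y[k] = a·y[k-1] + b·u[k-1] (the plant parameters and forgetting factor below are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Sketch: RLS identification of a first-order discrete model, the kind of
# linearized model identified on-line before tuning an LQR.
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5      # hypothetical "true" plant parameters

theta = np.zeros(2)            # estimates of [a, b]
P = np.eye(2) * 1000.0         # covariance matrix, large initial uncertainty
lam = 0.99                     # forgetting factor, tracks slow parameter drift

y_prev = 0.0
for k in range(200):
    u = rng.standard_normal()                # persistently exciting input
    y = a_true * y_prev + b_true * u         # simulated plant response
    phi = np.array([y_prev, u])              # regressor vector
    K = P @ phi / (lam + phi @ P @ phi)      # RLS gain
    theta = theta + K * (y - phi @ theta)    # parameter update
    P = (P - np.outer(K, phi @ P)) / lam     # covariance update
    y_prev = y

print(theta)  # estimates should approach the true [a, b]
```

In the self-tuning scheme described above, the identified model would then be fed to the LQR synthesis at each adaptation step, so the controller gains follow the slowly drifting process dynamics.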

Keywords: adaptive control, biochemical reactor, linear quadratic regulator, recursive least square identification

Procedia PDF Downloads 93
436 Triploid Rainbow Trout (Oncorhynchus mykiss) for Better Aquaculture and Ecological Risk Management

Authors: N. N. Pandey, Raghvendra Singh, Biju S. Kamlam, Bipin K. Vishwakarma, Preetam Kala

Abstract:

The rainbow trout (Oncorhynchus mykiss) is an exotic salmonid fish, well known in Europe and other countries for its fast growth, tremendous ability to thrive in diverse conditions, delicious flesh, and hard-fighting nature. Rainbow trout farming has great potential to contribute to the mainstream economy of the Himalayan states in India and other temperate countries. These characteristics establish it as one of the most widely introduced and cultured fish across the globe, and its farming is also prominent in the coldwater regions of India. Nevertheless, genetic fatigue, slow growth, early maturity, and low productivity are limiting the expansion of trout production. Moreover, farms adjacent to natural streams or other water sources are subject to the escape of domesticated rainbow trout into the wild, a serious environmental concern as the escaped fish can contaminate and disrupt the receiving ecosystem. A decline in production traits due to early maturity prolongs the culture duration and affects the profit margin of rainbow trout farms in India. A viable strategy that could overcome these farming constraints in large-scale operations is the production of triploid fish, which are sterile and more heterozygous. For a better triploidy induction rate (TR), a heat shock at 28°C for 10 minutes or a pressure shock of 9,500 psi for 5 minutes is applied to green eggs, yielding 90-100% triploidy success and 72-80% survival up to the swim-up fry stage. Triploid rainbow trout show 20% better growth in aquaculture than diploids. Compared with wild diploid fish, larger and fitter triploid rainbow trout in natural waters attract trout anglers and support the development of recreational fisheries by state fisheries departments without the risk of contaminating existing gene pools or disrupting local fish diversity.
Overall, with triploid rainbow trout it is feasible to enhance productivity in rainbow trout farms and trout production in coldwater regions, develop lucrative trout angling, and achieve better ecological management.

Keywords: rainbow trout, triploids fish, heat shock, pressure shock, trout angling

Procedia PDF Downloads 98