Search results for: laser ultrasound technique
171 The Effect of Post-Spinal Hypotension on Cerebral Oxygenation Using Near-Infrared Spectroscopy and Neonatal Outcomes in Full-Term Parturients Undergoing Lower Segment Caesarean Section: A Prospective Observational Study
Authors: Shailendra Kumar, Lokesh Kashyap, Puneet Khanna, Nishant Patel, Rakesh Kumar, Arshad Ayub, Kelika Prakash, Yudhyavir Singh, Krithikabrindha V.
Abstract:
Introduction: Spinal anaesthesia is considered the standard anaesthetic technique for caesarean delivery. The incidence of spinal hypotension during caesarean delivery is 70-80%. Spinal hypotension may cause cerebral hypoperfusion in the mother, although cerebral autoregulatory mechanisms normally prevent cerebral hypoxia: cerebral blood flow remains constant over a Cerebral Perfusion Pressure (CPP) range of 50-150 mmHg. Near-infrared spectroscopy (NIRS) is a non-invasive technology that detects Cerebral Desaturation Events (CDEs) more immediately than other conventional intraoperative monitoring techniques. Objective: The primary aim of the study is to correlate the change in cerebral oxygen saturation measured by NIRS with the fall in mean blood pressure after spinal anaesthesia, and to determine the effects of spinal hypotension on neonatal APGAR score, neonatal acid-base status, and the presence of Postoperative Delirium (POD). Methodology: NIRS sensors were attached to the forehead of all patients, and baseline readings of cerebral oxygenation over the right and left frontal regions and of mean blood pressure were noted. Subarachnoid block was given with hyperbaric 0.5% bupivacaine plus fentanyl, the dose being determined by the individual anaesthesiologist. IV crystalloid co-loading was given to each patient. Blood pressure and cerebral saturation were recorded every minute for 30 minutes. Hypotension was defined as a fall in MAP of more than 20% from baseline. Patients who developed hypotension were treated with an IV bolus of phenylephrine/ephedrine. Umbilical cord blood samples were taken for blood gas analysis, and the neonatal APGAR score was noted by a neonatologist. Study design: A prospective observational study conducted in thirty ASA 2 and 3 parturients scheduled for lower segment caesarean section (LSCS). 
Results: The mean fall in regional cerebral saturation was 28.48 ± 14.7%, against a mean fall in blood pressure of 38.92 ± 8.44 mmHg. The correlation coefficient between the fall in saturation and the fall in mean blood pressure after subarachnoid block was 0.057 (p = 0.7). The fall in regional cerebral saturation occurred 2 ± 1 min before the fall in mean blood pressure. Twenty-nine of thirty patients required vasopressors during hypotension; the first vasopressor dose was needed 6.02 ± 2 min after the block. The mean APGAR scores were 7.86 and 9.74 at 1 and 5 min after birth, respectively, and the mean umbilical arterial pH was 7.3 ± 0.1. According to the DRS-98 (Delirium Rating Scale), the mean delirium rating scores on postoperative days 1 and 2 were 0.1 and 0.7, respectively. Discussion: The fall in regional cerebral oxygen saturation began before the significant fall in mean blood pressure, but the correlation was not statistically significant. The maximal fall in blood pressure requiring vasopressors occurred within 10 min of SAB. Neonatal APGAR scores and acid-base values remained in the normal range despite maternal hypotension, and there was no incidence of postoperative delirium in patients with post-spinal hypotension. Keywords: cerebral oxygenation, LSCS, NIRS, spinal hypotension
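The reported correlation (r = 0.057, p = 0.7) between the fall in cerebral saturation and the fall in mean blood pressure is an ordinary Pearson coefficient over paired per-patient readings. A minimal sketch in Python; the data values below are hypothetical illustrations, not the study's:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired observations: fall in regional cerebral
# saturation (%) vs. fall in mean blood pressure (mmHg).
sat_fall = [20.0, 35.5, 28.0, 14.2, 44.1, 30.3]
map_fall = [38.0, 41.2, 36.5, 39.8, 40.1, 37.7]
r = pearson_r(sat_fall, map_fall)
```

A value of r near zero, as the study found, means the magnitude of cerebral desaturation is essentially unrelated to the magnitude of the blood-pressure fall.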
Procedia PDF Downloads 69
170 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field
Authors: Jeronimo Cox, Tomonari Furukawa
Abstract:
Magnetometers have become more popular in inertial measurement units (IMUs) for their ability to correct estimates using the earth's magnetic field. Accelerometer- and gyroscope-based packages fail due to dead-reckoning errors accumulated over time. Localization with magnetometer-inclusive IMUs has become popular in robotic applications as a way to track the odometry of slower-speed robots. With high-speed motions, the accumulated error grows over smaller periods of time, making such motions difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability: visual obstruction of the motion leaves motion-tracking cameras unusable, and when motions are too dynamic for estimation techniques reliant on observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited by the assumption that the background magnetic field is uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields. It is often observed as an offset of the center of the data points from the origin when a magnetometer is rotated, and its magnitude depends on proximity to the distortion sources. Soft iron distortion is related instead to the scaling of the axes of the magnetometer sensors. Hard iron distortion is the larger contributor to attitude estimation error with magnetometers. Indoor environments or spaces inside ferrite-based structures, such as building reinforcements or a vehicle, often cause distortions with proximity. As positions correlate to areas of distortion, methods of magnetometer localization include producing spatial maps of the magnetic field and collecting distortion signatures to better aid location tracking. 
The goal of this paper is to compare magnetometer methods that do not require pre-produced magnetic field maps, since mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system. Conventional calibration by data collection of rotation at a static point, real-time estimation of calibration parameters at each time step, and the use of two magnetometers for determining local hard iron distortion are compared to confirm the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than assuming that hard iron distortion is constant under positional change. The motion measured is a repeatable planar motion of a two-link system connected by revolute joints. The links are translated on a moving base to impulse rotation of the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras, to enable ground-truth comparison for each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, it fails where the magnetic field direction in space is inconsistent. Keywords: motion tracking, sensor fusion, magnetometer, state estimation
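The conventional static calibration that the paper compares against estimates the hard iron bias as the displacement of the data cloud collected while rotating the magnetometer. A minimal per-axis midpoint sketch of that idea (a simplification; least-squares sphere fitting is the more robust variant, and the readings below are synthetic):

```python
def hard_iron_offset(samples):
    """Quick hard-iron estimate: the midpoint of the per-axis extremes
    of magnetometer readings collected while rotating the sensor.
    With no distortion the readings lie on a sphere centred at the
    origin; a hard-iron bias shifts that centre."""
    xs, ys, zs = zip(*samples)
    mid = lambda v: (max(v) + min(v)) / 2.0
    return (mid(xs), mid(ys), mid(zs))

# Synthetic readings on a sphere of radius 50 shifted by (10, -5, 3),
# i.e. a constant hard-iron bias of (10, -5, 3).
readings = [(60, -5, 3), (-40, -5, 3), (10, 45, 3),
            (10, -55, 3), (10, -5, 53), (10, -5, -47)]
offset = hard_iron_offset(readings)
```

The limitation the paper addresses is visible here: the estimate is only valid where the bias is constant, which fails when the sensor moves through a nonuniform field.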
Procedia PDF Downloads 84
169 DNA Barcoding for Identification of Dengue Vectors from Assam and Arunachal Pradesh: North-Eastern States in India
Authors: Monika Soni, Shovonlal Bhowmick, Chandra Bhattacharya, Jitendra Sharma, Prafulla Dutta, Jagadish Mahanta
Abstract:
Aedes aegypti and Aedes albopictus are considered the two major vectors transmitting dengue virus. In North-east India, two states, viz. Assam and Arunachal Pradesh, are known to be highly endemic zones for dengue and Chikungunya viral infection. The taxonomic classification of medically important vectors is important for mapping actual evolutionary trends and for epidemiological studies. However, misidentification of species in field-collected mosquito specimens could have a negative impact on vector-borne disease control policy. DNA barcoding is a prominent method to record available species, differentiate new additions, and detect changes in population structure. In this study, a combined approach of morphological identification and molecular DNA barcoding was adopted to explore sequence variation in the mitochondrial cytochrome c oxidase subunit I (COI) gene within dengue vectors. The study mapped the distribution of the dengue vectors across the two states, i.e., Assam and Arunachal Pradesh, India. Approximately five hundred mosquito specimens were collected from different parts of the two states, and their morphological features were compared with taxonomic keys. The detailed taxonomic analysis identified two species, Aedes aegypti and Aedes albopictus. Aedes aegypti comprised 66.6% of the specimens and was the dominant dengue vector species. The sequences obtained through the standard DNA barcoding protocol were compared with public databases, viz. GenBank and BOLD. All Aedes albopictus sequences showed 100% similarity, whereas the Aedes aegypti sequences showed 99.77-100% similarity of the COI gene with the same species from different geographic locations, based on BOLD database searches. 
Fifty-nine sequences of the same and related taxa from different dengue-prevalent geographical regions were retrieved from the NCBI and BOLD databases to determine the evolutionary distance model based on phylogenetic analysis. Neighbor-Joining (NJ) and Maximum Likelihood (ML) phylogenetic trees were constructed in MEGA 6.06 software with 1000 bootstrap replicates using the Kimura 2-parameter model. Sequence divergence analysis found that intraspecific divergence ranged from 0.0 to 2.0% and interspecific divergence from 11.0 to 12.0%. Transitional and transversional substitutions were tested individually. The sequences were deposited in the NCBI GenBank database. This constitutes the first DNA barcoding analysis of Aedes mosquitoes from the North-eastern states of India and also confirms the range expansion of two important mosquito species. Overall, this study provides insight into the molecular ecology of the dengue vectors of North-eastern India, which will enhance understanding and improve the existing entomological surveillance and vector incrimination programs. Keywords: COI, dengue vectors, DNA barcoding, molecular identification, North-east India, phylogenetics
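The Kimura 2-parameter distances behind the reported divergence values treat transitions (proportion P) and transversions (proportion Q) separately: d = -1/2 ln(1 - 2P - Q) - 1/4 ln(1 - 2Q). A minimal sketch for two aligned COI fragments (the formula is standard; any sequences shown are illustrative, not the study's):

```python
import math

PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def k2p_distance(seq1, seq2):
    """Kimura 2-parameter distance between two aligned DNA sequences.
    Sites with gaps or ambiguity codes are ignored."""
    pairs = [(a, b) for a, b in zip(seq1, seq2)
             if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    # Transitions stay within purines (A<->G) or pyrimidines (C<->T).
    transitions = sum(1 for a, b in pairs if a != b and
                      ({a, b} <= PURINES or {a, b} <= PYRIMIDINES))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    P, Q = transitions / n, transversions / n
    return -0.5 * math.log(1 - 2 * P - Q) - 0.25 * math.log(1 - 2 * Q)
```

Pairwise distances of this kind feed directly into neighbor-joining tree construction; intraspecific values of 0-2% against interspecific values of 11-12%, as reported, are a typical COI barcode gap.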
Procedia PDF Downloads 303
168 Solution Thermodynamics, Photophysical and Computational Studies of TACH2OX, a C-3 Symmetric 8-Hydroxyquinoline: Abiotic Siderophore Analogue of Enterobactin
Authors: B. K. Kanungo, Monika Thakur, Minati Baral
Abstract:
8-Hydroxyquinoline (8HQ) is experiencing a renaissance due to its utility as a building block in metallosupramolecular chemistry and the versatile use of its derivatives in analytical chemistry, materials science, and pharmaceutics. It forms stable complexes with a variety of metal ions. Assembly of more than one such unit into a polydentate chelator enhances its coordinating ability and related properties through the chelate effect, resulting in a high stability constant. Keeping the above in view, a nonadentate chelator, N-[3,5-bis(8-hydroxyquinoline-2-amido)cyclohexyl]-8-hydroxyquinoline-2-carboxamide (TACH2OX), containing a central cis,cis-1,3,5-triaminocyclohexane appended to three 8-hydroxyquinoline units at the 2-position through amide linkages, was developed, and its solution thermodynamics, photophysical, and Density Functional Theory (DFT) studies were undertaken. TACH2OX was synthesized by condensation of cis,cis-1,3,5-triaminocyclohexane (TACH) with 8‐hydroxyquinoline‐2‐carboxylic acid. The brown solid has been fully characterized through melting point, infrared, nuclear magnetic resonance, electrospray ionization mass, and electronic spectroscopy. In solution, TACH2OX forms protonated complexes below pH 3.4, which deprotonate consecutively to generate a trinegative ion as the pH rises. Nine protonation constants were obtained for the ligand, ranging from 2.26 to 7.28. The interaction of the chelator with two trivalent metal ions, Fe3+ and Al3+, was studied in aqueous solution at 298 K. The metal-ligand formation constants (ML) obtained by potentiometric and spectrophotometric methods agree with each other. Protonated and hydrolyzed species were also detected in the system. 
The in-silico studies of the ligand and of the complexes, including their protonated and deprotonated species, assessed by the density functional theory technique, gave an accurate correlation with each observed property, such as the protonation constants, stability constants, and infrared, NMR, electronic absorption, and emission spectral bands. The number and type of electronic and emission spectral bands were ascertained from time-dependent density functional theory and the natural transition orbitals (NTO). Global reactivity indices were used to compare the reactivity of the ligand and the complex molecules. Natural bonding orbital (NBO) analysis successfully described the structure and bonding of the metal-ligand complexes, specifying the percentage contribution of atomic orbitals to the molecular orbitals. The high metal-ligand formation constants obtained indicate that the newly synthesized chelator is a very powerful synthetic chelator. The minimum-energy molecular-modelling structure suggests that TACH2OX firmly coordinates the metal ion in a tripodal fashion as a hexacoordinate chelate with distorted octahedral geometry, binding through the three sets of N,O-donor atoms present in each pendant arm of the central triaminocyclohexane tripod. Keywords: complexes, DFT, formation constant, TACH2OX
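The nine stepwise protonation constants (log K between 2.26 and 7.28) fix the distribution of protonated species as a function of pH via the usual cumulative-constant formalism. A minimal speciation sketch; the single- and three-constant examples in the test are illustrative, not TACH2OX's actual values:

```python
def species_fractions(log_ks, ph):
    """Fractions of L, LH, LH2, ... for a ligand with stepwise
    protonation constants log K (L + H = LH, LH + H = LH2, ...),
    using cumulative constants beta_n = K1*K2*...*Kn."""
    h = 10.0 ** (-ph)
    betas = [1.0]
    for log_k in log_ks:
        betas.append(betas[-1] * 10.0 ** log_k)
    # Concentration of LH_n is proportional to beta_n * [H]^n.
    terms = [b * h ** n for n, b in enumerate(betas)]
    total = sum(terms)
    return [t / total for t in terms]
```

At a pH equal to one of the log K values, the two species linked by that equilibrium are equimolar, which is how stepwise constants are read off potentiometric titration data.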
Procedia PDF Downloads 150
167 Contextual Factors of Innovation for Improving Commercial Banks' Performance in Nigeria
Authors: Tomola Obamuyi
Abstract:
The banking system in Nigeria adopted innovative banking with the aim of enhancing financial inclusion, making financial services readily and cheaply available to the majority of the people, and contributing to the efficiency of the financial system. The innovative services include Automatic Teller Machines (ATMs), National Electronic Fund Transfer (NEFT), Point of Sale (PoS), internet (web) banking, Mobile Money payment (MMO), Real-Time Gross Settlement (RTGS), and agent banking, among others. The introduction of these payment systems is expected to increase bank efficiency and customer satisfaction, culminating in better performance for the commercial banks. However, opinions differ on the possible effects of the various innovative payment systems on the performance of commercial banks in the country. Thus, this study empirically determines how commercial banks use innovation to gain competitive advantage in the specific context of Nigeria's finance and business. The study also analyses the effects of financial innovation on the performance of commercial banks over different periods of analysis. The study employed secondary data from 2009 to 2018, the period that witnessed aggressive innovation in the financial sector of the country. The Vector Autoregression (VAR) estimation technique was used to forecast the relative variance contribution of each random innovation to the variables in the VAR, to examine the effect of a standard-deviation shock to one of the innovations on current and future values via the impulse response function, and to determine the causal relationships between the variables (VAR Granger causality test). The study also employed Multi-Criteria Decision Making (MCDM) to rank the innovations against the performance criteria of Return on Assets (ROA) and Return on Equity (ROE). The entropy method of MCDM was used to determine which of the performance criteria better reflects the contributions of the various innovations in the banking sector. 
The Range of Values (ROV) method, on the other hand, was used to rank the contributions of the seven innovations to performance. The analysis was done over the medium term (five years) and the long run (ten years) of innovation in the sector. The impulse response function derived from the VAR system indicated that the response of ROA to the values of cheque, NEFT, and POS transactions was positive and significant in the periods of analysis. The entropy and range-of-values analyses also confirmed that, in the long run, both CHEQUE and MMO performed best, with NEFT next in performance. The paper concluded that commercial banks would enhance their performance by continuously improving the services provided through cheques, National Electronic Fund Transfer, and Point of Sale, since these instruments have long-run effects on performance. This will increase the confidence of the populace and encourage more usage/patronage of these services. The banking sector will in turn experience better performance, which will improve the economy of the country. Keywords: bank performance, financial innovation, multi-criteria decision making, vector autoregression
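The entropy weighting step used to choose between ROA and ROE can be sketched compactly: criteria whose values vary more across alternatives carry more information and receive higher weights. A minimal sketch, assuming a positive decision matrix with rows as alternatives (e.g., years) and columns as criteria; the matrices in the test are illustrative:

```python
import math

def entropy_weights(matrix):
    """Objective criterion weights by the entropy method of MCDM.
    Each column is normalized to a probability distribution; its
    Shannon entropy is scaled by 1/ln(m); the divergences (1 - e_j)
    are normalized into weights."""
    m = len(matrix)
    k = 1.0 / math.log(m)
    divergences = []
    for j in range(len(matrix[0])):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divergences.append(1.0 - e)
    s = sum(divergences)
    return [d / s for d in divergences]
```

The criterion with the larger weight is then taken as the one that better reflects the contributions of the innovations, mirroring how the study used entropy to arbitrate between the two performance criteria.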
Procedia PDF Downloads 120
166 Multiparticulate SR Formulation of Dexketoprofen Trometamol by Wurster Coating Technique
Authors: Bhupendra G. Prajapati, Alpesh R. Patel
Abstract:
The aim of this research work is to develop a sustained-release multiparticulate dosage form of dexketoprofen trometamol, the pharmacologically active isomer of ketoprofen. With the objective of utilizing the active enantiomer at a minimal dose and dosing frequency, the development of an extended-release multiparticulate dosage form for better patient compliance was explored. Drug-loaded and sustained-release-coated pellets were prepared on the fluidized-bed coating principle in a Wurster coater. Microcrystalline cellulose (MCC) was used as the core pellet material, povidone as the binder, and talc as the anti-tacking agent during drug loading, while Kollicoat SR 30D was used as the sustained-release polymer, triethyl citrate as the plasticizer, and micronized talc as the anti-adherent in sustained-release coating. The binder optimization trial in drug loading showed that process efficiency increased with binder concentration: 5 and 7.5% w/w concentrations of Povidone K30 with respect to the drug amount both gave more than 90% process efficiency, but a higher level of rejects (agglomerates) was observed in the drug-layering trial batch with 7.5% binder. For drug loading, the optimum povidone concentration was therefore selected as 5% of the drug substance quantity, since this trial combined good process feasibility with good adhesion of the drug onto the MCC pellets. A 2% w/w concentration of talc with respect to the total drug-layering solid mass showed better anti-tacking performance, reducing static charge and agglomerate formation during spraying. The optimized drug-loaded pellets were coated for sustained release at 16 to 28% w/w coating weight gain, and the results suggested that a 22% w/w weight gain is necessary to achieve the required drug release profile. 
Three critical Wurster-coating process parameters for the sustained-release coating were further statistically optimized against the desired quality target product profile attributes (agglomerate formation, process efficiency, and drug release profile) using a central composite design (CCD) in Minitab software. The results show that the derived design space, consisting of 1.0 to 1.2 bar atomization air pressure, 7.8 to 10.0 g/min spray rate, and 29-34°C product bed temperature, gave the pre-defined drug product quality attributes. Scanning microscopy results also indicated that the optimized batch pellets had a very narrow particle size distribution and a smooth surface, ideal properties for a reproducible drug release profile. The study also confirmed that the optimized dexketoprofen trometamol pellet formulation retains its quality attributes when administered with a common vehicle, either a liquid (water) or a semisolid food (apple sauce). Conclusion: Sustained-release multiparticulates were successfully developed for dexketoprofen trometamol, which may improve the acceptability and palatability of the dosage form for better patient compliance. Keywords: dexketoprofen trometamol, pellets, fluid bed technology, central composite design
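A central composite design for k factors combines 2^k factorial corners, 2k axial (star) points, and replicated centre runs in coded units. A minimal generator sketch; the rotatable alpha and the centre-point replication below are conventional defaults, not necessarily the study's exact run plan:

```python
from itertools import product

def central_composite_design(n_factors, alpha=None, n_center=3):
    """Coded-level CCD: 2^k factorial corners, 2k axial points at
    +/- alpha, plus n_center replicated centre points."""
    if alpha is None:
        alpha = (2 ** n_factors) ** 0.25  # rotatable design
    corners = [list(p) for p in product([-1.0, 1.0], repeat=n_factors)]
    axial = []
    for i in range(n_factors):
        for a in (-alpha, alpha):
            point = [0.0] * n_factors
            point[i] = a
            axial.append(point)
    centers = [[0.0] * n_factors for _ in range(n_center)]
    return corners + axial + centers

# Three factors: atomization pressure, spray rate, bed temperature.
runs = central_composite_design(3)
```

Each coded level is then mapped linearly onto its physical range, e.g. -1/+1 on atomization air pressure corresponding to 1.0/1.2 bar in the reported design space.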
Procedia PDF Downloads 136
165 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea
Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal
Abstract:
Tectonism-induced tsunami, landslide, and ground shaking leading to liquefaction, infrastructure collapse, and conflagration are common earthquake hazards experienced worldwide. Apart from human casualties, damage to built-up infrastructure such as roads, bridges, buildings, and other property is a collateral consequence. Appropriate planning, preceded by proper evaluation and assessment of the potential level of earthquake hazard at a site, is needed to safeguard people's welfare, infrastructure, and other property. The resulting information can serve as a tool to minimize earthquake risk and can foster appropriate construction design and the formulation of building codes for a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world. For the present study, GIS and Remote Sensing were utilized to evaluate and assess earthquake hazards of the study region. Subsurface geology and geomorphology were assessed and integrated within a GIS environment, coupled with seismicity data layers such as Peak Ground Acceleration (PGA), historical earthquake magnitude, and earthquake depth, to prepare liquefaction potential zones (LPZ) culminating in earthquake hazard zonation of the study sites. Liquefaction can eventuate in the aftermath of severe ground shaking given amenable site soil conditions, geology, and geomorphology; these site conditions, the wave propagation media, were assessed to identify the potential zones. The precept is that during any earthquake event a seismic wave is generated and propagates from the earthquake focus to the surface. 
As it propagates, the wave passes through geological and geomorphological features and specific soils which, according to their strength, stiffness, and moisture content, amplify or attenuate it on its way to the surface. Accordingly, the resulting intensity of shaking may or may not culminate in the collapse of built-up infrastructure. For the earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-Criteria Evaluation (MCE) with Saaty's Analytical Hierarchy Process (AHP) was adopted for this study: a GIS-based approach that integrates several factors (thematic layers) that can potentially contribute to earthquake-triggered liquefaction. The factors are weighted and ranked in order of their contribution to earthquake-induced liquefaction, and the weights and ranks assigned to each factor are normalized with the AHP technique. The spatial analysis tools (raster calculator, reclassify, and overlay analysis) in ArcGIS 10 software were mainly employed in the study. The final LPZ and earthquake hazard outputs were reclassified into 'Very High', 'High', 'Moderate', 'Low', and 'Very Low' zones to indicate the levels of hazard within the study region. Keywords: hazard micro-zonation, liquefaction, multi criteria evaluation, tectonism
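Saaty's AHP step can be sketched as: build a reciprocal pairwise-comparison matrix of the thematic layers, approximate the principal eigenvector by row geometric means, and check consistency. A minimal sketch; the example matrix is an illustrative assumption, not the study's actual layer comparisons:

```python
import math

# Saaty's random consistency index by matrix size.
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(pairwise):
    """Normalized factor weights (geometric-mean approximation of the
    principal eigenvector) and consistency ratio for a reciprocal
    pairwise-comparison matrix."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    w = [g / total for g in gm]
    aw = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n  # lambda_max estimate
    ci = (lam - n) / (n - 1) if n > 1 else 0.0
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX.get(n) else 0.0
    return w, cr

# Illustrative 3-layer comparison: first layer judged twice as
# important as the second and four times as important as the third.
weights, cr = ahp_weights([[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]])
```

A consistency ratio below 0.1 is conventionally required before the weights are fed into the raster overlay that produces the zonation.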
Procedia PDF Downloads 266
164 Development and Application of Humidity-Responsive Controlled Release Active Packaging Based on Electrospinning Nanofibers and in Situ Growth Polymeric Film in Food Preservation
Authors: Jin Yue
Abstract:
Fresh produce, especially fruits, vegetables, meats, and aquatic products, has a limited shelf life and is highly susceptible to deterioration. Essential oils (EOs) extracted from plants have excellent antioxidant and broad-spectrum antibacterial activities and can serve as natural food preservatives. But EOs are volatile, water-insoluble, pungent, and easily decompose under light and heat. Many approaches have been developed to improve the solubility and stability of EOs, such as polymeric films, coatings, nanoparticles, nano-emulsions, and nanofibers. The construction of active packaging films that incorporate EOs with high loading efficiency and controlled release has therefore received great attention, but it is still difficult to achieve accurate release of antibacterial compounds at specific target locations in active packaging. In this research, a relative-humidity-responsive packaging material was designed, employing the electrospinning technique to fabricate a nanofibrous film loaded with 4-terpineol/β-cyclodextrin inclusion complexes (4-TA/β-CD ICs). Functioning as an innovative food packaging material, the film demonstrated commendable attributes, including a pleasing appearance, thermal stability, mechanical strength, and effective barrier properties. The incorporation of the inclusion complexes greatly enhanced the antioxidant and antibacterial activity of the film, particularly against Shewanella putrefaciens, with an inhibitory efficiency of up to 65%. Crucially, the film realized controlled release of 4-TA under high (98%) relative humidity: water molecules plasticize the polymers, swell the polymer chains, and disrupt the hydrogen bonds within the cyclodextrin inclusion complex. With this long-term antimicrobial effect, the film successfully extended the shelf life of Litopenaeus vannamei shrimp to 7 days at 4 °C. 
To further improve the loading efficiency and long-acting release of EOs, we synthesized γ-cyclodextrin metal-organic frameworks (γ-CD-MOFs) and then efficiently anchored them on a chitosan-cellulose (CS-CEL) composite film by an in situ growth method for the controlled release of carvacrol (CAR). We found that the growth efficiency of γ-CD-MOFs was highest when the concentration of the CEL dispersion was 5%. Anchoring γ-CD-MOFs on the CS-CEL film significantly increased its surface area from 1.0294 m2/g to 43.3458 m2/g. Molecular docking and 1H NMR spectra indicated that γ-CD-MOF has a better complexing and stabilizing ability for CAR molecules than γ-CD. In addition, under high relative humidity the release of CAR reached 99.71 ± 0.22% on the 10th day, while under 22% RH the release plateaued at 14.71 ± 4.46%. The inhibition rate of this film against E. coli, S. aureus, and B. cinerea was more than 99%, and it extended the shelf life of strawberries to 7 days. By combining the merits of natural biopolymers and MOFs, this active packaging offers great potential as a substitute for traditional packaging materials. Keywords: active packaging, antibacterial activity, controlled release, essential oils, food quality control
Procedia PDF Downloads 64
163 Slope Stabilisation of Highly Fractured Geological Strata Consisting of Mica Schist Layers During Construction of a Tunnel Shaft
Authors: Saurabh Sharma
Abstract:
Introduction: This case study deals with the ground stabilisation of Nabi Karim Metro Station in Delhi, India, where extremely complex geology was encountered while excavating the tunnelling shaft for launching the Tunnel Boring Machine. The borehole log investigation and the Seismic Refraction Technique (SRT) indicated the presence of an extremely hard rock mass from a depth of 3-4 m, and accordingly the Geotechnical Interpretation Report (GIR) concluded the presence of Grade-IV rock from 3 m onwards and of Grade-III and better rock from 5-6 m onwards. It was therefore planned to retain the ground with secant piles all around the launching shaft and then excavate the shaft vertically after leaving a 1.5 m berm to prevent the secant piles from becoming exposed. To retain the side slopes, rock bolting with shotcreting and wire meshing was proposed, which is normal practice in such strata. However, as the depth of excavation increased, the rock quality kept decreasing at an unexpected and surprising pace, with the Grade-III rock mass at 5-6 m giving way to a conglomerate formation at a depth of 15 m. Such a deterioration from high-grade rock to a slushy conglomerate formation could hardly have been predicted and came as a surprise even to experienced geotechnical engineers. Since the excavation had already been cut down vertically to maintain the shaft size, execution continued with enhanced caution to stabilise the side slopes. But when the shaft work was about to finish, a collapse occurred on one side of the excavation shaft. This collapse was unexpected, since all measures to stabilise the side slopes had been taken after face mapping, and the grid size, diameter, and depth of the rock bolts had already been readjusted to accommodate the rock fractures. 
The above scenario was baffling even to experienced geologists and geotechnical engineers, and it was decided that any further slope stabilisation scheme would have to be designed to ensure safe completion of the works. Accordingly, the following revisions to the excavation scheme were made: (1) the excavation would be carried out while maintaining a slope based on the type of soil/rock; (2) the rock bolt type was changed from SN rock bolts to self-drilling anchors; (3) the grid size of the bolts was adjusted on real-time assessment; (4) the excavation was carried out by implementing a 'Bench Release Approach'; and (5) an aggressive real-time instrumentation scheme was adopted. Discussion: The above case study again asserts the vital importance of correct interpretation of the geological strata and the need for real-time revision of construction schemes based on actual site data. The excavation is successfully being carried out with the above revised scheme, and further details of the revised slope stabilisation scheme, the instrumentation schemes, and the monitoring results, along with actual site photographs, will form part of the final paper. Keywords: unconfined compressive strength (UCS), rock mass rating (RMR), rock bolts, self drilling anchors, face mapping of rock, secant pile, shotcrete
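Maintaining an excavation slope based on soil/rock type is typically backed by a stability check. A textbook infinite-slope factor-of-safety sketch for a dry slope, offered as a generic illustration and not as the project's actual design calculation:

```python
import math

def infinite_slope_fs(cohesion_kpa, unit_weight_kn_m3, depth_m,
                      slope_deg, friction_deg):
    """Factor of safety of an infinite dry slope:
    FS = (c + gamma*z*cos^2(beta)*tan(phi)) / (gamma*z*sin(beta)*cos(beta)),
    where c is cohesion, gamma unit weight, z depth of the slip plane,
    beta the slope angle, and phi the friction angle."""
    beta = math.radians(slope_deg)
    phi = math.radians(friction_deg)
    normal = unit_weight_kn_m3 * depth_m * math.cos(beta) ** 2
    driving = unit_weight_kn_m3 * depth_m * math.sin(beta) * math.cos(beta)
    return (cohesion_kpa + normal * math.tan(phi)) / driving

# A cohesionless slope inclined at its friction angle sits at
# limiting equilibrium (FS = 1); a slushy conglomerate with low phi
# forces a correspondingly flatter cut.
fs = infinite_slope_fs(0.0, 18.0, 5.0, 30.0, 30.0)
```

For cohesionless material the expression reduces to FS = tan(phi)/tan(beta), which is why the revised scheme ties the permissible slope directly to the soil/rock type encountered.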
Procedia PDF Downloads 66
162 High-Resolution Facial Electromyography in Freely Behaving Humans
Authors: Lilah Inzelberg, David Rand, Stanislav Steinberg, Moshe David Pur, Yael Hanein
Abstract:
Human facial expressions carry important psychological and neurological information. Facial expressions involve the co-activation of diverse muscles. They depend strongly on personal affective interpretation and on social context, and vary between spontaneous and voluntary activations. Smiling, as a special case, is among the most complex facial emotional expressions, involving no fewer than 7 different unilateral muscles. Despite their ubiquitous nature, smiles remain an elusive and debated topic: they are associated with happiness and greeting on one hand and with anger- or disgust-masking on the other. Accordingly, while high-resolution recording of muscle activation patterns in a non-interfering setting offers exciting opportunities, it remains an unmet challenge, as contemporary surface facial electromyography (EMG) methodologies are cumbersome, restricted to laboratory settings, and limited in time and resolution. Here we present a wearable, non-invasive method for objective mapping of facial muscle activation and demonstrate its application in a natural setting. The technology is based on a recently developed dry and soft electrode array specially designed for the surface facial EMG technique. Eighteen healthy volunteers (31.58 ± 3.41 years, 13 females) participated in the study. Surface EMG arrays were adhered to each participant's left and right cheeks. Participants were instructed to imitate three facial expressions (closing the eyes, wrinkling the nose, and smiling voluntarily) and to watch a funny video while their EMG signals were recorded. We focused on muscles associated with 'enjoyment', 'social' and 'masked' smiles, three categories with distinct social meanings. We developed a customized independent component analysis (ICA) algorithm to construct the desired facial musculature mapping. First, identification of the Orbicularis oculi and Levator labii superioris muscles was demonstrated from the voluntary expressions. 
Second, recordings of voluntary and spontaneous smiles were used to locate the Zygomaticus major muscle activated in Duchenne and non-Duchenne smiles. Finally, recording with a wireless device in an unmodified natural work setting revealed expressions of neutral, positive and negative emotions in face-to-face interaction. The algorithm outlined here identifies the activation sources in a subject-specific manner, insensitive to electrode placement and anatomical diversity. Our high-resolution and cross-talk-free mapping performance, along with excellent user convenience, opens new opportunities for affective processing and objective evaluation of facial expressivity, objective psychological and neurological assessment, as well as gaming, virtual reality, bio-feedback and brain-machine interface applications.
Keywords: affective expressions, affective processing, facial EMG, high-resolution electromyography, independent component analysis, wireless electrodes
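The source-separation idea behind such a mapping (the authors' customized algorithm is not published here) can be sketched with an off-the-shelf FastICA on synthetic mixed signals; the two "muscle" sources, the mixing matrix, and the scikit-learn usage below are illustrative assumptions, not the study's data:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)

# Two hypothetical muscle sources: a gated 40 Hz burst (a "smile" onset)
# and a tonic low-frequency activation; both invented for illustration.
s1 = np.sin(2 * np.pi * 40 * t) * (t > 0.5)
s2 = np.sign(np.sin(2 * np.pi * 7 * t))
S = np.c_[s1, s2]                                   # (2000, 2) true sources

# An unknown mixing matrix models crosstalk between three array electrodes.
A = np.array([[1.0, 0.4], [0.6, 1.0], [0.3, 0.8]])
X = S @ A.T + 0.01 * rng.standard_normal((t.size, 3))

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                        # recovered sources
print(S_hat.shape)                                  # (2000, 2)
```

Recovered components come back in arbitrary order and sign, which is why a subject-specific identification step (as in the abstract) is still needed to label which component belongs to which muscle.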
Procedia PDF Downloads 246
161 Purple Spots on Historical Parchments: Confirming the Microbial Succession at the Basis of Biodeterioration
Authors: N. Perini, M. C. Thaller, F. Mercuri, S. Orlanducci, A. Rubechini, L. Migliore
Abstract:
The preservation of cultural heritage is one of the major challenges of today’s society, because of the fundamental right of future generations to inherit it as the continuity with their historical and cultural identity. Parchments, consisting of a semi-solid matrix of collagen produced from animal skin (i.e., sheep or goat), are a significant part of the cultural heritage, having been used as writing material for many centuries. Due to their animal origin, parchments easily undergo biodeterioration. The most common biological damage is characterized by isolated or coalescent purple spots that often lead to the detachment of the superficial layer and the loss of the written historical content of the document. Although many parchments with the same biodegradative features have been analyzed, no common causative agent has been found so far. Very recently, a study was performed on a purple-damaged parchment roll dating back to 1244 A.D., the A.A. Arm. I-XVIII 3328, belonging to the oldest collection of the Vatican Secret Archive (Fondo 'Archivum Arcis'), by comparing uncolored undamaged and purple damaged areas of the same document. As a whole, the study provided grounds to hypothesize a model of biodeterioration consisting of a microbial succession acting in two main phases: the first one, common to all the damaged parchments, is characterized by halophilic and halotolerant bacteria fostered by the salty environment within the parchment, possibly induced by the brining of the hides; the second one, changing with the individual history of each parchment, determines the identity of its colonizers. This model was pivotal to the present study, performed by different labs of the Tor Vergata University (Rome, Italy) in collaboration with the Vatican Secret Archive. 
Three documents, belonging to a collection of dramatically damaged parchments archived as 'Faldone Patrizi A 19' (dating back to the XVII century A.D.), were analyzed through a multidisciplinary approach, including three updated technologies: (i) Next Generation Sequencing (NGS, Illumina) to describe the microbial communities colonizing the damaged and undamaged areas, (ii) Raman spectroscopy to analyze the purple pigments, (iii) Light Transmitted Analysis (LTA) to evaluate the kind and extent of the damage to native collagen. The metagenomic analysis obtained from NGS revealed DNA sequences belonging to Halobacterium salinarum mainly in the undamaged areas. Raman spectroscopy detected pigments within the purple spots, mainly bacteriorhodopsin/rhodopsin-like pigments; bacteriorhodopsin is a purple transmembrane protein containing retinal that is present in Halobacteria. The LTA technique revealed extremely damaged collagen structures in both damaged and undamaged areas of the parchments. In the light of these data, the study represents a first confirmation of the microbial succession model described above. The demonstration of this model is pivotal not only to start any possible new restoration strategy to bring historical parchments back to their original beauty, but also to open opportunities for intervention on a huge number of documents.
Keywords: biodeterioration, parchments, purple spots, ecological succession
Procedia PDF Downloads 171
160 Co-management Organizations: A Way to Facilitate Sustainable Management of the Sundarbans Mangrove Forests of Bangladesh
Authors: Md. Wasiul Islam, Md. Jamius Shams Sowrov
Abstract:
The Sundarbans is the largest single tract of mangrove forest in the world. It is located in the southwest corner of Bangladesh. It is a unique ecosystem and an important breeding and nursery ground for a rich biodiversity. It supports the livelihoods of about 3.5 million coastal dwellers and also protects the coastal belt and inland areas from various natural calamities. Historically, the management of the Sundarbans was controlled by the Bangladesh Forest Department following a top-down approach without the involvement of local communities. Such a fence-and-fine, blueprint approach was not effective in protecting the forest, and the Sundarbans degraded severely in the recent past: fifty percent of the total tree cover has been lost in the last 30 years. Therefore, a local multi-stakeholder, bottom-up co-management approach was introduced in some parts of the Sundarbans in 2006 to improve the biodiversity status by enhancing the protection level of the forest. Various co-management organizations were introduced under the co-management approach, through which local community people could actively take part in activities related to the management and welfare of the Sundarbans, including the decision-making process. Against this backdrop, the objective of the study was to assess the performance of co-management organizations in facilitating sustainable management of the Sundarbans mangrove forests. The qualitative study used face-to-face interviews to collect data with two sets of semi-structured questionnaires. A total of 40 respondents from eight villages under two forest ranges participated in the research: 32 representatives of the local communities as well as 8 official representatives involved in the co-management approach were interviewed using the snowball sampling technique. 
The study shows that the co-management approach improved the governance system of the Sundarbans through the active participation of local community people and their interactions with officials via the platform of co-management organizations. It promoted accountability and transparency to some extent through formal and informal rules and regulations. It also improved the power structure of the management process by fostering local empowerment, particularly of women. Moreover, people were able to learn from their interactions with and within the co-management organizations, and the interventions improved environmental awareness and promoted social learning. The respondents considered good governance the most important factor for achieving the goal of sustainable management and biodiversity conservation of the Sundarbans. The success of the co-management planning process also depends on the active and functional participation of different stakeholders, including the local communities, with co-management organizations considered the most functional platform. However, the governance system also faced various challenges that created barriers to the sustainable management of the Sundarbans mangrove forest: some members were still involved in illegal forest operations and obstructed sustainable management. Respondents recommended greater patronage from the government, financial and logistic incentives for alternative income generation opportunities, and an effective participatory monitoring and evaluation system to improve sustainable management of the Sundarbans.
Keywords: Bangladesh, co-management approach, co-management organizations, governance, Sundarbans, sustainable management
Procedia PDF Downloads 178
159 Optimal-Based Structural Vibration Attenuation Using Nonlinear Tuned Vibration Absorbers
Authors: Pawel Martynowicz
Abstract:
Vibrations are a crucial problem for slender structures such as towers, masts, chimneys, wind turbines, bridges, high buildings, etc., which is why most of them are equipped with vibration attenuation or fatigue reduction solutions. In this work, a slender structure (i.e., a wind turbine tower-nacelle model) equipped with nonlinear, semiactive tuned vibration absorber(s) is analyzed. For the purposes of this study, magnetorheological (MR) dampers are used as semiactive actuators. Several optimal-based approaches to structural vibration attenuation are investigated against the standard ‘ground-hook’ law and passive tuned vibration absorber implementations. The common approach to optimal control of nonlinear systems is offline computation of the optimal solution; however, the open-loop control so determined suffers from a lack of robustness to uncertainties (e.g., unmodelled dynamics, perturbations of external forces or initial conditions), and thus perturbation control techniques are often used. However, proper linearization may be an issue for highly nonlinear systems with implicit relations between state, co-state, and control. The main contribution of the author is the development, as well as the numerical and experimental verification, of Pontryagin maximum-principle-based vibration control concepts that directly produce the actuator control input (not the demanded force); thus, the force tracking algorithm that introduces control inaccuracy is entirely omitted. These concepts, including one-step optimal control, quasi-optimal control, and an optimal-based modified ‘ground-hook’ law, can be directly implemented in online, real-time feedback control for periodic (or semi-periodic) disturbances with invariant or time-varying parameters, as well as for non-periodic, transient or random disturbances, which is a limitation of some other known solutions. 
No offline calculation, excitation/disturbance assumption or vibration frequency determination is necessary; moreover, all of the nonlinear actuator (MR damper) force constraints, i.e., no active forces, lower and upper saturation limits, hysteresis-type dynamics, etc., are embedded in the control technique, thus the solution is optimal or suboptimal for the assumed actuator, respecting its limitations. Depending on the selected method variant, a moderate or decisive reduction in the computational load is possible compared to other methods of nonlinear optimal control, while assuring the quality and robustness of the vibration reduction system, as well as addressing multi-pronged operational aspects, such as possible minimization of the amplitude of the deflection and acceleration of the vibrating structure, its potential and/or kinetic energy, the required actuator force, the control input (e.g., electric current in the MR damper coil) and/or the stroke amplitude. The developed solutions are characterized by high vibration reduction efficiency: the obtained maximum values of the dynamic amplification factor are close to 2.0, while for the best of the passive systems, these values exceed 3.5.
Keywords: magnetorheological damper, nonlinear tuned vibration absorber, optimal control, real-time structural vibration attenuation, wind turbines
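For orientation, the baseline 'ground-hook' idea against which the optimal-based methods are benchmarked can be sketched as a clipped on-off law that commands damper coil current only when the achievable (dissipative) force opposes the structure's motion; the gain, saturation limit, and linear current mapping below are illustrative assumptions, not the paper's optimal-based variants:

```python
def ground_hook_current(v_struct, v_rel, gain=2.0, i_max=1.0):
    """Semi-active 'ground-hook' law sketch for an MR damper.

    v_struct: absolute velocity of the vibrating structure
    v_rel:    relative velocity across the damper
    The damper can only dissipate energy, so a nonzero current is
    commanded only when the dissipative force opposes structure motion;
    the command is clipped to the coil's saturation limit i_max.
    """
    if v_struct * v_rel > 0:          # damper force can oppose structure motion
        i = gain * abs(v_struct)      # demand scaled by structure velocity
    else:
        i = 0.0                       # otherwise switch the damper "soft"
    return min(max(i, 0.0), i_max)    # respect no-active-force and saturation

print(ground_hook_current(0.3, 0.5))   # dissipative phase -> 0.6
print(ground_hook_current(0.3, -0.5))  # non-dissipative phase -> 0.0
```

The optimal-based methods in the abstract replace this heuristic switching with control inputs derived from the Pontryagin maximum principle, while keeping the same actuator constraints (no active forces, saturation, hysteresis) inside the formulation.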
Procedia PDF Downloads 124
158 Numerical Model of Crude Glycerol Autothermal Reforming to Hydrogen-Rich Syngas
Authors: A. Odoom, A. Salama, H. Ibrahim
Abstract:
Hydrogen is a clean source of energy for power production and transportation. The main source of hydrogen in this research is biodiesel. Glycerol, also called glycerine, is a by-product of biodiesel production by transesterification of vegetable oils and methanol. This is a more reliable and environmentally friendly source of hydrogen than fossil fuels. A typical composition of crude glycerol comprises glycerol, water, organic and inorganic salts, soap, methanol and small amounts of glycerides. Crude glycerol has limited industrial application due to its low purity; thus, the usage of crude glycerol can significantly enhance the sustainability and production of biodiesel. Reforming is an approach to hydrogen production, mainly Steam Reforming (SR), Autothermal Reforming (ATR) and Partial Oxidation Reforming (POR). SR produces high hydrogen conversions and yields but is highly endothermic, whereas POR is exothermic. On the downside, POR yields less hydrogen and produces a large number of side reactions. ATR, which is a fusion of partial oxidation reforming and steam reforming, is thermally neutral because the net reactor heat duty is zero. It has a relatively high hydrogen yield and selectivity, and it limits coke formation. The complex chemical processes that take place during the production phases make it relatively difficult to construct a reliable and robust numerical model. A numerical model is a tool to mimic reality and provide insight into the influence of the parameters. In this work, we introduce a finite volume numerical study for an 'in-house' lab-scale experiment of ATR. Previous numerical studies on this process have used either Comsol or nodal finite difference analysis. However, Comsol is a commercial package that is not readily available everywhere, and the lab-scale experiment can be considered well mixed in the radial direction. 
One spatial dimension therefore suffices to capture the essential features of ATR; in this work, we develop our own numerical approach using MATLAB. A continuum fixed bed reactor is modelled in MATLAB with both pseudo-homogeneous and heterogeneous models. The drawback of a nodal finite difference formulation is that it is not locally conservative, which means that materials and momenta can be generated inside the domain as an artifact of the discretization. The control volume method, on the other hand, is locally conservative and is well suited to problems where materials are generated and consumed inside the domain. In this work, the species mass balance, Darcy’s equation and the energy equations are solved using an operator splitting technique: diffusion-like terms are discretized implicitly, while advection-like terms are discretized explicitly. An upwind scheme is adopted for the advection term to ensure accuracy and positivity. Comparisons with the experimental data show very good agreement, which builds confidence in our modeling approach. The models obtained were validated and optimized for better results.
Keywords: autothermal reforming, crude glycerol, hydrogen, numerical model
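The conservative, positivity-preserving upwind treatment of the advection term can be sketched in one dimension as follows; this is a Python illustration under the stated assumptions (uniform grid, constant positive velocity, fixed inlet value), while the study itself uses MATLAB:

```python
import numpy as np

def upwind_advect(c, u, dx, dt):
    """One explicit first-order upwind step for dc/dt + u * dc/dx = 0 on a
    uniform 1-D finite-volume grid with u > 0 (inflow value held at c[0]).
    Each cell's update uses the flux from its upwind neighbour, so the
    scheme is locally conservative and keeps c nonnegative for
    CFL = u*dt/dx <= 1."""
    cfl = u * dt / dx
    assert cfl <= 1.0, "CFL condition violated"
    c_new = c.copy()
    c_new[1:] = c[1:] - cfl * (c[1:] - c[:-1])  # upwind flux difference
    return c_new

c = np.zeros(10)
c[0] = 1.0                     # inlet concentration (illustrative)
for _ in range(5):
    c = upwind_advect(c, u=1.0, dx=0.1, dt=0.05)   # CFL = 0.5
print(c[:4])                   # monotone front advected from the inlet
```

The explicit upwind advection step above would be combined, per the operator-splitting description, with an implicit solve for the diffusion-like terms in each time step.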
Procedia PDF Downloads 140
157 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms
Authors: Abdul Rehman, Bo Liu
Abstract:
Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses. These losses contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce the secondary flow loss. In this paper, non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments and the optimization. All flow simulations were conducted using steady RANS with Spalart-Allmaras as the turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves. Each cut, having multiple control points, was created along the virtual streamlines in the blade channel. For the design of experiments, each sample was generated arbitrarily based on values automatically chosen for the control points defined during parameterization. The optimization was achieved using two algorithms, i.e., a stochastic algorithm and a gradient-based algorithm. For the stochastic case, a genetic algorithm based on an artificial neural network was used as the optimization method in order to approach the global optimum. The evaluation of successive design iterations was performed using the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it uses derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant. 
The performance was quantified using a multi-objective function. Besides these two classes of optimization methods, there were four optimization cases: the hub only, the shroud only, the combination of hub and shroud, and a sequential case in which the shroud endwall was optimized using the optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub increased. The shroud optimization resulted in an increase in efficiency, and total pressure loss and entropy were reduced. The combination of hub and shroud did not match the results achieved for the individual cases of the hub and the shroud, which may be caused by the fact that there were too many control variables. The fourth case showed the best result because the optimized hub was used as the initial geometry to optimize the shroud: the efficiency increased more than in the individual optimization cases, with a mass flow rate equal to that of the baseline design of the turbine. The results of the artificial neural network and the conjugate gradient method were compared.
Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization
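The stochastic branch (genetic algorithm with a cheap surrogate evaluating candidates before the flow solver) can be sketched in miniature as an elitist GA; the quadratic stand-in objective, the number of control points, and all GA settings below are invented for illustration and stand in for the ANN-surrogate plus CFD evaluation:

```python
import numpy as np

rng = np.random.default_rng(1)

def efficiency(x):
    """Stand-in objective for isentropic efficiency as a function of four
    endwall control points; in the study this evaluation is an ANN
    surrogate (and, for the best candidates, a CFD solve)."""
    return 0.92 - np.sum((x - 0.3) ** 2, axis=-1)

def genetic_step(pop, n_keep=10, sigma=0.05):
    """One elitist GA generation: keep the fittest designs unchanged and
    refill the population with mutated copies of random parents."""
    fit = efficiency(pop)
    parents = pop[np.argsort(fit)[-n_keep:]]
    n_children = len(pop) - n_keep
    idx = rng.integers(0, n_keep, n_children)
    children = parents[idx] + sigma * rng.standard_normal((n_children, pop.shape[1]))
    return np.vstack([parents, children])

pop = rng.uniform(-1.0, 1.0, (40, 4))   # 40 candidate endwall shapes
for _ in range(60):
    pop = genetic_step(pop)
print(round(float(efficiency(pop).max()), 3))
```

Because the elite designs are carried over unmutated, the best surrogate efficiency is non-decreasing across generations, mirroring how the ANN-filtered GA in the abstract steadily approaches the global optimum before the expensive flow solver is invoked.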
Procedia PDF Downloads 225
156 Effects of Glucogenic and Lipogenic Diets on Ruminal Microbiota and Metabolites in Vitro
Authors: Beihai Xiong, Dengke Hua, Wouter Hendriks, Wilbert Pellikaan
Abstract:
To improve the energy status of dairy cows in early lactation, much work has been done on adjusting the starch-to-fiber ratio in the diet. As a complex ecosystem, the rumen contains a large population of microorganisms that play a crucial role in feed degradation. Further study of the microbiota alterations and metabolic changes under different dietary energy sources is essential and valuable to better understand the function of the ruminal microorganisms and thereby to optimize rumen function and increase feed efficiency. The present study focuses on the effects of two glucogenic diets (G: ground corn and corn silage; S: steam-flaked corn and corn silage) and a lipogenic diet (L: sugar beet pulp and alfalfa silage) on rumen fermentation, gas production, the ruminal microbiota and metabolome, and their correlations in vitro. Gas production was recorded continuously, and the gas volume and production rate at 6, 12, 24 and 48 h were calculated separately. The fermentation end-products were measured after fermenting for 48 h. The ruminal bacterial and archaeal communities were determined by 16S rRNA sequencing, and the metabolome was profiled by LC-MS methods. Compared to diets G and S, the L diet had a lower dry matter digestibility, propionate production, and ammonia-nitrogen concentration. The two glucogenic diets performed worse in controlling methane and lactic acid production compared to the L diet. The S diet produced the greatest cumulative gas volume at all time points during incubation compared to the G and L diets. The metabolic analysis revealed that lipid digestion was up-regulated by diet L relative to the other diets. At the subclass level, most metabolites belonging to the fatty acids and conjugates were higher, but most metabolites belonging to the amino acids, peptides, and analogs were lower in diet L than in the others. 
Differences in rumen fermentation characteristics were associated with (or resulted from) changes in the relative abundance of bacterial and archaeal genera. Most highly abundant bacteria were stable or only slightly influenced by the diets, while several amylolytic and cellulolytic bacteria were sensitive to the dietary changes. The L diet had a significantly higher number of cellulolytic bacteria, including the genera Ruminococcus, Butyrivibrio, Eubacterium, Lachnospira, unclassified Lachnospiraceae, and unclassified Ruminococcaceae. The relative abundances of amylolytic bacteria genera including Selenomonas_1, Ruminobacter, and Succinivibrionaceae_UCG-002 were higher in diets G and S. These affected bacteria were also shown to be highly associated with certain metabolites. Selenomonas_1 and Succinivibrionaceae_UCG-002 may contribute to the higher propionate production in diets G and S through enhancing the succinate pathway. The results indicated that the two glucogenic diets had a greater extent of gas production, a higher dry matter digestibility, and produced more propionate than diet L. Steam-flaked corn did not perform better on fermentation end-products than ground corn. This study offers a deeper understanding of ruminal microbial functions, which could assist in improving rumen function and thereby ruminant production.
Keywords: gas production, metabolome, microbiota, rumen fermentation
Procedia PDF Downloads 153
155 ADAM10 as a Potential Blood Biomarker of Cognitive Frailty
Authors: Izabela P. Vatanabe, Rafaela Peron, Patricia Manzine, Marcia R. Cominetti
Abstract:
Introduction: Considering the increase in life expectancy of the world population, there is an emerging concern in health services to provide better care to the elderly through health promotion, prevention, and treatment. Frailty syndrome is prevalent in elderly people worldwide; this complex and heterogeneous clinical syndrome consists of the presence of physical frailty associated with cognitive dysfunction, though in the absence of dementia. It can be characterized by exhaustion, unintentional weight loss, decreased walking speed, weakness and a low level of physical activity; in addition, each of these symptoms may be a predictor of adverse outcomes such as hospitalization, falls, functional decline, institutionalization, and death. Cognitive frailty is a recent concept in the literature, defined as the presence of physical frailty associated with mild cognitive impairment (MCI) in the absence of dementia. This new concept has been considered a subtype of frailty which, along with the aging process and its interaction with physical frailty, accelerates functional decline and can result in a poor quality of life for the elderly. MCI represents a risk factor for Alzheimer's disease (AD) in view of its high conversion rate to this disease. Comorbidities and physical frailty are frequently found in AD patients and are closely related to the heterogeneity and clinical manifestations of the disease. Our group previously observed decreased platelet ADAM10 levels in AD patients compared to cognitively healthy subjects matched by sex, age and education. Objective: Based on these previous results, this study aims to evaluate whether platelet ADAM10 levels could act as a biomarker of cognitive frailty. Methods: The study was approved by the Ethics Committee of the Federal University of São Carlos (UFSCar) and conducted in the municipality of São Carlos, where the university is headquartered. 
Biological samples from the subjects were collected, analyzed and then stored in a biorepository. Platelet ADAM10 levels were analyzed by the western blotting technique in subjects with MCI and compared to subjects without cognitive impairment, both with and without frailty. Statistical tests of association, regression and diagnostic accuracy were performed. Results: The results show that the ADAM10/β-actin ratio is decreased in elderly individuals with cognitive frailty compared to non-frail and cognitively healthy controls. Previous studies performed by this research group, already mentioned above, demonstrated that this reduction is even greater in AD patients. Therefore, the ADAM10/β-actin ratio appears to be a potential biomarker for cognitive frailty. The results bring important contributions to an accurate diagnosis of cognitive frailty from the perspective of ADAM10 as a biomarker for this condition; however, more experiments are being conducted with a larger number of subjects, which will help to clarify the role of ADAM10 as a biomarker of cognitive frailty and contribute to the implementation of tools for its diagnosis. Such tools can be used in public policies for the diagnosis of cognitive frailty in the elderly, resulting in more adequate planning for health teams and a better quality of life for the elderly.
Keywords: ADAM10, biomarkers, cognitive frailty, elderly
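The diagnostic accuracy analysis mentioned above typically reduces to a ROC-style question: how well does the biomarker rank frail above non-frail subjects? A minimal rank-based AUC sketch follows; the ratio values are invented for illustration and are not study data:

```python
def roc_auc(scores_pos, scores_neg):
    """Rank-based AUC: the probability that a positive case scores above
    a negative one, counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical ADAM10/beta-actin ratios: lower in cognitively frail
# subjects, so we negate the ratio to keep "higher score = frail".
frail   = [-0.41, -0.38, -0.52, -0.47]
control = [-0.80, -0.73, -0.66, -0.71]
print(roc_auc(frail, control))   # 1.0: these toy groups separate fully
```

Real data would not separate perfectly; an AUC well above 0.5, together with the regression and association tests, is what would support ADAM10/β-actin as a usable biomarker.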
Procedia PDF Downloads 236
154 Aerobic Biodegradation of a Chlorinated Hydrocarbon by Bacillus Cereus 2479
Authors: Srijata Mitra, Mobina Parveen, Pranab Roy, Narayan Chandra Chattopadhyay
Abstract:
Chlorinated hydrocarbons can be a major pollution problem in groundwater as well as in soil. Many people come into contact with these chemicals daily, either accidentally or professionally in the laboratory. One of the most common sources of chlorinated hydrocarbon contamination of soil and groundwater is industrial effluent. The wide use and discharge of trichloroethylene (TCE), a volatile chlorohydrocarbon from the chemical industry, has led to major water pollution in rural areas. TCE is mainly used as an industrial metal degreaser. Biotransformation of TCE to the potent carcinogen vinyl chloride (VC) by consortia of anaerobic bacteria might play a role in this context. For these reasons, the aim of the current study was to isolate and characterize the genes involved in TCE metabolism and to investigate those genes in silico. To our knowledge, only one aromatic dioxygenase system, the toluene dioxygenase of Pseudomonas putida F1, has been shown to be involved in TCE degradation, and this is the first instance of the Bacillus cereus group being used in the biodegradation of trichloroethylene. A novel bacterial strain, 2479, was isolated from an oil depot site at Rajbandh, Durgapur (West Bengal, India) by the enrichment culture technique. It was identified based on a polyphasic approach and ribotyping. The bacterium is Gram-positive, rod-shaped, endospore-forming and capable of degrading trichloroethylene as the sole carbon source. On the basis of phylogenetic data and Fatty Acid Methyl Ester analysis, strain 2479 should be placed within the genus Bacillus and the species cereus. However, the present isolate (strain 2479) is unique and sharply different from the usual Bacillus strains in its biodegrading nature. The Fujiwara test showed that strain 2479 could degrade TCE efficiently. The gene for TCE biodegradation was PCR-amplified from the genomic DNA of Bacillus cereus 2479 using todC1 gene-specific primers. 
The 600 bp amplicon was cloned into the expression vector pUC18 in the E. coli host XL1-Blue, expressed under the control of the lac promoter, and its nucleotide sequence was determined. The gene sequence was deposited at NCBI under Accession no. GU183105. The in silico approach involved predicting the physico-chemical properties of the deduced Tce1 protein using the ProtParam tool. The tce1 gene contains a 342 bp ORF encoding 114 amino acids with a predicted molecular weight of 12.6 kDa; the theoretical pI of the polypeptide is 5.17, molecular formula: C559H886N152O165S8, total number of atoms: 1770, aliphatic index: 101.93, instability index: 28.60, Grand Average of Hydropathicity (GRAVY): 0.152. Three differentially expressed proteins (97.1, 40 and 30 kDa) directly involved in TCE biodegradation were found to react immunologically to antibodies raised against TCE-inducible proteins in Western blot analysis. The present study suggests that the cloned gene product (TCE1) is capable of degrading TCE, as verified chemically.
Keywords: cloning, Bacillus cereus, in silico analysis, TCE
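Of the ProtParam-style properties listed above, the GRAVY value is the simplest to reproduce: it is the mean Kyte-Doolittle hydropathy over the sequence. The sketch below uses a toy tetrapeptide, since the Tce1 sequence itself is not reproduced in this abstract:

```python
# Standard Kyte-Doolittle hydropathy scale.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def gravy(seq):
    """Grand Average of Hydropathicity: mean Kyte-Doolittle value over
    the sequence, as reported by ProtParam."""
    return sum(KD[aa] for aa in seq) / len(seq)

print(round(gravy("GAVL"), 2))  # (-0.4 + 1.8 + 4.2 + 3.8) / 4 = 2.35
```

A positive GRAVY (such as the 0.152 reported for Tce1) indicates a mildly hydrophobic protein overall; negative values indicate hydrophilic ones.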
Procedia PDF Downloads 398
153 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland
Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski
Abstract:
PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to its large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentrations. Due to the very complicated nature of this issue, a Machine Learning approach was used. For this purpose, Convolutional Neural Networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour by hour. The evaluation of the learning process for the investigated models was mostly based upon the mean square error criterion; however, during model validation, a number of other methods of quantitative evaluation were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind information, as well as external forecasts of temperature and wind for the next 24 h, served as input data. 
Due to the specificity of the CNN-type network, these data are transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of the system is a vector of 24 elements containing the predicted PM10 concentration for each hour of the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several models that gave the best results were selected, and a comparison was made with models based on linear regression. The numerical tests carried out, using real ‘big’ data, fully confirmed the positive properties of the presented method. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. What is more, the use of neural networks increased the coefficient of determination (R²) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks
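The tensor flow through such a network (multichannel time series in, 24 hourly predictions out) can be sketched with a single hand-rolled 1-D convolution layer; the window length, channel counts, random weights, and dense readout below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_valid(x, kernels):
    """'Valid' 1-D convolution over the time axis: x is (T, C_in),
    kernels is (C_out, K, C_in); returns (T - K + 1, C_out) after ReLU."""
    c_out, k, _ = kernels.shape
    t_out = x.shape[0] - k + 1
    y = np.empty((t_out, c_out))
    for i in range(t_out):
        window = x[i:i + k]                       # (K, C_in)
        y[i] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return np.maximum(y, 0.0)                     # ReLU activation

# 48 past hours x 6 input channels (e.g. PM2.5, PM10, temperature, wind,
# and external forecasts), all values random placeholders here.
x = rng.standard_normal((48, 6))
h = conv1d_valid(x, rng.standard_normal((8, 5, 6)))    # hidden: (44, 8)
w_out = 0.01 * rng.standard_normal((44 * 8, 24))
pm10_forecast = h.reshape(-1) @ w_out                  # 24 hourly outputs
print(h.shape, pm10_forecast.shape)                    # (44, 8) (24,)
```

A trained model would stack several such convolution/pooling layers and fit the weights by minimizing the mean square error mentioned in the abstract; the sketch only demonstrates the shape bookkeeping from input tensor to 24-element output vector.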
Procedia PDF Downloads 149
152 3D Label-Free Bioimaging of Native Tissue with Selective Plane Illumination Optical Microscopy
Authors: Jing Zhang, Yvonne Reinwald, Nick Poulson, Alicia El Haj, Chung See, Mike Somekh, Melissa Mather
Abstract:
Biomedical imaging of native tissue using light offers the potential to obtain excellent structural and functional information in a non-invasive manner with good temporal resolution. Image contrast can be derived from intrinsic absorption, fluorescence, or scatter, or through the use of extrinsic contrast agents. A major challenge in applying optical microscopy to in vivo tissue imaging is light attenuation, which limits penetration depth and achievable imaging resolution. Recently, Selective Plane Illumination Microscopy (SPIM) has been used to map the 3D distribution of fluorophores dispersed in biological structures. In this approach, a focused sheet of light illuminates the sample from the side to excite fluorophores within the sample of interest. Images are formed by detecting fluorescence emission orthogonal to the illumination axis. By scanning the sample along the detection axis and acquiring a stack of images, 3D volumes can be obtained. The combination of rapid image acquisition with the low photon dose to samples and the optical sectioning that SPIM provides makes it an attractive approach for imaging biological samples in 3D. To date, all implementations of SPIM rely on fluorescence reporters, whether endogenous or exogenous. This approach has the disadvantage that, in the case of exogenous probes, the specimens are altered from their native state, rendering them unsuitable for in vivo studies; in general, fluorescence emission is also weak and transient. Here we present, for the first time to our knowledge, a label-free implementation of SPIM with downstream applications in the clinical setting. The experimental setup used in this work incorporates both label-free and fluorescent illumination arms, in addition to a high-specification camera that can be partitioned for simultaneous imaging of both fluorescent emission and scattered light from intrinsic sources of optical contrast in the sample being studied.
This work first involved calibration of the imaging system and validation of the label-free method with well-characterised fluorescent microbeads embedded in agarose gel. 3D constructs of mammalian cells cultured in agarose gel at varying cell concentrations were then imaged. A time-course study to track cell proliferation in the 3D construct was also carried out, and finally a native tissue sample was imaged. For each sample, multiple images were obtained by scanning the sample along the axis of detection, and 3D maps were reconstructed. The results obtained validated label-free SPIM as a viable approach for imaging cells in a 3D gel construct and in native tissue. This technique has potential for use in a near-patient environment, providing results quickly in an easy-to-use manner, with more information and improved spatial resolution and depth penetration than current approaches.
Keywords: bioimaging, optics, selective plane illumination microscopy, tissue imaging
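The scan-and-stack volume reconstruction described above, acquiring 2D frames while translating the sample along the detection axis and assembling them into a 3D map, can be sketched with toy data standing in for camera frames.

```python
# Minimal sketch (hypothetical data) of the SPIM volume-building step:
# 2D frames acquired at successive scan positions are stacked into a
# 3D volume indexed as volume[z][y][x].

def stack_frames(frames):
    """Stack a list of equal-sized 2D frames into a 3D volume."""
    h, w = len(frames[0]), len(frames[0][0])
    for f in frames:
        assert len(f) == h and all(len(row) == w for row in f), "frame size mismatch"
    return list(frames)

def axial_profile(volume, y, x):
    """Intensity along the scan (z) axis at one lateral position."""
    return [frame[y][x] for frame in volume]

# Three tiny 2x2 frames standing in for camera images
frames = [[[0, 1], [2, 3]],
          [[4, 5], [6, 7]],
          [[8, 9], [10, 11]]]
volume = stack_frames(frames)
print(axial_profile(volume, 0, 1))  # → [1, 5, 9]
```

Real implementations would of course work on full camera frames and register the stack against the known scan step, but the indexing logic is the same.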
Procedia PDF Downloads 247
151 Mechanical Properties of Poly(Propylene)-Based Graphene Nanocomposites
Authors: Luiza Melo De Lima, Tito Trindade, Jose M. Oliveira
Abstract:
The development of thermoplastic-based graphene nanocomposites has been of great interest not only to the scientific community but also to different industrial sectors. Due to possible performance improvements and weight reduction, thermoplastic nanocomposites hold great promise as a new class of materials. These nanocomposites are relevant to the automotive industry, namely because the CO2 emission limits imposed by European Commission (EC) regulations can be met without compromising a car’s performance, simply by reducing its weight. Thermoplastic polymers have some advantages over thermosetting polymers, such as higher productivity, lower density, and recyclability. In the automotive industry, for example, poly(propylene) (PP) is a common thermoplastic polymer, representing more than half of the polymeric raw material used in automotive parts. Graphene-based materials (GBM) are potential nanofillers that can improve the properties of polymer matrices at very low loadings. In comparison to other composites, such as fiber-based composites, the weight reduction can positively affect their processing and future applications. However, the properties and performance of GBM/polymer nanocomposites depend on the type of GBM and polymer matrix, the degree of dispersion, and especially the type of interactions between the fillers and the polymer matrix. In order to take advantage of the superior mechanical strength of GBM, strong interfacial adhesion between the GBM and the polymer matrix is required for efficient stress transfer from GBM to polymer. Thus, chemical compatibilizers and physicochemical modifications have been reported as important tools during the processing of these nanocomposites. In this study, PP-based nanocomposites were obtained by a simple melt blending technique using a Brabender-type mixer. Graphene nanoplatelets (GnPs) were applied as structural reinforcement.
Two compatibilizers were used to improve the interaction between the PP matrix and the GnPs: PP grafted with maleic anhydride (PPgMA) and PPgMA modified with a tertiary amine alcohol (PPgDM). The samples for tensile and Charpy impact tests were obtained by injection molding. The results suggest that the presence of GnPs can increase the mechanical strength of the polymer. However, it was verified that the presence of GnPs can reduce impact resistance, making the nanocomposites more brittle than neat PP. The incorporation of compatibilizers increased the impact resistance, suggesting that the compatibilizers enhance the adhesion between PP and GnPs. Compared to neat PP, the increase in Young’s modulus of the non-compatibilized nanocomposite demonstrated that GnP incorporation can improve the stiffness of the polymer. This trend can be related to the numerous physical crosslinking points between the PP matrix and the GnPs. Furthermore, the decrease in strain at yield of PP/GnPs, together with the enhancement of Young’s modulus, confirms that GnP incorporation led to an increase in stiffness but a decrease in toughness. Moreover, the results demonstrated that the incorporation of compatibilizers did not affect the Young’s modulus and strain-at-yield results compared to the non-compatibilized nanocomposite. The incorporation of these compatibilizers improved the nanocomposites’ mechanical properties compared both to the non-compatibilized nanocomposite and to a PP sample used as reference.
Keywords: graphene nanoplatelets, mechanical properties, melt blending processing, poly(propylene)-based nanocomposites
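A back-of-the-envelope check of why even a small GnP loading can raise Young's modulus is the Voigt rule-of-mixtures upper bound. This is not the authors' analysis, and the input moduli below (PP around 1.5 GPa, GnP in-plane modulus around 1000 GPa) are illustrative assumptions.

```python
# Voigt (iso-strain) rule-of-mixtures upper bound for composite modulus.
# All input values are assumed for illustration, not measured in the study.

def voigt_modulus(E_matrix, E_filler, v_filler):
    """Upper-bound estimate: E_c = E_m * (1 - Vf) + E_f * Vf (moduli in GPa)."""
    return E_matrix * (1.0 - v_filler) + E_filler * v_filler

E_pp, E_gnp = 1.5, 1000.0  # GPa, assumed
for vf in (0.0, 0.01, 0.02):
    print(vf, round(voigt_modulus(E_pp, E_gnp, vf), 3))
```

Real nanocomposites fall well below this bound because of platelet orientation and imperfect interfacial adhesion, which is exactly the gap the compatibilizers discussed above are meant to narrow.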
Procedia PDF Downloads 187
150 Detection and Quantification of Viable but Not Culturable Vibrio Parahaemolyticus in Frozen Bivalve Molluscs
Authors: Eleonora Di Salvo, Antonio Panebianco, Graziella Ziino
Abstract:
Background: Vibrio parahaemolyticus is a human pathogen that is widely distributed in marine environments and is frequently isolated from raw seafood, particularly shellfish. Consumption of raw or undercooked seafood contaminated with V. parahaemolyticus may lead to acute gastroenteritis. Vibrio spp. have excellent resistance to low temperatures, so they can persist in frozen products for a long time. Recently, the viable but non-culturable (VBNC) state of bacteria has attracted great attention, and more than 85 bacterial species have been demonstrated to be capable of entering this state. VBNC cells cannot grow on conventional culture media but are viable and maintain metabolic activity, which may constitute an unrecognized source of food contamination and infection. V. parahaemolyticus can also enter the VBNC state under nutrient starvation or low-temperature conditions. Aim: The aim of the present study was to optimize methods to investigate V. parahaemolyticus VBNC cells and their presence in frozen bivalve molluscs regularly placed on the market. Materials and Methods: Propidium monoazide (PMA) treatment was combined with real-time polymerase chain reaction (qPCR) targeting the tl gene to detect and quantify V. parahaemolyticus in the VBNC state. PMA-qPCR proved highly specific for V. parahaemolyticus, with a limit of detection (LOD) of 10-1 log CFU/mL in pure bacterial culture. A standard curve for V. parahaemolyticus cell concentrations was established, with a correlation coefficient of 0.9999 over the linear range of 1.0 to 8.0 log CFU/mL. A total of 77 samples of frozen bivalve molluscs (35 mussels; 42 clams) were subsequently subjected to qualitative (in alkaline phosphate buffer solution) and quantitative detection of V. parahaemolyticus on thiosulfate-citrate-bile salts-sucrose (TCBS) agar (DIFCO) with 2.5% NaCl and incubation at 30°C for 24-48 hours.
Real-time PCR was conducted on homogenate samples, in duplicate, with and without propidium monoazide (PMA) dye, exposed for 45 min under halogen lights (650 W). Total DNA was extracted from the cell suspension in homogenate samples by a boiling protocol. Real-time PCR was conducted with species-specific primers for V. parahaemolyticus. The reaction was performed in a final volume of 20 µL, containing 10 µL of SYBR Green Mixture (Applied Biosystems), 2 µL of template DNA, 2 µL of each primer (final concentration 0.6 mM), and 4 µL of H2O. The qPCR was carried out on a CFX96 Touch (Bio-Rad, USA). Results: All samples were negative in both the quantitative and qualitative detection of V. parahaemolyticus by the classical culturing technique. PMA-qPCR allowed us to identify VBNC V. parahaemolyticus in 20.78% of the samples evaluated, with values between log 10-1 and log 10-3 CFU/g. Only clam samples were positive by PMA-qPCR. Conclusion: The present research is the first to evaluate a PMA-qPCR assay for the detection of VBNC V. parahaemolyticus in bivalve mollusc samples, and the method used was applicable to the rapid control of marketed bivalve molluscs. We strongly recommend the use of PMA-qPCR in order to identify VBNC forms, which are undetectable by classic microbiological methods. Precise knowledge of V. parahaemolyticus in the VBNC form is fundamental for correct risk assessment, not only in bivalve molluscs but also in other seafood.
Keywords: food safety, frozen bivalve molluscs, PMA dye, Real-time PCR, VBNC state, Vibrio parahaemolyticus
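The quantification step behind a qPCR standard curve, a linear fit of cycle threshold (Ct) against log10(CFU/mL) from serial dilutions, inverted to estimate the load of an unknown sample, can be sketched as below. The calibration numbers are hypothetical, not the study's data.

```python
# Hedged sketch of qPCR quantification via a standard curve:
# fit Ct = a * log10(CFU/mL) + b from serial-dilution standards,
# then invert the line to quantify an unknown sample.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Serial-dilution standards: log10(CFU/mL) vs. measured Ct (illustrative)
log_cfu = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
ct      = [36.5, 33.2, 29.9, 26.6, 23.3, 20.0, 16.7, 13.4]

slope, intercept = fit_line(log_cfu, ct)

def quantify(ct_value):
    """Invert the standard curve: Ct -> estimated log10(CFU/mL)."""
    return (ct_value - intercept) / slope

print(round(quantify(25.0), 2))  # → 4.48
```

A slope near -3.3, as in this toy curve, corresponds to roughly 100% amplification efficiency, which is why it is a common sanity check on a calibration.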
Procedia PDF Downloads 139
149 Geographic Information System and Ecotourism Sites Identification of Jamui District, Bihar, India
Authors: Anshu Anshu
Abstract:
In the red corridor famed for Left Wing Extremism lies the small district of Jamui in Bihar, India. The district lies at 24º20´ N latitude and 86º13´ E longitude, covering an area of 3,122.8 km². The undulating topography, with widespread forests, provides a pristine environment for an invigorating tourist experience. The natural landscape of forests, wildlife, and rivers, and a cultural landscape dotted with historical and religious places, are highly conducive to tourism. The study is primarily concerned with the identification of potential ecotourism sites using a Geographic Information System. Data preparation, analysis, and finally the identification of ecotourism sites were carried out. The secondary data used are Survey of India topographical sheets at R.F. 1:50,000 covering the area of Jamui district, and the District Census Handbook, Census of India, 2011. ERDAS Imagine and ArcView were used for digitization, for the creation of Digital Elevation Models (DEMs) of the district depicting its relief and topography, and to generate thematic maps. The thematic maps were refined using geo-processing tools. The buffer technique was used for the accessibility analysis. Finally, all the maps, including the buffer maps, were overlaid to find the areas with potential for the development of ecotourism sites in Jamui district. Spatial data - relief, slopes, settlements, transport network, and forests of Jamui district - were marked and identified, followed by a buffer analysis used to assess the accessibility of features such as roads and railway stations from the sites available for the development of ecotourism destinations. Buffer analysis was also carried out to obtain the spatial proximity of major river banks, lakes, and dam sites to be selected for promoting sustainable ecotourism. Overlay analysis was conducted using geo-processing tools.
A Digital Elevation Model (DEM) was generated, and relevant themes such as roads, forest areas, and settlements were draped on the DEM to assess the topography and other land uses of the district and to delineate potential zones of ecotourism development. The development of ecotourism in Jamui faces several challenges. The district lies in the portion of Bihar that is part of the ‘red corridor’ of India. The hills and dense forests are prominent hideouts and training grounds for extremists. It is well known that any kind of political instability, war, or act of violence directly influences the propensity to travel and hinders all kinds of non-essential travel to these areas. The development of ecotourism in the district can bring change and overall growth to this area, with communities becoming more involved in economically sustainable activities. It is a known fact that poverty and social exclusion are the main forces that push people towards violence. All over the world, tourism has been used as a tool to eradicate poverty and generate goodwill among people. Tourism, in its sustainable form, should be promoted in the district to integrate local communities into the development process and to distribute the fruits of development with equity.
Keywords: buffer analysis, digital elevation model, ecotourism, red corridor
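The core of a buffer-based accessibility analysis like the one used above is a simple distance test: a candidate site qualifies if it falls inside a buffer of given radius around an access feature. The coordinates and names below are hypothetical, and a real GIS workflow would use projected coordinates and polygon geometries rather than points.

```python
# Minimal illustration of buffer analysis: keep candidate ecotourism sites
# that lie within a fixed radius of an access feature (e.g. a railway station).
import math

def within_buffer(site, feature, radius_km):
    """True if site (x, y in km) falls inside the buffer around feature."""
    return math.dist(site, feature) <= radius_km

railway_station = (10.0, 5.0)  # hypothetical projected coordinates, km
candidate_sites = {
    "forest_edge": (12.0, 5.0),
    "dam_site":    (10.0, 9.5),
    "remote_hill": (30.0, 20.0),
}

accessible = [name for name, xy in candidate_sites.items()
              if within_buffer(xy, railway_station, 5.0)]
print(accessible)  # → ['forest_edge', 'dam_site']
```

Overlaying several such buffers (roads, stations, river banks) and intersecting them with the thematic layers is what narrows the map down to the potential zones described above.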
Procedia PDF Downloads 259
148 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms
Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee
Abstract:
Composite structures offer numerous advantages over conventional structural systems, in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repair. However, these composites deteriorate with time because of ageing materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess their condition and prevent the continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting changes in the static or dynamic behavior of isotropic structures have been developed over the last two decades. These methods, based on analytical approaches, are limited in their capability to deal with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristic techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA), and neural networks (NN), and have promisingly applied these methods to the field of structural identification.
Among them, GAs attract our attention because they do not require a considerable amount of data in advance when dealing with complex problems, and they make a global solution search possible, as opposed to classical gradient-based optimization techniques. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of glass fiber-reinforced polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to describe degraded stiffness characteristics. In addition, this study presents a method to detect the fiber property variation of laminated composite plates from a micromechanical point of view. A finite element model is used to study the free vibrations of laminated composite plates with fiber stiffness degradation. In order to solve the inverse problem using the combined method, this study uses only the first mode shapes of a structure together with the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences
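The inverse-problem structure described above, a GA searching for the stiffness degradation whose predicted vibration response matches the measured one, can be sketched with a deliberately simplified frequency model. The toy relation f ∝ √(stiffness) stands in for the authors' finite element model; everything here is an illustrative assumption.

```python
# Minimal genetic-algorithm sketch for stiffness identification:
# find the stiffness factor whose predicted first natural frequency
# matches a "measured" value. Toy physics, not the study's FE model.
import math
import random

def predicted_frequency(stiffness_factor, f_intact=100.0):
    """Toy model: natural frequency scales with sqrt(stiffness)."""
    return f_intact * math.sqrt(stiffness_factor)

def fitness(stiffness_factor, f_measured):
    """Higher is better: negative frequency mismatch."""
    return -abs(predicted_frequency(stiffness_factor) - f_measured)

def run_ga(f_measured, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.1, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, f_measured), reverse=True)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                  # blend crossover
            child += rng.gauss(0.0, 0.02)        # small Gaussian mutation
            children.append(min(1.0, max(0.1, child)))
        pop = parents + children
    return max(pop, key=lambda s: fitness(s, f_measured))

# "Measured" frequency consistent with 36% stiffness loss: 100*sqrt(0.64) = 80
best = run_ga(f_measured=80.0)
print(round(best, 2))
```

The printed estimate lands close to the true factor of 0.64. In the actual study, the fitness evaluation would invoke an ABAQUS modal analysis instead of the one-line frequency model, which is the expensive part such a GA wraps.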
Procedia PDF Downloads 272
147 Adaptive Power Control of the City Bus Integrated Photovoltaic System
Authors: Piotr Kacejko, Mariusz Duk, Miroslaw Wendeker
Abstract:
This paper presents an adaptive controller to track the maximum power point of photovoltaic (PV) modules under fast irradiation changes on a city-bus roof. Photovoltaic systems have become a prominent option as an additional energy source for vehicles. The Municipal Transport Company (MPK) in Lublin has installed photovoltaic panels on the roofs of its buses. The solar panels turn solar energy into electric energy and are used to power the buses’ electric equipment. This decreases the load on the buses’ alternators, leading to lower fuel consumption and bringing both economic and ecological benefits. A DC-DC boost converter is selected as the power conditioning unit to coordinate the operating point of the system. In addition to the conversion efficiency of a photovoltaic panel, the maximum power point tracking (MPPT) method also plays a major role in harvesting the most energy from the sun. The MPPT unit on a moving vehicle must keep tracking accuracy high in order to compensate for the rapid irradiation changes caused by the dynamic motion of the vehicle. Maximum power point tracking controllers should be used to increase the efficiency and power output of solar panels under changing environmental factors. There are several different control algorithms in the literature developed for maximum power point tracking. However, the energy performance of MPPT algorithms has not been clarified for vehicle applications, which cause rapid changes in environmental factors. In this study, an adaptive MPPT algorithm is examined under real ambient conditions. PV modules are mounted on a moving city bus designed to test solar systems on a moving vehicle. Some problems of a PV system associated with a moving vehicle are addressed. The proposed algorithm uses a scanning technique to determine the maximum power delivering capacity of the panel at a given operating condition and controls the PV panel accordingly.
The aim of the control algorithm was to match the impedance of the PV modules by controlling the duty cycle of the internal switch, regardless of changes in the parameters of the controlled object and its environment. The presented algorithm was capable of achieving this aim. The structure of the adaptive controller was simplified on purpose: since even such a simple controller, armed only with an ability to learn, achieves the control aim, a more complex algorithm can only improve the result. The presented adaptive control system for the PV installation is a general solution and can be used for other types of PV systems of both high and low power. Experimental results obtained from a comparison of the algorithms over a motion loop are presented and discussed. Experimental results are presented for fast changes in irradiation and for partial shading conditions. The results obtained clearly show that the proposed method is simple to implement, with minimum tracking time and high tracking efficiency, proving its superiority. This work has been financed by the Polish National Centre for Research and Development, PBS, under Grant Agreement No. PBS 2/A6/16/2013.
Keywords: adaptive control, photovoltaic energy, city bus electric load, DC-DC converter
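The scanning MPPT idea named above, sweep the converter duty cycle, record the delivered power, and operate at the maximising point, can be sketched against a toy panel model. The power-vs-duty curve below is an illustrative assumption, not the bus installation's measured characteristic.

```python
# Hedged sketch of scanning MPPT: sweep duty cycle over [0, 1] on a toy
# unimodal power curve and pick the duty cycle that maximises power.

def panel_power(duty, irradiance=1.0):
    """Toy power-vs-duty curve; the peak position shifts with irradiance."""
    peak_duty = 0.4 + 0.2 * irradiance  # hypothetical dependence
    return max(0.0, 100.0 * irradiance - 400.0 * (duty - peak_duty) ** 2)

def scan_mppt(irradiance, steps=101):
    """Scan the duty cycle and return (best duty cycle, power there)."""
    best_duty, best_power = 0.0, -1.0
    for i in range(steps):
        d = i / (steps - 1)
        p = panel_power(d, irradiance)
        if p > best_power:
            best_duty, best_power = d, p
    return best_duty, best_power

duty, power = scan_mppt(irradiance=1.0)
print(duty, round(power, 1))  # → 0.6 100.0
```

On a vehicle, the scan would be re-triggered whenever measured power departs from the last maximum, which is how rapid irradiation changes and partial shading are handled without getting trapped on a local maximum, as a plain perturb-and-observe tracker can be.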
Procedia PDF Downloads 211
146 Monitoring of Formaldehyde over Punjab, Pakistan, Using Car MAX-DOAS and Satellite Observation
Authors: Waqas Ahmed Khan, Faheem Khokhaar
Abstract:
Air pollution is one of the main drivers of climate change. Greenhouse gases cause the melting of glaciers, temperature change, and heavy rainfall. Formaldehyde (HCHO) is not a direct ozone-damaging precursor like CO2 or methane, but it forms glyoxal (CHOCHO), which does affect ozone. Countries around the globe have unique air quality monitoring protocols to describe local air pollution. Formaldehyde is a colorless, flammable, strong-smelling chemical that is used in building materials and in many household products and medical preservatives. Formaldehyde also occurs naturally in the environment; it is produced in small amounts by most living organisms as part of normal metabolic processes. Pakistan lacks monitoring facilities on a larger scale to measure atmospheric gases on a regular basis. Formaldehyde is linked to glyoxal and affects mountain biodiversity and livelihoods, so its monitoring is necessary in order to maintain and preserve biodiversity. Objective: The present study aimed to measure atmospheric HCHO vertical column densities (VCDs) obtained from ground-based observations and to compute HCHO data for Punjab and elevated areas (Rawalpindi and Islamabad) from satellite observations during 2014-2015. Methodology: In order to explore the spatial distribution of HCHO, various field campaigns, involving international scientists, were conducted using car MAX-DOAS. The major focus was on the cities along national highways and the industrial region of Punjab, Pakistan. Level 2 data products of the satellite instrument OMI, retrieved by the differential optical absorption spectroscopy (DOAS) technique, are used. The spatio-temporal distribution of HCHO column densities over the main cities and regions of Pakistan is discussed. Results: The results show high HCHO column densities, exceeding the permissible limit, over the main cities of Pakistan, particularly areas with rapid urbanization and enhanced economic growth.
The VCD values over elevated areas of Pakistan, such as Islamabad and Rawalpindi, are around 1.0×10¹⁶ to 34.01×10¹⁶ molecules/cm², while Punjab has values around 34.01×10¹⁶ molecules/cm². Similarly, areas with major industrial activity showed high HCHO concentrations. Tropospheric glyoxal VCDs were found to be 4.75×10¹⁵ molecules/cm². Conclusion: The results show that the monitoring site surrounded by the Margalla Hills (Islamabad) has higher concentrations of formaldehyde. Wind data show that industrial areas and areas of high economic growth have high values, as they provide pathways for the transport of HCHO. The results obtained from this study would help the EPA, WHO, and air protection departments to monitor air quality and to further the preservation and restoration of mountain biodiversity.
Keywords: air quality, formaldehyde, MAX-DOAS, vertical column densities (VCDs), satellite instrument, climate change
Procedia PDF Downloads 212
145 The Human Rights Implications of Arbitrary Arrests and Political Imprisonment in Cameroon between 2016 and 2019
Authors: Ani Eda Njwe
Abstract:
Cameroon is a bilingual and bijural country in West and Central Africa. The current president has been in power since 1982, making him the longest-serving president in the world. The length of his presidency is one of the major causes of the ongoing political instability in the country. The preamble of the Cameroonian constitution commits Cameroon to respecting international law and human rights. It provides that these laws should be translated into national laws and respected by all spheres of government and public service. Cameroon is a signatory of several international human rights laws and conventions. In theory, the citizens of Cameroon have adequate legal protection against the violation of their human rights for political reasons. The ongoing political crisis in Cameroon erupted after Anglophone lawyers and teachers launched a protest against the hiring of Francophone judges in Anglophone courts and of Francophone teachers in Anglophone schools. In retaliation, the government launched a military crackdown on protesters and civilians, conducted arbitrary arrests of Anglophones, raped and maimed civilians, and declared a state of emergency in the Anglophone provinces. This infuriated the Anglophone public, causing them to create a secessionist movement, requesting the independence of Anglophone Cameroon and demanding a separate country called Ambazonia. The Ambazonian armed rebel forces have since launched guerrilla attacks on government troops. This fighting has deteriorated into a war between the Ambazonians and the Cameroonian government. The arbitrary arrests and unlawful imprisonments have continued, causing the closure of Anglophone schools since November 2016. In October 2018, Cameroon held presidential elections. Before the electoral commission announced the results, the opposition leader, a Francophone, declared himself the winner, following a leak of the polling information. This led to his imprisonment.
This research has the objective of finding out whether the government’s reactions to protesters and the opposition are lawful under national and international law. It will also verify whether the prison conditions of political prisoners meet human rights standards. Furthermore, this research seeks detailed information, obtained from current political prisoners and detainees, on their experiences. It also aims to highlight the efforts being made internationally towards raising awareness and finding a resolution to the war in Cameroon. Finally, it seeks to elucidate the efforts which human rights organisations have made towards overseeing the respect of human rights in Cameroon. This research adopts qualitative methods, whereby data were collected using semi-structured interviews of political detainees and questionnaires. Data were also collected from secondary sources such as scholarly articles, newspaper articles, web sources, and human rights reports. The findings were analysed using the content analysis research technique. From the deductions, recommendations have been made which human rights organisations, activists, and international bodies can implement to press the Cameroonian government to stop unlawful arrests and reinstate respect for human rights and the rule of law in Cameroon.
Keywords: arbitrary arrests, Cameroon, human rights, political
Procedia PDF Downloads 122
144 Evaluation of Microstructure, Mechanical and Abrasive Wear Response of in situ TiC Particles Reinforced Zinc Aluminum Matrix Alloy Composites
Authors: Mohammad M. Khan, Pankaj Agarwal
Abstract:
The present investigation deals with the microstructures, mechanical properties, and detailed wear characteristics of in situ TiC particle-reinforced zinc-aluminum metal matrix composites. The composites were synthesized by the liquid metallurgy route using the vortex technique. The composite was found to be harder than the matrix alloy due to the high hardness of the dispersoid particles therein. It was, however, lower in ultimate tensile strength and ductility than the matrix alloy, which could be attributed to the use of coarser dispersoids and larger interparticle spacing. A reasonably uniform distribution of the dispersoid phase in the alloy matrix and good interfacial bonding between the dispersoid and the matrix were observed. The composite exhibited a predominantly brittle mode of fracture, with microcracking in the dispersoid phase indicating effective transfer of load from the matrix to the dispersoid particles. To study the wear behavior of the samples, three different types of tests were performed, namely: (i) sliding wear tests using a pin-on-disc machine under dry conditions; (ii) high-stress (two-body) abrasive wear tests using different combinations of abrasive media and specimen surfaces under conditions of varying abrasive size, traversal distance, and load; and (iii) low-stress (three-body) abrasion tests using a rubber wheel abrasion tester at various loads and traversal distances using different abrasive media. In the sliding wear tests, significantly lower wear rates were observed for the base alloy than for the composites. This has been attributed to the poorer room-temperature strength resulting from the increased microcracking tendency of the composite over the matrix alloy. Wear surfaces of the composite revealed the presence of fragmented dispersoid particles and microcracking, whereas the wear surface of the matrix alloy was smooth with shallow grooves.
During high-stress abrasion, the presence of the reinforcement offered increased resistance to the destructive action of the abrasive particles. The microcracking tendency was also enhanced because of the reinforcement in the matrix, but its negative effect was outweighed by the abrasion resistance of the dispersoid. As a result, the composite attained better wear resistance than the matrix alloy. The wear rate increased with load and abrasive size, due to the larger depth of cut made by the abrasive medium. The wear surfaces revealed fine grooves and damaged reinforcement particles, while the subsurface regions revealed limited plastic deformation along with microcracking and fracturing of the dispersoid phase. During low-stress abrasion, the composite experienced a significantly lower wear rate than the matrix alloy, irrespective of the test conditions. This could be attributed to the wear resistance offered by the hard dispersoid phase, which protects the softer matrix against the destructive action of the abrasive medium. Abraded surfaces of the composite showed protrusion of the dispersoid phase. The subsurface regions of the composites exhibited decohesion of the dispersoid phase, along with its microcracking, and limited plastic deformation in the vicinity of the abraded surfaces.
Keywords: abrasive wear, liquid metallurgy, metal matrix composite, SEM
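A standard way to reason about the sliding-wear trends reported above is Archard's wear law, V = K·W·s/H, which relates worn volume to load, sliding distance, and hardness. The coefficient and inputs below are hypothetical, for illustration only, and Archard's law is a textbook relation, not the authors' analysis.

```python
# Illustrative wear-volume and wear-rate calculation via Archard's law.
# All numeric inputs are assumed values, not measurements from the study.

def archard_wear_volume(K, load_N, sliding_distance_m, hardness_Pa):
    """Worn volume (m^3) from Archard's law: V = K * W * s / H."""
    return K * load_N * sliding_distance_m / hardness_Pa

def wear_rate(volume_m3, sliding_distance_m):
    """Wear rate expressed as volume loss per unit sliding distance."""
    return volume_m3 / sliding_distance_m

# Assumed: dimensionless wear coefficient 1e-4, 50 N load,
# 1000 m sliding distance, hardness 1.2 GPa
V = archard_wear_volume(K=1e-4, load_N=50.0,
                        sliding_distance_m=1000.0, hardness_Pa=1.2e9)
print(V, wear_rate(V, 1000.0))
```

The hardness term in the denominator captures why a harder composite resists low-stress abrasion, while the effective K rises when microcracking detaches material, which is the competition described in the high-stress results above.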
Procedia PDF Downloads 150
143 Multi-Scale Geographic Object-Based Image Analysis (GEOBIA) Approach to Segment Very High Resolution Images for the Extraction of New Degraded Zones: Application to the Region of Mécheria in the South-West of Algeria
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of increases in the irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mécheria department has generated a new situation characterized by the reduction of vegetation cover, the decrease of land productivity, and sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cordons based on the numerical processing of PlanetScope PSB.SB sensor images acquired on September 29, 2021. As a second step, we explore the use of a multi-scale geographic object-based image analysis (GEOBIA) approach to segment the high spatial resolution images acquired over heterogeneous surfaces that vary according to human influence on the environment. We used the fractal net evolution approach (FNEA) algorithm to segment the images (Baatz & Schäpe, 2000). Multispectral data, a digital terrain model layer, ground truth data, a normalized difference vegetation index (NDVI) layer, and a first-order texture (entropy) layer were used to segment the multispectral images at three segmentation scales, with an emphasis on accurately delineating the boundaries and components of the sand accumulation areas (dunes, dune fields, nebkas, and barchans). It is important to note that each auxiliary dataset contributed to improving the segmentation at different scales. The silted areas were then classified using a nearest neighbor approach over the Naâma area.
The classification of silted areas was successfully achieved over all study areas with an accuracy greater than 85%, although the results suggest that, overall, a higher degree of landscape heterogeneity may have a negative effect on segmentation and classification. Some areas suffered from the greatest over-segmentation and the lowest mapping accuracy (Kappa: 0.79), which was partially attributed to confusion of a greater proportion of mixed siltation classes between sandy areas and bare-ground patches. This research has demonstrated a technique based on very high-resolution images for mapping sanded and degraded areas using GEOBIA, which can be applied to the study of other lands in the steppe areas of the northern countries of the African continent.
Keywords: land development, GIS, sand dunes, segmentation, remote sensing
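The accuracy figures quoted above (overall accuracy > 85%, Kappa 0.79) follow from a standard error matrix. A minimal sketch of the computation, using a hypothetical two-class confusion matrix (the counts are illustrative, not the study's data):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1 - pe)

# Hypothetical silted vs. bare-ground confusion matrix
cm = [[45, 5],
      [7, 43]]
oa, kappa = accuracy_and_kappa(cm)  # oa = 0.88, kappa = 0.76
```

Kappa discounts the agreement expected by chance, which is why it reads lower than overall accuracy in heterogeneous landscapes.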
142 Non-Invasive Evaluation of Patients After Percutaneous Coronary Revascularization. The Role of Cardiac Imaging
Authors: Abdou Elhendy
Abstract:
Numerous studies have shown the efficacy of percutaneous coronary intervention (PCI) and coronary stenting in improving left ventricular function and relieving exertional angina. Furthermore, PCI remains the main line of therapy in acute myocardial infarction. Improvements in procedural techniques and new devices have resulted in an increased number of PCIs in patients with difficult and extensive lesions, multivessel disease, and total occlusions. Immediate and late outcomes may be compromised by acute thrombosis or the development of fibro-intimal hyperplasia. In addition, progression of coronary artery disease proximal or distal to the stent, as well as in non-stented arteries, is not uncommon. As a result, complications can occur, such as acute myocardial infarction, worsening heart failure, or recurrence of angina. In-stent restenosis can occur without symptoms or with atypical complaints, rendering the clinical diagnosis difficult. Routine invasive angiography is not appropriate as a follow-up tool because of the associated risk and cost and the limited functional assessment it provides. Exercise and pharmacologic stress testing are increasingly used to evaluate myocardial function, perfusion, and the adequacy of revascularization. Information obtained by these techniques provides important clues regarding the presence and severity of compromise in myocardial blood flow. Stress echocardiography can be performed in conjunction with exercise or dobutamine infusion. Its diagnostic accuracy has been moderate, but the results provide excellent prognostic stratification. Adding myocardial contrast agents can improve image quality and allows assessment of both function and perfusion. Stress radionuclide myocardial perfusion imaging is an alternative for evaluating these patients. The extent and severity of wall motion and perfusion abnormalities observed during exercise or pharmacologic stress are predictors of survival and of the risk of cardiac events.
According to current guidelines, stress echocardiography and radionuclide imaging are considered appropriately indicated in patients after PCI who have cardiac symptoms and in those who underwent incomplete revascularization. Stress testing is not recommended in asymptomatic patients, particularly early after revascularization. Coronary CT angiography is increasingly used and provides high sensitivity for the diagnosis of coronary artery stenosis. The average sensitivity and specificity for the diagnosis of in-stent stenosis in pooled data are 79% and 81%, respectively. Limitations include blooming artifacts and low feasibility in patients with small stents or thick struts. Anatomical and functional cardiac imaging modalities are the cornerstone of the assessment of patients after PCI and provide salient diagnostic and prognostic information. Current imaging techniques can serve as gatekeepers for coronary angiography, thus limiting the risk of invasive procedures to those who are likely to benefit from subsequent revascularization. The determination of which modality to apply requires careful identification of the merits and limitations of each technique, as well as the unique characteristics of each individual patient.
Keywords: coronary artery disease, stress testing, cardiac imaging, restenosis
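The pooled sensitivity and specificity quoted above follow directly from the standard definitions. A minimal sketch, with hypothetical per-patient counts chosen only to reproduce those figures (they are not the pooled study data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts yielding 79% sensitivity and 81% specificity
sens, spec = sensitivity_specificity(tp=79, fn=21, tn=81, fp=19)
```

In practice these quantities are estimated against invasive angiography as the reference standard, so non-evaluable segments (e.g., from blooming artifacts) must be reported separately rather than folded into the counts.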