Search results for: detection and estimation
872 Implementation of an Image Processing System Using Artificial Intelligence for the Diagnosis of Malaria Disease
Authors: Mohammed Bnebaghdad, Feriel Betouche, Malika Semmani
Abstract:
Image processing has become more sophisticated over time due to technological advances, especially artificial intelligence (AI) technology. Currently, AI image processing is used in many areas, including surveillance, industry, science, and medicine. AI in medical image processing can help doctors diagnose diseases faster, with fewer mistakes, and with less effort. Among these diseases is malaria, which remains a major public health challenge in many parts of the world. It affects millions of people every year, particularly in tropical and subtropical regions. Early detection of malaria is essential to prevent serious complications and reduce the burden of the disease. In this paper, we propose and implement a scheme based on AI image processing to enhance malaria diagnosis through automated analysis of blood smear images. The scheme is based on a convolutional neural network (CNN): we developed a model that classifies infected and uninfected single red blood cells using images available on Kaggle, as well as real blood smear images obtained from the Central Laboratory of Medical Biology EHS Laadi Flici (formerly El Kettar) in Algeria. The real images were segmented into individual cells using the watershed algorithm in order to match the images from the Kaggle dataset. The model was trained and tested, achieving 99% accuracy on the test set and 97% accuracy on new real images. This validates that the model performs well with new real images, although with slightly lower accuracy. Additionally, the model has been embedded in a Raspberry Pi 4, and a graphical user interface (GUI) was developed to visualize the malaria diagnostic results and facilitate user interaction.
Keywords: medical image processing, malaria parasite, classification, CNN, artificial intelligence
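A minimal, illustrative sketch of the kind of binary CNN classifier described above, written in Python with Keras. The layer sizes, input shape, and training settings are assumptions for demonstration; the abstract does not specify the authors' actual architecture.

```python
# Illustrative sketch only: a small binary CNN for single red blood cell images.
# Layer sizes, input shape and training settings are assumptions, not the
# architecture used in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cell_classifier(input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # infected vs. uninfected
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model = build_cell_classifier()
# model.fit(train_images, train_labels, validation_split=0.1, epochs=10)
```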
Procedia PDF Downloads 20
871 Application of Remote Sensing for Monitoring the Impact of Lapindo Mud Sedimentation for Mangrove Ecosystem, Case Study in Sidoarjo, East Java
Authors: Akbar Cahyadhi Pratama Putra, Tantri Utami Widhaningtyas, M. Randy Aswin
Abstract:
Indonesia, an archipelagic nation, has a very long coastline with large potential marine resources, one of which is the mangrove ecosystem. The Lapindo mudflow disaster in Sidoarjo, East Java, required the mud to be channelled into the sea through the Brantas and Porong rivers. The mud material transported by the river flow is feared to be dangerous because it may contain harmful substances such as heavy metals. This study aims to map the mangrove ecosystem in terms of its density, to assess how large the impact of the Lapindo mud disaster on the mangrove ecosystem is, and to support efforts to maintain the continuity of the ecosystem. The coastal mangrove conditions of Sidoarjo were mapped using remote sensing products, namely Landsat 7 ETM+ images recorded during dry months in 2002, 2006, 2009, and 2014. Mangrove density was detected using NDVI, which uses band 3 (the red channel) and band 4 (the near-infrared channel). NDVI images were produced using ENVI 5.1 software; the NDVI values used for detecting mangrove density range from 0 to 1. The mangrove ecosystem showed a significant increase in both area and density from year to year. Mangrove growth is affected by the deposition of Lapindo mud material in the Porong and Brantas river estuaries, where the silt provides a suitable growing medium for the mangrove ecosystem. The increase in density was also supported by public awareness, with mangrove planting carried out around the ponds to help contain the heavy metals in the mud material.
Keywords: archipelagic nation, mangrove, Lapindo mudflow disaster, NDVI
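For reference, a minimal sketch of the NDVI computation used above, assuming the Landsat 7 ETM+ red (band 3) and near-infrared (band 4) bands are available as numpy arrays; the density threshold values shown are placeholders, not taken from the study.

```python
# NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
import numpy as np

def ndvi(red, nir):
    red = red.astype(float)
    nir = nir.astype(float)
    denom = nir + red
    return np.where(denom != 0, (nir - red) / denom, 0.0)

# Example: classify mangrove density from NDVI thresholds
# (threshold values below are placeholders, not the study's classes).
# density_class = np.digitize(ndvi(band3, band4), bins=[0.0, 0.33, 0.66])
```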
Procedia PDF Downloads 438
870 Lamb Waves Wireless Communication in Healthy Plates Using Coherent Demodulation
Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad
Abstract:
Guided ultrasonic waves are used in Non-Destructive Testing (NDT) and Structural Health Monitoring (SHM) for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in some industrial applications such as nuclear, aerospace and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves such as Lamb waves as an information carrier due to their capability of propagating over long distances. In addition, valuable information about the health of the structure could be extracted simultaneously. In this work, the frequency bandwidth reliable for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. After this, the coherent demodulation algorithm used in telecommunications is tested for Amplitude Shift Keying, On-Off Keying and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as threshold choice, number of cycles per bit and bit rate are optimized. Experimental results are compared based on the average Bit Error Rate. Results have shown high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a decrease in bit rate. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
Keywords: lamb waves communication, wireless communication, coherent demodulation, bit error rate
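A rough sketch of coherent BPSK demodulation and bit error rate estimation of the kind compared above; the carrier frequency, sample rate, cycles per bit, and noise level are illustrative assumptions rather than the experimental settings.

```python
# Coherent BPSK demodulation sketch: mix with a synchronized carrier,
# integrate over each bit, threshold at zero, then estimate the BER.
import numpy as np

fs, fc, cycles_per_bit = 1e6, 50e3, 10            # assumed values
samples_per_bit = int(fs * cycles_per_bit / fc)

def bpsk_modulate(bits):
    t = np.arange(len(bits) * samples_per_bit) / fs
    symbols = np.repeat(2 * bits - 1, samples_per_bit)      # 0/1 -> -1/+1
    return symbols * np.cos(2 * np.pi * fc * t)

def bpsk_demodulate(signal):
    t = np.arange(len(signal)) / fs
    mixed = signal * np.cos(2 * np.pi * fc * t)             # coherent reference
    corr = mixed.reshape(-1, samples_per_bit).sum(axis=1)   # integrate per bit
    return (corr > 0).astype(int)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 1000)
rx = bpsk_modulate(bits) + 0.5 * rng.standard_normal(bits.size * samples_per_bit)
ber = np.mean(bpsk_demodulate(rx) != bits)
print(f"estimated BER: {ber:.4f}")
```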
Procedia PDF Downloads 260
869 Enhanced Acquisition Time of a Quantum Holography Scheme within a Nonlinear Interferometer
Authors: Sergio Tovar-Pérez, Sebastian Töpfer, Markus Gräfe
Abstract:
This work proposes a technique that decreases the detection acquisition time of quantum holography schemes down to one-third, which opens the possibility of imaging moving objects. Since its invention, quantum holography with undetected photon schemes has gained interest in the scientific community, mainly due to its ability to tailor the detected wavelengths according to the needs of the scheme implementation. While this wavelength flexibility grants the scheme a wide range of possible applications, an important matter had yet to be addressed. Since the scheme uses digital phase-shifting techniques to retrieve the information of the object out of the interference pattern, it is necessary to acquire a set of at least four images of the interference pattern with well-defined phase steps to recover the full object information. Hence, the imaging method requires larger acquisition times to produce well-resolved images. As a consequence, the measurement of moving objects remains out of the reach of the imaging scheme. This work presents the use and implementation of a spatial light modulator along with a digital holographic technique called quasi-parallel phase-shifting. This technique uses the spatial light modulator to build a structured phase image consisting of a chessboard pattern containing the different phase steps for digitally calculating the object information. Owing to the reduction in the number of needed frames, the acquisition time is reduced by a significant factor. This technique opens the door to the implementation of the scheme for moving objects; in particular, the application of this scheme to imaging live specimens comes one step closer.
Keywords: quasi-parallel phase shifting, quantum imaging, quantum holography, quantum metrology
Procedia PDF Downloads 114
868 Quantification and Identification of the Main Components of the Biomass of the Microalgae Scenedesmus SP. – Prospection of Molecules of Commercial Interest
Authors: Carolina V. Viegas, Monique Gonçalves, Gisel Chenard Diaz, Yordanka Reyes Cruz, Donato Alexandre Gomes Aranda
Abstract:
To develop the massive cultivation of microalgae, it is necessary to isolate and characterize the species, improving genetic tools in search of specific characteristics. Therefore, the detection, identification and quantification of the compounds that compose Scenedesmus sp. were prerequisites to verify the potential of this microalga. The main objective of this work was to characterize Scenedesmus sp. in terms of ash, carbohydrate, protein and lipid contents, as well as to determine the composition of its lipid classes and main fatty acids. The biomass of Scenedesmus sp. showed 15.29 ± 0.23% of ash, with CaO (36.17%) as the main component of this fraction. The total protein and carbohydrate contents of the biomass were 40.74 ± 1.01% and 23.37 ± 0.95%, respectively, proving it to be a potential source of proteins as well as of carbohydrates for the production of ethanol via fermentation. The lipid contents extracted via Bligh & Dyer and in situ saponification were 8.18 ± 0.13% and 4.11 ± 0.11%, respectively. In the lipid extracts obtained via Bligh & Dyer, approximately 50% of the composition consists of fatty compounds, while the other half is an unsaponifiable fraction composed mainly of chlorophylls, phytosterols and carotenes. From the lower yield, it was possible to obtain a selectivity of 92.14% for fatty components (fatty acids and fatty esters), confirmed through the infrared spectroscopy technique. The presence of polyunsaturated acids (~45%) in the lipid extracts indicated the potential of this fraction as a source of nutraceuticals. The results indicate that the biomass of Scenedesmus sp. can become a promising source of polyunsaturated fatty acids, carotenoids and proteins, as well as allowing the simultaneous recovery of different compounds of high commercial value.
Keywords: microalgae, Desmodesmus, lipid classes, fatty acid profile, proteins, carbohydrates
Procedia PDF Downloads 97
867 Biophysical Features of Glioma-Derived Extracellular Vesicles as Potential Diagnostic Markers
Authors: Abhimanyu Thakur, Youngjin Lee
Abstract:
Glioma is a lethal brain cancer whose early diagnosis and prognosis are limited due to the dearth of a suitable technique for its early detection. Current approaches to the diagnosis of this lethal disease, including magnetic resonance imaging (MRI), computed tomography (CT), and invasive biopsy, hold several limitations, demanding an alternative method. Recently, extracellular vesicles (EVs), mainly exosomes and microvesicles (MVs), have been used in numerous biomarker studies; they are found in most cells and biofluids, including blood, cerebrospinal fluid (CSF), and urine. Remarkably, glioma cells (GMs) release a high number of EVs, which have been found to cross the blood-brain barrier (BBB) and to carry the constituents of the parent GMs, including proteins and lncRNA; however, the biophysical properties of EVs have not yet been explored as a biomarker for glioma. We isolated EVs from the cell culture conditioned medium of GMs and of a regular primary culture, and from the blood and urine of wild-type (WT) and glioma mouse models, and characterized them by nanoparticle tracking analysis, transmission electron microscopy, immunogold EM, and differential light scanning. Next, we measured the biophysical parameters of GM-derived EVs by atomic force microscopy. Further, the functional constituents of the EVs were examined by FTIR and Raman spectroscopy. Exosomes and MVs derived from GMs, blood, and urine showed distinct biophysical parameters (roughness, adhesion force, and stiffness), different from those of regular primary glial cells, WT blood, and WT urine, which can be attributed to their characteristic functional constituents. Therefore, biophysical features can be potential diagnostic biomarkers for glioma.
Keywords: glioma, extracellular vesicles, exosomes, microvesicles, biophysical properties
Procedia PDF Downloads 142
866 Estimation of the Exergy-Aggregated Value Generated by a Manufacturing Process Using the Theory of the Exergetic Cost
Authors: German Osma, Gabriel Ordonez
Abstract:
The production of metal-rubber spares for vehicles is a sequential process that consists of the transformation of raw material through cutting activities and chemical and thermal treatments, which demand electricity and fossil fuels. Energy efficiency analysis in these cases mostly focuses on each machine or production step; it is less common to study the quality that the production process achieves from an aggregated-value viewpoint, which can be used as a quality measure for determining the impact on the environment. In this paper, the theory of exergetic cost is used to determine the aggregated exergy of three metal-rubber spares, based on an exergy analysis and a thermoeconomic analysis. The manufacturing of these spares is based on a batch production technique; therefore, the use of this theory for discontinuous flows built from single workstation models is proposed, and subsequently the complete exergy model of each product is constructed using flowcharts. These models represent the exergy flows between components within the machines according to electrical, mechanical and/or thermal expressions; they determine the exergy demanded to produce the effective transformation of raw materials (the aggregated exergy value) and the exergy losses caused by equipment and irreversibilities. The energy resources of the manufacturing process are electricity and natural gas. The workstations considered are lathes, punching presses, cutters, a zinc machine, chemical treatment tanks, hydraulic vulcanizing presses and a rubber mixer. The thermoeconomic analysis was done per workstation and per spare; the first describes the operation of the components of each machine and where the exergy losses occur, while the second estimates the exergy-aggregated value for the finished product and the wasted feedstock. Results indicate that the exergy efficiency of a mechanical workstation is between 10% and 60%, while this value in the thermal workstations is less than 5%; also, each effective exergy-aggregated value is about one-thirtieth of the total exergy required for the operation of the manufacturing process, which amounts to approximately 2 MJ. These problems are caused mainly by the technical limitations of the machines, the oversizing of the metal feedstock, which demands more mechanical transformation work, and the low thermal insulation of the chemical treatment tanks and hydraulic vulcanizing presses. From this information, it is possible to appreciate the usefulness of the theory of exergetic cost for analyzing aggregated value in manufacturing processes.
Keywords: exergy-aggregated value, exergy efficiency, thermoeconomics, exergy modeling
Procedia PDF Downloads 171
865 An Overview of the Porosity Classification in Carbonate Reservoirs and Their Challenges: An Example of Macro-Microporosity Classification from Offshore Miocene Carbonate in Central Luconia, Malaysia
Authors: Hammad T. Janjuhah, Josep Sanjuan, Mohamed K. Salah
Abstract:
Biological and chemical activities in carbonates are responsible for the complexity of the pore system. Primary porosity is generally of natural origin, while secondary porosity results from chemical reactivity through diagenetic processes. Understanding the carbonate pore system is an integral part of hydrocarbon exploration. However, current porosity classification schemes are too limited to adequately predict the petrophysical properties of reservoirs of various origins and depositional environments. Rock classification provides a descriptive method for explaining the lithofacies but makes no significant contribution to the application of porosity-permeability (poro-perm) correlation. The Central Luconia carbonate system (Malaysia) represents a good example of pore complexity (in terms of nature and origin) mainly related to the diagenetic processes that have altered the original reservoir. For quantitative analysis, 32 high-resolution images of each thin section were taken using transmitted light microscopy. The quantification of grains, matrix, cement, and macroporosity (pore types) was achieved using petrographic analysis of thin sections and FESEM images. The point counting technique was used to estimate the amount of macroporosity from thin sections, which was then subtracted from the total porosity to derive the microporosity. The quantitative observation of thin sections revealed that mouldic porosity (macroporosity) is the dominant porosity type present, whereas microporosity corresponds to roughly 40 to 50% of the total porosity. It has been shown that these Miocene carbonates contain a significant amount of microporosity, which significantly complicates the estimation and production of hydrocarbons; neglecting its impact can increase the uncertainty in estimating hydrocarbon reserves. Due to the diversity of geological parameters, the application of existing porosity classifications does not allow a better understanding of the poro-perm relationship. However, classification can be improved by including the pore types and pore structures, which can be divided into macro- and microporosity. Such studies of microporosity identification and classification now represent a major concern in limestone reservoirs around the world.
Keywords: overview of porosity classification, reservoir characterization, microporosity, carbonate reservoir
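A simple sketch of the macro/micro split described above: macroporosity estimated by point counting on a thin section is subtracted from the total porosity to give microporosity. All counts and values are invented for illustration.

```python
# Point-counting sketch: macroporosity from counted points, microporosity by
# difference. The numbers below are placeholders, not data from the study.
total_porosity = 0.28          # e.g. from core-plug measurement (fraction)
points_total = 300             # points counted on the thin-section image
points_in_macropores = 45      # points falling in visible (e.g. mouldic) pores

macroporosity = points_in_macropores / points_total      # 0.15
microporosity = total_porosity - macroporosity           # 0.13
micro_fraction = microporosity / total_porosity          # ~46% of total porosity
print(macroporosity, microporosity, micro_fraction)
```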
Procedia PDF Downloads 154
864 Estimation of Noise Barriers for Arterial Roads of Delhi
Authors: Sourabh Jain, Parul Madan
Abstract:
Traffic noise pollution has become a challenging problem for all metro cities of India due to rapid urbanization, a growing population, a rising number of vehicles and transport development. In Delhi, the prime source of noise pollution is vehicular traffic, and the ambient noise level (Leq) exceeds the standard permissible value at all locations. Noise barriers or enclosures are useful in obtaining an effective reduction of traffic noise disturbances in urbanized areas. The US Federal Highway Administration model (FHWA) and the UK Calculation of Road Traffic Noise (CORTN) were used to develop spreadsheets for noise prediction. Spreadsheets were also developed for evaluating the effectiveness of existing boundary walls abutting houses in mitigating noise and for redesigning them as noise barriers. A study was also carried out to examine the changes in noise level due to the designed noise barriers using both the FHWA and CORTN models. During data collection it was found that the receivers at the Rithala and Moolchand sites are located far from the road; hence the extra barrier height needed to meet the prescribed limits was small, as seen from the calculations, since most of the noise diminishes through the propagation effect. On the basis of the overall study and data analysis, it is concluded that the FHWA and CORTN models underestimate noise levels: the FHWA model predicted noise levels with an average percentage error of -7.33 and CORTN with an average percentage error of -8.5. At all sites, noise levels at the receivers exceeded the standard limit of 55 dB. Calculations showed that the existing walls reduce noise levels: the average noise reduction due to walls was 7.41 dB at Rithala and 7.20 dB at Panchsheel, while a lower reduction of only 5.88 dB was observed at Friends Colony. The analysis showed that the Friends Colony site needs a much greater barrier height because of the residential buildings abutting the road; a large amount of traffic was observed there, since it is a national highway, and the reduction of noise by the propagation effect was very small. As the FHWA and CORTN models were implemented in an Excel programme, laborious noise calculations are eliminated. Unlike the CORTN model, the FHWA model includes no reflection correction.
Keywords: FHWA, CORTN, noise sources, noise barriers
Procedia PDF Downloads 133
863 Relevance of Brain Stem Evoked Potential in Diagnosis of Central Demyelination in Guillain Barre’ Syndrome
Authors: Geetanjali Sharma
Abstract:
Guillain-Barré syndrome (GBS) is an autoimmune-mediated demyelinating polyradiculoneuropathy. Clinical features include progressive symmetrical ascending muscle weakness of more than two limbs and areflexia, with or without sensory, autonomic and brainstem abnormalities. The purpose of this study was to determine subclinical neurological changes of the CNS in GBS and to establish the presence of central demyelination in GBS. The study was prospective and was conducted in the Department of Physiology, Pt. B. D. Sharma Post-graduate Institute of Medical Sciences, University of Health Sciences, Rohtak, Haryana, India, to detect early central demyelination in clinically diagnosed GBS patients. These patients were referred from the Department of Medicine of our institute to our department for electrodiagnostic evaluation. The study group comprised 40 subjects (20 clinically diagnosed GBS patients and 20 healthy individuals as controls) aged between 6 and 65 years. Brainstem auditory evoked potentials (BAEP) were recorded in both groups using an RMS EMG EP Mark II machine. BAEP parameters included the latencies of waves I to IV and the interpeak latencies I-III, III-IV and I-V. A statistically significant increase in absolute peak and interpeak latencies was noted in the GBS group as compared with the control group. The evoked potential results reflect impairment of the auditory pathways, probably due to focal demyelination of the Schwann-cell-derived myelin sheaths that cover the extramedullary portion of the auditory nerves. Early detection of these subclinical abnormalities is important, as timely intervention reduces morbidity.
Keywords: brainstem, demyelination, evoked potential, Guillain-Barré
Procedia PDF Downloads 302
862 A Method for Evaluating Gender Equity of Cycling from Rawls Justice Perspective
Authors: Zahra Hamidi
Abstract:
Promoting cycling as an affordable, environmentally friendly mode of transport to replace private car use has been central to sustainable transport policies. Cycling is faster than walking and, combined with public transport, has the potential to extend the opportunities that people can access. In other words, cycling, besides its direct positive health impacts, can improve people's mobility and ultimately their quality of life. The transport literature well supports the close relationship between mobility, quality of life, and well-being. At the same time, inequity in the distribution of access and mobility has been associated with key aspects of injustice and social exclusion. Patterns of social exclusion and inequality in access are also often related to population characteristics such as age, gender, income, health, and ethnic background. Therefore, while investing in transport infrastructure, it is important to consider the equity of the provided access for different population groups. This paper proposes a method to evaluate the equity of cycling in a city from a Rawlsian egalitarian perspective. Since this perspective is concerned with the differences between individuals and social groups, the method combines accessibility measures with the Theil index of inequality, which allows capturing the inequalities 'within' and 'between' groups. The paper specifically focuses on two population characteristics, gender and ethnic background. Following Rawls's equity principles, this paper measures accessibility by bike to a selection of urban activities that can be linked to the concept of social primary goods. Moreover, as a growing number of cities around the world have launched bike-sharing systems (BSS), this paper incorporates both private and public bike networks in the estimation of accessibility levels. Additionally, the typology of bike lanes (separated from or shared with roads), the presence of a bike-sharing system in the network, as well as bike facilities (e.g. parking racks) have been included in the developed accessibility measures. Application of the proposed method to a real case study, the city of Malmö, Sweden, shows its effectiveness and efficiency. Although the accessibility levels were estimated only based on the gender and ethnic background characteristics of the population, the author suggests that the analysis can be applied to other contexts and further developed using other properties, such as age, income, or health.
Keywords: accessibility, cycling, equity, gender
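A minimal sketch of the Theil index with a within/between-group decomposition of the kind mentioned above, applied to per-person accessibility scores; the group labels and scores are invented, and the actual accessibility measures are those defined in the paper.

```python
# Theil index T = mean((x/mu) * ln(x/mu)), decomposed into within-group and
# between-group components weighted by population and mean-accessibility shares.
import numpy as np

def theil(x):
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    return np.mean(r * np.log(r))          # assumes all values > 0

def theil_decomposition(values, groups):
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    total_mean, n = values.mean(), len(values)
    within = between = 0.0
    for g in np.unique(groups):
        v = values[groups == g]
        share, mean_ratio = len(v) / n, v.mean() / total_mean
        within += share * mean_ratio * theil(v)
        between += share * mean_ratio * np.log(mean_ratio)
    return within, between

acc = [12.0, 15.0, 9.0, 20.0, 7.0, 14.0]            # invented accessibility scores
grp = ["women", "women", "women", "men", "men", "men"]
print(theil_decomposition(acc, grp))                 # (T_within, T_between)
```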
Procedia PDF Downloads 403
861 Relation Between Traffic Mix and Traffic Accidents in a Mixed Industrial Urban Area
Authors: Michelle Eliane Hernández-García, Angélica Lozano
Abstract:
Studies of traffic accidents usually consider the relation between factors such as the type of vehicle, its operation, and the road infrastructure. Traffic accidents can be explained by different factors, which have greater or lower relevance. Two zones are studied: a mixed industrial zone and its extended zone. The first zone has mainly residential (57%) and industrial (23%) land uses. Trucks travel mainly on the roads where industries are located. Four sensors give information about traffic and speed on the main roads. The extended zone (which includes the first zone) has mainly residential (47%) and mixed residential (43%) land uses, and just 3% industrial use. Its traffic mix is composed mainly of non-trucks, and 39 traffic and speed sensors are located on its main roads. The traffic mix in a mixed land-use zone could be related to traffic accidents. To understand this relation, it is required to identify the elements of the traffic mix which are linked to traffic accidents. Models that attempt to explain which factors are related to traffic accidents have faced multiple methodological problems in obtaining robust databases. Poisson regression models are used to explain the accidents. The objective of the Poisson analysis is to estimate a coefficient vector that provides an estimate of the natural logarithm of the mean number of accidents per period; this estimate is achieved by standard maximum likelihood procedures. For the estimation of the relation between traffic accidents and the traffic mix, the database integrates eight variables, with 17,520 observations and six vectors. In the model, the dependent variable is the occurrence or non-occurrence of accidents, and the vectors that seek to explain it correspond to the vehicle classes C1, C2, C3, C4, C5, and C6, standing respectively for cars, microbuses and vans, buses, unit trucks (2 to 6 axles), articulated trucks (3 to 6 axles) and bi-articulated trucks (5 to 9 axles); in addition, there is a vector for the average speed of the traffic mix. A Poisson model is applied, using a logarithmic link function and a Poisson family. For the first zone, the Poisson model shows a positive relation between traffic accidents and C6, average speed, C3, C2, and C1 (in decreasing order). The analysis of the coefficients shows a strong relation with bi-articulated trucks and buses (C6 and C3), indicating an important participation of freight trucks. For the extended zone, the Poisson model shows a positive relation between traffic accidents and average speed, bi-articulated trucks (C6), and microbuses and vans (C2). The coefficients obtained in both Poisson models show a stronger relation between freight trucks and traffic accidents in the industrial zone than in the extended zone.
Keywords: freight transport, industrial zone, traffic accidents, traffic mix, trucks
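A hedged sketch of a Poisson regression of accident counts on vehicle-class flows and average speed, similar in spirit to the model described above; the column names and synthetic data are assumptions, not the authors' database.

```python
# Poisson GLM with a log link (the statsmodels default for the Poisson family).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200  # stand-in for the 17,520 hourly observations
df = pd.DataFrame({c: rng.poisson(lam, n) for c, lam in
                   [("C1", 120), ("C2", 20), ("C3", 8),
                    ("C4", 12), ("C5", 5), ("C6", 3)]})
df["speed"] = rng.normal(40, 5, n)           # average speed of the traffic mix
df["accidents"] = rng.poisson(0.05, n)       # placeholder accident counts

model = smf.glm("accidents ~ C1 + C2 + C3 + C4 + C5 + C6 + speed",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary())                       # signs/magnitudes of coefficients
```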
Procedia PDF Downloads 130
860 Numerical Calculation and Analysis of Fine Echo Characteristics of Underwater Hemispherical Cylindrical Shell
Authors: Hongjian Jia
Abstract:
A finite-length cylindrical shell with a spherical cap is a typical engineering approximation of actual underwater targets. Research on the omni-directional acoustic scattering characteristics of this target model can provide a favorable basis for the detection and identification of actual underwater targets. The elastic resonance characteristics of the target result from the combined effect of the target length, shell-thickness ratio and material. Under different materials and geometric dimensions, the coincidence resonance characteristics of the target show obvious differences. Aiming at this problem, this paper obtains the omni-directional acoustic scattering field of the underwater hemispherical cylindrical shell by numerical calculation and studies, in turn, the influence of the target geometric parameters (length, shell-thickness ratio) and material parameters on the coincidence resonance characteristics of the target. The study found that the formant interval is not a stable value and changes with the incident angle. The formant interval is little affected by the target length and shell-thickness ratio but is significantly affected by the material properties, which makes it an effective feature for classifying and identifying targets of different materials. A quadratic polynomial is used to fit the relationship between the formant interval and the incident angle. The results show that the three fitting coefficients of the stainless steel and aluminum targets are significantly different, so they can be used as effective feature parameters to characterize the target materials.
Keywords: hemispherical cylindrical shell, fine echo characteristics, geometric and material parameters, formant interval
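A minimal sketch of the quadratic fit mentioned above: the formant interval as a function of incident angle is fitted with a second-order polynomial and the three coefficients are compared between materials. The numbers are placeholders, not results from the paper.

```python
# Quadratic fit of formant interval vs. incident angle; the three fitted
# coefficients serve as feature parameters for material classification.
import numpy as np

angles = np.linspace(0, 90, 10)                             # incident angle (degrees)
interval_steel = 1.8 + 0.02 * angles - 1e-4 * angles**2     # invented data
interval_alu   = 2.4 + 0.03 * angles - 2e-4 * angles**2     # invented data

coeffs_steel = np.polyfit(angles, interval_steel, deg=2)    # [a2, a1, a0]
coeffs_alu   = np.polyfit(angles, interval_alu, deg=2)
print("steel:   ", coeffs_steel)
print("aluminum:", coeffs_alu)
```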
Procedia PDF Downloads 109
859 Comparison of Physicochemical Properties of Catfish Myofibrillar and Sarcoplasmic Protein Hydrolysates and Characterization of Their Bioactive Peptides
Authors: Leila Najafian
Abstract:
Sarcoplasmic protein hydrolysates (SPHs) and myofibrillar protein hydrolysates (MPHs) from patin (Pangasius sutchi) were produced using two types of proteases: papain and Alcalase. 1,1-Diphenyl-2-picrylhydrazyl (DPPH) and 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) diammonium salt (ABTS) radical scavenging activity and metal chelating activity assays were carried out on the SPHs and MPHs to assess their antioxidant activities. The hydrolysates were isolated and purified by ultrafiltration, gel filtration and reverse-phase high-performance liquid chromatography (RP-HPLC), and liquid chromatography with tandem mass spectrometry detection (LC-MS/MS) was used to identify the peptide sequences. The results showed that the protein solubility of the MPHs increased as the degree of hydrolysis (DH) increased, while the protein solubility of the SPHs was highest after 60 min of incubation. The effect of DH on the antioxidant activities of the SPHs and MPHs was investigated. Among the hydrolysates, papain-MPH and Alcalase-SPH, which had the highest antioxidant activities, were purified. The potent fractions obtained from RP-HPLC of the sarcoplasmic (SI 3 fraction) and myofibrillar (MI 4 fraction) hydrolysates showed the highest DPPH radical scavenging activity. The FVNQPYLLYSVHMK peptide for MPH and the LVVDIPAALQHA peptide for SPH exhibited the highest antioxidant activity. The presence of hydrophobic and hydrophilic amino acids, namely leucine (L), valine (V), phenylalanine (F), histidine (H) and proline (P), in the peptide sequences of SPH and MPH is believed to contribute to the high antioxidant activity. Hence, SPH and MPH from patin have potential as natural functional ingredients in the food and pharmaceutical industries.
Keywords: patin (Pangasius sutchi), protein hydrolysates, antioxidative peptides, mass spectrometry
Procedia PDF Downloads 260
858 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour
Authors: Libor Zachoval, Daire O Broin, Oisin Cawley
Abstract:
E-learning platforms such as Blackboard have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and a lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms, which could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), a large number of additional types of data can be captured, and that opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome, for they can hinder the knowledge development of a learner. Behaviours that hinder knowledge development also raise ambiguity about a learner's knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees pass courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing, as opposed to going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points, knowledge estimation models to model the knowledge mastery of a learner more accurately, and analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be utilised to make changes to a course. For example, course content may require modifications (certain sections of learning material may be shown not to be helpful to many learners in mastering the intended learning outcomes) or course design changes (such as the type and duration of feedback).
Keywords: artificial intelligence, corporate e-learning environment, knowledge maintenance, xAPI
Procedia PDF Downloads 121
857 Assessing the Legacy Effects of Wildfire on Eucalypt Canopy Structure of South Eastern Australia
Authors: Yogendra K. Karna, Lauren T. Bennett
Abstract:
Fire-tolerant eucalypt forests are one of the major forest ecosystems of south-eastern Australia and are thought to be highly resistant to frequent high-severity wildfires. However, the impact of wildfires of different severities on the canopy structure of this fire-tolerant forest type is under-studied, and there are significant knowledge gaps in relation to the assessment of tree- and stand-level canopy structural dynamics and recovery after fire. Assessment of canopy structure is a complex task involving accurate measurements of the horizontal and vertical arrangement of the canopy in space and time. This study examined the utility of multi-temporal, small-footprint lidar data to describe the changes in the horizontal and vertical canopy structure of fire-tolerant eucalypt forests seven years after wildfires of different severities, from the tree to the stand level. Extensive ground measurements were carried out in four severity classes to describe and validate canopy cover and height metrics as they change after wildfire. Several metrics, such as crown height and width, crown base height and crown clumpiness, were assessed at the tree and stand level using several individual tree-top detection and measurement algorithms. Persistent effects of high-severity fire on both tree crowns and the stand canopy were observed eight years after the fire. High-severity fire increased crown depth but decreased crown projective cover, leading to a more open canopy.
Keywords: canopy gaps, canopy structure, crown architecture, crown projective cover, multi-temporal lidar, wildfire severity
Procedia PDF Downloads 175
856 Effects of Environmental Parameters on Salmonella Contaminated in Harvested Oysters (Crassostrea lugubris and Crassostrea belcheri)
Authors: Varangkana Thaotumpitak, Jarukorn Sripradite, Saharuetai Jeamsripong
Abstract:
Environmental contamination from wastewater discharges originating from anthropogenic activities leads to the accumulation of enteropathogenic bacteria in aquatic animals, especially oysters, and in shellfish harvesting areas. The consumption of raw or partially cooked oysters can be a risk factor for seafood-borne diseases in humans. This study aimed to evaluate the relationship between the presence of Salmonella in oyster meat samples and environmental factors (ambient air temperature, relative humidity, gust wind speed, average wind speed, tidal condition, precipitation and season) using principal component analysis (PCA). One hundred and forty-four oyster meat samples were collected from four oyster harvesting areas in Phang Nga province, Thailand, from March 2016 to February 2017. The prevalence of Salmonella in oyster meat ranged from 25.0% to 36.11% across the sites. The results of the PCA showed that ambient air temperature, relative humidity, and precipitation were the main factors correlated with Salmonella detection in these oysters. A positive relationship was observed between Salmonella-positive oysters and relative humidity (PC1 = 0.413) and precipitation (PC1 = 0.607), while a negative association was found between ambient air temperature (PC1 = 0.338) and the presence of Salmonella in the oyster samples. These results suggest that lower temperature, higher precipitation and higher relative humidity possibly affect Salmonella contamination of oyster meat. During high-risk periods, harvesting of oysters should be prohibited to reduce pathogenic bacterial contamination and to minimize the hazard of salmonellosis in humans.
Keywords: oyster, Phang Nga Bay, principal component analysis, Salmonella
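A short sketch of the PCA step described above: standardized environmental measurements are projected onto principal components and the PC1 loadings are inspected. The variable names and values are illustrative only.

```python
# PCA on standardized environmental variables; PC1 loadings indicate which
# factors dominate the main axis of variation.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

env = pd.DataFrame({
    "air_temp":  [30.1, 29.5, 31.2, 28.7, 30.8, 29.0],
    "humidity":  [78, 82, 70, 88, 74, 85],
    "gust_wind": [6.1, 4.8, 7.2, 3.9, 6.5, 4.2],
    "avg_wind":  [3.2, 2.5, 3.9, 2.1, 3.5, 2.3],
    "precip":    [0.0, 12.4, 0.0, 25.1, 2.3, 18.0],
})

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(env))
loadings = pd.DataFrame(pca.components_.T, index=env.columns, columns=["PC1", "PC2"])
print(loadings)                           # sign/magnitude of PC1 loadings per variable
print(pca.explained_variance_ratio_)
```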
Procedia PDF Downloads 132
855 In vitro and in vivo Anticancer Activity of Nanosize Zinc Oxide Composites of Doxorubicin
Authors: Emma R. Arakelova, Stepan G. Grigoryan, Flora G. Arsenyan, Nelli S. Babayan, Ruzanna M. Grigoryan, Natalia K. Sarkisyan
Abstract:
Novel nanosize zinc oxide composites of doxorubicin, obtained by depositing a 180 nm thick zinc oxide film on the drug surface using DC magnetron sputtering of a zinc target, in the form of gels (PEO+Dox+ZnO and Starch+NaCMC+Dox+ZnO), were studied for drug delivery applications. Cancer specificity was revealed in both in vitro and in vivo models. The cytotoxicity of the test compounds was analyzed against human cancer (HeLa) and normal (MRC5) cell lines using the MTT colorimetric cell viability assay. IC50 values were determined and compared to reveal the cancer specificity of the test samples. The mechanism of action of the most active compound was investigated using flow cytometry analysis of the DNA content after propidium iodide (PI) staining. Data were analyzed with Tree Star FlowJo software using the Dean-Jett-Fox cell cycle analysis module. The in vivo anticancer activity experiments were carried out on mice with inoculated ascitic Ehrlich's carcinoma after intraperitoneal administration of doxorubicin and its zinc oxide compositions. It was shown that the deposition of a nanosize zinc oxide film on the drug surface leads to selective anticancer activity of the composites at the cellular level, with selectivity indices (SI) ranging from 4 (Starch+NaCMC+Dox+ZnO) to 200 (PEO(gel)+Dox+ZnO), the latter being higher than that of free Dox (SI = 56). A significant increase in in vivo antitumor activity (by a factor of 2-2.5) and a decrease in the general toxicity of the zinc oxide compositions of doxorubicin in the form of the above-mentioned gels, compared to free doxorubicin, were shown in the model of inoculated Ehrlich's ascitic carcinoma. Mechanistic studies of the anticancer activity revealed a cytostatic effect based on a high level of DNA biosynthesis inhibition at considerably low concentrations of the zinc oxide compositions of doxorubicin. The results of the in vitro and in vivo studies of the behavior of the PEO+Dox+ZnO and Starch+NaCMC+Dox+ZnO composites confirm the high potential of nanosize zinc oxide composites as a vector delivery system for future application in cancer chemotherapy.
Keywords: anticancer activity, cancer specificity, doxorubicin, zinc oxide
Procedia PDF Downloads 411
854 Synthesis of Human Factors Theories and Industry 4.0
Authors: Andrew Couch, Nicholas Loyd, Nathan Tenhundfeld
Abstract:
The rapid emergence of technology observably induces disruptive effects that carry implications for internal organizational dynamics as well as external market opportunities, strategic pressures, and threats. An examination of the historical tendencies of technology innovation shows that the body of managerial knowledge for addressing such disruption is underdeveloped. Fundamentally speaking, the impacts of innovation are unique and situationally oriented. Hence, the appropriate managerial response becomes a complex function that depends on the nature of the emerging technology, the posturing of internal organizational dynamics, the rate of technological growth, and much more. This research considers a particular case of mismanagement, the BP Texas City Refinery explosion of 2005, which exhibits notable discrepancies with human factors principles. Moreover, this research considers the modern technological climate (shaped by Industry 4.0 technologies) and seeks to arrive at an appropriate conceptual lens by which human factors principles and Industry 4.0 may be favorably integrated. In this manner, the careful examination of these phenomena helps to better support the sustainment of human factors principles despite the disruptive impacts imparted by technological innovation. In essence, human factors considerations are assessed through the application of principles that stem from usability engineering, the Swiss Cheese Model of accident causation, human-automation interaction, signal detection theory, alarm design, and other factors. Notably, this stream of research supports a broader framework that seeks to guide organizations amid the uncertainties of Industry 4.0 toward higher levels of adoption, implementation, and transparency.
Keywords: Industry 4.0, human factors engineering, management, case study
Procedia PDF Downloads 68
853 Intensity of Dyspnea and Anxiety in Seniors in the Terminal Phase of the Disease
Authors: Mariola Głowacka
Abstract:
Aim: The aim of this study was to present the assessment of dyspnea and anxiety in seniors staying in a hospice, in the context of the nurse's tasks. Materials and methods: The presented research was carried out at the "Hospicjum Płockie" Association of St. Urszula Ledóchowska in Płock, in an inpatient ward for adults. The research group consisted of 100 people, both women and men. In the study described in this paper, the diagnostic survey method, the estimation method and an analysis of patient records were used; the research tools were the Numerical Rating Scale (NRS), the modified Borg scale to assess dyspnea, the Trait Anxiety scale to measure the intensity of anxiety, and a sociodemographic assessment of the respondent. Results: Among the patients, the greatest numbers were people without dyspnoea (38 people) and with average levels of dyspnoea (26 people). People with lung cancer had a higher level of breathlessness than people with other cancers. Half of the patients included in the study felt anxiety at a low level. On average, men had a higher level of anxiety than women. Conclusions: 1) Patients staying in the hospice require comprehensive nursing care due to the underlying disease, comorbidities, and the wide range of medications taken, which aggravate the feeling of dyspnea and anxiety. 2) The study showed that in patients staying in the hospice the level of dyspnea was of varying severity: 38 people were without dyspnea and 34 had a low level of dyspnea, while 12 people experienced an average level of dyspnea and 15 a high level. 3) The main factor influencing the severity of dyspnea in patients was the location of the cancer. There was no significant relationship between the intensity of dyspnea and the age or gender of the patient or the time from diagnosis. 4) The study showed that in patients staying in the hospice the level of anxiety was of varying severity. Most people experienced a low level of anxiety (51); 16 people had a high level of anxiety, while 33 people experienced anxiety at an average level. 5) The patient's gender was the main factor influencing anxiety intensity: men had higher levels of anxiety than women. There was no significant correlation between the intensity of anxiety and the age of the respondents, the type of cancer, or the time since diagnosis. 6) The intensity of dyspnea depended on the type of cancer the subjects had: people with lung cancer had a higher level of breathlessness than those with breast cancer or bowel cancer. It was not found that anxiety increased depending on the type of cancer or the comorbidities of the person examined.
Keywords: cancer, shortness of breath, anxiety, senior, hospice
Procedia PDF Downloads 94
852 Structural Damage Detection in a Steel Column-Beam Joint Using Piezoelectric Sensors
Authors: Carlos H. Cuadra, Nobuhiro Shimoi
Abstract:
The application of piezoelectric sensors to detect structural damage due to seismic action on building structures is investigated. A plate-type piezoelectric sensor was developed and proposed for this task. A film-type piezoelectric sheet was attached to a steel plate and covered by a layer of glass. A special glue, a silicone that requires the application of ultraviolet rays for its hardening, is used to fix the glass. The steel plate was then set up at a steel column-beam joint of a test specimen, where it is subjected to bending moment when the specimen undergoes monotonic and cyclic loading. The structural behavior of the test specimen during cyclic loading was verified using a finite element model, and good agreement was found between the two sets of results for the load-displacement characteristics. The cross section of the steel elements (beam and column) is a box section of 100 mm × 100 mm with a thickness of 6 mm. This steel section is specified by the Japanese Industrial Standards as carbon steel square tube for general structures (STKR400). The column and beam elements are joined perpendicularly using fillet welding, so the resulting test specimen has a T shape. When large deformation occurs, the glass plate of the sensor device cracks, and at that instant the piezoelectric material emits a voltage signal which serves as an indicator of a certain level of deformation or damage. The applicability of this piezoelectric sensor to detect structural damage was verified; however, additional analysis and experimental tests are required to establish standard parameters of the sensor system.
Keywords: piezoelectric sensor, static cyclic test, steel structure, seismic damages
Procedia PDF Downloads 123
851 Role of Vision Centers in Eliminating Avoidable Blindness Caused Due to Uncorrected Refractive Error in Rural South India
Authors: Ranitha Guna Selvi D, Ramakrishnan R, Mohideen Abdul Kader
Abstract:
Purpose: To study the role of vision centers in managing preventable blindness through refractive error correction in rural South India. Methods: A retrospective analysis of patients attending 15 vision centers in rural South India from January 2021 to December 2021 was carried out. Medical records of 108,581 patients, both new and review (79,562 newly registered patients and 29,019 review patients), from the 15 vision centers were included for data analysis. All patients registered at a vision center underwent a basic eye examination, including visual acuity, IOP measurement, slit-lamp examination, retinoscopy, fundus examination, etc. Results: A total of 108,581 patients were included in the study; 79,562 were newly registered patients at the vision centers and 29,019 were review patients. Among them, 52,201 (48.1%) were male and 56,308 (51.9%) were female. The mean age of all examined patients was 41.03 ± 20.9 years (standard deviation) and ranged from 1 to 113 years. Presenting mean visual acuity was 0.31 ± 0.5 in the right eye and 0.31 ± 0.4 in the left eye. Of the 108,581 patients, 22,770 had refractive error in the right eye and 22,721 had uncorrected refractive error in the left eye. A glass prescription was given to 17,178 (15.8%) patients, and 8,109 (7.5%) patients were referred to the base hospital for a specialty clinic expert opinion or for cataract surgery. Conclusion: A vision center utilizing teleconsultation for comprehensive eye screening is a very effective tool in reducing the avoidable visual impairment caused by uncorrected refractive error. The vision centre model is believed to be efficient, as it facilitates early detection and management of uncorrected refractive errors.
Keywords: refractive error, uncorrected refractive error, vision center, vision technician, teleconsultation
Procedia PDF Downloads 142
850 Stereo Motion Tracking
Authors: Yudhajit Datta, Hamsi Iyer, Jonathan Bandi, Ankit Sethia
Abstract:
Motion tracking and stereo vision are complicated, albeit well-understood, problems in computer vision. Existing software that combines the two approaches to perform stereo motion tracking typically employs complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches: two-dimensional motion tracking using a Kalman filter, and depth detection of the object using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For stereo motion tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and to represent it through a software estimator object. In discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the perpendicular distance of the object from that plane as its depth. The ability to efficiently track the motion of objects in three-dimensional space using this simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find applications ranging from high-security surveillance scenes, such as the premises of bank vaults, prisons or other detention facilities, to low-cost applications in supermarkets and car parking lots.
Keywords: kalman filter, stereo vision, motion tracking, matlab, object tracking, camera calibration, computer vision system toolbox
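A simplified sketch of the approach described above: a constant-velocity Kalman filter tracks the object in the image plane, while depth is obtained from the stereo disparity as depth = f x B / d. The focal length, baseline, and measurements are placeholder values, not calibration results (and the sketch is in Python rather than the MATLAB toolbox named in the keywords).

```python
# Constant-velocity Kalman filter in the image plane plus disparity-based depth.
import numpy as np

f_px, baseline_m = 800.0, 0.12          # assumed focal length (px) and baseline (m)
dt = 1 / 30.0                            # frame interval

# State [x, y, vx, vy]; measurement [x, y] in the left image.
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])
Q, R = np.eye(4) * 1e-2, np.eye(2) * 2.0
x, P = np.zeros(4), np.eye(4) * 10.0

def kalman_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                      # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x = x + K @ (z - H @ x)                            # update with measurement z
    P = (np.eye(4) - K @ H) @ P
    return x, P

left_xy, right_x = np.array([312.0, 240.0]), 287.0     # matched detections (px)
x, P = kalman_step(x, P, left_xy)
depth_m = f_px * baseline_m / (left_xy[0] - right_x)   # distance from camera plane
print(x[:2], depth_m)
```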
Procedia PDF Downloads 327
849 Evaluating the Feasibility of Chemical Dermal Exposure Assessment Model
Authors: P. S. Hsi, Y. F. Wang, Y. F. Ho, P. C. Hung
Abstract:
The aim of the present study was to explore dermal exposure assessment models for chemicals that have been developed abroad and to evaluate the feasibility of a chemical dermal exposure assessment model for the manufacturing industry in Taiwan. We analyzed six semi-quantitative risk management tools: UK - Control of Substances Hazardous to Health (COSHH), Europe - Risk Assessment of Occupational Dermal Exposure (RISKOFDERM), Netherlands - Dose-Related Effect Assessment Model (DREAM), Netherlands - Stoffenmanager (STOFFEN), Nicaragua - Dermal Exposure Ranking Method (DERM) and USA/Canada - Public Health Engineering Department (PHED). Five types of manufacturing industry were selected for evaluation. Monte Carlo simulation was used to analyze the sensitivity of each factor, and the correlation between the assessment results of each semi-quantitative model and the exposure factors used in the model was analyzed to understand the important evaluation indicators of the dermal exposure assessment models. To assess the effectiveness of the semi-quantitative assessment models, this study also produced quantitative dermal exposure results using a prediction model and verified the correlation via Pearson's test. The results show that COSHH was unable to determine the strength of its decision factors because the results evaluated for all industries belonged to the same risk level. In the DERM model, the transmission process, the exposed area, and the clothing protection factor are all positively correlated. In the STOFFEN model, the fugitive emission, the operation, the near-field and far-field concentrations, and the operating time and frequency show a positive correlation. There is a positive correlation between skin exposure, relative working time, and working environment in the DREAM model. In the RISKOFDERM model, the actual exposure situation and exposure time have a positive correlation. We also found a high correlation with the DERM and RISKOFDERM models, with correlation coefficients of 0.92 and 0.93 (p < 0.05), respectively. The STOFFEN and DREAM models have poor correlation, with coefficients of 0.24 and 0.29 (p > 0.05), respectively. According to these results, both the DERM and RISKOFDERM models are suitable for use in the selected manufacturing industries. However, considering the small sample size evaluated in this study, more categories of industries should be evaluated to reduce the uncertainty and enhance the applicability of the models in the future.
Keywords: dermal exposure, risk management, quantitative estimation, feasibility evaluation
Procedia PDF Downloads 169
848 Doped and Co-doped ZnO Based Nanoparticles and their Photocatalytic and Gas Sensing Property
Authors: Neha Verma, Manik Rakhra
Abstract:
Statement of the Problem: Nowadays, a tremendous increase in population and advanced industrialization augment the problems related to air and water pollution. Growing industries promote environmental hazards, which are an alarming threat to the ecosystem. To safeguard the environment, the detection of perilous gases and of the release of colored wastewater, which causes eutrophication pollution, is required. Researchers around the globe are making their best efforts to save the environment; for this remediation, advanced oxidation processes are used for potential applications. ZnO is an important semiconductor photocatalyst with high photocatalytic and gas sensing activities. For efficient photocatalytic and gas sensing properties, it is necessary to prepare doped/co-doped ZnO compounds to decrease the electron-hole recombination rates. However, lanthanide-doped and co-doped metal oxides have seldom been studied for photocatalytic and gas sensing applications. The purpose of this study is to describe the best photocatalyst for the photodegradation of dyes and for gas sensing properties. Methodology & Theoretical Orientation: An economical framework has to be used for the synthesis of ZnO; following an in-depth literature survey, a simple combustion method was utilized for the gas sensing and photocatalytic activities. Findings: Rare-earth doped and co-doped ZnO nanoparticles were the best photocatalysts for the photodegradation of organic dyes and for different gas sensing applications, obtained by varying factors such as pH, aging time, and the concentrations of doping and co-doping metals in ZnO. Complete degradation of the dye was observed in only minutes. The gas sensing nanodevice showed a better response and a quick recovery time for doped/co-doped ZnO. Conclusion & Significance: In order to prevent air and water pollution, well-crystalline ZnO nanoparticles were synthesized by a rapid and economic method; these are used as photocatalysts for the photodegradation of organic dyes and in gas sensing applications to sense the release of hazardous gases into the environment.
Keywords: ZnO, photocatalyst, photodegradation of dye, gas sensor
Procedia PDF Downloads 155
847 Signs, Signals and Syndromes: Algorithmic Surveillance and Global Health Security in the 21st Century
Authors: Stephen L. Roberts
Abstract:
This article offers a critical analysis of the rise of syndromic surveillance systems for the advanced detection of pandemic threats within contemporary global health security frameworks. The article traces the iterative evolution and ascendancy of three such novel syndromic surveillance systems for the strengthening of health security initiatives over the past two decades: 1) The Program for Monitoring Emerging Diseases (ProMED-mail); 2) The Global Public Health Intelligence Network (GPHIN); and 3) HealthMap. This article demonstrates how each newly introduced syndromic surveillance system has become increasingly oriented towards the integration of digital algorithms into core surveillance capacities to continually harness and forecast upon infinitely generating sets of digital, open-source data, potentially indicative of forthcoming pandemic threats. This article argues that the increased centrality of the algorithm within these next-generation syndromic surveillance systems produces a new and distinct form of infectious disease surveillance for the governing of emergent pathogenic contingencies. Conceptually, the article also shows how the rise of this algorithmic mode of infectious disease surveillance produces divergences in the governmental rationalities of global health security, leading to the rise of an algorithmic governmentality within contemporary contexts of Big Data and these surveillance systems. Empirically, this article demonstrates how this new form of algorithmic infectious disease surveillance has been rapidly integrated into diplomatic, legal, and political frameworks to strengthen the practice of global health security – producing subtle, yet distinct shifts in the outbreak notification and reporting transparency of states, increasingly scrutinized by the algorithmic gaze of syndromic surveillance.
Keywords: algorithms, global health, pandemic, surveillance
Procedia PDF Downloads 185846 Supervisory Controller with Three-State Energy Saving Mode for Induction Motor in Fluid Transportation
Authors: O. S. Ebrahim, K. O. Shawky, M. O. S. Ebrahim, P. K. Jain
Abstract:
An induction motor (IM) driving a pump is the main consumer of electricity in a typical fluid transportation system (FTS). It has been shown that changing the connection of the stator windings from delta to star at no load can achieve noticeable active and reactive energy savings. This paper proposes a supervisory hysteresis liquid-level control with a three-state energy saving mode (ESM) for the IM in an FTS that includes a storage tank. The IM pump drive comprises a modified star/delta switch and a hydrodynamic coupler. The three-state ESM is defined alongside normal running and named by analogy with computer power-saving modes as follows: sleep mode, in which the motor runs at no load with the delta stator connection; hibernate mode, in which the motor runs at no load with the star connection; and motor shutdown, the third energy-saving mode. A logic flow-chart is synthesized to select the motor state at no load for the best energy cost reduction, considering the motor thermal capacity used. An artificial neural network (ANN) state estimator, based on a recurrent architecture, is constructed and trained in order to provide fault-tolerant capability for the supervisory controller. Wald's sequential test is used for sensor fault detection. Theoretical analysis, preliminary experimental testing, and computer simulations are performed to show the effectiveness of the proposed control in terms of reliability, power quality, and energy/coenergy cost reduction, with the suggestion of power factor correction.Keywords: ANN, ESM, IM, star/delta switch, supervisory control, FTS, reliability, power quality
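The paper's logic flow-chart is not reproduced in the abstract; the sketch below is only a hedged illustration of how a supervisory rule of the kind described might combine hysteresis tank-level control with a choice among the three no-load energy-saving states. All thresholds, signal names, and the thermal-capacity bookkeeping are illustrative assumptions, not the authors' design.

```python
from enum import Enum

class MotorState(Enum):
    RUN = "normal running"
    SLEEP = "no load, delta connection"     # lightest saving, fastest resume
    HIBERNATE = "no load, star connection"  # deeper saving at no load
    SHUTDOWN = "motor off"                  # maximum saving, restart penalty

def select_no_load_state(expected_idle_s, thermal_capacity_used,
                         sleep_limit_s=60.0, hibernate_limit_s=600.0,
                         thermal_limit=0.8):
    """Toy supervisory rule: the longer the expected idle period, the deeper
    the energy-saving state; avoid shutdown if a later restart would push the
    motor's used thermal capacity too high."""
    if thermal_capacity_used > thermal_limit:
        return MotorState.HIBERNATE      # keep it spinning: no extra start
    if expected_idle_s < sleep_limit_s:
        return MotorState.SLEEP
    if expected_idle_s < hibernate_limit_s:
        return MotorState.HIBERNATE
    return MotorState.SHUTDOWN

def hysteresis_command(level_m, pumping, low_m=1.0, high_m=3.0):
    """Hysteresis (bang-bang) tank-level control: start pumping below low_m,
    stop above high_m, otherwise keep the previous command."""
    if level_m < low_m:
        return True       # pump on
    if level_m > high_m:
        return False      # pump off -> supervisory ESM decides the idle state
    return pumping

# Example: tank nearly full, long idle expected, motor thermally cold
pumping = hysteresis_command(3.2, pumping=True)
if not pumping:
    print(select_no_load_state(expected_idle_s=1800, thermal_capacity_used=0.2))
```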
Procedia PDF Downloads 193845 Investigation of Detectability of Orbital Objects/Debris in Geostationary Earth Orbit by Microwave Kinetic Inductance Detectors
Authors: Saeed Vahedikamal, Ian Hepburn
Abstract:
Microwave Kinetic Inductance Detectors (MKIDs) are considered among the most promising photon detectors of the future for many astronomical applications such as exoplanet detection. The MKID advantages stem from their single-photon sensitivity (ranging from UV to optical and near infrared), photon energy resolution, and high temporal capability (~microseconds). There has been substantial progress in the development of these detectors, and megapixel MKID arrays are now possible. The unique capability of recording an incident photon and its energy (or wavelength), while also registering its time of arrival to within a microsecond, enables an array of MKIDs to produce a four-dimensional data block (x, y, z, t), where x and y are spatial, z is the per-pixel spectral axis, and t is the per-pixel time of arrival. This offers the possibility that the spectrum and brightness variation of any detected piece of space debris as a function of time might provide a unique identifier or fingerprint. Such a fingerprint signal from an object identified in multiple detections by different observers has the potential to determine the orbital features of the object and be used for its tracking. Modelling performed so far shows that with a 20 cm telescope located at an astronomical observatory (e.g. La Palma, Canary Islands) we could detect sub-cm objects at GEO. By considering a Lambertian sphere with 10 % reflectivity (the albedo of the Moon), we anticipate the following for a GEO object: a 10 cm object imaged in a 1 second capture; a 1.2 cm object for a 70 second image integration; or a 0.65 cm object for a 4 minute image integration. We present details of our modelling and of the potential instrument for a dedicated GEO surveillance system.Keywords: space debris, orbital debris, detection system, observation, microwave kinetic inductance detectors, MKID
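The three quoted cases are consistent with the simple scaling expected if detection is limited by the number of reflected photons collected: the flux from a Lambertian sphere scales with its cross-section, so the required integration time grows as the inverse square of the debris diameter. The sketch below only reproduces that scaling from the 10 cm / 1 s reference case; it is an assumed simplification, not the authors' radiometric model.

```python
def required_integration_time(diameter_cm, ref_diameter_cm=10.0, ref_time_s=1.0):
    """Photon-count-limited scaling: reflected flux ~ diameter^2, so the
    integration time needed to collect the same photon count ~ 1/diameter^2."""
    return ref_time_s * (ref_diameter_cm / diameter_cm) ** 2

for d in (10.0, 1.2, 0.65):
    print(f"{d:5.2f} cm object at GEO: ~{required_integration_time(d):6.1f} s")
# prints ~1 s, ~69 s and ~237 s, close to the 1 s, 70 s and 4 min figures quoted
```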
Procedia PDF Downloads 98844 Femoral Neck Anteversion and Neck-Shaft Angles: Determination and Their Clinical Implications in Fetuses of Different Gestational Ages
Authors: Vrinda Hari Ankolekar, Anne D. Souza, Mamatha Hosapatna
Abstract:
Introduction: A precise anatomical assessment of femoral neck anteversion (FNA) and the neck-shaft angle (NSA) is essential in diagnosing pathological conditions involving the hip joint and its ligaments. An FNA greater than 20 degrees is considered excessive femoral anteversion, whereas a torsion angle of less than 10 degrees is considered femoral retroversion. Excessive femoral torsion is not uncommon and has been associated with certain neurologic and orthopedic conditions. The enlargement and maturation of the hip joint increase at the 20th week of gestation, and the NSA ranges from 135-140° at birth. Material and methods: 48 femurs were tagged according to their gestational age (GA), and two photographs of each femur were taken using a Nikon digital camera. Each femur was placed on a horizontal hard desk; an end-on image of the upper end was taken for the estimation of the FNA, and a photograph in a perpendicular plane was taken to calculate the NSA. The images were transferred to a computer and stored in TIFF format. Microsoft Paint software was used to mark the points, and Image J software was used to calculate the angles digitally. 1. Calculation of FNA: The midpoints of the femoral head and the neck were marked and a line was drawn joining these two points. The angle made by this line with the horizontal plane was measured as the FNA. 2. Calculation of NSA: The midpoints of the femoral head and the neck were marked and a line was drawn joining these two points. A vertical line was drawn passing through the tip of the greater trochanter to the intercondylar notch. The angle formed by these lines was calculated as the NSA. Results: The paired t-test for inter-observer variability showed no significant difference between the values of the two observers (FNA: t = -1.06, p = 0.31; NSA: t = -0.09, p = 0.9). The FNA ranged from 17.08° to 33.97° on the right and 17.32° to 45.08° on the left. The NSA ranged from 139.33° to 124.91° on the right and 143.98° to 123.8° on the left. An unpaired t-test was applied to compare the mean angles between the second and third trimesters and did not show any statistical significance, indicating that the FNA and NSA of the femur did not vary significantly during the third trimester. The FNA and NSA were correlated with the GA using Pearson's correlation. The FNA appeared to increase with the GA (r = 0.5), but the increase was not statistically significant. A decrease in the NSA was also noted with the GA (r = -0.3), which was likewise not statistically significant. Conclusion: The present study evaluates the FNA and NSA of the femur in fetuses and correlates their development with the GA during the second and third trimesters. The FNA and NSA did not vary significantly during the third trimester.Keywords: anteversion, coxa antetorsa, femoral torsion, femur neck shaft angle
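As a purely illustrative sketch of the two angle measurements described (the pixel coordinates and the use of plain vector geometry, rather than the Image J workflow, are assumptions), the FNA can be taken as the angle of the head-neck line with the horizontal in the end-on view, and the NSA as the angle between the neck axis and the line through the greater trochanter tip and the intercondylar notch.

```python
import math

def line_angle_with_horizontal(p1, p2):
    """Angle (degrees) between the line p1->p2 and the horizontal image axis."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return abs(math.degrees(math.atan2(dy, dx)))

def angle_between_lines(a1, a2, b1, b2):
    """Angle (degrees) between the line a1->a2 and the line b1->b2."""
    ux, uy = a2[0] - a1[0], a2[1] - a1[1]
    vx, vy = b2[0] - b1[0], b2[1] - b1[1]
    cosang = (ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

# Hypothetical pixel coordinates marked on the photographs
head_mid, neck_mid = (120, 210), (180, 185)              # end-on view
trochanter_tip, condylar_notch = (200, 40), (215, 400)   # perpendicular view
print(line_angle_with_horizontal(head_mid, neck_mid))    # FNA estimate (deg)
print(angle_between_lines(neck_mid, head_mid,
                          trochanter_tip, condylar_notch))  # NSA estimate (deg)
```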
Procedia PDF Downloads 320843 Broadband Ultrasonic and Rheological Characterization of Liquids Using Longitudinal Waves
Authors: M. Abderrahmane Mograne, Didier Laux, Jean-Yves Ferrandis
Abstract:
Rheological characterization of complex liquids such as polymer solutions is of great scientific interest to researchers in many fields, including biology, the food industry, and chemistry. In order to establish master curves (elastic moduli vs. frequency) which can give information about the microstructure, classical rheometers or viscometers (such as Couette systems) are used. For broadband characterization of a sample, the temperature is varied over a very large range, leading to equivalent frequency shifts through the time-temperature superposition principle. For many liquids undergoing phase transitions, this approach is not applicable. That is why the development of broadband spectroscopic methods around room temperature has become a major concern. In the literature, many solutions have been proposed, but to our knowledge there is no experimental bench giving the whole rheological characterization from a few Hz (hertz) to many MHz (megahertz). Consequently, our goal is to investigate rheological properties nondestructively over a very broad frequency band (a few Hz to hundreds of MHz) using longitudinal ultrasonic waves (L waves), a single experimental bench, and a specific container for the liquid: a test tube. More specifically, we aim to estimate the three viscosities (longitudinal, shear, and bulk) and the complex elastic moduli (M*, G*, and K*), respectively the longitudinal, shear, and bulk moduli. We have decided to use only L waves, conditioned in two ways: bulk L waves in the liquid or guided L waves in the test-tube walls. In this paper, we will first present results for very low frequencies using the ultrasonic tracking of a ball falling in the test tube. This leads to the estimation of the shear viscosity from a few mPa.s to a few Pa.s (pascal second). Corrections due to the small dimensions of the tube will be applied and discussed with regard to the size of the falling ball. Then the use of bulk L-wave propagation in the liquid and the development of specific signal processing to assess the longitudinal velocity and attenuation will lead to the evaluation of the longitudinal viscosity in the MHz frequency range. Finally, the first results concerning the generation, propagation, and processing of guided compressional waves in the test-tube walls will be discussed. All these approaches and results will be compared to standard methods available and already validated in our lab.Keywords: nondestructive measurement for liquid, piezoelectric transducer, ultrasonic longitudinal waves, viscosities
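As a hedged illustration of the two estimations mentioned (the particular wall-correction law and all numerical values are assumptions, not the authors' calibration), the falling-ball stage can be related to the shear viscosity through Stokes' law with a wall-correction factor for the narrow tube, while the bulk-wave stage relates the measured attenuation and sound velocity to the longitudinal viscosity through the classical low-attenuation relation.

```python
import math

def shear_viscosity_falling_ball(ball_radius_m, tube_radius_m,
                                 terminal_velocity_m_s,
                                 rho_ball, rho_liquid, g=9.81):
    """Stokes' law with a simple Ladenburg-type wall correction for a ball
    falling on the axis of a cylindrical tube (assumed valid for small r/R)."""
    eta_apparent = 2.0 * (rho_ball - rho_liquid) * g * ball_radius_m ** 2 \
                   / (9.0 * terminal_velocity_m_s)
    wall_factor = 1.0 + 2.4 * ball_radius_m / tube_radius_m  # assumed correction
    return eta_apparent / wall_factor

def longitudinal_viscosity(attenuation_np_m, frequency_hz, rho_liquid, velocity_m_s):
    """Classical relation alpha = w^2 * eta_L / (2 * rho * c^3), inverted to
    give the longitudinal viscosity eta_L from measured alpha and c."""
    w = 2.0 * math.pi * frequency_hz
    return 2.0 * rho_liquid * velocity_m_s ** 3 * attenuation_np_m / w ** 2

# Hypothetical measurements: steel ball in a test tube of 8 mm inner radius,
# and a water-like liquid probed at 5 MHz
print(shear_viscosity_falling_ball(1e-3, 8e-3, 0.05, 7800.0, 1000.0))  # ~0.23 Pa.s
print(longitudinal_viscosity(0.5, 5e6, 1000.0, 1500.0))                # ~3.4 mPa.s
```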
Procedia PDF Downloads 265