Search results for: signal classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3694

94 A Comprehensive Survey of Artificial Intelligence and Machine Learning Approaches across Distinct Phases of Wildland Fire Management

Authors: Ursula Das, Manavjit Singh Dhindsa, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

Wildland fires, also known as forest fires or wildfires, are exhibiting an alarming surge in frequency in recent times, further adding to their status as a perennial global concern. Forest fires often lead to devastating consequences, ranging from the loss of healthy forest foliage and wildlife to substantial economic losses and the tragic loss of human lives. Despite the existence of substantial literature on the detection of active forest fires, numerous potential research avenues in forest fire management, such as preventative measures and the ancillary effects of forest fires, remain largely underexplored. This paper undertakes a systematic review of these underexplored areas in forest fire research, meticulously categorizing them into distinct phases, namely the pre-fire, during-fire, and post-fire stages. The pre-fire phase encompasses the assessment of fire risk, analysis of fuel properties, and other activities aimed at preventing or reducing the risk of forest fires. The during-fire phase includes activities aimed at reducing the impact of active forest fires, such as the detection and localization of active fires, optimization of wildfire suppression methods, and prediction of the behavior of active fires. The post-fire phase involves analyzing the impact of forest fires on various aspects, such as the extent of damage in forest areas, post-fire regeneration of forests, impact on wildlife, economic losses, and health impacts from byproducts produced during burning. A comprehensive understanding of the three stages is imperative for effective forest fire management and mitigation of the impact of forest fires on both ecological systems and human well-being. Artificial intelligence and machine learning (AI/ML) methods have garnered much attention in the cyber-physical systems domain in recent times, leading to their adoption in decision-making in diverse applications, including disaster management. This paper explores the current state of AI/ML applications for managing the activities in the aforementioned phases of forest fire management. While conventional machine learning and deep learning methods have been extensively explored for the prevention, detection, and management of forest fires, a systematic classification of these methods into distinct AI research domains is conspicuously absent. This paper gives a comprehensive overview of the state of forest fire research across the more recent and prominent AI/ML disciplines, including big data, classical machine learning, computer vision, explainable AI, generative AI, natural language processing, optimization algorithms, and time series forecasting. By providing a detailed overview of the potential areas of research and identifying the diverse ways AI/ML can be employed in forest fire research, this paper aims to serve as a roadmap for future investigations in this domain.

Keywords: artificial intelligence, computer vision, deep learning, during-fire activities, forest fire management, machine learning, pre-fire activities, post-fire activities

Procedia PDF Downloads 72
93 Forming-Free Resistive Switching Effect in ZnₓTiᵧHfzOᵢ Nanocomposite Thin Films for Neuromorphic Systems Manufacturing

Authors: Vladimir Smirnov, Roman Tominov, Vadim Avilov, Oleg Ageev

Abstract:

The creation of a new generation of micro- and nanoelectronic elements opens up broad possibilities for improving the parameters of electronic devices, as well as for developing neuromorphic computing systems. Interest in the latter is growing every year, which is explained by the need to solve problems related to the unstructured classification of data, the construction of self-adaptive systems, and pattern recognition. However, for its technical implementation, it is necessary to fulfill a number of conditions regarding the basic parameters of the electronic memory, such as non-volatility, multi-bit storage, high integration density, and low power consumption. Several types of memory are available in the electronics industry (MRAM, FeRAM, PRAM, ReRAM), among which non-volatile resistive memory (ReRAM) is especially distinguished by its multi-bit capability, which is necessary for neuromorphic systems manufacturing. ReRAM is based on the effect of resistive switching – a change in the resistance of an oxide film between a low-resistance state (LRS) and a high-resistance state (HRS) under an applied electric field. One of the methods for the technical implementation of neuromorphic systems is cross-bar structures, which are ReRAM cells interconnected by crossed data buses. Such a structure imitates the architecture of the biological brain, which contains low-power computing elements (neurons) connected by special channels (synapses). The choice of the ReRAM oxide film material is an important task that determines the characteristics of the future neuromorphic system. An analysis of the literature showed that many metal oxides (TiO₂, ZnO, NiO, ZrO₂, HfO₂) exhibit a resistive switching effect. It is worth noting that the manufacture of nanocomposites based on these materials makes it possible to emphasize the advantages and suppress the disadvantages of each material. Therefore, as a basis for manufacturing the neuromorphic structures, it was decided to use the ZnₓTiᵧHfzOᵢ nanocomposite. It is also worth noting that the ZnₓTiᵧHfzOᵢ nanocomposite does not need electroforming, a process which degrades the parameters of the formed ReRAM elements. Currently, this material is not well studied; therefore, the study of the resistive switching effect in the forming-free ZnₓTiᵧHfzOᵢ nanocomposite is an important task and the goal of this work. A forming-free nanocomposite ZnₓTiᵧHfzOᵢ thin film was grown by pulsed laser deposition (Pioneer 180, Neocera Co., USA) on a SiO₂/TiN (40 nm) substrate. Electrical measurements were carried out using a semiconductor characterization system (Keithley 4200-SCS, USA) with W probes. During the measurements, the TiN film was grounded. The analysis of the obtained current-voltage characteristics showed resistive switching from the HRS to the LRS at +1.87±0.12 V, and from the LRS to the HRS at -2.71±0.28 V. An endurance test showed that the HRS was 283.21±32.12 kΩ and the LRS was 1.32±0.21 kΩ over 100 measurements. The HRS/LRS ratio was about 214.55 at a reading voltage of 0.6 V. The results can be useful for applying forming-free nanocomposite ZnₓTiᵧHfzOᵢ films in neuromorphic systems manufacturing. This work was supported by RFBR, according to the research project № 19-29-03041 mk. The results were obtained using the equipment of the Research and Education Center «Nanotechnologies» of Southern Federal University.

Keywords: nanotechnology, nanocomposites, neuromorphic systems, RRAM, pulsed laser deposition, resistive switching effect

Procedia PDF Downloads 132
92 Reproductive Biology and Lipid Content of Albacore Tuna (Thunnus alalunga) in the Western Indian Ocean

Authors: Zahirah Dhurmeea, Iker Zudaire, Heidi Pethybridge, Emmanuel Chassot, Maria Cedras, Natacha Nikolic, Jerome Bourjea, Wendy West, Chandani Appadoo, Nathalie Bodin

Abstract:

Scientific advice on the status of fish stocks relies on indicators that are based on strong assumptions about biological parameters such as condition, maturity and fecundity. Currently, information on the biology of albacore tuna, Thunnus alalunga, in the Indian Ocean is scarce. Consequently, many parameters used in stock assessment models for Indian Ocean albacore originate largely from other studied stocks or species of tuna. Inclusion of incorrect biological data in stock assessment models would lead to inappropriate estimates of stock status, which fisheries managers use to establish future catch allowances. The reproductive biology of albacore tuna in the western Indian Ocean was examined through analysis of the sex ratio, spawning season, length-at-maturity (L50), spawning frequency, fecundity and fish condition. In addition, the total lipid content (TL) and lipid class composition in the gonads, liver and muscle tissues of female albacore during the reproductive cycle were investigated. A total of 923 female and 867 male albacore were sampled from 2013 to 2015. A bias in sex ratio was found in favour of females with fork length (LF) <100 cm. Using histological analyses and the gonadosomatic index, spawning was found to occur between 10°S and 30°S, mainly to the east of Madagascar, from October to January. Large females contributed more to reproduction through their longer spawning period compared to small individuals. The L50 (mean ± standard error) of female albacore was estimated at 85.3 ± 0.7 cm LF at the vitellogenic-3 oocyte stage maturity threshold. Albacore spawn on average every 2.2 days within the spawning region during the spawning months from November to January. Batch fecundity varied between 0.26 and 2.09 million eggs, and the relative batch fecundity (mean ± standard deviation) was estimated at 53.4 ± 23.2 oocytes g-1 of somatic-gutted weight. Depending on the maturity stage, TL in ovaries ranged from 7.5 to 577.8 mg g-1 of wet weight (ww) with different proportions of phospholipids (PL), wax esters (WE), triacylglycerol (TAG) and sterol (ST). The highest TL was observed in immature (mostly TAG and PL) and spawning-capable ovaries (mostly PL, WE and TAG). Liver TL varied from 21.1 to 294.8 mg g-1 (ww), the liver acting as an energy store (mainly TAG and PL) prior to reproduction, when the lowest TL was observed. Muscle TL varied from 2.0 to 71.7 mg g-1 (ww) in mature females without a clear pattern between maturity stages, although higher values of up to 117.3 mg g-1 (ww) were found in immature females. The TL results suggest that albacore could be viewed predominantly as a capital breeder, relying mostly on lipids stored before the onset of reproduction, with little additional energy derived from feeding. This study is the first to provide new information on the reproductive development and classification of albacore in the western Indian Ocean. The reproductive parameters will reduce uncertainty in current stock assessment models, which will eventually promote sustainability of the fishery.
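The length-at-maturity reported above (L50 = 85.3 cm LF) is the length at which half of the females are mature; such estimates are conventionally obtained by fitting a logistic maturity ogive to length-class data. The sketch below illustrates the idea in Python with scipy; the length classes and maturity proportions are illustrative, not the study's data, and the authors' actual estimator may differ (e.g., a binomial GLM).

```python
import numpy as np
from scipy.optimize import curve_fit

def maturity_ogive(length, l50, slope):
    """Proportion of mature females as a logistic function of fork length (cm)."""
    return 1.0 / (1.0 + np.exp(-slope * (length - l50)))

# Illustrative length classes (cm LF) and observed proportions mature per class
length = np.array([70, 75, 80, 85, 90, 95, 100, 105], dtype=float)
p_mature = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.92, 0.97, 0.99])

popt, pcov = curve_fit(maturity_ogive, length, p_mature, p0=[85.0, 0.3])
l50, slope = popt
l50_se = np.sqrt(np.diag(pcov))[0]
print(f"L50 = {l50:.1f} ± {l50_se:.1f} cm LF")
```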

Keywords: condition, size-at-maturity, spawning behaviour, temperate tuna, total lipid content

Procedia PDF Downloads 260
91 Phenotypic and Molecular Heterogeneity Linked to the Magnesium Transporter CNNM2

Authors: Reham Khalaf-Nazzal, Imad Dweikat, Paula Gimenez, Iker Oyenarte, Alfonso Martinez-Cruz, Domonik Muller

Abstract:

The metal cation transport mediator (CNNM) gene family comprises four isoforms that are expressed in various human tissues. Structurally, CNNMs are complex proteins that contain an extracellular N-terminal domain preceding a DUF21 transmembrane domain, a ‘Bateman module’ and a C-terminal cNMP-binding domain. Mutations in CNNM2 cause familial dominant hypomagnesaemia. Growing evidence highlights the role of CNNM2 in neurodevelopment. Mutations in CNNM2 have been implicated in epilepsy, intellectual disability, schizophrenia, and other disorders. In the present study, we aim to elucidate the function of CNNM2 in the developing brain, and we present the genetic origin of the symptoms in two family cohorts. In the first family, three siblings of a consanguineous Palestinian family, in which the parents are first cousins and consanguinity ran over several generations, presented with varying degrees of intellectual disability, cone-rod dystrophy, and autism spectrum disorder. Exome sequencing and segregation analysis revealed the presence of a homozygous pathogenic mutation in the CNNM2 gene; the parents were heterozygous for this mutation. Magnesium blood levels were normal in the three children and their parents across several measurements, and they had no symptoms of hypomagnesemia. The CNNM2 mutation in this family was found to be located in the CBS1 domain of the CNNM2 protein. The crystal structure of the mutated CNNM2 protein was not significantly different from that of the wild-type protein, and the binding of AMP or MgATP was not dramatically affected. This suggests that the CBS1 domain could be involved in purely neurodevelopmental functions independent of its magnesium-handling role, and that this mutation could have affected the binding of a protein partner or other functions of the protein. In the second family, another autosomal dominant CNNM2 mutation was found to run in a large family with multiple affected individuals over three generations. All affected family members had hypomagnesemia and hypermagnesuria. Oral supplementation of magnesium did not increase the serum magnesium levels significantly. Some affected members of this family have defects in fine motor skills, such as dyslexia and dyslalia. The detected mutation is located in the N-terminal part of the protein, which contains a signal peptide thought to be involved in the sorting and routing of the protein. In this project, we describe heterogeneous clinical phenotypes related to CNNM2 mutations and protein functions. In the first family, and to the authors’ knowledge for the first time, we report the involvement of CNNM2 in retinal photoreceptor development and function. In addition, we report the presence of a neurophenotype related to the CNNM2 protein mutation that is independent of magnesium status. Taking into account the different modes of inheritance and the different positions of the mutations within CNNM2 and its different structural and functional domains, it is likely that CNNM2 is involved in a wide spectrum of neuropsychiatric comorbidities with considerably varying phenotypes.

Keywords: magnesium transport, autosomal recessive, autism, neurodevelopment, CBS domain

Procedia PDF Downloads 150
90 Wind Resource Classification and Feasibility of Distributed Generation for Rural Community Utilization in North Central Nigeria

Authors: O. D. Ohijeagbon, Oluseyi O. Ajayi, M. Ogbonnaya, Ahmeh Attabo

Abstract:

This study analyzed the electricity generation potential from wind at seven sites spread across seven states of the North-Central region of Nigeria. Twenty-one years (1987 to 2007) of wind speed data at a height of 10 m were obtained from the Nigeria Meteorological Department, Oshodi. The data were subjected to different statistical tests and also compared with the two-parameter Weibull probability density function. The outcome shows that the monthly average wind speeds ranged between 2.2 m/s in November for Bida and 10.1 m/s in December for Jos. The yearly average ranged between 2.1 m/s in 1987 for Bida and 11.8 m/s in 2002 for Jos. The power density for each site was determined to range between 29.66 W/m² for Bida and 864.96 W/m² for Jos. The two parameters of the Weibull distribution were found to range between 2.3 in Lokoja and 6.5 in Jos for the shape parameter k, while the scale parameter c ranged between 2.9 m/s in Bida and 9.9 m/s in Jos. These outcomes point to the fact that wind speeds at Jos, Minna, Ilorin, Makurdi and Abuja are compatible with the cut-in speeds of modern wind turbines and hence may be economically feasible for wind-to-electricity conversion at and above a height of 10 m. The study further assessed the potential and economic viability of standalone wind generation systems for off-grid rural communities located at each of the studied sites. A specific electric load profile was developed to suit hypothetical communities, each consisting of 200 homes, a school and a community health center. An assessment of the design that will optimally meet the daily load demand with a loss of load probability (LOLP) of 0.01 was performed, considering two stand-alone applications: wind and diesel. The diesel standalone system (DSS) was taken as the basis of comparison since the experimental locations have no connection to a distribution network. The HOMER® optimization software was utilized to determine the optimal combination of system components that will yield the lowest life cycle cost. Following the analysis for rural community utilization, a Distributed Generation (DG) analysis that considered the possibility of generating wind power in the MW range, in order to take advantage of Nigeria’s tariff regime for embedded generation, was carried out for each site. The DG design incorporated each community of 200 homes, supplied free of charge and offset against the excess electrical energy generated above the minimum requirement for sale to a nearby distribution grid. Wind DG systems were found suitable and viable for producing environmentally friendly energy in terms of life cycle cost and the levelised value of producing energy at Jos ($0.14/kWh), Minna ($0.12/kWh), Ilorin ($0.09/kWh), Makurdi ($0.09/kWh), and Abuja ($0.04/kWh) at a particular turbine hub height. These outputs reveal the value retrievable from the project after the breakeven point as a function of energy consumed. Based on the results, the study demonstrated that including renewable energy in the rural development plan will enhance rapid upgrading of the rural communities.
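For a two-parameter Weibull wind-speed distribution, the mean wind power density follows directly from the shape parameter k and scale parameter c. A minimal sketch of this standard relation is shown below; the air density and the example (k, c) pairs are illustrative and not necessarily the exact values underlying the figures reported above.

```python
import numpy as np
from scipy.special import gamma

def weibull_power_density(k, c, rho=1.225):
    """Mean wind power density (W/m²) for a two-parameter Weibull distribution
    with shape k (dimensionless) and scale c (m/s), at air density rho (kg/m³)."""
    return 0.5 * rho * c**3 * gamma(1.0 + 3.0 / k)

# Illustrative shape/scale pairs spanning the reported range (not exact site inputs)
for site, k, c in [("low-wind site", 2.3, 2.9), ("high-wind site", 6.5, 9.9)]:
    print(f"{site}: {weibull_power_density(k, c):.1f} W/m²")
```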

Keywords: wind speed, wind power, distributed generation, cost per kilowatt-hour, clean energy, North-Central Nigeria

Procedia PDF Downloads 512
89 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships are used to emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides relevant accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unmapped, as there are many gaps still to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. A solution is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans (GEBCO). The offered products are a compilation of different sets of data, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g. seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, sea surface height and marine gravity anomalies can be estimated, and based on the anomalies, it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America – a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms used, model thickening, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis. Visualization of the obtained results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge of the quality and usefulness of the data used for seabed and sea surface modeling, and of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.); its changes, along with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by such phenomena as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, or phases of ocean circulation. Depending on the location of the point, the greater the depth, the lower the trend of sea level change. Studies show that combining data sets from different sources and with different accuracies can affect the quality of sea surface and seafloor topography models.
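Raster bathymetric grids such as the GEBCO products mentioned above are distributed as NetCDF files and can be subset to a regional study area before interpolation or merging with survey data. A brief, hedged sketch using xarray is given below; the file name and the variable/coordinate names ("elevation", "lat", "lon") follow common GEBCO conventions and are assumptions rather than references to the authors' actual files.

```python
import xarray as xr

# Hypothetical GEBCO-style grid; coordinate and variable names may differ by product.
ds = xr.open_dataset("gebco_subset.nc")

# Subset a region off the east coast of North America (degrees east / north)
region = ds.sel(lon=slice(-75.0, -55.0), lat=slice(30.0, 45.0))
elevation = region["elevation"]  # negative values are depths below sea level

print(float(elevation.min()), float(elevation.max()))  # depth range in the region
# elevation.plot()  # quick-look map via matplotlib, if desired
```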

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 80
88 Modification of Hyrax Expansion Screw to Be Used as an Intra-Oral Distractor for Anterior Maxillary Distraction in a Patient with Cleft Lip and Palate: A Case Report

Authors: Ananya Hazare, Ranjit Kamble

Abstract:

Introduction: Patients with cleft lip and palate (CL/P) can present with maxillary retrusion after cleft repair. Anterior maxillary distraction osteogenesis (AMD) is a technique that provides simultaneous skeletal advancement and expansion of the soft tissues related to the anterior segment of the maxilla. The case presented here is a case of AMD. The advantage of this technique is that the occlusion in the posterior segment can be maintained, and only the segment in crossbite is advanced for correction of the midfacial deficiency. The alternative treatment is anterior movement by a Le Fort 1 osteotomy. When a Le Fort 1 osteotomy is compared with distraction osteogenesis or AMD, the disadvantages of the Le Fort 1 include a higher risk of morbidity, the requirement for fixation, a tendency to relapse and unexpected changes in nasal form. These complications were eliminated by the AMD technique. AMD was followed by placement of an implant in the bone formed after distraction. Hence, complete surgical, orthodontic and prosthodontic rehabilitation of the patient was achieved through an interdisciplinary approach. Methods: The patient presented with a repaired UCL/P of the right side with midfacial retrusion. Intra-oral examination revealed good occlusion in the posterior arch and an anterior crossbite from canine to canine. Both maxillary lateral incisors were missing. The lower arch was well aligned with all teeth present. The study models, when scored according to the GOSLON yardstick, received a score of 4. After the pre-surgical orthodontic phase was completed, an intra-oral distractor was fabricated by modification of a HYRAX expansion screw. At surgery, low subapical osteotomy cuts were placed and the distractor was fixed. A latency period of 5 days was observed, after which distraction was started. Distraction was done at a rate of 1 mm/day with a rhythm of 0.5 mm in the morning and 0.5 mm in the evening. A total distraction of 12 mm was performed. After a consolidation period, the distractor was removed, and retention with a removable partial denture was given. Radiographic examination confirmed mature bone formation in the distracted segment. Implants were placed, allowed to osseointegrate for approximately 4 months, and then loaded with abutments. Results: The total distraction achieved was 12 mm, and after relapse it was 8 mm. After the consolidation phase, radiographic examination revealed B2 bone quality according to Misch's classification and sufficient height from the maxillary sinus. These findings were indicative for placement of implants in the distracted bone formed in the premolar region. Implants were placed and, after radiographic evidence of osseointegration was seen, they were loaded with abutments, thus resulting in the complete rehabilitation of a cleft patient by an interdisciplinary approach. Conclusion: Anterior maxillary distraction can be used as an alternative to complete distraction osteogenesis or Le Fort 1 advancement of the maxilla in cases where the advancement needed is minimal. A HYRAX expansion screw modified as an intra-oral distractor can be used in such cases, which significantly reduces the cost of treatment, as expensive commercial distractors are not used. This technique is very useful and efficient in countries like India, where patients often cannot afford expensive treatment options.

Keywords: cleft lip and palate, distraction osteogenesis, anterior maxillary distraction, orthodontics and dentofacial orthopaedics, hyrax expansion screw modification

Procedia PDF Downloads 256
87 Multiscale Modelization of Multilayered Bi-Dimensional Soils

Authors: I. Hosni, L. Bennaceur Farah, N. Saber, R. Bennaceur

Abstract:

Soil moisture content is a key variable in many environmental sciences. Even though it represents a small proportion of the liquid freshwater on Earth, it modulates interactions between the land surface and the atmosphere, thereby influencing climate and weather. Accurate modeling of the above processes depends on the ability to provide a proper spatial characterization of soil moisture. The measurement of soil moisture content allows assessment of soil water resources in the fields of hydrology and agronomy. The second parameter in interaction with the radar signal is the geometric structure of the soil. Most traditional electromagnetic models consider natural surfaces as single-scale, zero-mean, stationary Gaussian random processes. Roughness behavior is characterized by statistical parameters such as the root mean square (RMS) height and the correlation length. The main problem is that the agreement between experimental measurements and theoretical values is usually poor due to the large variability of the correlation function; as a consequence, backscattering models have often failed to predict backscattering correctly. In this study, surfaces are considered as band-limited fractal random processes corresponding to a superposition of a finite number of one-dimensional Gaussian processes, each one having its own spatial scale. Multiscale roughness is characterized by two parameters: the first is proportional to the RMS height, and the other is related to the fractal dimension. Soil moisture is related to the complex dielectric constant. This multiscale description has been adapted to two-dimensional profiles using the bi-dimensional wavelet transform and the Mallat algorithm, in order to describe natural surfaces more correctly. We characterize the soil surfaces and sub-surfaces by a three-layer geo-electrical model. The upper layer is described by its dielectric constant, its thickness, a multiscale bi-dimensional surface roughness model based on the wavelet transform and the Mallat algorithm, and volume scattering parameters. The lower layer is divided into three fictitious layers separated by assumed plane interfaces. These three layers were modeled by an effective medium characterized by an apparent effective dielectric constant that takes into account the presence of air pockets in the soil. We have adopted the 2D multiscale three-layer small perturbation model (SPM), including first air pockets in the soil sub-structure and then a vegetation canopy in the soil surface structure, to simulate the radar backscattering. A sensitivity analysis of the dependence of the backscattering coefficient on multiscale roughness and soil moisture has been performed. Later, we proposed to modify the dielectric constant of the multilayer medium so that it takes into account the different moisture values of each layer in the soil. A sensitivity analysis of the backscattering coefficient, including the air pockets in the volume structure, with respect to the multiscale roughness parameters and the apparent dielectric constant was carried out. Finally, we proposed to study the behavior of the radar backscattering coefficient for a soil having a vegetation layer in its surface structure.
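The bi-dimensional wavelet transform via the Mallat algorithm decomposes a measured height profile into detail coefficients at successive spatial scales, which is the basis of the multiscale roughness description used here. A small, hedged sketch with PyWavelets is shown below; the synthetic surface and the choice of the 'db4' wavelet and four levels are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import pywt  # PyWavelets provides the Mallat multiresolution algorithm

rng = np.random.default_rng(0)

# Synthetic stand-in for a gridded 2D height profile (256 x 256 samples)
surface = rng.standard_normal((256, 256))

# Two-dimensional multilevel wavelet decomposition (Mallat algorithm)
coeffs = pywt.wavedec2(surface, wavelet="db4", level=4)
approx, details = coeffs[0], coeffs[1:]  # details ordered coarsest to finest

# RMS of the detail coefficients characterizes the roughness contribution per scale
for i, (cH, cV, cD) in enumerate(details):
    rms = np.sqrt(np.mean(cH**2 + cV**2 + cD**2))
    print(f"detail level {i + 1} (coarse to fine): RMS = {rms:.3f}")
```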

Keywords: multiscale, bidimensional, wavelets, backscattering, multilayer, SPM, air pockets

Procedia PDF Downloads 125
86 Performance Evaluation of Various Displaced Left Turn Intersection Designs

Authors: Hatem Abou-Senna, Essam Radwan

Abstract:

With increasing traffic and limited resources, accommodating left-turning traffic has been a challenge for traffic engineers as they seek a balance between intersection capacity and safety; these are two conflicting goals in the operation of a signalized intersection that are mitigated through signal phasing techniques. Hence, to increase left-turn capacity and reduce delay at intersections, the Florida Department of Transportation (FDOT) is moving forward with a vision of optimizing intersection control using innovative intersection designs through the Transportation Systems Management & Operations (TSM&O) program. These alternative designs successfully eliminate the left-turn phase, which otherwise reduces the conventional intersection's (CI) efficiency considerably, and divide the intersection into smaller networks that operate in a one-way fashion. This study focused on Crossover Displaced Left-turn (XDL) intersections, also known as Continuous Flow Intersections (CFI). The XDL concept is best suited for intersections with moderate to high overall traffic volumes, especially those with very high or unbalanced left-turn volumes. There is little guidance on determining whether partial XDL intersections are adequate to mitigate the overall intersection condition or whether a full XDL is always required. The primary objective of this paper was to evaluate the overall intersection performance for different partial XDL designs compared to a full XDL. The XDL alternative was investigated for four different scenarios: partial XDL on the east-west approaches, partial XDL on the north-south approaches, partial XDL on the north and east approaches, and full XDL on all four approaches. Also, the impact of increasing volume on intersection performance was considered by modeling the unbalanced volumes in 10% increments, resulting in five different traffic scenarios. The study intersection, located in Orlando, Florida, is experiencing recurring congestion in the PM peak hour and is operating near capacity, with a volume-to-capacity ratio close to 1.00, due to the presence of two heavy conflicting movements: southbound and westbound. The results showed that a partial EN XDL alternative proved to be effective and compared favorably to a full XDL alternative, followed by the partial EW XDL alternative. The analysis also showed that the Full, EW and EN XDL alternatives outperformed the NS XDL and the CI alternatives with respect to throughput, delay and queue lengths. Throughput improvements were remarkable at the higher volume level, with a 25% increase in capacity. The percent reduction in delay for the critical movements in the XDL scenarios compared to the CI scenario ranged from 30% to 45%. Similarly, queue lengths in the XDL scenarios showed reductions ranging from 25% to 40%. The analysis revealed how a partial XDL design can improve the overall intersection performance at various demands, reduce the costs associated with a full XDL, and outperform the conventional intersection. However, a partial XDL serving low volumes or only one of the critical movements, while other critical movements are operating near or above capacity, does not provide significant benefits when compared to the conventional intersection.

Keywords: continuous flow intersections, crossover displaced left-turn, microscopic traffic simulation, transportation system management and operations, VISSIM simulation model

Procedia PDF Downloads 310
85 Establishing Correlation between Urban Heat Island and Urban Greenery Distribution by Means of Remote Sensing and Statistics Data to Prioritize Revegetation in Yerevan

Authors: Linara Salikhova, Elmira Nizamova, Aleksandra Katasonova, Gleb Vitkov, Olga Sarapulova

Abstract:

While most European cities conduct research on heat-related risks, there is a research gap in the Caucasus region, particularly in Yerevan, Armenia. This study aims to test a method of establishing a correlation between the urban heat island (UHI) and the distribution of urban greenery in order to prioritize heat-vulnerable areas for revegetation. Armenia has failed to consider measures to mitigate the UHI in its urban development strategies, despite a 2.1°C increase in average annual temperature over the past 32 years. However, planting vegetation in the city is commonly used to deal with air pollution and can be effective in reducing the UHI if it prioritizes heat-vulnerable areas. The research focuses on establishing such priorities while considering the distribution of urban greenery across the city. The lack of spatially explicit air temperature data necessitated the use of satellite images to achieve the following objectives: (1) identification of land surface temperatures (LST) and quantification of temperature variations across districts; (2) classification of massifs of land surface types using the normalized difference vegetation index (NDVI); (3) correlation of land surface classes with LST. Examination of the heat-vulnerable city areas (in this study, the proportion of individuals aged 75 years and above) is based on demographic data (2011 Census). NDVI calculations were conducted based on satellite images (Sentinel-2) captured on June 5, 2021. The massifs of the land surface were divided into five surface classes. Due to capacity limitations, the average LST for each district was identified using one Landsat-8 satellite image from August 15, 2021. In this research, local relief is not considered, as the study mainly focuses on the interconnection between temperatures and green massifs. The average temperature in the city is 3.8°C higher than in the surrounding non-urban areas. The temperature excess ranges from a low in Norq Marash to a high in Nubarashen. Norq Marash and Avan have the highest tree and grass coverage proportions, with 56.2% and 54.5%, respectively. In the other districts, the share of wastelands and buildings is three times higher than that of grass and trees, ranging from 49.8% in Quanaqer-Zeytun to 76.6% in Nubarashen. Studies have shown that decreased tree and grass coverage within a district correlates with a higher temperature increase. The temperature excess is highest in the Erebuni, Ajapnyak, and Nubarashen districts. These districts have less than 25% of their area covered with grass and trees. On the other hand, the Avan and Norq Marash districts have a lower temperature difference, as more than 50% of their areas are covered with trees and grass. According to the findings, a significant proportion (35%) of the population aged 75 years and above resides in the Erebuni, Ajapnyak, and Shengavit neighborhoods, which are more susceptible to heat stress, with an LST higher than in other city districts. The findings suggest that the method of comparing the distribution of green massifs and LST can contribute to the prioritization of heat-vulnerable city areas for revegetation. The method can become a rationale for the formation of an urban greening program.
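The NDVI used above to classify the land surface into five classes is computed per pixel from the red and near-infrared reflectances, NDVI = (NIR − Red) / (NIR + Red); for Sentinel-2 these correspond to bands B04 and B08. The sketch below illustrates the calculation in Python with rasterio; the file names and the class thresholds are hypothetical placeholders, not the study's actual inputs or class boundaries.

```python
import numpy as np
import rasterio

# Hypothetical Sentinel-2 band rasters (B04 = red, B08 = near infrared, 10 m)
with rasterio.open("S2_20210605_B04.tif") as red_src, \
     rasterio.open("S2_20210605_B08.tif") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")

# NDVI = (NIR - Red) / (NIR + Red), guarding against a zero denominator
denom = nir + red
ndvi = np.where(denom == 0, 0.0, (nir - red) / denom)

# Illustrative thresholds splitting the surface into five classes (0..4)
classes = np.digitize(ndvi, bins=[0.0, 0.2, 0.4, 0.6])  # water/built-up ... dense greenery
print("mean NDVI:", float(ndvi.mean()))
```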

Keywords: heat-vulnerability, land surface temperature, urban greenery, urban heat island, vegetation

Procedia PDF Downloads 72
84 Thermoluminescence Investigations of Tl₂Ga₂Se₃S Layered Single Crystals

Authors: Serdar Delice, Mehmet Isik, Nizami Hasanli, Kadir Goksen

Abstract:

Researchers have devoted great interest to ternary and quaternary semiconductor compounds, especially with the improvement of optoelectronic technology. The quaternary compound Tl₂Ga₂Se₃S, which was grown by the Bridgman method, carries the properties of the ternary thallium chalcogenide group of semiconductors with a layered structure. This compound can be formed from TlGaSe₂ crystals by replacing one quarter of the selenium atoms with sulfur atoms. Although Tl₂Ga₂Se₃S crystals are not intentionally doped, some unintended defect types, such as point defects, dislocations and stacking faults, can occur during the crystal growth process. These defects can cause undesirable problems in semiconductor materials, especially those produced for optoelectronic technology. Defects of various types in semiconductor devices, such as LEDs and field-effect transistors, may act as non-radiative or scattering centers in electron transport. Also, quick recombination of holes with electrons without any energy transfer between charge carriers can occur due to the existence of defects. Therefore, the characterization of defects may help researchers working in this field to produce high-quality devices. Thermoluminescence (TL) is an effective experimental method to determine the kinetic parameters of trap centers due to defects in crystals. In this method, the sample is illuminated at low temperature by light whose energy is larger than the band gap of the studied sample. Thus, charge carriers in the valence band are excited to a delocalized band. The charge carriers excited into the conduction band are then trapped. The trapped charge carriers are released by heating the sample gradually, and these carriers then recombine with the opposite carriers at recombination centers. In this way, luminescence is emitted from the sample. The emitted luminescence is converted to pulses using an experimental setup controlled by a computer program, and the TL spectrum is obtained. Defect characterization of Tl₂Ga₂Se₃S single crystals has been performed by TL measurements at low temperatures between 10 and 300 K with various heating rates ranging from 0.6 to 1.0 K/s. The TL signal due to the luminescence from trap centers revealed one glow peak with a maximum at a temperature of 36 K. Curve fitting and various heating rates methods were used for the analysis of the glow curve. An activation energy of 13 meV was found by application of the curve fitting method. This practical method also established that the trap center exhibits the characteristics of mixed (general) kinetic order. In addition, the various heating rates analysis gave a result (13 meV) compatible with curve fitting when the temperature lag effect was taken into consideration. Since the studied crystals were not intentionally doped, these centers are thought to originate from stacking faults, which are quite possible in Tl₂Ga₂Se₃S due to the weakness of the van der Waals forces between the layers. The distribution of traps was also investigated using an experimental method, and a quasi-continuous distribution was attributed to the determined trap centers.
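In the various heating rates method mentioned above, the activation energy E follows from how the glow-peak maximum temperature Tm shifts with heating rate β: plotting ln(Tm²/β) against 1/Tm gives a straight line of slope E/k_B. A minimal sketch is shown below; the (β, Tm) pairs are synthetic values chosen only to be roughly consistent with the reported peak near 36 K and energy near 13 meV, not measured data.

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant in eV/K

# Synthetic glow-peak maxima Tm (K) at the heating rates beta (K/s) used in the study
beta = np.array([0.6, 0.7, 0.8, 0.9, 1.0])
Tm = np.array([34.6, 35.4, 36.2, 36.9, 37.6])

# Various heating rates method: ln(Tm^2 / beta) = E / (k_B * Tm) + const
x = 1.0 / Tm
y = np.log(Tm**2 / beta)
slope, intercept = np.polyfit(x, y, 1)

E = slope * k_B  # trap activation energy in eV
print(f"activation energy ≈ {E * 1e3:.1f} meV")
```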

Keywords: chalcogenides, defects, thermoluminescence, trap centers

Procedia PDF Downloads 282
83 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are being used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of such a model, this implementation proposes the parallelization of tasks that facilitate the execution of the matrix operations and of a two-dimensional optimization function used to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate algorithm parallelization techniques, using as a guide the rules of generalization, efficiency, and scalability in the tactile decoding process, and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform that allows the reconstruction of force vectors following a scalable approach, from the information captured by tactile sensor arrays composed of up to 48×48 taxels that use various transduction technologies. The proposed implementation demonstrates a reduction in estimation time of x/180 compared to software implementations. Despite the relatively high values of the estimation errors, the information provided by this implementation on the tangential and normal tractions and on the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed; these are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced further, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
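The model-driven reconstruction described above ultimately amounts to a linear inverse problem per array: the measured normal-stress pattern is related to the unknown taxel forces through a model matrix, which is then inverted (here expressed as a regularized least-squares solve). The sketch below is a generic illustration of that formulation, not the authors' model or their FPGA design; the influence matrix, array size, and regularization weight are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 10x10 taxel array; p = A @ f, where A is an influence (coupling)
# matrix mapping per-taxel contact forces to the measured normal-stress pattern.
# In a model-driven approach A would come from the sensor's mechanical model.
n = 10 * 10
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))

f_true = np.maximum(rng.standard_normal(n), 0.0)      # non-negative contact forces
p_meas = A @ f_true + 0.01 * rng.standard_normal(n)   # noisy stress measurements

# Tikhonov-regularized least-squares inversion (the matrix operations that a
# hardware implementation would parallelize)
lam = 1e-3
f_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ p_meas)

print("relative reconstruction error:",
      np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```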

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 195
82 Single Cell Analysis of Circulating Monocytes in Prostate Cancer Patients

Authors: Leander Van Neste, Kirk Wojno

Abstract:

The innate immune system reacts to foreign insult in several unique ways, one of which is phagocytosis of perceived threats such as cancer, bacteria, and viruses. The goal of this study was to look for evidence of phagocytosed RNA from tumor cells in circulating monocytes. While all monocytes possess phagocytic capabilities, the non-classical CD14+/FCGR3A+ monocytes and the intermediate CD14++/FCGR3A+ monocytes most actively remove threatening ‘external’ cellular materials. Purified CD14-positive monocyte samples from fourteen patients recently diagnosed with clinically localized prostate cancer (PCa) were investigated by single-cell RNA sequencing using the 10X Genomics protocol, followed by paired-end sequencing on Illumina’s NovaSeq. Samples from seven control subjects were processed similarly: one patient who underwent biopsy but was found not to harbor prostate cancer (benign), three young, healthy men, and three men previously diagnosed with prostate cancer who had recently undergone (curative) radical prostatectomy (post-RP). Sequencing data were mapped using 10X Genomics’ CellRanger software, and viable cells were subsequently identified using CellBender, removing technical artifacts such as doublets and non-cellular RNA. Next, data analysis was performed in R using the Seurat package. Because the main goal was to identify differences between PCa patients and ‘control’ patients, rather than exploring differences between individual subjects, the individual Seurat objects of all 21 patients were merged into one Seurat object, per Seurat’s recommendation. Finally, the single-cell dataset was normalized as a whole prior to further analysis. Cell identity was assessed using the SingleR and celldex packages. The Monaco Immune Data was selected as the reference dataset, consisting of bulk RNA-seq data of sorted human immune cells. The Monaco classification was supplemented with normalized PCa data obtained from The Cancer Genome Atlas (TCGA), which consists of bulk RNA sequencing data from 499 prostate tumor tissues (including 1 metastatic) and 52 (adjacent) normal prostate tissues. SingleR was subsequently run on the combined immune cell and PCa datasets. As expected, the vast majority of cells were labeled as having a monocytic origin (~90%), with the most noticeable difference being the larger number of intermediate monocytes in the PCa patients (13.6% versus 7.1%; p<.001). In men harboring PCa, 0.60% of all purified monocytes were classified as harboring PCa signals when the TCGA data were included. This was 3-fold, 7.5-fold, and 4-fold higher compared to post-RP, benign, and young men, respectively (all p<.001). In addition, at 7.91%, the proportion of unclassified cells, i.e., cells with pruned labels due to high uncertainty of the assigned label, was also highest in men with PCa, compared to 3.51%, 2.67%, and 5.51% of cells in post-RP, benign, and young men, respectively (all p<.001). It can be postulated that actively phagocytosing cells are hardest to classify due to their dual immune cell and foreign cell nature. Hence, the higher number of unclassified cells and intermediate monocytes in PCa patients might reflect higher phagocytic activity due to tumor burden. This also illustrates that small numbers (~1%) of circulating peripheral blood monocytes that have interacted with tumor cells might still possess detectable phagocytosed tumor RNA.
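The group comparisons reported above (e.g., 13.6% versus 7.1% intermediate monocytes, p<.001) are comparisons of cell-type proportions between patient groups; one common way to test such a difference is a two-proportion z-test. The sketch below illustrates that test in Python with statsmodels; the cell counts are hypothetical numbers chosen only to match the reported percentages, not the study's raw data, and the authors' actual statistical procedure may differ.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts (NOT the study's raw numbers), chosen to reproduce the
# reported proportions of intermediate monocytes: 13.6% (PCa) vs 7.1% (controls)
count = np.array([1360, 710])      # intermediate monocytes per group
nobs = np.array([10000, 10000])    # total classified monocytes per group

z_stat, p_value = proportions_ztest(count, nobs)
print(f"z = {z_stat:.2f}, p = {p_value:.2e}")
```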

Keywords: circulating monocytes, phagocytic cells, prostate cancer, tumor immune response

Procedia PDF Downloads 162
81 A Geospatial Approach to Coastal Vulnerability Using Satellite Imagery and Coastal Vulnerability Index: A Case Study of Mauritius

Authors: Manta Nowbuth, Marie Anais Kimberley Therese

Abstract:

The vulnerability of coastal areas to storm surges stands as a critical global concern. The increasing frequency and intensity of extreme weather events have increased the risks faced by communities living along coastlines worldwide. Small Island Developing States (SIDS) stand out as exceptionally vulnerable: their coastal regions, where ecosystems, human habitation and natural forces meet, are on the frontlines of climate-induced challenges, and the intensification of storm surges underscores the urgent need for a comprehensive understanding of coastal vulnerability. With limited landmass, low-lying terrain, and reliance on coastal resources, SIDS face an amplified vulnerability to the consequences of storm surges; the delicate balance between human activities and environmental dynamics in these island nations increases the urgency of tailored strategies for assessing and mitigating coastal vulnerability. This research combines satellite imagery analysis with a coastal vulnerability index to evaluate the vulnerability of coastal communities in Mauritius. The satellite imagery analysis makes use of Sentinel satellite imagery, the modified normalised difference water index, classification techniques and the DSAS add-on to quantify the extent of shoreline erosion or accretion, providing a spatial perspective on coastal vulnerability. The Coastal Vulnerability Index (CVI) is applied using the formula of Gornitz et al.; this index considers factors such as coastal slope, sea level rise, mean significant wave height, and tidal range. Weighted assessments identify regions with varying levels of vulnerability, ranging from low to high. The study was carried out in a village located in the south of Mauritius, namely Rivière des Galets, with a population of about 500 people over an area of 60,000 m². Because the village of Rivière des Galets is located in the south, and the southern coast of Mauritius is exposed to the open Indian Ocean, it is vulnerable to swells. The swells generated by the southeast trade winds can lead to large waves and rough sea conditions along the southern coastline, which have an impact on coastal activities, including fishing, tourism and coastal infrastructure. On the one hand, the results highlighted that, over a 123 km stretch of coastline, the linear regression rate for the 5-year span varies from -24.1 m/yr to 8.2 m/yr; the maximum rate of change in terms of eroded land is -24 m/yr and the maximum rate of accretion is 8.2 m/yr. On the other hand, the coastal vulnerability index varies from 9.1 to 45.6 and was categorised into low, moderate, high and very high risk zones. It was observed that regions which lack protective barriers and consist of sandy beaches are categorised as high-risk zones; hence it is imperative to prioritize high-risk regions for immediate attention, as they will most likely be exposed to coastal hazards and impacts from climate change, which demands proactive measures for enhanced resilience and sustainable adaptation strategies.
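The CVI in the form popularized by Gornitz et al. is the square root of the product of the ranked vulnerability variables divided by the number of variables, with each variable ranked from 1 (very low) to 5 (very high). A small sketch is given below; the six variables listed in the comment are the classic set used with this formula (the abstract names a subset), and the example rankings are hypothetical, not values from this study.

```python
import numpy as np

def coastal_vulnerability_index(ranks):
    """CVI = sqrt(product of ranked variables / number of variables),
    with each variable ranked from 1 (very low) to 5 (very high)."""
    ranks = np.asarray(ranks, dtype=float)
    return np.sqrt(np.prod(ranks) / ranks.size)

# Hypothetical rankings for one shoreline segment, in the classic order:
# geomorphology, coastal slope, relative sea-level rise, shoreline change,
# mean significant wave height, mean tidal range
segment_ranks = [4, 3, 3, 5, 4, 2]
print(f"CVI = {coastal_vulnerability_index(segment_ranks):.1f}")
```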

Keywords: climate change, coastal vulnerability, disaster management, remote sensing, satellite imagery, storm surge

Procedia PDF Downloads 9
80 Plasmonic Biosensor for Early Detection of Environmental DNA (eDNA) Combined with Enzyme Amplification

Authors: Monisha Elumalai, Joana Guerreiro, Joana Carvalho, Marta Prado

Abstract:

The popularity of DNA biosensors has been increasing over the past few years. Traditional analytical techniques tend to require complex steps and expensive equipment; DNA biosensors, however, have the advantage of being simple, fast and economical. Additionally, the combination of DNA biosensors with nanomaterials offers the opportunity to improve the selectivity, sensitivity and overall performance of the devices. DNA biosensors are based on oligonucleotides as sensing elements. These oligonucleotides are highly specific to complementary DNA sequences, resulting in the hybridization of the strands. DNA biosensors are not only advantageous in the clinical field but are also applicable in numerous research areas such as food analysis or environmental control. Zebra mussels (ZM), Dreissena polymorpha, are an invasive species responsible for enormous negative impacts on the environment and ecosystems. Generally, ZM are detected when adults or macroscopic larvae are observed; however, at this stage it is too late to avoid the harmful effects. Therefore, there is a need to develop an analytical tool for the early detection of ZM. Here, we present a portable plasmonic biosensor for the detection of environmental DNA (eDNA) released into the environment by this invasive species. The plasmonic DNA biosensor uses gold nanoparticles as transducer elements due to their great optical properties and high sensitivity. The detection strategy is based on the immobilization of a short DNA sequence on the nanoparticle surface, followed by specific hybridization in the presence of a complementary target DNA. The hybridization events are tracked by the optical response provided by the nanospheres and their surrounding environment. The DNA sequences (synthetic target and probes) used to detect zebra mussel were designed using Geneious software in order to maximize specificity. Moreover, to increase the optical response, enzymatic amplification of the DNA might be used. The gold nanospheres were synthesized and characterized by UV-visible spectrophotometry and transmission electron microscopy (TEM). The obtained nanospheres present a maximum localized surface plasmon resonance (LSPR) peak position around 519 nm and a diameter of 17 nm. The DNA probes, modified with a sulfur group at one end of the sequence, were then loaded onto the gold nanospheres at different ionic strengths and DNA probe concentrations. The optimal DNA probe loading will be selected based on the stability of the optical signal, followed by the hybridization study. The hybridization process leads to either nanoparticle dispersion or aggregation, depending on the presence or absence of the target DNA. Finally, this detection system will be integrated into an optical sensing platform. Considering that the developed device will be used in the field, it should fulfill the requirements of low cost and portability. Sensing devices based on specific DNA detection hold great potential and can be exploited for in-loco sensing applications.

Keywords: ZM DNA, DNA probes, nicking enzyme, gold nanoparticles

Procedia PDF Downloads 245
79 Environmental Impacts of Point and Non-Point Source Pollution in Krishnagiri Reservoir: A Case Study in South India

Authors: N. K. Ambujam, V. Sudha

Abstract:

Reservoirs all around the world are being contaminated with point source and non-point source (NPS) pollution. The most common NPS pollutants are sediments and nutrients. Krishnagiri Reservoir (KR), located in the tropical semi-arid climatic zone of Tamil Nadu, South India, has been chosen for the present case study. It is the main source of surface water in Krishnagiri district for meeting freshwater demands. The reservoir has lost about 40% of its water holding capacity due to sedimentation over a period of 50 years. Hence, from the research and management perspective, there is a need for sound knowledge of the spatial and seasonal variations of KR water quality. The specific objectives of the present study are (i) to investigate the longitudinal heterogeneity and seasonal variations of the physicochemical parameters, nutrients and biological characteristics of KR water and (ii) to examine the extent of degradation of water quality in KR. Fifteen sampling points were identified by a uniform stratified method, and a systematic monthly sampling strategy was selected due to the highly dynamic nature of the reservoir's hydrological characteristics. The physicochemical parameters, major ions, nutrients and chlorophyll a (Chl a) were analysed. The trophic status of KR was classified using Carlson's Trophic State Index (TSI). All statistical analyses were performed using the Statistical Package for the Social Sciences, version 16.0. Spatial maps were prepared for Chl a using ArcGIS. Observations in KR pointed out that electrical conductivity and the major ions are highly variable factors, as the reservoir receives inflow from a catchment with different land use activities. The study of major ions in KR exhibited different trends in their values, and it could be concluded that as the monsoon progresses, the major ions in the water decrease or the water quality stabilizes. The inflow point of KR showed comparatively higher concentrations of nutrients, including nitrate, soluble reactive phosphorus (SRP), total phosphorus (TP), total suspended phosphorus (TSP) and total dissolved phosphorus (TDP), during the monsoon seasons. This evidently showed the input of significant amounts of nutrients from the catchment through agricultural runoff. High concentrations of TDP and TSP in the lacustrine zone of the reservoir during the summer season revealed a significant release of phosphorus from the bottom sediments. Carlson's TSI of KR ranged between 81 and 92 during the northeast monsoon and summer seasons. The high and permanent cyanobacterial bloom in KR could be mainly due to the internal loading of phosphorus from the bottom sediments. According to Carlson's TSI classification, Krishnagiri Reservoir was ranked in the hyper-eutrophic category. This study provides necessary basic data on the spatio-temporal variations of water quality in KR and also demonstrates the impact of point and NPS pollution from the catchment area. The high TSI indicates a serious threat to recovery from internal P loading and the hyper-eutrophic condition of KR. Several expensive internal measures for the reduction of internal P loading have been introduced by scientists; however, the outcome of the present research suggests an innovative algae harvesting technique for the removal of sediment nutrients.
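Carlson's TSI, used above to rank the reservoir as hyper-eutrophic, is computed on a common 0-100 scale from chlorophyll a, total phosphorus, or Secchi depth using Carlson's (1977) regression equations. A short sketch of these standard equations is given below; the input values are illustrative, not the study's measurements.

```python
import numpy as np

def tsi_chlorophyll(chl_ug_per_l):
    """Carlson's TSI from chlorophyll a (µg/L)."""
    return 9.81 * np.log(chl_ug_per_l) + 30.6

def tsi_total_phosphorus(tp_ug_per_l):
    """Carlson's TSI from total phosphorus (µg/L)."""
    return 14.42 * np.log(tp_ug_per_l) + 4.15

def tsi_secchi_depth(sd_m):
    """Carlson's TSI from Secchi depth (m)."""
    return 60.0 - 14.41 * np.log(sd_m)

# Illustrative values; TSI > 70 is commonly read as hyper-eutrophic
print(f"TSI(Chl a = 150 µg/L) = {tsi_chlorophyll(150.0):.1f}")
print(f"TSI(TP = 400 µg/L)    = {tsi_total_phosphorus(400.0):.1f}")
print(f"TSI(SD = 0.3 m)       = {tsi_secchi_depth(0.3):.1f}")
```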

Keywords: NPS pollution, nutrients, hyper-eutrophication, krishnagiri reservoir

Procedia PDF Downloads 324
78 The Effect of Online Analyzer Malfunction on the Performance of Sulfur Recovery Unit and Providing a Temporary Solution to Reduce the Emission Rate

Authors: Hamid Reza Mahdipoor, Mehdi Bahrami, Mohammad Bodaghi, Seyed Ali Akbar Mansoori

Abstract:

Nowadays, with stricter limitations to reduce emissions, considerable penalties are imposed if pollution limits are exceeded. Therefore, refineries, along with focusing on improving the quality of their products, are also focused on producing them with the least environmental impact. The duty of the sulfur recovery unit (SRU) is to convert the H₂S gas coming from the upstream units to elemental sulfur and to minimize the burning of sulfur compounds to SO₂. The Claus process is a common process for converting H₂S to sulfur, comprising a reaction furnace followed by catalytic reactors and sulfur condensers. In addition to a Claus section, SRUs usually include a tail gas treatment (TGT) section to decrease the concentration of SO₂ in the flue gas below the emission limits. To operate an SRU properly, the flow rate of combustion air to the reaction furnace must be adjusted so that the Claus reaction is performed according to stoichiometry. Accurate control of the air demand leads to an optimum recovery of sulfur during flow and composition fluctuations in the acid gas feed. Therefore, the major control system in the SRU is the air demand control loop, which includes a feed-forward control system based on predetermined feed flow rates and a feedback control system based on the signal from the tail gas online analyzer. The use of online analyzers requires compliance with the installation and operation instructions. Unfortunately, most of these analyzers in Iran are out of service for various reasons, such as the low priority given to environmental issues and a lack of access to after-sales services due to sanctions. In this paper, an SRU in Iran was simulated and calibrated using industrial experimental data. Afterward, the effect of the malfunction of the online analyzer on the performance of the SRU was investigated using the calibrated simulation. The results showed that an increase in the SO₂ concentration in the tail gas led to an increase in the temperature of the reduction reactor in the TGT section. This increase in temperature caused the failure of the TGT and increased the concentration of SO₂ from 750 ppm to 35,000 ppm. In addition, the lack of a control system for the adjustment of the combustion air caused further increases in SO₂ emissions. In some processes, the major variable cannot be controlled directly due to difficulty in measurement or a long delay in the sampling system. In these cases, a secondary variable, which can be measured more easily, is controlled instead. With the correct selection of this variable, the main variable is controlled along with the secondary variable. This strategy for controlling a process system is referred to as 'inferential control' and is considered in this paper. Therefore, a sensitivity analysis was performed to investigate the sensitivity of other measurable parameters to input disturbances. The results revealed that the outlet temperature of the first Claus reactor could be used for inferential control of the combustion air. Applying this method to the operation led to maximizing the sulfur recovery in the Claus section.
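In the inferential scheme described above, the hard-to-measure tail-gas composition is regulated indirectly by holding an easily measured secondary variable, the first Claus reactor outlet temperature, at its target and trimming the combustion-air setpoint accordingly. The sketch below is a generic discrete PI trim loop illustrating that idea; the gains, setpoint and measurement values are hypothetical and do not represent the plant's actual control configuration or tuning.

```python
class PIController:
    """Minimal discrete PI controller (illustrative tuning, not plant values)."""

    def __init__(self, kp, ki, setpoint, dt=1.0):
        self.kp, self.ki, self.setpoint, self.dt = kp, ki, setpoint, dt
        self.integral = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Inferential control: trim the combustion-air flow setpoint so that the first
# Claus reactor outlet temperature stays at its target, thereby indirectly
# regulating the unmeasured (or unreliable) tail-gas SO2 level.
air_trim_controller = PIController(kp=0.05, ki=0.01, setpoint=310.0)  # °C, hypothetical

reactor_outlet_temp = 318.0  # current measurement, °C (hypothetical)
air_flow_trim = air_trim_controller.update(reactor_outlet_temp)
print(f"combustion-air setpoint trim: {air_flow_trim:+.2f} (arbitrary flow units)")
```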

Keywords: sulfur recovery, online analyzer, inferential control, SO₂ emission

Procedia PDF Downloads 75
77 Artificial Intelligence Based Method in Identifying Tumour Infiltrating Lymphocytes of Triple Negative Breast Cancer

Authors: Nurkhairul Bariyah Baharun, Afzan Adam, Reena Rahayu Md Zin

Abstract:

Tumor microenvironment (TME) in breast cancer is mainly composed of cancer cells, immune cells, and stromal cells. The interaction between cancer cells and their microenvironment plays an important role in tumor development, progression, and treatment response. The TME in breast cancer includes tumor-infiltrating lymphocytes (TILs) that are implicated in killing tumor cells. TILs can be found in the tumor stroma (sTILs) and within the tumor (iTILs). TILs in triple negative breast cancer (TNBC) have been demonstrated to have prognostic and potentially predictive value. The International Immuno-Oncology Biomarker Working Group (TIL-WG) has developed a guideline focused on the assessment of sTILs using hematoxylin and eosin (H&E)-stained slides. According to the guideline, pathologists use an "eyeballing" method on the H&E-stained slide for sTILs assessment. This method has low precision and poor interobserver reproducibility, is time-consuming for a comprehensive evaluation, and only counts sTILs. The TIL-WG has therefore recommended that any algorithm for computational assessment of TILs utilize the provided guidelines to overcome the limitations of manual assessment, thus providing highly accurate and reliable TILs detection and classification for reproducible and quantitative measurement. This study was carried out to develop a TNBC digital whole slide image (WSI) dataset from H&E-stained slides and IHC (CD4+ and CD8+) stained slides. TNBC cases were retrieved from the database of the Department of Pathology, Hospital Canselor Tuanku Muhriz (HCTM). TNBC cases diagnosed between 2010 and 2021 with no history of other cancer and with tissue blocks available were included in the study (n=58). Tissue blocks were sectioned at approximately 4 µm for H&E and IHC staining. The H&E staining was performed according to a well-established protocol. Indirect IHC staining was also performed on the tissue sections using the protocol from the Diagnostic BioSystems PolyVue™ Plus Kit, USA. The slides were stained with rabbit monoclonal CD8 antibody (SP16) and rabbit monoclonal CD4 antibody (EP204). The selected and quality-checked slides were then scanned using a high-resolution whole slide scanner (Pannoramic DESK II DW slide scanner) to digitalize the tissue image at 20x magnification. A manual TILs (sTILs and iTILs) assessment was then carried out by two appointed pathologists, who scored TILs from the digital WSIs following the guideline developed by the TIL-WG in 2014; the result is expressed as the percentage of sTILs and iTILs per mm² of stromal and tumour area on the tissue. Following this, we aim to develop an automated digital image scoring framework that incorporates key elements of the manual guidelines (including both sTILs and iTILs), using manually annotated data, for robust and objective quantification of TILs in TNBC. From the study, we have developed a digital dataset of TNBC H&E and IHC (CD4+ and CD8+) stained slides. We hope that an automated scoring method can provide quantitative and interpretable TILs scoring that correlates with the manual pathologist-derived sTILs and iTILs scores and thus has potential prognostic implications.
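For illustration only, the sketch below scores sTILs as the fraction of stromal area occupied by lymphocytes, given binary segmentation masks for stroma and TILs on a WSI tile. This mirrors the TIL-WG definition in spirit but is not the framework developed in the study; the masks and the segmentation step producing them are assumed.

```python
import numpy as np

def stils_percentage(stroma_mask: np.ndarray, til_mask: np.ndarray) -> float:
    """sTILs score: percentage of stromal area occupied by TILs (binary masks)."""
    stroma_area = stroma_mask.astype(bool).sum()
    if stroma_area == 0:
        return 0.0
    tils_in_stroma = np.logical_and(stroma_mask, til_mask).sum()
    return 100.0 * tils_in_stroma / stroma_area

# toy masks standing in for real segmentation output
stroma = np.zeros((100, 100), bool); stroma[20:80, 20:80] = True
tils = np.zeros((100, 100), bool); tils[30:40, 30:40] = True
print(f"sTILs: {stils_percentage(stroma, tils):.1f}%")
```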

Keywords: automated quantification, digital pathology, triple negative breast cancer, tumour infiltrating lymphocytes

Procedia PDF Downloads 116
76 Spatial Assessment of Creek Habitats of Marine Fish Stock in Sindh Province

Authors: Syed Jamil H. Kazmi, Faiza Sarwar

Abstract:

The Indus delta of Sindh Province forms the largest creek zone of Pakistan. The Sindh coast starts from the mouth of the Hab River and terminates at the Sir Creek area. In this paper, we have considered the major creeks from the site of Bin Qasim Port in Karachi to the jetty of Keti Bunder in Thatta District. A general decline in the mangrove forest has been observed within the span of the last 25 years. Unprecedented human interventions have badly damaged the creek habitats; these include haphazard urban development, industrial and sewage disposal, illegal cutting of mangrove forests, and reduced and inconsistent freshwater flow, mainly from the Jhang and Indus rivers. These activities not only harm the creek habitats but have affected the fish stock substantially. Fishing is the main livelihood of coastal people, but with the above-mentioned threats it is also under enormous pressure from fish catches, resulting in unchecked overutilization of the fish resources. This pressure becomes almost unbearable when combined with deleterious fishing methods, an uncontrolled fleet size, increased trash and by-catch of juveniles, and illegal mesh sizes. Along with these anthropogenic interventions, the study area lies in the red zone of tropical cyclones and active seismicity, causing floods and sea intrusion and damaging mangrove forests and fish stock. In order to sustain the natural resources of the Indus creeks, this study was initiated with the support of FAO, WWF and NIO; the main purpose was to develop a geo-spatial dataset for fish stock assessment. The study was spread over a year (2013-14) on a monthly basis and mainly included a detailed fish stock survey, water analysis and a few other environmental analyses. The environmental analysis also included a habitat classification of the study area, carried out through remote sensing techniques for a 22-year time series (1992-2014). Furthermore, out of 252 species collected, fifteen species from the estuarine and marine groups were short-listed, and their weight, health and growth at each creek were measured and analysed in SPSS within the GIS database. Habitat suitability analysis was conducted by deriving surface topography and aspect through different GIS techniques. The output variables were then overlaid in the GIS system to measure creek productivity, yielding the following classes: extremely productive, highly productive, productive, moderately productive and less productive. This study demonstrates the use of geo-spatial tools for evaluating fisheries resources and mapping risk zones in creek habitats. It has also been shown that geo-spatial technologies are highly beneficial for identifying areas of high environmental risk in the Sindh creeks. It was clearly found that creeks with high rugosity are more productive than creeks with low rugosity. The study area has immense potential to boost the economy of Pakistan in terms of fish export if geo-spatial techniques are implemented instead of conventional techniques.
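Since creek productivity was found to track rugosity, the sketch below shows one common way a rugosity index can be derived from a bathymetry or elevation grid (true surface area divided by planar area). The grid and cell size are synthetic placeholders, and the study's actual GIS derivation may differ.

```python
import numpy as np

def rugosity(dem: np.ndarray, cell: float) -> float:
    """Surface-area / planar-area rugosity of an elevation grid (cell size in metres)."""
    dz_dy, dz_dx = np.gradient(dem, cell)
    # true surface area of each cell relative to its planar footprint
    surface = np.sqrt(1.0 + dz_dx**2 + dz_dy**2) * cell**2
    planar = dem.size * cell**2
    return surface.sum() / planar

dem = np.random.default_rng(0).normal(0.0, 0.5, (50, 50))   # toy 50 x 50 grid
print(f"rugosity index: {rugosity(dem, cell=10.0):.3f}")     # 1.0 = perfectly flat
```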

Keywords: fish stock, geo-spatial, productivity analysis, risk

Procedia PDF Downloads 245
75 Hyperspectral Imagery for Tree Speciation and Carbon Mass Estimates

Authors: Jennifer Buz, Alvin Spivey

Abstract:

The most common greenhouse gas emitted through human activities, carbon dioxide (CO2), is naturally consumed by plants during photosynthesis. This process is actively being monetized by companies wishing to offset their carbon dioxide emissions. For example, companies are now able to purchase protections for vegetated land due to be clear-cut, or purchase barren land for reforestation. Therefore, by actively preventing the destruction/decay of plant matter or by introducing more plant matter (reforestation), a company can theoretically offset some of its emissions. One of the biggest issues in the carbon credit market is validating and verifying carbon offsets. There is a need for a system that can accurately and frequently ensure that the areas sold for carbon credits have the vegetation mass (and therefore the carbon offset capability) they claim. Traditional techniques for measuring vegetation mass and determining health are costly and require many person-hours. Orbital Sidekick offers an alternative approach that accurately quantifies carbon mass and assesses vegetation health through satellite hyperspectral imagery, a technique which enables us to remotely identify material composition (including plant species) and condition (e.g., health and growth stage). How much carbon a plant is capable of storing ultimately depends on many factors, including material density (primarily species-dependent), plant size, and health (trees that are actively decaying are not effectively storing carbon). All of these factors can be observed through satellite hyperspectral imagery. This abstract focuses on speciation. To build a species classification model, we matched pixels in our remote sensing imagery to plants on the ground for which the species is known. To accomplish this, we collaborated with the researchers at the Teakettle Experimental Forest. Our remote sensing data come from our airborne "Kato" sensor, which flew over the study area and acquired hyperspectral imagery (400-2500 nm, 472 bands) at ~0.5 m/pixel resolution. Coverage of the entire Teakettle Experimental Forest required capturing dozens of individual hyperspectral images. In order to combine these images into a mosaic, we accounted for potential variations of atmospheric conditions throughout the data collection. To do this, we ran an open source atmospheric correction routine called ISOFIT (Imaging Spectrometer Optimal FITting), which converted all of our remote sensing data from radiance to reflectance. A database of reflectance spectra for each of the tree species within the study area was acquired using the Teakettle stem map and the geo-referenced hyperspectral images. We found that a wide variety of machine learning classifiers were able to identify the species within our images with high (>95%) accuracy. For the most robust quantification of carbon mass and the best assessment of the health of a vegetated area, speciation is critical. Through the use of high resolution hyperspectral data, ground-truth databases, and complex analytical techniques, we are able to determine the species present within a pixel to a high degree of accuracy. These species identifications feed directly into our carbon mass model.
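A minimal sketch of the per-pixel species classification step described above: reflectance spectra are fed to an off-the-shelf classifier and evaluated on a held-out split. Random synthetic spectra stand in for the Teakettle data, and the random forest is only one of the "wide variety" of classifiers the abstract refers to.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: (n_pixels, 472) reflectance spectra; y: species labels from the stem map.
# Synthetic random data stand in for the real spectra, which are not public here.
rng = np.random.default_rng(42)
n_pixels, n_bands, n_species = 600, 472, 4
X = rng.random((n_pixels, n_bands))
y = rng.integers(0, n_species, n_pixels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```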

Keywords: hyperspectral, satellite, carbon, imagery, python, machine learning, speciation

Procedia PDF Downloads 130
74 A Single Cell Omics Experiments as Tool for Benchmarking Bioinformatics Oncology Data Analysis Tools

Authors: Maddalena Arigoni, Maria Luisa Ratto, Raffaele A. Calogero, Luca Alessandri

Abstract:

The presence of tumor heterogeneity, where distinct cancer cells exhibit diverse morphological and phenotypic profiles, including gene expression, metabolism, and proliferation, poses challenges for molecular prognostic markers and patient classification for targeted therapies. Understanding the causes and progression of cancer requires research efforts aimed at characterizing heterogeneity, which can be facilitated by evolving single-cell sequencing technologies. However, analyzing single-cell data requires computational methods that often lack objective validation. Therefore, the establishment of benchmarking datasets is necessary to provide a controlled environment for validating bioinformatics tools in the field of single-cell oncology. Benchmarking bioinformatics tools for single-cell experiments can be costly because of the high expense of the experiments themselves. Datasets used for benchmarking are therefore typically sourced from publicly available experiments, which often lack comprehensive cell annotation. This limitation can affect the accuracy and effectiveness of such experiments as benchmarking tools. To address this issue, we introduce omics benchmark experiments designed to evaluate bioinformatics tools that depict the heterogeneity in single-cell tumor experiments. We conducted single-cell RNA sequencing on six lung cancer tumor cell lines that display resistant clones upon treatment of EGFR-mutated tumors and are characterized by driver genes, namely ROS1, ALK, HER2, MET, KRAS, and BRAF. These driver genes are associated with downstream networks controlled by EGFR mutations, such as JAK-STAT, PI3K-AKT-mTOR, and MEK-ERK. The experiment also featured an EGFR-mutated cell line. Using the 10X Genomics platform with CellPlex technology, we analyzed the seven cell lines together with a pseudo-immunological microenvironment consisting of PBMC cells labeled with the Biolegend TotalSeq™-B Human Universal Cocktail (CITEseq). This technology allowed for independent labeling of each cell line and single-cell analysis of the pooled seven cell lines and the pseudo-microenvironment. The data generated from the aforementioned experiments are available as part of an online tool, which allows users to define cell heterogeneity and generates count tables as output. The tool provides the cell line derivation for each cell and cell annotations for the pseudo-microenvironment based on CITEseq data, annotated by an experienced immunologist. Additionally, we created a range of pseudo-tumor tissues using different ratios of the aforementioned cells embedded in Matrigel. These tissues were analyzed using the 10X Genomics (FFPE samples) and Curio Bioscience (fresh frozen samples) platforms for spatial transcriptomics, further expanding the scope of our benchmark experiments. The benchmark experiments we conducted provide a unique opportunity to evaluate the performance of bioinformatics tools for detecting and characterizing tumor heterogeneity at the single-cell level. Overall, our experiments provide a controlled and standardized environment for assessing the accuracy and robustness of bioinformatics tools for studying tumor heterogeneity at the single-cell level, which can ultimately lead to more precise and effective cancer diagnosis and treatment.

Keywords: single cell omics, benchmark, spatial transcriptomics, CITEseq

Procedia PDF Downloads 117
73 Evaluation of Natural Frequency of Single and Grouped Helical Piles

Authors: Maryam Shahbazi, Amy B. Cerato

Abstract:

The importance of a system's natural frequency (fn) emerges when the vibration force frequency is equivalent to the foundation's fn, which causes response amplification (resonance) that may cause irreversible damage to the structure. Several factors such as pile geometry (e.g., length and diameter), soil density, load magnitude, pile condition, and the physical structure affect the fn of a soil-pile system; some of these parameters are evaluated in this study. Although experimental and analytical studies have assessed the fn of soil-pile systems, few have included individual and grouped helical piles. Thus, the current study aims to provide quantitative data on the dynamic characteristics of helical pile-soil systems from full-scale shake table tests that will allow engineers to predict more realistic dynamic responses under motions with variable frequency ranges. To evaluate the fn of single and grouped helical piles in dry dense sand, full-scale shake table tests were conducted in a laminar box (6.7 m × 3.0 m, 4.6 m high). Helical piles of two different diameters (8.8 cm and 14 cm) were embedded in the soil box with corresponding lengths of 3.66 m (excluding one pile with a length of 3.96 m) and 4.27 m. Different configurations were implemented to evaluate conditions such as fixed and pinned connections. In the group configuration, four piles of similar geometry were tied together. Simulated real earthquake motions, in addition to white noise, were applied to evaluate a wide range of soil-pile system behavior. The Fast Fourier Transform (FFT) of the measured time-history responses from installed strain gauges and accelerometers was used to evaluate fn. Time-history records from either accelerometers or strain gauges were found to be acceptable for calculating fn. In this study, the existence of a pile reduced the fn of the soil slightly. Higher fn occurred for single piles with larger l/d ratios (higher slenderness ratios). Also, regardless of the connection type, the more slender pile group, which is surrounded by more soil, yielded higher natural frequencies under white noise, which may be due to greater passive soil resistance around it. Within both pile groups, a pinned connection led to a lower fn than a fixed connection (e.g., for the same pile group the fn values are 5.23 Hz and 4.65 Hz for fixed and pinned connections, respectively). Generally speaking, a stronger motion causes nonlinear behavior and degrades stiffness, which reduces a pile's fn; even more reduction occurs in soil with a lower density. Moreover, the fn of the dense sand under the white noise signal was 5.03 Hz, which was reduced by 44% when an earthquake with an acceleration of 0.5 g was applied. By knowing the factors affecting fn, the designer can effectively match the properties of the soil to a type of pile and structure to attempt to avoid resonance. The quantitative results of this study assist engineers in predicting a probable range of fn for helical pile foundations under potential future earthquakes and machine-loading forces.
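The sketch below illustrates the FFT-based estimation of fn from a measured time history: the amplitude spectrum of a (synthetic) decaying 5 Hz oscillation is computed and its dominant peak is read off. The sampling rate, damping and noise level are assumed values, not test parameters from the study.

```python
import numpy as np

# Estimate the natural frequency as the peak of the amplitude spectrum of an
# accelerometer record. The record here is a synthetic 5 Hz decaying sinusoid.
fs = 200.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 20, 1 / fs)
acc = np.exp(-0.2 * t) * np.sin(2 * np.pi * 5.0 * t) + 0.05 * np.random.randn(t.size)

spec = np.abs(np.fft.rfft(acc * np.hanning(acc.size)))   # windowed spectrum
freqs = np.fft.rfftfreq(acc.size, d=1 / fs)
fn = freqs[np.argmax(spec[1:]) + 1]                       # skip the DC bin
print(f"estimated natural frequency: {fn:.2f} Hz")
```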

Keywords: helical pile, natural frequency, pile group, shake table, stiffness

Procedia PDF Downloads 133
72 Ionophore-Based Materials for Selective Optical Sensing of Iron(III)

Authors: Natalia Lukasik, Ewa Wagner-Wysiecka

Abstract:

Development of selective, fast-responsive, and economical sensors for the detection and determination of diverse ions is one of the most extensively studied areas due to its importance in the fields of clinical, environmental and industrial analysis. Among chemical sensors, ionophore-based optical sensors have gained vast popularity; in these, the generated analytical signal is a consequence of the molecular recognition of the ion by the ionophore. A change of color occurring during host-guest interactions allows for quantitative analysis and for 'naked-eye' detection without the need for sophisticated equipment. An example of the application of such sensors is the colorimetric detection of iron(III) cations. Iron, as one of the most significant trace elements, plays a role in many biochemical processes. For these reasons, the development of reliable, fast, and selective methods of iron ion determination is highly demanded. Taking all of the above into account, a chromogenic amide derivative of 3,4-dihydroxybenzoic acid was synthesized, and its ability to recognize iron(III) was tested. To the best of the authors' knowledge (according to Chemical Abstracts), the obtained ligand has not been described in the literature so far. The catechol moiety was introduced into the ligand structure in order to mimic the action of naturally occurring siderophores, which are iron(III)-selective receptors. The ligand–ion interactions were studied using spectroscopic methods: UV-Vis spectrophotometry and infrared spectroscopy. The spectrophotometric measurements revealed that the amide exhibits affinity to iron(III) in dimethyl sulfoxide and in fully aqueous solution, which is manifested by a change of color from yellow to green. Incorporation of the tested amide into a polymeric matrix (cellulose triacetate) ensured effective recognition of iron(III) at pH 3 with a detection limit of 1.58×10⁻⁵ M. For the obtained sensor material, parameters like the linear response range, response time, selectivity, and possibility of regeneration were determined. In order to evaluate the effect of the size of the sensing material on iron(III) detection, nanospheres (in the form of a nanoemulsion) containing the tested amide were also prepared. According to DLS (dynamic light scattering) measurements, the size of the nanospheres is 308.02 ± 0.67 nm. The working parameters of the nanospheres were determined and compared with those of the cellulose triacetate-based material. Additionally, for fast, qualitative experiments, test strips were prepared by adsorption of the amide solution on a glass microfiber material. The visual limit of detection of iron(III) at pH 3 by the test strips was estimated at the level of 10⁻⁴ M. In conclusion, the amide derived from 3,4-dihydroxybenzoic acid reported here proved to be an effective candidate for optical sensing of iron(III) in fully aqueous solutions. N. L. kindly acknowledges financial support from the National Science Centre, Poland, grant no. 2017/01/X/ST4/01680. The authors thank Gdansk University of Technology for financial support, grant no. 032406.
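As an illustration of how a detection limit such as the reported 1.58×10⁻⁵ M can be derived, the sketch below fits a linear calibration of absorbance versus Fe(III) concentration and applies the common 3·σ(blank)/slope rule. All numbers are invented; the paper's actual calibration data are not reproduced here.

```python
import numpy as np

# Hypothetical optode calibration: absorbance vs. Fe(III) concentration,
# detection limit taken as 3 * sigma(blank) / slope.
conc = np.array([0.0, 1e-5, 2e-5, 5e-5, 1e-4])          # mol/L
absorb = np.array([0.021, 0.055, 0.090, 0.195, 0.370])   # absorbance units
slope, intercept = np.polyfit(conc, absorb, 1)
sigma_blank = 0.004                                       # std. dev. of blank readings
lod = 3 * sigma_blank / slope
print(f"slope = {slope:.0f} AU/M, LOD = {lod:.2e} M")
```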

Keywords: ion-selective optode, iron(III) recognition, nanospheres, optical sensor

Procedia PDF Downloads 154
71 Non-Steroidal Microtubule Disrupting Analogues Induce Programmed Cell Death in Breast and Lung Cancer Cell Lines

Authors: Marcel Verwey, Anna M. Joubert, Elsie M. Nolte, Wolfgang Dohle, Barry V. L. Potter, Anne E. Theron

Abstract:

A tetrahydroisoquinolinone (THIQ) core can be used to mimic the A,B-ring of colchicine site-binding microtubule disruptors such as 2-methoxyestradiol in the design of anti-cancer agents. Steroidomimetic microtubule disruptors were synthesized by introducing C'2 and C'3 of the steroidal A-ring to C'6 and C'7 of the THIQ core and by introducing a decorated hydrogen bond acceptor motif projecting from the steroidal D-ring to N'2. For this in vitro study, four non-steroidal THIQ-based analogues were investigated, and comparative studies were done between the non-sulphamoylated compound STX 3450 and the sulphamoylated compounds STX 2895, STX 3329 and STX 3451. The objective of this study was to investigate the modes of cell death induced by these four THIQ-based analogues in A549 lung carcinoma epithelial cells and metastatic breast adenocarcinoma MDA-MB-231 cells. Cytotoxicity studies to determine the half-maximal growth inhibitory concentrations were done using spectrophotometric quantification via crystal violet staining and lactate dehydrogenase (LDH) assays. Microtubule integrity and morphologic changes of exposed cells were investigated using polarization-optical transmitted light differential interference contrast microscopy, transmission electron microscopy and confocal microscopy. Flow cytometric quantification was used to determine apoptosis induction and the effect that THIQ-based analogues have on cell cycle progression. Signal transduction pathways were elucidated by quantification of the mitochondrial membrane integrity, cytochrome c release and caspase 3, -6 and -8 activation. Induction of autophagic cell death by the THIQ-based analogues was investigated by morphological assessment of fluorescent monodansylcadaverine (MDC) staining of acidic vacuoles and by quantifying aggresome formation via flow cytometry. Results revealed that these non-steroidal microtubule disrupting analogues inhibited 50% of cell growth at nanomolar concentrations. Immunofluorescence microscopy indicated microtubule depolarization, and the resultant mitotic arrest was further confirmed through cell cycle analysis. Apoptosis induction via the intrinsic pathway was observed due to depolarization of the mitochondrial membrane, induction of cytochrome c release, as well as caspase 3 activation. Potential involvement of programmed cell death type II was observed due to the presence of acidic vacuoles and aggresome formation. Necrotic cell death did not contribute significantly, as indicated by stable LDH levels. This in vitro study revealed the induction of the intrinsic apoptotic pathway as well as possible involvement of autophagy after exposure to these THIQ-based analogues in both MDA-MB-231 and A549 cells. Further investigation of this series of anticancer drugs still needs to be conducted to elucidate the temporal, mechanistic and functional crosstalk mechanisms between the two observed programmed cell death pathways.

Keywords: apoptosis, autophagy, cancer, microtubule disruptor

Procedia PDF Downloads 253
70 A Computer-Aided System for Tooth Shade Matching

Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan

Abstract:

Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, the shade matching procedure was implemented through the dentist's visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers and digital image analysing systems are used for instrumental shade selection. Instrumental devices have the advantages that readings are quantifiable and can be obtained more rapidly, simply, objectively and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth lead to defects in measurements with these devices. Also, there may be inconsistencies between the results acquired by devices with different measurement principles. It is therefore necessary to search for new methods for the dental shade matching process. The digital camera, as a computer-aided device, has developed rapidly. Advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging. This procedure is much cheaper than the use of traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth show the morphology and color texture of the teeth. In recent decades, a method was proposed to compare the color of shade tabs captured with a digital camera using color features. That work showed that visual and computer-aided shade matching systems should be used together. Many recent feature extraction techniques are based on shape description and do not use color information. However, color is mostly experienced as an essential property in depicting and extracting features from objects in the world around us. When local feature descriptors are extended by concatenating a color descriptor with the shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. Because the color descriptor is used in combination with a shape descriptor, it does not need to contain any spatial information, which leads us to use local histograms. This local color histogram method remains reliable under photometric changes, geometrical changes and variations in image quality. Therefore, color-based local feature extraction methods are used to extract features, and the Scale Invariant Feature Transform (SIFT) descriptor is used for shape description in the proposed method. After the combination of these descriptors, the state-of-the-art descriptor known as Color-SIFT is used in this study. Finally, the image feature vectors obtained from a quantization algorithm are fed to classifiers such as k-Nearest Neighbor (KNN), Naive Bayes or Support Vector Machines (SVM) to determine the label(s) of the visual object category or matching. In this study, SVMs are used as classifiers for color determination and shade matching. Finally, the experimental results of this method are compared with other recent studies. It is concluded from the study that the proposed method is a remarkable development in computer-aided tooth shade determination systems.
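A minimal sketch of the final classification stage: quantized feature vectors (random histograms standing in here for real Color-SIFT bag-of-words features) are fed to two of the classifiers named above and compared by cross-validated accuracy. The shade labels, vector length and dataset size are placeholders, not values from the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Random vectors stand in for quantized Color-SIFT histograms of tooth images.
rng = np.random.default_rng(1)
n_images, n_bins, n_shades = 120, 128, 4
X = rng.random((n_images, n_bins))
y = rng.integers(0, n_shades, n_images)      # e.g. shade classes A1, A2, B1, B2

for name, clf in [("KNN", KNeighborsClassifier(5)), ("SVM", SVC(kernel="rbf"))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```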

Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction

Procedia PDF Downloads 444
69 Chiral Molecule Detection via Optical Rectification in Spin-Momentum Locking

Authors: Jessie Rapoza, Petr Moroshkin, Jimmy Xu

Abstract:

Chirality is omnipresent in nature, in life, and in the field of physics. One intriguing example is the homochirality that has remained a great secret of life. Another is the pairs of mirror-image molecules – enantiomers. They are identical in atomic composition and therefore indistinguishable in their scalar physical properties. Yet, they can be either therapeutic or toxic, depending on their chirality. Recent studies suggest a potential link between abnormal levels of certain D-amino acids and some serious health impairments, including schizophrenia, amyotrophic lateral sclerosis, and potentially cancer. Although indistinguishable in its scalar properties, the chirality of a molecule reveals itself in interaction with a surrounding of a certain chirality or, more generally, a broken mirror symmetry. In this work, we report on a system for chiral molecule detection in which the mirror symmetry is doubly broken, first by asymmetrically structuring a nanopatterned plasmonic surface and then by the incidence of circularly polarized light (CPL). In this system, the incident circularly polarized light induces a surface plasmon polariton (SPP) wave propagating along the asymmetric plasmonic surface. This SPP field is itself chiral, evanescently bound to a near-field zone on the surface (~10 nm thick), but with an amplitude greatly intensified (by up to 10⁴) over that of the incident light. It hence probes just the molecules on the surface instead of those in the volume. In coupling to molecules along its path on the surface, the chiral SPP wave favors one chirality over the other, allowing for chirality detection via the change in an optical rectification current measured at the edges of the sample. The asymmetrically structured surface converts the high-frequency electron plasmonic oscillations in the SPP wave into a net DC drift current that can be measured at the edge of the sample via the mechanism of optical rectification. The measured results validate these design concepts and principles. The observed optical rectification current exhibits a clear differentiation between a pair of enantiomers. Experiments were performed by focusing 1064 nm CW laser light onto the sample - a gold grating microchip submerged in an approximately 1.82 M solution of either L-arabinose or D-arabinose in water. The current output was then recorded under both right and left circularly polarized light. Measurements were recorded at various angles of incidence to optimize the coupling between the spin momentum of the incident light and that of the SPP, that is, spin-momentum locking. In order to suppress the background, the values of the photocurrent for right CPL are subtracted from those for left CPL. Comparison between the two arabinose enantiomers reveals a preferential signal response of one enantiomer to left CPL and of the other enantiomer to right CPL. In sum, this work reports the first experimental evidence of the feasibility of chiral molecule detection via optical rectification in a metal meta-grating. This nanoscale interfaced electrical detection technology is advantageous over other detection methods due to its size, cost, ease of use, and ability to be integrated with read-out electronic circuits for data processing and interpretation.

Keywords: chirality, detection, molecule, spin

Procedia PDF Downloads 92
68 Impedimetric Phage-Based Sensor for the Rapid Detection of Staphylococcus aureus from Nasal Swab

Authors: Z. Yousefniayejahr, S. Bolognini, A. Bonini, C. Campobasso, N. Poma, F. Vivaldi, M. Di Luca, A. Tavanti, F. Di Francesco

Abstract:

Pathogenic bacteria represent a threat to healthcare systems and the food industry because their rapid detection remains challenging. Electrochemical biosensors are gaining prominence as a novel technology for the detection of pathogens due to intrinsic features such as low cost, rapid response time, and portability, which make them a valuable alternative to traditional methodologies. These sensors use biorecognition elements that are crucial for the identification of specific bacteria. In this context, bacteriophages are promising tools due to their inherent high selectivity towards bacterial hosts, which is of fundamental importance when detecting bacterial pathogens in complex biological samples. In this study, we present the development of a low-cost and portable sensor based on the Zeno phage for the rapid detection of Staphylococcus aureus. Screen-printed gold electrodes functionalized with the Zeno phage were used, and electrochemical impedance spectroscopy was applied to evaluate the change of the charge transfer resistance (Rct) as a result of the interaction with S. aureus MRSA ATCC 43300. The phage-based biosensor showed a linear range from 10¹ to 10⁴ CFU/mL with a 20-minute response time and a limit of detection (LOD) of 1.2 CFU/mL under physiological conditions. The biosensor's ability to recognize various strains of staphylococci was also successfully demonstrated in the presence of clinical isolates collected from different geographic areas. Assays using S. epidermidis were also carried out to verify the species-specificity of the phage sensor. We observed a remarkable change in the Rct only in the presence of the target S. aureus bacteria, while no substantial binding to S. epidermidis occurred. This confirmed that the Zeno phage sensor targets only the S. aureus species within the genus Staphylococcus. In addition, the biosensor's specificity with respect to other bacterial species, including gram-positive bacteria like Enterococcus faecium and the gram-negative bacterium Pseudomonas aeruginosa, was evaluated, and a non-significant impedimetric signal was observed. Notably, the biosensor successfully identified S. aureus cells in a complex matrix such as a nasal swab, opening the possibility of its use in a real-case scenario. We diluted different concentrations of S. aureus, from 10⁸ to 10⁰ CFU/mL, at a ratio of 1:10 in nasal swab matrices collected from healthy donors. Three different sensors were applied to measure the various concentrations of bacteria. Our sensor showed high selectivity for detecting S. aureus in biological matrices compared with time-consuming traditional methods such as enzyme-linked immunosorbent assay (ELISA), polymerase chain reaction (PCR), and radioimmunoassay (RIA). With the aim of addressing the challenges associated with pathogen detection using this biosensor, ongoing research is focused on the assessment of the biosensor's analytical performance in different biological samples and the discovery of new phage bioreceptors.
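For illustration, the sketch below fits the kind of calibration curve implied by the reported linear range, relating the change in charge-transfer resistance to log10(CFU/mL), and inverts it to estimate an unknown concentration. The Rct values are hypothetical, not the paper's data.

```python
import numpy as np

# Hypothetical impedimetric calibration over the 10^1 - 10^4 CFU/mL linear range.
cfu = np.array([1e1, 1e2, 1e3, 1e4])
delta_rct = np.array([120.0, 260.0, 395.0, 540.0])        # ohm, illustrative
slope, intercept = np.polyfit(np.log10(cfu), delta_rct, 1)

def predict_cfu(measured_delta_rct):
    """Invert the calibration line to estimate concentration in CFU/mL."""
    return 10 ** ((measured_delta_rct - intercept) / slope)

print(f"sensitivity: {slope:.0f} ohm per decade")
print(f"sample with dRct = 300 ohm -> ~{predict_cfu(300):.0f} CFU/mL")
```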

Keywords: electrochemical impedance spectroscopy, bacteriophage, biosensor, Staphylococcus aureus

Procedia PDF Downloads 66
67 High-Resolution Facial Electromyography in Freely Behaving Humans

Authors: Lilah Inzelberg, David Rand, Stanislav Steinberg, Moshe David Pur, Yael Hanein

Abstract:

Human facial expressions carry important psychological and neurological information. Facial expressions involve the co-activation of diverse muscles. They depend strongly on personal affective interpretation and on social context and vary between spontaneous and voluntary activations. Smiling, as a special case, is among the most complex facial emotional expressions, involving no fewer than 7 different unilateral muscles. Despite their ubiquitous nature, smiles remain an elusive and debated topic. Smiles are associated with happiness and greeting on one hand and with anger or disgust-masking on the other. Accordingly, while high-resolution recording of muscle activation patterns in a non-interfering setting offers exciting opportunities, it remains an unmet challenge, as contemporary surface facial electromyography (EMG) methodologies are cumbersome, restricted to laboratory settings, and limited in time and resolution. Here we present a wearable and non-invasive method for objective mapping of facial muscle activation and demonstrate its application in a natural setting. The technology is based on a recently developed dry and soft electrode array specially designed for surface facial EMG. Eighteen healthy volunteers (31.58 ± 3.41 years, 13 females) participated in the study. Surface EMG arrays were adhered to the participants' left and right cheeks. Participants were instructed to imitate three facial expressions - closing the eyes, wrinkling the nose and smiling voluntarily - and to watch a funny video while their EMG signals were recorded. We focused on muscles associated with 'enjoyment', 'social' and 'masked' smiles, three categories with distinct social meanings. We developed a customized independent component analysis algorithm to construct the desired facial musculature mapping. First, identification of the Orbicularis oculi and the Levator labii superioris muscles was demonstrated from voluntary expressions. Second, recordings of voluntary and spontaneous smiles were used to locate the Zygomaticus major muscle activated in Duchenne and non-Duchenne smiles. Finally, recording with a wireless device in an unmodified natural work setting revealed expressions of neutral, positive and negative emotions in face-to-face interaction. The algorithm outlined here identifies the activation sources in a subject-specific manner, insensitive to electrode placement and anatomical diversity. Our high-resolution and cross-talk-free mapping performance, along with excellent user convenience, opens new opportunities for affective processing and objective evaluation of facial expressivity, objective psychological and neurological assessment, as well as gaming, virtual reality, bio-feedback and brain-machine interface applications.
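The source-separation idea behind the mapping can be sketched as below: multi-channel surface EMG is unmixed with ICA so that each recovered component can be attributed to a muscle. The study used a customized ICA algorithm; scikit-learn's FastICA and the synthetic two-source, four-electrode mixture here are stand-ins for it.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic "muscle" sources mixed onto four electrodes, then unmixed with ICA.
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
s1 = (np.sin(2 * np.pi * 3 * t) > 0) * np.random.randn(t.size)   # bursting source
s2 = 0.3 * np.random.randn(t.size)                                # tonic source
S = np.c_[s1, s2]                                                 # (n_samples, 2)
A = np.array([[1.0, 0.4], [0.3, 1.0], [0.8, 0.6], [0.2, 0.9]])    # 4-electrode mixing
X = S @ A.T                                                       # (n_samples, 4)

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)            # recovered component activations
print(sources.shape, ica.mixing_.shape)   # (2000, 2) (4, 2)
```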

Keywords: affective expressions, affective processing, facial EMG, high-resolution electromyography, independent component analysis, wireless electrodes

Procedia PDF Downloads 246
66 Digitization and Morphometric Characterization of Botanical Collection of Indian Arid Zones as Informatics Initiatives Addressing Conservation Issues in Climate Change Scenario

Authors: Dipankar Saha, J. P. Singh, C. B. Pandey

Abstract:

The Indian Thar desert, the seventh largest in the world, is the main hot sand desert of the country; it occupies nearly 385,000 km², about 9% of the area of the country, and harbours a flora of 682 species (including 63 introduced species) belonging to 352 genera and 87 families. The degree of endemism of plant species in the Thar desert is 6.4 percent, which is relatively higher than the degree of endemism in the Sahara desert and is very significant for conservationists to consider. The advent and development of computer technology for digitization and database management, coupled with the rapidly increasing importance of biodiversity conservation, resulted in the emergence of biodiversity informatics as a discipline of basic science with multiple applications. Aichi Target 19, an outcome of the Convention on Biological Diversity (CBD), specifically mandates the development of an advanced and shared biodiversity knowledge base. Information on species distributions in space is the crux of effective management of biodiversity in a rapidly changing world. The efficiency of biodiversity management is being increased rapidly by various stakeholders, such as researchers, policymakers, and funding agencies, through the knowledge and application of biodiversity informatics. Herbarium specimens are a vital repository for biodiversity conservation, especially in a climate change scenario, and the digitization process usually aims to improve access and to preserve delicate specimens, in doing so creating large sets of images as part of the existing repository, here an arid plant information facility, for long-term future use. Leaf characters are important for describing taxa and distinguishing between them, and they can be measured from herbarium specimens as well. As part of this activity, laminar characterization (leaves being among the most important characters for assessing climate change impact) has initially resulted in the classification of more than a thousand collections belonging to ten families: Acanthaceae, Aizoaceae, Amaranthaceae, Asclepiadaceae, Anacardiaceae, Apocynaceae, Asteraceae, Aristolochiaceae, Burseraceae and Bignoniaceae. Taxonomic diversity indices have also been worked out, this being one of the important domains of biodiversity informatics approaches. The digitization process also encompasses workflows that incorporate automated systems, enabling us to expand and speed up digitization. The digitization workflows are built on a modular system with the potential to be scaled up; they are being developed with a geo-referencing tool and additional quality control elements, finally placing specimen images and data into a fully searchable, web-accessible database. Our effort in this paper is to elucidate the role of biodiversity informatics and to present the institute's ongoing effort to build a database of its existing botanical collection. This effort is expected to become part of various global initiatives providing an effective biodiversity information facility. This will enable access to plant biodiversity data that are fit for use by scientists and decision makers working on biodiversity conservation and sustainable development in the region and in iso-climatic situations of the world.
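The abstract does not specify which taxonomic diversity indices were computed, so the sketch below uses a generic Shannon-Wiener index on hypothetical specimen counts per family, purely as an illustration of this step.

```python
import numpy as np

# Shannon-Wiener diversity from specimen counts per family (counts are invented).
counts = {"Acanthaceae": 120, "Aizoaceae": 45, "Amaranthaceae": 210,
          "Asclepiadaceae": 60, "Asteraceae": 330}
n = np.array(list(counts.values()), dtype=float)
p = n / n.sum()                         # relative abundance of each family
shannon_h = -(p * np.log(p)).sum()
print(f"Shannon-Wiener H' = {shannon_h:.2f}")
```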

Keywords: biodiversity informatics, climate change, digitization, herbarium, laminar characters, web accessible interface

Procedia PDF Downloads 229
65 Interferon-Induced Transmembrane Protein-3 rs12252-CC Associated with the Progress of Hepatocellular Carcinoma by Up-Regulating the Expression of Interferon-Induced Transmembrane Protein 3

Authors: Yuli Hou, Jianping Sun, Mengdan Gao, Hui Liu, Ling Qin, Ang Li, Dongfu Li, Yonghong Zhang, Yan Zhao

Abstract:

Background and Aims: Interferon-induced transmembrane protein 3 (IFITM3) is a component of the ISG (Interferon-Stimulated Gene) family. IFITM3 has been recognized as a key signaling molecule regulating cell growth in some tumors. However, the function of the IFITM3 rs12252-CC genotype in hepatocellular carcinoma (HCC) remains unknown, to the authors' best knowledge. A cohort study was employed to clarify the relationship between the IFITM3 rs12252-CC genotype and HCC progression, and cellular experiments were used to investigate the correlation between the function of IFITM3 and the progression of HCC. Methods: 336 candidates were enrolled in the study, including 156 with HBV-related HCC and 180 with chronic hepatitis B infection or liver cirrhosis. Polymerase chain reaction (PCR) was employed to determine the gene polymorphism of IFITM3. The functions of IFITM3 were assessed in PLC/PRF/5 cells with different treatments: LV-IFITM3, transfected with lentivirus to knock down the expression of IFITM3, and LV-NC, transfected with empty lentivirus as a negative control. IFITM3 expression, proliferation and migration were measured by quantitative reverse transcription polymerase chain reaction (qRT-PCR), QuantiGene Plex 2.0 assay, western blotting, immunohistochemistry, Cell Counting Kit (CCK)-8 and wound healing assays, respectively. Six samples of PLC/PRF/5 (three infected with empty lentivirus as the control group; three infected with LV-IFITM3 vector lentivirus as the experimental group) were sequenced at BGI (Beijing Genomics Institute, Shenzhen, China) using RNA-seq technology to identify IFITM3-related signaling pathways, and the PI3K/AKT pathway was chosen as the related signaling pathway to verify. Results: The patients with HCC had a significantly higher proportion of IFITM3 rs12252-CC compared with the patients with chronic HBV infection or liver cirrhosis. The distribution of the CC genotype in HCC patients with low differentiation was significantly higher than in those with high differentiation. Patients with the CC genotype had larger tumor size, a higher percentage of vascular thrombosis, a higher distribution of low differentiation and a higher 5-year relapse rate than those with CT/TT genotypes. The expression of IFITM3 was higher in HCC tissues than in adjacent normal tissues, and the level of IFITM3 was higher in HCC tissues with low differentiation and metastasis than in those with high/medium differentiation and without metastasis. A higher RNA level of IFITM3 was found in the CC genotype than in the TT genotype. In PLC/PRF/5 cells with the knockdown, cell proliferation and migration were inhibited. Analysis of the RNA sequencing data and verification by RT-PCR showed that the phosphatidylinositol 3-kinase/protein kinase B/mammalian target of rapamycin (PI3K/AKT/mTOR) pathway was associated with IFITM3 knockdown. With the inhibition of IFITM3, the PI3K/AKT/mTOR signaling pathway was blocked and the expression of vimentin was decreased. Conclusions: IFITM3 rs12252-CC, with its higher expression, plays a vital role in the progression of HCC by regulating HCC cell proliferation and migration. These effects are associated with the PI3K/AKT/mTOR signaling pathway.

Keywords: IFITM3, interferon-induced transmembrane protein 3, HCC, hepatocellular carcinoma, PI3K/AKT/mTOR, phosphatidylinositol 3-kinase/protein kinase B/mammalian target of rapamycin

Procedia PDF Downloads 124