Search results for: qualitative approach
25 Restoring Total Form and Function in Patients with Lower Limb Bony Defects Utilizing Patient-Specific Fused Deposition Modelling: A Neoteric Multidisciplinary Reconstructive Approach
Authors: Divya SY. Ang, Mark B. Tan, Nicholas EM. Yeo, Siti RB. Sudirman, Khong Yik Chew
Abstract:
Introduction: The importance of amalgamating technological and engineering advances with the surgical principles of reconstruction cannot be overemphasized. Earlier detection of cancer, together with the consequences of high-speed living and neglect such as traumatic injuries and infection, is resulting in increasingly younger patients with bone defects. These defects may cause malformations and suboptimal function that are more noticeable and palpable in the younger, active demographic. Our team proposes a technique that draws together multidisciplinary effort, tissue engineering and reconstructive principles. Methods/Materials: Our patient was a young competitive footballer in his early 30s diagnosed with submandibular adenoid cystic carcinoma with bony involvement. He was counselled for a right hemimandibulectomy, floor-of-mouth resection, right selective neck dissection, tracheostomy, and free fibular flap reconstruction of his mandible, and required post-operative radiotherapy. Being young and in his prime sporting years, he was unable to accept the morbidities associated with using his fibula to reconstruct his mandible, despite it being the gold-standard reconstructive option. The fibula is an ideal vascularized bone flap because it is reliable and easily shaped, with relatively minimal impact on functional outcomes. The fibula contributes about 30% of weightbearing and is the attachment for the lateral compartment muscles; in footballers it is stronger with respect to lateral bending. When harvesting the fibula, the distal 6-8 cm, up to 10% of the total length, is preserved to maintain the ankle's stability, thus minimizing the impact on daily activities. Studies have noted gait variability post-operatively, so a return to the premorbid competitive level was doubtful. To improve his functional outcomes, the decision was made to restore the fibula's form and function.
Using the concept of Fused Deposition Modelling (FDM), our team, comprising Plastic Surgery, Otolaryngology, Orthopedics and Radiology, worked with Osteopore to design a 3D bioresorbable implant to regenerate the 14.5 cm fibula defect. Bone marrow was harvested by reaming the contralateral hip prior to the wide resection, and 30 ml of the patient's blood was obtained to extract platelet-rich plasma. These were packed into the Osteopore 3D-printed bone scaffold, which was then secured into the fibula defect with titanium plates and screws. The flexor hallucis longus and soleus were anchored along the construct and interosseous membrane, all in a single setting. Results: He was reviewed closely as an outpatient over 10 months post-operatively and reported no discernible loss or difference in ankle function. He is satisfied and back in training, and our team has video and photographic documentation of his progress. Conclusion: FDM allows regeneration of long bone defects. We also aimed to restore the eversion and inversion that are imperative for footballers, and hence reattached his previously dissected muscles along the length of the Osteopore implant. We believe that reattaching the muscles not only stabilizes the construct but also allows optimal muscle tensioning when moving the ankle. This is a simple but effective technique for restoring complete form and function in a young patient whose fine muscle control is imperative to his way of life.
Keywords: fused deposition modelling, functional reconstruction, lower limb bony defects, regenerative surgery, 3D printing, tissue engineering
Procedia PDF Downloads 73
24 Analyzing Spatio-Structural Impediments in the Urban Trafficscape of Kolkata, India
Authors: Teesta Dey
Abstract:
Integrated transport development with proper traffic management leads to sustainable growth of any urban sphere. Appropriate mass transport planning is essential for populous cities in third-world countries like India. The exponential growth of motor vehicles on an unplanned road network is now a common feature of major urban centres in India, and Kolkata, the third largest mega city in India, is no exception. The imbalance between demand and supply of unplanned transport services in this city is manifested in the high economic and environmental costs borne by the associated society. With the passage of time, the growth and extent of passenger demand for rapid urban transport has outstripped infrastructural planning and causes severe transport problems in the overall urban realm. Hence Kolkata stands out in the world as one of the most crisis-ridden metropolises. The urban transport crisis of this city involves severe traffic congestion, disparity in mass transport services across changing peripheral land uses, route overlapping, lowered travel speed and faulty implementation of governmental plans, mostly induced by the rapid growth of private vehicles on limited road space with a huge carbon footprint. The paper therefore critically analyzes the extant road network pattern for improving regional connectivity and accessibility, assesses the degree of congestion, identifies deviations from the demand-supply balance and finally evaluates the emerging alternative transport options promoted by the government. For this purpose, linear, nodal and spatial transport networks have been assessed based on selected indices, viz. Road Degree, Traffic Volume, Shimbel Index, Direct Bus Connectivity, Average Travel and Waiting Time Indices, Route Variety, Service Frequency, Bus Intensity, Concentration Analysis, Delay Rate, Quality of Traffic Transmission, Lane Length Duration Index and Modal Mix.
A total of 20 Traffic Intersection Points (TIPs) have been selected for the measurement of nodal accessibility. Critical Congestion Zones (CCZs) are delineated based on one-km buffer zones around each TIP for congestion pattern analysis. A total of 480 bus routes are assessed to identify deficiencies in network planning. Apart from bus services, the combined effects of other mass and para-transit modes, comprising metro rail, auto, cab and ferry services, are also analyzed. Based on a systematic random sampling method, the perceptions of 1,500 daily urban passengers were studied to check the ground realities. The outcome of this research identifies the spatial disparity among the 15 boroughs of the city, with severe route overlapping and congestion problems. Mass transport services based in north and central Kolkata exceed the transport strength of south and peripheral Kolkata. Faulty infrastructural conditions, service inadequacy, economic loss and workers' inefficiency are the most dominant reasons behind the defective mass transport network plan. Hence there is an urgent need to revive the extant road-based mass transport system of this city by implementing a holistic management approach: upgrading traffic infrastructure, designing new roads, better cooperation among different mass transport agencies, better coordination of transport and changing land use policies, a large increase in funding and, finally, general passengers' awareness.
Keywords: carbon footprint, critical congestion zones, direct bus connectivity, integrated transport development
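Several of the nodal indices listed above can be computed directly once the road network is encoded as a graph. As an illustrative sketch (not the authors' code, and using a hypothetical five-intersection network), the Shimbel Index of a node, i.e. the sum of its shortest-path distances to all other nodes, can be obtained with a breadth-first search:

```python
from collections import deque

def shimbel_index(adjacency, node):
    """Sum of shortest-path distances (in edges) from `node` to all
    other reachable nodes; lower values mean higher accessibility."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        current = queue.popleft()
        for neighbour in adjacency[current]:
            if neighbour not in dist:
                dist[neighbour] = dist[current] + 1
                queue.append(neighbour)
    return sum(dist.values())

# Hypothetical 5-intersection network (labels are illustrative only).
roads = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(shimbel_index(roads, "A"))  # A->B=1, A->C=1, A->D=2, A->E=3 => 7
```

Computing this index for each of the 20 TIPs would rank the intersections by nodal accessibility, with lower values indicating better-connected nodes.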
23 Amifostine Analogue, DRDE-30, Attenuates Radiation-Induced Lung Injury in Mice
Authors: Aastha Arora, Vikas Bhuria, Saurabh Singh, Uma Pathak, Shweta Mathur, Puja P. Hazari, Rajat Sandhir, Ravi Soni, Anant N. Bhatt, Bilikere S. Dwarakanath
Abstract:
Radiotherapy is an effective curative and palliative option for patients with thoracic malignancies. However, lung injury, comprising pneumonitis and fibrosis, remains a significant clinical complication of thoracic radiation, making it a dose-limiting factor. Injury to the lung is also often reported as part of multi-organ failure in victims of accidental radiation exposure. The radiation-induced inflammatory response in the lung, characterized by leukocyte infiltration and vascular changes, is an important contributing factor to the injury. Therefore, countermeasure agents that attenuate the radiation-induced inflammatory response are considered an important approach to preventing chronic lung damage. Although Amifostine, the widely used, FDA-approved radioprotector, has been found to reduce radiation-induced pneumonitis during radiation therapy of non-small cell lung carcinoma, its application during mass and field exposure is limited due to associated toxicity and ineffectiveness with oral administration. The amifostine analogue DRDE-30 overcomes this limitation, as it is orally effective in reducing the mortality of whole-body irradiated mice. The current study was undertaken to investigate the potential of DRDE-30 to ameliorate radiation-induced lung damage. DRDE-30 was administered intraperitoneally, 30 minutes prior to 13.5 Gy thoracic (60Co-gamma) radiation, in C57BL/6 mice. Bronchoalveolar lavage fluid (BALF) and lung tissues were harvested at 12 and 24 weeks post-irradiation for studying inflammatory and fibrotic markers. Lactate dehydrogenase (LDH) leakage, leukocyte count and protein content in BALF were used as parameters to evaluate lung vascular permeability. Inflammatory cell signaling (p38 phosphorylation) and anti-oxidant status (MnSOD and catalase levels) were assessed by Western blot, while X-ray CT scans, H & E staining and trichrome staining were done to study lung architecture and collagen deposition.
Irradiation of the lung increased the total protein content, LDH leakage and total leukocyte count in the BALF, reflecting endothelial barrier dysfunction. These disruptive effects were significantly abolished by DRDE-30, which appears to be linked to DRDE-30-mediated abrogation of the activation of the redox-sensitive pro-inflammatory signaling cascade, the MAPK pathway. Concurrent administration of DRDE-30 with radiation inhibited radiation-induced oxidative stress by strengthening the anti-oxidant defense system and abrogated p38 mitogen-activated protein kinase activation, which was associated with reduced vascular leak and macrophage recruitment to the lungs. Histopathological examination (by H & E staining) of the lung showed radiation-induced inflammation, characterized by cellular infiltration, interstitial oedema, alveolar wall thickening, perivascular fibrosis and obstruction of alveolar spaces, all of which were reduced by pre-administration of DRDE-30. Structural analysis with X-ray CT indicated lung architecture (linked to the degree of opacity) comparable to un-irradiated mice, which correlated well with the lung morphology and reduced collagen deposition. The reduction in radiation-induced inflammation and fibrosis brought about by DRDE-30 resulted in a profound increase in animal survival (72% in the combination vs 24% with radiation alone) observed at the end of 24 weeks following irradiation. These findings establish the potential of the Amifostine analogue DRDE-30 to reduce radiation-induced pulmonary injury by attenuating the inflammatory and fibrotic responses.
Keywords: amifostine, fibrosis, inflammation, lung injury, radiation
22 Identifying the Conservation Gaps in a Poorly Studied Protected Area in the Philippines: A Case Study of Sibuyan Island
Authors: Roven Tumaneng, Angelica Kristina Monzon, Ralph Sedricke Lapuz, Jose Don De Alban, Jennica Paula Masigan, Joanne Rae Pales, Laila Monera Pornel, Dennis Tablazon, Rizza Karen Veridiano, Jackie Lou Wenceslao, Edmund Leo Rico, Neil Aldrin Mallari
Abstract:
Most protected area management plans in the Philippines, particularly for the smaller and more remote islands, suffer from insufficient baseline data, which should provide the basis for formulating measurable conservation targets and appropriate management interventions for these protected areas. Attempts to synthesize available data, particularly on the cultural and socio-economic characteristics of local peoples within and outside protected areas, also suffer from the lack of comprehensive and detailed inventories, which should be considered in designing adaptive management interventions for those protected areas. Mt Guiting-guiting Natural Park (MGGNP), located on Sibuyan Island, is one of the poorly studied protected areas in the Philippines. In this study, we determined the areas of high biological importance in the protected area using the Maximum Entropy approach (MaxEnt), from environmental predictors (i.e., topographic, bioclimatic, land cover, and soil image layers) derived from global remotely sensed data and point occurrence data of bird and tree species recorded during field surveys on the island. A total of 23 trigger species of birds and trees were modeled and stacked to generate species richness maps for biological high conservation value areas (HCVAs). Forest habitat change was delineated using dual-polarised L-band ALOS-PALSAR mosaic data at 25 m spatial resolution, taken at two acquisition years, 2007 and 2009, to provide information on forest cover and habitat change on the island between 2007 and 2009.
Livelihood guilds were also determined using data gathered from 171 household interviews, from which demographic and livelihood variables were extracted (i.e., age, gender, number of household members, educational attainment, years of residency, distance from forest edge, main occupation, alternative sources of food and resources during scarcity months, and sources of these alternative resources). Using Principal Component Analysis (PCA) and the Kruskal-Wallis test, the diversity and patterns of forest resource use by people on the island were determined, with particular focus on the economic activities that directly and indirectly affect the populations of key species, as well as on identifying levels of forest resource use in different areas of the park. Results showed that there are gaps in the area occupied by the natural park, as evidenced by the mismatch between the proposed HCVAs and the existing perimeters of the park. We found that subsistence forest gathering was the likely main driver of forest degradation among the eight livelihood guilds identified in the park. Determining the high conservation areas and identifying the anthropogenic factors that influence the species richness and abundance of key species in the different management zones of MGGNP provides guidance for the design of a protected area management plan and future monitoring programs. Moreover, through intensive communication and consultation with government stakeholders and local communities, our results led to the setting of conservation targets in local development plans and serve as a basis for the repositioning of the boundaries and reconfiguration of the management zones of MGGNP.
Keywords: conservation gaps, livelihood guilds, MaxEnt, protected area
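The stacking step described above, combining per-species model outputs into a richness surface, can be sketched in a few lines. This is a minimal illustration, assuming thresholded MaxEnt-style suitability grids; the values and the 0.5 cutoff below are invented for the example:

```python
# Illustrative sketch (not the study's code): stacking thresholded
# per-species habitat-suitability surfaces into a richness map.

def richness_map(suitability_stack, threshold=0.5):
    """suitability_stack: list of per-species 2D grids of MaxEnt-style
    suitability scores in [0, 1]. Returns a grid of species counts."""
    rows = len(suitability_stack[0])
    cols = len(suitability_stack[0][0])
    richness = [[0] * cols for _ in range(rows)]
    for species_grid in suitability_stack:
        for r in range(rows):
            for c in range(cols):
                if species_grid[r][c] >= threshold:
                    richness[r][c] += 1
    return richness

# Two hypothetical species over a 2x2 grid.
stack = [
    [[0.9, 0.2], [0.6, 0.1]],   # species 1
    [[0.7, 0.8], [0.3, 0.4]],   # species 2
]
print(richness_map(stack))  # [[2, 1], [1, 0]]
```

Cells with the highest stacked counts across all 23 trigger species would be the candidate high conservation value areas.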
21 Top Skills That Build Cultures at Organizations
Authors: Priyanka Botny Srinath, Alessandro Suglia, Mel McKendrick
Abstract:
Background: Organizational culture studies integrate sociology and anthropology, portraying man as a creator of symbols, languages, beliefs, and ideologies, essentially a creator and manager of meaning. In our research, we leverage analytical measures to discern whether an organization embodies a singular culture or a myriad of subcultures. Fast-forward to 2023: our research thesis focuses on digitally measuring culture, coining the term "Work Culture Quotient." This entails conceptually mapping common experiential patterns to provide executives with insights into the digital organization journey, aiding them in understanding their current position and identifying future steps. Objectives: to find the new-age skills that help define the culture; to understand the implications of post-COVID effects; to derive a digital framework for measuring skillsets. Method: We conducted two comprehensive Delphi studies to distill essential insights. Delphi 1: Through a thematic analysis of interviews with 20 high-level leaders representing companies across diverse regions (India, Japan, the US, Canada, Morocco, and Uganda), we identified 20 key skills critical for cultivating a robust organizational culture: influence, self-confidence, optimism, empathy, leadership, collaboration and cooperation, developing others, commitment, innovativeness, leveraging diversity, change management, team capabilities, self-control, digital communication, emotional awareness, team bonding, communication, problem solving, adaptability, and trustworthiness. Delphi 2: Subject matter experts were asked to complete a questionnaire derived from the thematic analysis in stage 1 to formalise themes and draw consensus amongst experts on the most important workplace skills. Results: The thematic analysis identified 20 workplace employee skills, all of which were included in the Delphi round 2 questionnaire.
From the outputs, we analysed the data using RStudio to arrive at agreement and consensus; we also used the sum-of-squares method to compare agreement levels and extract themes, with a threshold of 80% agreement. This yielded three themes at over 80% agreement (leadership, collaboration and cooperation, communication) and three further themes at over 60% agreement (commitment, empathy, trustworthiness). From this, we selected five questionnaires to be included in the primary data collection phase; these will be paired with digital footprints to provide a workplace culture quotient. Implications: The findings from these studies bear profound implications for decision-makers, revolutionizing their comprehension of organizational culture. Tackling the challenge of mapping the digital organization journey involves innovative methodologies that probe not only external landscapes but also internal cultural dynamics. This holistic approach furnishes decision-makers with a nuanced understanding of their organizational culture and makes visible the pivotal skills for employee growth. This clarity enables informed choices that resonate with the organization's unique cultural fabric. The anticipated outcomes transcend mere individual cultural measurements, aligning with organizational goals to unveil a comprehensive view of culture, exposing its artifacts and depth. Armed with this understanding, decision-makers gain tangible evidence for informed decision-making, strategically leveraging cultural strengths to cultivate an environment conducive to growth, innovation, and enduring success, ultimately leading to measurable outcomes.
Keywords: leadership, cooperation, collaboration, teamwork, work culture
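The consensus screening in Delphi round 2 reduces to computing, per skill, the proportion of experts who rate it essential and applying the 80% (and 60%) thresholds. A minimal sketch with invented expert votes, not the study's data:

```python
# Hypothetical Delphi consensus screening: each expert casts a 0/1 vote
# on whether a skill is essential; skills are retained at a threshold.

def agreement_levels(ratings):
    """ratings: dict mapping skill -> list of 0/1 expert votes.
    Returns dict mapping skill -> proportion of experts agreeing."""
    return {skill: sum(votes) / len(votes) for skill, votes in ratings.items()}

votes = {  # invented votes from 10 experts
    "leadership":    [1] * 9 + [0],
    "communication": [1] * 8 + [0] * 2,
    "empathy":       [1] * 7 + [0] * 3,
}
levels = agreement_levels(votes)
strong = [s for s, p in levels.items() if p >= 0.8]   # 80% consensus
print(sorted(strong))  # ['communication', 'leadership']
```

The 60% band would be captured the same way with a second threshold, yielding the secondary theme group.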
20 A Regional Comparison of Hunter and Harvest Trends of Sika Deer (Cervus n. nippon) and Wild Boar (Sus s. leucomystax) in Japan from 1990 to 2013
Authors: Arthur Müller
Abstract:
The study treats the human dimensions of hunting by conducting statistical data analysis and providing decision-making support through examples of good prefectural governance and successful wildlife management, which are crucial to reducing pest species and sustaining a stable hunter population in the future. It therefore analyzes the recent revision of wildlife legislation and reveals differences in administrative management structures, as well as socio-demographic characteristics of hunters, in correlation with harvest trends of sika deer and wild boar in 47 prefectures in Japan between 1990 and 2013. In a wider context, Japan's decentralized license hunting system might take on the future role of a regional pioneer in East Asia; consequently, the study contributes to similar issues in the premature hunting systems of South Korea and Taiwan. Firstly, a quantitative comparison of seven mainland regions was conducted for Hokkaido, Tohoku, Kanto, Chubu, Kinki, Chugoku, and Kyushu, with example prefectures chosen by cluster analysis. Shifts, differences, mean values and exponential growth rates between trap and gun hunters, age classes and common occupation types of hunters were statistically examined. While western Japan is characterized by high numbers of aged trap-hunters occupied in agriculture and forestry, the north-eastern prefectures show higher relative numbers of younger gun-hunters occupied as production and process workers. With the exception of Okinawa Island, most hunters in all prefectures are 60 years or older; hence, unemployed and retired hunters are the fastest growing occupation group. Despite a drastic decrease in the hunter population in absolute numbers, the Hunting Recruitment Index indicated that all age classes tended to continue their hunting activity over a longer period (above ten years) from 2004 to 2013 than during the former decade.
Associated with the rapid population increase and distribution of sika deer and wild boar since 1978, harvest numbers from hunting and culling have also been rapidly increasing. Wild boar hunting and culling are particularly high in western Japan, while sika hunting and culling prove most successful in Hokkaido, central and western Japan. Since the Wildlife Protection and Proper Hunting Act of 1999, distinct prefectural hunting management authorities with different powers have applied management approaches under the principle of subsidiarity and the guidelines of the Ministry of Environment. Additionally, the Act on Special Measures for Prevention of Damage Related to Agriculture, Forestry, and Fisheries Caused by Wildlife of 2008 supports local hunters in damage prevention measures through subsidies by the Ministry of Agriculture and Forestry, which caused a rise in trap hunting, especially in western Japan. Secondly, prefectural staff in charge of wildlife management in the seven regions were contacted. In summary, Hokkaido serves as a role model for dynamic, integrative, adaptive "feedback" management of Ezo sika deer, as well as for a diverse network between management organizations, while Hyogo takes active measures to trap-hunt wild boars effectively. Both prefectures take the lead in institutional performance and capacity. Northern prefectures in the Tohoku, Chubu and Kanto regions, confronted for the first time with the emergence of wild boars and rising sika deer numbers, require new institution and capacity building, as well as organizational learning.
Keywords: hunting and culling harvest trends, hunter socio-demographics, regional comparison, wildlife management approach
19 Assessing the Utility of Unmanned Aerial Vehicle-Borne Hyperspectral Image and Photogrammetry Derived 3D Data for Wetland Species Distribution Quick Mapping
Authors: Qiaosi Li, Frankie Kwan Kit Wong, Tung Fung
Abstract:
A lightweight unmanned aerial vehicle (UAV) loaded with novel sensors offers a low-cost approach to data acquisition in complex environments. This study established a framework for applying a UAV system to quick mapping in a complex environment and assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for classifying 13 species in a wetland area of the Mai Po Inner Deep Bay Ramsar Site, Hong Kong. The study area was part of a shallow bay with flat terrain; the major species included reedbed and four mangroves: Kandelia obovata, Aegiceras corniculatum, Acrostichum aureum and Acanthus ilicifolius. Other species included various graminaceous plants, arbor, shrub and the invasive species Mikania micrantha. In particular, the invasive species climbed up to the mangrove canopy, causing damage and morphological change that might increase the difficulty of distinguishing species. Hyperspectral images were acquired by a Headwall Nano sensor with a spectral range from 400 nm to 1000 nm at 0.06 m spatial resolution. A sequence of multi-view RGB images was captured at 0.02 m spatial resolution with 75% overlap. The hyperspectral imagery was corrected for radiometric and geometric distortion, while the high-resolution RGB images were matched to generate maximally dense point clouds. From these dense point clouds, a 5 cm grid digital surface model (DSM) was derived. Multiple feature reduction methods were compared to identify the most efficient method and to explore the significant spectral bands for distinguishing different species. The examined methods included stepwise discriminant analysis (DA), support vector machine (SVM) and minimum noise fraction (MNF) transformation.
Subsequently, spectral subsets composed of the 20 most important bands extracted by SVM, DA and MNF, and multi-source subsets adding the DSM to the 20 spectral bands, served as input to a maximum likelihood classifier (MLC) and an SVM classifier to compare classification results. The results showed that the feature reduction methods, from best to worst, were MNF transformation, DA and SVM; the MNF transformation accuracy was even higher than the all-bands result. Selected bands frequently lay along the green peak, red edge and near infrared. Additionally, DA found that the chlorophyll-absorption red band and the yellow band were also important for species classification. In terms of 3D data, the DSM enhanced the discriminant capacity among low plants, arbor and mangrove. Meanwhile, the DSM largely reduced misclassification due to shadow effects and inter-species morphological variation. With respect to classifiers, the nonparametric SVM outperformed MLC for the high-dimensional and multi-source data in this study. The SVM classifier tended to produce higher overall accuracy and fewer scattered patches, although it cost more time than MLC. The best result was obtained by combining MNF components and the DSM in the SVM classifier. This study offers a precise species distribution survey solution for inaccessible wetland areas at low cost in time and labour. In addition, the findings on the positive effect of the DSM, as well as the spectral feature identification, indicate that UAV-borne hyperspectral imagery and photogrammetry-derived 3D data are promising for further research on wetland species, such as bio-parameter modelling and biological invasion monitoring.
Keywords: digital surface model (DSM), feature reduction, hyperspectral, photogrammetric point cloud, species mapping, unmanned aerial vehicle (UAV)
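The classifier comparison above ultimately rests on overall accuracy, the fraction of validation pixels assigned the correct species label. A minimal sketch with hypothetical labels, not the study's data:

```python
# Illustrative comparison of two classifiers by overall accuracy,
# as done above for MLC vs. SVM. All labels below are invented.

def overall_accuracy(true_labels, predicted_labels):
    """Fraction of samples whose predicted label matches the truth."""
    correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return correct / len(true_labels)

truth    = ["reedbed", "reedbed", "mangrove", "mangrove", "shrub"]
svm_pred = ["reedbed", "reedbed", "mangrove", "mangrove", "mangrove"]
mlc_pred = ["reedbed", "shrub",   "mangrove", "shrub",    "mangrove"]

print(overall_accuracy(truth, svm_pred))  # 0.8
print(overall_accuracy(truth, mlc_pred))  # 0.4
```

In practice a full confusion matrix (per-class producer's and user's accuracies) would accompany this single summary figure.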
18 Utilization of Developed Simple Sequence Repeat Markers for Dalmatian Pyrethrum (Tanacetum cinerariifolium) in a Preliminary Genetic Diversity Study on Natural Populations
Authors: F. Varga, Z. Liber, J. Jakše, A. Turudić, Z. Šatović, I. Radosavljević, N. Jeran, M. Grdiša
Abstract:
Dalmatian pyrethrum (Tanacetum cinerariifolium (Trevir.) Sch. Bip.; Asteraceae), a source of the commercially dominant plant insecticide pyrethrin, is a species endemic to the eastern Adriatic. The genetic diversity of T. cinerariifolium was previously studied using amplified fragment length polymorphism (AFLP) markers. However, microsatellite markers (simple sequence repeats, SSRs) are more informative because they are codominant, highly polymorphic, locus-specific, and more reproducible, and are thus most often used to assess the genetic diversity of plant species. Dalmatian pyrethrum is an outcrossing diploid (2n = 18) whose large genome size and highly repetitive content have prevented the success of the traditional approach to SSR marker development. The advent of next-generation sequencing, combined with a specifically developed method, recently enabled the development of, to the authors' best knowledge, the first set of SSRs for genomic characterization of Dalmatian pyrethrum, which is essential from the perspective of plant genetic resources conservation. To evaluate the effectiveness of the developed SSR markers in the genetic differentiation of Dalmatian pyrethrum populations, a preliminary genetic diversity study was conducted on 30 individuals from three geographically distinct natural populations in Croatia (the northern Adriatic island of Mali Lošinj, the southern Adriatic island of Čiovo, and Mount Biokovo) based on 12 SSR loci. Analysis of molecular variance (AMOVA), using a randomization test with 10,000 permutations, was performed in Arlequin 3.5. The average number of alleles per locus, observed and expected heterozygosity, the probability of deviation from Hardy-Weinberg equilibrium, and the inbreeding coefficient were calculated using GENEPOP 4.4. Genetic distance based on the proportion of shared alleles (DPSA) was calculated using MICROSAT. Cluster analysis using the neighbor-joining method with 1,000 bootstraps was performed with PHYLIP to generate a dendrogram.
The results of the AMOVA showed that the total SSR diversity was 23% within and 77% between the three populations. A slight deviation from Hardy-Weinberg equilibrium was observed in the Mali Lošinj population. Allelic richness ranged from 2.92 to 3.92, with the highest number of private alleles observed in the Mali Lošinj population (17). The average observed DPSA between the 30 individuals was 0.557. The highest DPSA (0.875) was observed between several pairs of individuals from the Mali Lošinj and Mt. Biokovo populations, and the lowest between two individuals from the Čiovo population. Neighbor-joining trees based on DPSA grouped individuals into clusters according to their population affiliation. The separation of the Mt. Biokovo clade was supported (bootstrap value 58%), which is consistent with the previous study based on AFLP markers, in which isolated populations from Mt. Biokovo differed from the rest of the populations. The developed SSR markers are an effective tool for assessing the genetic diversity and structure of natural Dalmatian pyrethrum populations. These preliminary results are encouraging for a future comprehensive study with a larger sample size across the species' range. Combined with biochemical data, these highly informative markers could help identify potential genotypes of interest for the future development of breeding lines and cultivars that are both resistant to environmental stress and high in pyrethrins. Acknowledgment: This work has been supported by the Croatian Science Foundation under the project 'Genetic background of Dalmatian pyrethrum (Tanacetum cinerariifolium /Trevir./ Sch. Bip.) insecticidal potential' (PyrDiv) (IP-06-2016-9034) and by project KK.01.1.1.01.0005, Biodiversity and Molecular Plant Breeding, at the Centre of Excellence for Biodiversity and Molecular Plant Breeding (CoE CroP-BioDiv), Zagreb, Croatia.
Keywords: Asteraceae, genetic diversity, genomic SSRs, NGS, pyrethrum, Tanacetum cinerariifolium
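The DPSA statistic used above is simple to compute from codominant SSR genotypes: for each locus, count the alleles two diploid individuals share, then subtract the average shared proportion from one. A sketch with invented genotypes, not the study's data:

```python
# Illustrative proportion-of-shared-alleles distance (DPSA) for diploid
# SSR genotypes: 1 minus shared alleles over 2 alleles per locus.

def dpsa(ind_a, ind_b):
    """ind_a, ind_b: lists of (allele1, allele2) tuples, one per locus."""
    shared = 0
    for (a1, a2), (b1, b2) in zip(ind_a, ind_b):
        genotype_b = [b1, b2]
        for allele in (a1, a2):
            if allele in genotype_b:
                genotype_b.remove(allele)  # each copy matched at most once
                shared += 1
    return 1 - shared / (2 * len(ind_a))

# Two hypothetical individuals genotyped at 3 SSR loci
# (allele sizes in base pairs, invented for the example).
ind1 = [(120, 124), (200, 200), (98, 102)]
ind2 = [(120, 126), (200, 204), (98, 102)]
print(round(dpsa(ind1, ind2), 3))  # shares 1 + 1 + 2 = 4 of 6 alleles -> 0.333
```

Identical genotypes give a distance of 0 and genotypes sharing no alleles give 1, matching the 0 to 0.875 range reported above.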
17 Ultra-Rapid and Efficient Immunomagnetic Separation of Listeria Monocytogenes from Complex Samples in High-Gradient Magnetic Field Using Disposable Magnetic Microfluidic Device
Authors: L. Malic, X. Zhang, D. Brassard, L. Clime, J. Daoud, C. Luebbert, V. Barrere, A. Boutin, S. Bidawid, N. Corneau, J. Farber, T. Veres
Abstract:
The incidence of infections caused by foodborne pathogens such as Listeria monocytogenes (L. monocytogenes) poses a great potential threat to public health and safety. These issues are further exacerbated by the legal repercussions of the "zero tolerance" food safety standards adopted in developed countries. Unfortunately, a large number of related disease outbreaks are caused by pathogens present in extremely low counts that are undetectable by currently available techniques. The development of highly sensitive and rapid detection of foodborne pathogens is therefore crucial, and requires robust and efficient pre-analytical sample preparation. Immunomagnetic separation is a popular approach to sample preparation, and microfluidic chips combined with external magnets have emerged as viable high-throughput methods. However, external magnets alone are not suitable for the capture of nanoparticles, as very strong magnetic fields are required. Devices that combine an externally applied magnetic field with microstructures of a soft magnetic material have thus been used for local field amplification. Unfortunately, the very complex and costly fabrication processes used to integrate soft magnetic materials in the reported proof-of-concept devices would prohibit their use as disposable tools for food and water safety or diagnostic applications. We present a sample preparation magnetic microfluidic device implemented in low-cost thermoplastic polymers using fabrication techniques suitable for mass production. The developed magnetic capture chip (M-chip) was employed for the rapid capture and release of L. monocytogenes conjugated to immunomagnetic nanoparticles (IMNs) in buffer and beef filtrate. The M-chip relies on a dense array of nickel-coated high-aspect-ratio pillars for capture with controlled magnetic field distribution, and on a microfluidic channel network for sample delivery, waste, wash and recovery.
The developed nickel-coating and passivation process allows the generation of switchable local perturbations within the uniform magnetic field generated by a pair of permanent magnets placed at opposite edges of the chip. This leads to a strong and reversible trapping force, wherein high local magnetic field gradients allow efficient capture of IMNs conjugated to L. monocytogenes flowing through the microfluidic chamber. The experimental optimization of the M-chip was performed using commercially available magnetic microparticles and fabricated silica-coated iron-oxide nanoparticles. The fabricated nanoparticles were optimized to achieve the desired magnetic moment, and surface functionalization was tailored to allow efficient capture antibody immobilization. The integration, validation, and further optimization of the capture and release protocol are demonstrated using both dead and live L. monocytogenes through fluorescence microscopy and the plate-culture method. The capture efficiency of the chip was found to vary as a function of the Listeria-to-nanoparticle concentration ratio. A maximum capture efficiency of 30% was obtained, and the 24-hour plate-culture method allowed detection of an initial sample concentration of only 16 cfu/ml. The device was also very efficient in concentrating the sample from a 10 ml initial volume. Specifically, 280% concentration efficiency was achieved in only 17 minutes, demonstrating the suitability of the system for food safety applications. In addition, the flexible design and low-cost fabrication process will allow rapid sample preparation for applications beyond food and water safety, including point-of-care diagnosis. Keywords: array of pillars, bacteria isolation, immunomagnetic sample preparation, polymer microfluidic device
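The reported efficiency figures follow from simple ratios. A sketch of that arithmetic; the elution volume below is an assumed value chosen only so the numbers reproduce the reported ~30% capture and ~280% concentration, and is not stated in the abstract:

```python
def capture_efficiency(captured_cfu, input_cfu):
    """Fraction of input cells retained on the chip, as a percentage."""
    return 100.0 * captured_cfu / input_cfu

def concentration_efficiency(recovered_conc, initial_conc):
    """Ratio of recovered to initial cell concentration, as a percentage.
    Values above 100% mean the chip concentrated the sample."""
    return 100.0 * recovered_conc / initial_conc

# Hypothetical numbers consistent with the reported figures:
# 10 ml input at 16 cfu/ml, 30% capture, eluted into a small volume.
input_cfu = 16.0 * 10.0          # 160 cfu loaded
captured = 0.30 * input_cfu      # 48 cfu retained
elution_volume_ml = 1.07         # assumed recovery volume (illustrative)
recovered_conc = captured / elution_volume_ml
print(capture_efficiency(captured, input_cfu))         # 30.0
print(concentration_efficiency(recovered_conc, 16.0))  # ~280
```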
Procedia PDF Downloads 281
16 Fabrication of Highly Stable Low-Density Self-Assembled Monolayers by Thiol-yne Click Reaction
Authors: Leila Safazadeh, Brad Berron
Abstract:
Self-assembled monolayers have tremendous impact in interfacial science, due to the unique opportunity they offer to tailor surface properties. Low-density self-assembled monolayers are an emerging class of monolayers where the environment-interfacing portion of the adsorbate has a greater level of conformational freedom when compared to traditional monolayer chemistries. This greater range of motion and increased spacing between surface-bound molecules offers new opportunities in tailoring adsorption phenomena in sensing systems. In particular, we expect low-density surfaces to offer a unique opportunity to intercalate surface-bound ligands into the secondary structure of proteins and other macromolecules. Additionally, as many conventional sensing surfaces are built upon gold substrates (SPR or QCM), these surfaces must be compatible with gold. Here, we present the first stable method of generating low-density self-assembled monolayer surfaces on gold for the analysis of their interactions with protein targets. Our approach is based on the 2:1 addition of thiol-yne chemistry to develop new classes of Y-shaped adsorbates on gold, where the environment-interfacing group is spaced laterally from neighboring chemical groups. This technique involves an initial deposition of a crystalline monolayer of 1,10-decanedithiol on the gold substrate, followed by grafting of a loosely packed monolayer through a photoinitiated thiol-yne reaction in the presence of light. The orthogonality of the thiol-yne chemistry (commonly referred to as a click chemistry) allows for the preparation of low-density monolayers with a variety of functional groups. To date, carboxyl-, amine-, alcohol-, and alkyl-terminated monolayers have been prepared using this core technology. Results from surface characterization techniques such as FTIR, contact angle goniometry, and electrochemical impedance spectroscopy confirm the proposed low chain-chain interactions of the environment-interfacing groups.
Reductive desorption measurements suggest a higher stability for the click-LDMs compared to traditional SAMs, along with an equivalent packing density at the substrate interface, which confirms the proposed stability of the monolayer-gold interface. In addition, contact angle measurements change in the presence of an applied potential, supporting our description of a surface structure which allows the alkyl chains to freely orient themselves in response to different environments. We are studying the differences in protein adsorption phenomena between well-packed and our loosely packed surfaces, and we expect this data will be ready to present at the GRC meeting. This work aims to contribute to biotechnology in the following manner: molecularly imprinted polymers are a promising recognition mode with several advantages over natural antibodies in the recognition of small molecules. However, because of their bulk polymer structure, they are poorly suited for the rapid diffusion desired for recognition of proteins and other macromolecules. Molecularly imprinted monolayers are an emerging class of materials where the surface is imprinted, and there is not a bulk material to impede mass transfer. Further, the short distance between the binding site and the signal transduction material improves many modes of detection. My dissertation project is to develop a new chemistry for protein-imprinted self-assembled monolayers on gold, for incorporation into SPR sensors. Our unique contribution is the spatial imprinting of not only physical cues (seen in current imprinted monolayer techniques) but also complementary chemical cues. This is accomplished through a photo-click grafting of preassembled ligands around a protein template.
This conference is important for my development as a graduate student, broadening my appreciation of sensor development beyond surface chemistry. Keywords: low-density self-assembled monolayers, thiol-yne click reaction, molecular imprinting
Procedia PDF Downloads 226
15 Critical Factors for Successful Adoption of Land Value Capture Mechanisms – An Exploratory Study Applied to Indian Metro Rail Context
Authors: Anjula Negi, Sanjay Gupta
Abstract:
The paradigms studied reveal inadequate financial resources, whether to finance metro rail construction, meet operational costs, or derive profits in the long term. Funding sustainability remains a distant goal for much-needed public transport modes, like urban rail or metro rails, to be successfully operated. India embarks upon a sustainable transport journey and has proposed metro rail systems countrywide. As an emerging economic leader, its fiscal constraints are paramount, and the land value capture (LVC) mechanism provides necessary support and innovation toward development. India’s metro rail policy promotes multiple methods of financing, including private-sector investments and public-private partnerships. The critical question that remains to be addressed is what factors can make such mechanisms work. Globally, urban rail is a revolution noted by many researchers as future mobility. In this study, the researchers deep-dive, by way of literature review and empirical assessment, into factors that can lead to the adoption of LVC mechanisms. It is understood that the adoption of LVC methods is at a nascent stage in India. Research posits numerous challenges faced by metro rail agencies in raising funding and capturing incremental value. Issues pertaining to land-based financing include, inter alia, long-term financing, inter-institutional coordination, economic/market suitability, dedicated metro funds, land ownership issues, a piecemeal approach to real estate development, property development legal frameworks, etc. The question under probe is what parameters can lead to success in the adoption of land value capture (LVC) as a financing mechanism. This research provides insights into key parameters crucial to the adoption of LVC in the context of Indian metro rails. The researchers have studied current forms of LVC mechanisms at various metro rails of the country.
This study is significant as little research is available on the adoption of LVC applicable to the Indian context. Transit agencies, state governments, urban local bodies, policy makers and think tanks, academia, developers, funders, researchers, and multilateral agencies may benefit from this research in advancing LVC mechanisms in practice. The study deems it imperative to explore and understand the key parameters that impact the adoption of LVC. An extensive literature review and ratification by experts working in the metro rail arena were undertaken to arrive at the parameters for the study. Stakeholder consultations were undertaken in the exploratory factor analysis (EFA) process for principal component extraction. Forty-three seasoned and specialized experts, representing various types of stakeholders, responded to a semi-structured questionnaire, rating each parameter for maximum-likelihood extraction. Empirical data were collected on the eighteen chosen parameters, and significant correlations were extracted for output descriptives and inferential statistics. The study findings reveal the principal components as institutional governance framework, spatial planning features, legal frameworks, funding sustainability features, and fiscal policy measures. In particular, funding sustainability features highlight the sub-variables of beneficiaries-pay and the use of multiple revenue options as keys to success in LVC adoption. The researchers recommend incorporating these variables at an early stage in design and project structuring for success in the adoption of LVC. This, in turn, can improve the revenue sustainability of a public transport asset and support informed transport policy decisions. Keywords: exploratory factor analysis, land value capture mechanism, financing metro rails, revenue sustainability, transport policy
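The extraction step described above (expert ratings on eighteen parameters reduced to principal components) can be sketched as a correlation-matrix eigendecomposition. The Likert responses below are random stand-ins, not the study's data, and the Kaiser eigenvalue-greater-than-one rule is one common retention criterion, not necessarily the one the authors used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the survey: 43 experts rating 18 parameters
# on a 5-point Likert scale.
n_experts, n_params = 43, 18
responses = rng.integers(1, 6, size=(n_experts, n_params)).astype(float)

# Standardize, then extract principal components from the correlation matrix.
z = (responses - responses.mean(axis=0)) / responses.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Kaiser criterion: retain components with eigenvalue > 1
retained = int(np.sum(eigvals > 1.0))
explained = float(eigvals[:retained].sum() / eigvals.sum())
print(retained, round(explained, 2))
```

The loadings in the retained columns of `eigvecs` are what would then be inspected (and typically rotated) to name components such as "funding sustainability features".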
Procedia PDF Downloads 81
14 XAI Implemented Prognostic Framework: Condition Monitoring and Alert System Based on RUL and Sensory Data
Authors: Faruk Ozdemir, Roy Kalawsky, Peter Hubbard
Abstract:
Accurate estimation of RUL provides a basis for effective predictive maintenance, reducing unexpected downtime for industrial equipment. However, while models such as the Random Forest have effective predictive capabilities, they are so-called ‘black box’ models whose limited interpretability hinders the critical diagnostic decisions required in industries such as aviation. The purpose of this work is to present a prognostic framework that embeds Explainable Artificial Intelligence (XAI) techniques to provide essential transparency in the decision-making mechanisms of machine learning methods applied to sensor data, with the objective of procuring actionable insights for the aviation industry. Sensor readings were gathered from critical equipment such as turbofan jet engines and landing gear, and the RUL is predicted by a Random Forest model. The workflow involves data gathering, feature engineering, model training, and evaluation, with models independently trained and evaluated on each critical component's dataset. While the predictions are suitable and the performance metrics reasonably good, such complex models obscure the reasoning behind their predictions and may even undermine the confidence of decision-makers and maintenance teams. This is followed in the second phase by global explanations using SHAP and local explanations using LIME to bridge the reliability gap in industrial contexts. These tools analyze model decisions, highlighting feature importance and explaining how each input variable affects the output. This dual approach offers a general comprehension of the overall model behavior and detailed insight into specific predictions. In its third component, the proposed framework incorporates causal analysis in the form of Granger causality tests in order to move beyond correlation toward causation.
This will not only allow the model to predict failures but also present the reasons, from the key sensor features linked to possible failure mechanisms, to the relevant personnel. Establishing causality between sensor behaviors and equipment failures creates much value for maintenance teams through better root cause identification and effective preventive measures, and makes the system more explainable. In yet another stage, several simple surrogate models, including decision trees and linear models, are used to approximate the complex Random Forest model. These simpler models act as interpretable proxies, replicating important aspects of the original model's behavior. When the feature explanations obtained from the surrogate model are cross-validated against the primary model, the derived insights become more reliable and provide an intuitive sense of how the input variables affect the predictions. We then create an iterative explainable feedback loop, where the knowledge learned from the explainability methods feeds back into the training of the models. This drives a cycle of continuous improvement in both model accuracy and interpretability over time. By systematically integrating new findings, the model is expected to adapt to changed conditions and further develop its prognostic capability. These components are then presented to decision-makers through the development of a fully transparent condition monitoring and alert system. The system provides a holistic tool for maintenance operations by leveraging RUL predictions, feature importance scores, persistent sensor threshold values, and autonomous alert mechanisms.
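The Granger causality step can be illustrated without any specialist library: fit the target sensor on its own lags, then on its own lags plus those of a candidate cause, and compare residual sums of squares with an F statistic. A pure-NumPy sketch on synthetic traces; the sensor names, coefficients, and lag order are invented for illustration and are not from this framework:

```python
import numpy as np

def _rss(X, target):
    """Residual sum of squares of a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    r = target - X @ beta
    return float(r @ r)

def granger_f(x, y, lags=2):
    """Minimal Granger-style F statistic: do lags of x reduce the residual
    error when predicting y from its own lags? Larger = stronger evidence."""
    n = len(y)
    target = y[lags:]
    ylags = [y[lags - j: n - j] for j in range(1, lags + 1)]
    xlags = [x[lags - j: n - j] for j in range(1, lags + 1)]
    Xr = np.column_stack([np.ones(n - lags)] + ylags)          # restricted
    Xu = np.column_stack([np.ones(n - lags)] + ylags + xlags)  # unrestricted
    rss_r, rss_u = _rss(Xr, target), _rss(Xu, target)
    dof = (n - lags) - Xu.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / dof)

# Synthetic sensor traces: "vibration" drives "temperature" with a lag,
# so the F statistic should be far larger than in the reverse direction.
rng = np.random.default_rng(1)
vib = rng.normal(size=500)
temp = np.empty(500)
temp[0] = 0.0
for t in range(1, 500):
    temp[t] = 0.5 * temp[t - 1] + 0.8 * vib[t - 1] + 0.1 * rng.normal()
print(granger_f(vib, temp) > granger_f(temp, vib))  # True
```

In practice the F statistic would be compared against the appropriate F distribution quantile; the asymmetry between the two directions is the point of the test.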
Since the system provides explanations for its predictions, along with active alerts, maintenance personnel can make informed decisions regarding the correct interventions to extend the life of critical machinery. Keywords: predictive maintenance, explainable artificial intelligence, prognostics, RUL, machine learning, turbofan engines, C-MAPSS dataset
Procedia PDF Downloads 7
13 Hybrid GNN Based Machine Learning Forecasting Model for Industrial IoT Applications
Authors: Atish Bagchi, Siva Chandrasekaran
Abstract:
Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification, and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by the industries that lead to unplanned downtimes are: the current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while the existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN)-based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and consequential financial losses.
Method: The data stored within a process control system, e.g., Industrial IoT, Data Historian, is generally sampled during data acquisition from the sensor (source) and when persisting in the Data Historian to optimise storage and query performance. The sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model which combines the expressive and extrapolation capability of a GNN, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (by 20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors. Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes.
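The entropy and spectral estimates that augment the GNN can be sketched as per-window features over a sensor trace. The fixed sensor value range, the sampling interval, and the synthetic traces below are illustrative assumptions, not values from the plant data:

```python
import numpy as np

def shannon_entropy(window, bins=16, value_range=(0.0, 10.0)):
    """Histogram-based Shannon entropy (bits) over a fixed sensor range,
    so that windows with a wider value spread score higher."""
    counts, _ = np.histogram(window, bins=bins, range=value_range)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def dominant_frequency(window, dt):
    """Frequency (cycles per unit time) of the largest non-DC component."""
    spectrum = np.abs(np.fft.rfft(window - window.mean()))
    freqs = np.fft.rfftfreq(len(window), d=dt)
    return float(freqs[1:][np.argmax(spectrum[1:])])

# Hypothetical flow-sensor windows sampled every 10 s: a steady reading
# versus erratic behaviour whose mean value barely changes.
rng = np.random.default_rng(0)
steady = 5.0 + 0.05 * rng.normal(size=1024)
erratic = 5.0 + 1.0 * rng.normal(size=1024)
print(shannon_entropy(steady) < shannon_entropy(erratic))  # True

# A clean oscillation with a 64-sample (640 s) period.
osc = 5.0 + np.sin(2 * np.pi * np.arange(1024) / 64.0)
print(round(dominant_frequency(osc, dt=10.0), 5))  # 0.00156 (one cycle per 640 s)
```

Features like these capture spread and periodicity that a raw mean would miss, which is the motivation the abstract gives for augmenting the GNN with them.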
The model can interface with a plant's process control system in real time to perform forecasting and classification tasks, helping asset management engineers operate their machines more efficiently and reduce unplanned downtimes. A series of trials is planned for this model in other manufacturing industries. Keywords: GNN, entropy, anomaly detection, industrial time series, AI, IoT, Industry 4.0, machine learning
Procedia PDF Downloads 150
12 Flood Risk Management in the Semi-Arid Regions of Lebanon - Case Study “Semi Arid Catchments, Ras Baalbeck and Fekha”
Authors: Essam Gooda, Chadi Abdallah, Hamdi Seif, Safaa Baydoun, Rouya Hdeib, Hilal Obeid
Abstract:
Floods are a common natural disaster in the semi-arid regions of Lebanon, resulting in damage to human life and deterioration of the environment. Despite their destructive nature and their immense impact on the socio-economy of the region, flash floods have not received adequate attention from policy and decision makers. This is mainly because of poor understanding of the processes involved and the measures needed to manage the problem. The current understanding of flash floods remains at the level of general concepts; most policy makers have yet to recognize that flash floods are distinctly different from normal riverine floods in terms of causes, propagation, intensity, impacts, predictability, and management. Flash floods are generally not investigated as a separate class of event but are rather reported as part of the overall seasonal flood situation. As a result, Lebanon generally lacks policies, strategies, and plans relating specifically to flash floods. The main objective of this research is to improve flash flood prediction by providing new knowledge and a better understanding of the hydrological processes governing flash floods in the east catchments of the El Assi River. This includes developing rainstorm time distribution curves that are unique to this type of study region, and analyzing, investigating, and developing a relationship between arid watershed characteristics (including urbanization) and flood flow frequency in the nearby villages of Ras Baalbeck and Fekha. This paper discusses different levels of integration approaches between GIS and hydrological models (HEC-HMS & HEC-RAS) and presents a case study in which all the tasks of creating model input, editing data, running the model, and displaying output results are performed. The study area corresponds to the east basin (Ras Baalbeck & Fakeha), comprising nearly 350 km2 and situated in the Bekaa Valley of Lebanon.
The case study presented in this paper uses a database derived from Lebanese Army topographic maps of the region. ArcMap was used to digitize the contour lines, streams, and other features from the topographic maps, from which the digital elevation model (DEM) grid for the study area was derived. The next steps in this research are to incorporate rainfall time series data from the Arseal, Fekha, and Deir El Ahmar stations to build a hydrologic data model within a GIS environment, and to combine ArcGIS/ArcMap, HEC-HMS, and HEC-RAS models in order to produce a spatial-temporal model for floodplain analysis at a regional scale. In this study, HEC-HMS and the SCS method were chosen to build the hydrologic model of the watershed. The model was then calibrated using the flood event that occurred between the 7th and 9th of May 2014, which is considered exceptionally extreme because of the length of time the flows lasted (15 hours) and the fact that it covered both the watershed of Aarsal and that of Ras Baalbeck; the strongest previously reported flood lasted for only 7 hours and covered only one watershed. The calibrated hydrologic model is then used to build the hydraulic model and assess flood hazard maps for the region. The HEC-RAS model is used for this purpose, and field trips were made to the catchments in order to calibrate both the hydrologic and hydraulic models. The presented models constitute a flexible procedure for an ungauged watershed. For some storm events they deliver good results, while for others, no parameter vectors can be found. In order to have a general methodology based on these ideas, further calibration and reconciliation of results across many flood event parameters and catchment properties are required. Keywords: flood risk management, flash flood, semi-arid region, El Assi River, hazard maps
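The SCS method named above computes direct runoff from precipitation depth and a curve number. A sketch of that standard relation; the curve number and rainfall depth in the example are hypothetical, not calibrated values from the Ras Baalbeck model:

```python
def scs_runoff_mm(precip_mm, curve_number):
    """SCS Curve Number direct runoff (mm):
    S = 25400/CN - 254 (potential retention), Ia = 0.2*S (initial abstraction),
    Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0."""
    s = 25400.0 / curve_number - 254.0
    ia = 0.2 * s
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

# Hypothetical semi-arid storm: 50 mm of rain on a watershed with CN = 85
print(round(scs_runoff_mm(50.0, 85.0), 1))  # ~19.6 mm of direct runoff
```

Within HEC-HMS, calibration effectively adjusts parameters such as the curve number until modelled hydrographs match observed events like the May 2014 flood.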
Procedia PDF Downloads 478
11 A Spatial Repetitive Controller Applied to an Aeroelastic Model for Wind Turbines
Authors: Riccardo Fratini, Riccardo Santini, Jacopo Serafini, Massimo Gennaretti, Stefano Panzieri
Abstract:
This paper presents a nonlinear differential model for a three-bladed horizontal axis wind turbine (HAWT) suited for control applications. It is based on an 8-DOF lumped-parameter structural dynamics model coupled with quasi-steady sectional aerodynamics. In particular, using the Euler-Lagrange equation (energetic variation approach), the authors derive, and subsequently validate, this model. For the derivation of the aerodynamic model, Greenberg's theory, an extension of the theory proposed by Theodorsen to the case of thin airfoils undergoing pulsating flows, is used. Specifically, in this work, the authors restricted that theory under the hypothesis of low perturbation reduced frequency k, which causes the lift deficiency function C(k) to be real and equal to 1. Furthermore, the expressions of the aerodynamic loads are obtained using the quasi-steady strip theory (Hodges and Ormiston), as a function of the chordwise and normal components of relative velocity between flow and airfoil, Ut and Up, their derivatives, and the section angular velocity ε˙. For the validation of the proposed model, the authors carried out open- and closed-loop simulations of a 5 MW HAWT, characterized by a radius R = 61.5 m and a mean chord c = 3 m, with a nominal angular velocity Ωn = 1.266 rad/s. The first analysis performed is the steady-state solution, where a uniform wind Vw = 11.4 m/s is considered and a collective pitch angle θ = 0.88◦ is imposed. During this step, the authors noticed that the proposed model is intrinsically periodic due to the effect of the wind and of the gravitational force. In order to reject this periodic trend in the model dynamics, the authors propose a collective repetitive control algorithm coupled with a PD controller.
In particular, when the reference command to be tracked and/or the disturbance to be rejected are periodic signals with a fixed period, repetitive control strategies can be applied due to their high precision, simple implementation, and low performance dependency on system parameters. The functional scheme of a repetitive controller is quite simple: given a periodic reference command, it is composed of a control block Crc(s), usually added to an existing feedback control system, which contains a free time-delay system e^(−τs) in a positive feedback loop and a low-pass filter q(s). It should be noticed that, while the time-delay term reduces the stability margin, the low-pass filter is added to ensure stability. It is worth noting that, in this work, the authors propose a phase shifting for the controller, and the delay system has been modified as e^(−(T−γk)s), where T is the period of the signal and γk is a phase shift of k samples of the same periodic signal. The phase-shifting technique is particularly useful in non-minimum-phase systems, such as flexible structures, since with phase shifting the iterative algorithm can reach convergence also at high frequencies. Notice that, in our case study, the shifting of k samples depends both on the rotor angular velocity Ω and on the rotor azimuth angle Ψ: we refer to this controller as a spatial repetitive controller. The collective repetitive controller has also been coupled with a C(s) = PD(s) controller in order to dampen oscillations of the blades. The performance of the spatial repetitive controller is compared with an industrial PI controller. In particular, starting from a wind speed Vw = 11.4 m/s, the controller is asked to maintain the nominal angular velocity Ωn = 1.266 rad/s after an instantaneous increase of wind speed (Vw = 15 m/s).
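The period-to-period learning at the heart of repetitive control can be demonstrated on a deliberately simplified toy: a static-gain plant with an unknown periodic disturbance, which isolates the learning law from plant dynamics. All gains, the period, and the signals below are invented for illustration; the real controller also needs the low-pass filter q(s) and, for non-minimum-phase plants, the phase shift γk described in the abstract:

```python
import numpy as np

def repetitive_control_demo(periods=12, N=36, krc=0.6, gain=0.5):
    """Toy plug-in repetitive controller. To isolate the learning law, the
    'plant' is a static gain with an unknown disturbance of period N:
        y[k] = gain * u[k] + d[k mod N]
    Each period, the stored input is corrected by last period's error at
    the same phase, so the error contracts by (1 - krc) per period."""
    phase = 2 * np.pi * np.arange(N) / N
    ref = np.sin(phase)               # periodic reference to track
    dist = 0.4 * np.cos(3 * phase)    # periodic disturbance to reject
    u = np.zeros(N)                   # one period of stored control
    period_err = []
    for _ in range(periods):
        e = ref - (gain * u + dist)   # tracking error over one period
        u = u + (krc / gain) * e      # repetitive (same-phase) update
        period_err.append(float(np.abs(e).mean()))
    return period_err

errs = repetitive_control_demo()
print(errs[-1] < 1e-2 * errs[0])  # True: error decays geometrically
```

A phase-shifted variant would index the stored error at `(k + shift) % N` rather than `k`, the discrete analogue of replacing the delay e^(−Ts) with e^(−(T−γk)s).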
Then, a purely periodic external disturbance is introduced in order to stress the capabilities of the repetitive controller. The results of the simulations show that, contrary to a simple PI controller, the spatial repetitive-PD controller has the capability to reject both external disturbances and the periodic trend in the model dynamics. Finally, the nominal value of the angular velocity is reached, in accordance with results obtained with commercial software for a turbine of the same type. Keywords: wind turbines, aeroelasticity, repetitive control, periodic systems
Procedia PDF Downloads 250
10 Unleashing Potential in Pedagogical Innovation for STEM Education: Applying Knowledge Transfer Technology to Guide a Co-Creation Learning Mechanism for the Lingering Effects Amid COVID-19
Authors: Lan Cheng, Harry Qin, Yang Wang
Abstract:
Background: COVID-19 has induced the largest digital learning experiment in history. There is also emerging research evidence that students have paid a high cost of learning loss from virtual learning. University-wide survey results demonstrate that digital learning remains difficult for students who struggle with learning challenges, isolation, or a lack of resources. Large-scale efforts are therefore increasingly utilized for digital education. To better prepare students in higher education for this grand scientific and technological transformation, STEM education has been prioritized and promoted as a strategic imperative in the ongoing curriculum reform essential for unfinished learning needs and whole-person development. Building upon five key elements identified in the STEM education literature (problem-based learning, community and belonging, technology skills, personalization of learning, and connection to the external community), this case study explores the potential of pedagogical innovation that integrates computational and experimental methodologies to support, enrich, and navigate STEM education. Objectives: The goal of this case study is to create a high-fidelity prototype design for STEM education with knowledge transfer technology that contains a Cooperative Multi-Agent System (CMAS), with the objectives of (1) conducting an assessment to reveal the virtual learning mechanism and establish strategies to facilitate scientific learning engagement, accessibility, and connection within and beyond the university setting, (2) exploring and validating an interactional co-creation approach embedded in project-based learning activities in the STEM learning context, which is being transformed by both digital technology and student behavior change, and (3) formulating and implementing a STEM-oriented campaign to guide learning network mapping, mitigate learning loss, enhance the learning experience, and scale up inclusive participation.
Methods: This study applied a case study strategy and a methodology informed by social network analysis theory within a cross-disciplinary communication paradigm (students, peers, educators). Knowledge transfer technology is introduced to address learning challenges and to increase the efficiency of Reinforcement Learning (RL) algorithms. A co-creation learning framework was identified and investigated in a context-specific way with a learning analytics tool designed in this study. Findings: The results show that (1) CMAS-empowered learning support reduced students' confusion, difficulties, and gaps during problem-solving scenarios while increasing learner capacity and empowerment, (2) the co-creation learning phenomenon, examined through the lens of the campaign, reveals that an interactive virtual learning environment enables students to navigate scientific challenges independently and collaboratively, and (3) the deliverables of the STEM educational campaign provide a methodological framework both within the context of curriculum design and in external community engagement applications. Conclusion: This study brings a holistic and coherent pedagogy that cultivates students' interest in STEM and develops a knowledge base for integrating and applying knowledge across different STEM disciplines. Through co-designed, cross-disciplinary educational content and campaign promotion, the findings suggest factors that empower evidence-based learning practice while also piloting and tracking the impact of the scholastic value of co-creation in a dynamic learning environment. The data nested in the knowledge transfer technology situate learners' scientific journeys and could pave the way for theoretical advancement and broader scientific endeavors within larger datasets, projects, and communities. Keywords: co-creation, cross-disciplinary, knowledge transfer, STEM education, social network analysis
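The social network analysis lens used for learning network mapping can be made concrete with a small degree-centrality computation. The node labels and ties below are invented for illustration; the study's actual learning analytics tool is not reproduced here:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Degree centrality for an undirected learning network: each node's
    tie count divided by (n - 1), the maximum possible number of ties."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# Hypothetical co-creation ties among students (S), peers (P), educators (E)
edges = [("S1", "P1"), ("S1", "E1"), ("S2", "E1"), ("S3", "E1"), ("S3", "P1")]
c = degree_centrality(edges)
print(max(c, key=c.get))  # "E1": the educator bridges the most learners
```

Centrality scores like these are one way a learning network map can surface isolated students versus well-connected hubs across the student-peer-educator paradigm.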
Procedia PDF Downloads 114
9 Mapping the Neurotoxic Effects of Sub-Toxic Manganese Exposure: Behavioral Outcomes, Imaging Biomarkers, and Dopaminergic System Alterations
Authors: Katie M. Clark, Adriana A. Tienda, Krista C. Paffenroth, Lindsey N. Brigante, Daniel C. Colvin, Jose Maldonado, Erin S. Calipari, Fiona E. Harrison
Abstract:
Manganese (Mn) is an essential trace element required for human health that is important in antioxidant defenses as well as in the development and function of dopaminergic neurons. However, chronic low-level Mn exposure, such as through contaminated drinking water, poses risks that may contribute to neurodevelopmental and neurodegenerative conditions, including attention deficit hyperactivity disorder (ADHD). Pharmacological inhibition of the dopamine transporter (DAT) blocks reuptake, elevates synaptic dopamine, and alleviates ADHD symptoms. This study aimed to determine whether Mn exposure in juvenile mice modifies their response to the DAT blockers amphetamine and methylphenidate, and to utilize neuroimaging methods to visualize and quantify Mn distribution across dopaminergic brain regions. Male and female heterozygous DATᵀ³⁵⁶ᴹ and wild-type littermates were randomly assigned to receive control (2.5% Stevia) or high-manganese (2.5 mg/ml Mn + 2.5% Stevia) water ad libitum from weaning (21-28 days) for 4-5 weeks. Mice underwent repeated testing in locomotor activity chambers for three consecutive days (60 min) to ensure that they were fully habituated to the environments. On the fourth day, a 3-hour activity session was conducted following treatment with amphetamine (3 mg/kg) or methylphenidate (5 mg/kg). The second drug was administered in a second 3-hour activity session following a 1-week washout period. Following the washout, the mice were given one last injection of amphetamine and euthanized one hour later. Using the ex-vivo brains, magnetic resonance relaxometry (MRR) was performed on a 7-Tesla imaging system to map T1- and T2-weighted (T1W, T2W) relaxation times. Mn's inherent paramagnetic properties shorten both T1W and T2W times, which enhances signal intensity and contrast, enabling effective visualization of Mn accumulation across the entire brain. A subset of mice was treated with amphetamine 1 hour before euthanasia.
SmartSPIM light sheet microscopy of cleared whole brains with c-Fos and tyrosine hydroxylase (TH) labeling enabled unbiased automated counting and densitometric analysis of TH- and c-Fos-positive cells. Immunohistochemistry was conducted to measure synaptic protein markers and quantify changes in neurotransmitter regulation. Mn exposure elevated Mn brain levels and potentiated stimulant effects in males. The globus pallidus, substantia nigra, thalamus, and striatum exhibited more pronounced T1W shortening, indicating regional susceptibility to Mn accumulation (p<0.0001, 2-way ANOVA). In the cleared whole brains, initial analyses suggest that TH and c-Fos co-staining mirrors the behavioral data, with decreased co-staining in heterozygous DATᵀ³⁵⁶ᴹ mice. Ongoing studies will identify the molecular basis of the effect of Mn, including changes to DAergic metabolism and transport and post-translational modification of the DAT. These findings demonstrate that alterations in T1W relaxation times, as measured by MRR, may serve as an early biomarker for Mn neurotoxicity. This neuroimaging approach exhibits remarkable accuracy in identifying Mn-susceptible brain regions, with a spatial resolution and sensitivity that surpass current conventional dissection and mass spectrometry approaches. The capability to label and map TH and c-Fos expression across the entire brain provides insights into whole-brain neuronal activation and its connections to functional neural circuits and behavior following amphetamine and methylphenidate administration.
Keywords: manganese, environmental toxicology, dopamine dysfunction, biomarkers, drinking water, light sheet microscopy, magnetic resonance relaxometry (MRR)
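The T1W contrast mechanism described above lends itself to a simple numeric sketch. Assuming a basic saturation-recovery signal model (the T1 values and repetition time below are illustrative, not the study's acquisition parameters), shortening T1 increases the longitudinal magnetization recovered per repetition, which is why Mn-rich regions appear brighter on T1-weighted images:

```python
import math

def t1w_signal(m0: float, tr_ms: float, t1_ms: float) -> float:
    """Saturation-recovery T1-weighted signal: S = M0 * (1 - exp(-TR / T1))."""
    return m0 * (1.0 - math.exp(-tr_ms / t1_ms))

# Hypothetical relaxation times (ms): Mn accumulation shortens T1.
control_t1, mn_t1 = 1500.0, 900.0
tr = 500.0  # repetition time in ms, illustrative only

control_signal = t1w_signal(1.0, tr, control_t1)
mn_signal = t1w_signal(1.0, tr, mn_t1)

# Shorter T1 recovers more magnetization per TR, so the Mn-exposed
# region yields the higher (brighter) T1W signal.
print(mn_signal > control_signal)
```

The same monotonic relationship underlies using regional T1W shortening as a proxy for Mn accumulation.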
Design and Construction of a Solar Dehydration System as a Technological Strategy for Food Sustainability in Difficult-to-Access Territories
Authors: Erika T. Fajardo-Ariza, Luis A. Castillo-Sanabria, Andrea Nieto-Veloza, Carlos M. Zuluaga-Domínguez
Abstract:
The growing emphasis on sustainable food production and preservation has driven the development of innovative solutions to minimize postharvest losses and improve market access for small-scale farmers. This project focuses on designing, constructing, and selecting materials for solar dryers in certain regions of Colombia where inadequate infrastructure limits access to major commercial hubs. Postharvest losses pose a significant challenge, impacting food security and farmer income. Addressing these losses is crucial for enhancing the value of agricultural products and supporting local economies. A comprehensive survey of local farmers revealed substantial challenges, including limited market access, inefficient transportation, and significant postharvest losses. For crops such as coffee, bananas, and citrus fruits, losses range from 0% to 50%, driven by factors like labor shortages, adverse climatic conditions, and transportation difficulties. To address these issues, the project prioritized selecting effective materials for the solar dryer. Four candidate materials (recovered acrylic, original acrylic, glass, and polystyrene) were tested for their performance. The tests showed that recovered acrylic and glass were most effective in increasing the temperature difference between the interior and the external environment. The solar dryer was designed using Fusion 360® software (Autodesk, USA) and adhered to architectural guidelines from Architectural Graphic Standards. It features up to sixteen aluminum trays, each with a maximum load capacity of 3.5 kg, arranged in two levels to optimize drying efficiency. The constructed dryer was then tested with two locally available plant materials: green plantains (Musa paradisiaca L.) and snack bananas (Musa AA Simonds). To monitor performance, thermo-hygrometers and an Arduino system recorded internal and external temperature and humidity at one-minute intervals.
Despite challenges such as adverse weather conditions and delays in local government funding, the active involvement of local producers was a significant advantage, fostering ownership and understanding of the project. The solar dryer operated under conditions of 31°C dry bulb temperature (Tbs), 55% relative humidity, and 21°C wet bulb temperature (Tbh). The drying curves showed a consistent drying period with critical moisture content observed between 200 and 300 minutes, followed by a sharp decrease in moisture loss, reaching an equilibrium point after 3,400 minutes. Although the solar dryer requires more time and is highly dependent on atmospheric conditions, it can approach the efficiency of an electric dryer when properly optimized. The successful design and construction of solar dryer systems in difficult-to-access areas represent a significant advancement in agricultural sustainability and postharvest loss reduction. By choosing effective materials such as recovered acrylic and implementing a carefully planned design, the project provides a valuable tool for local farmers. The initiative not only improves the quality and marketability of agricultural products but also offers broader environmental benefits, such as reduced reliance on fossil fuels and decreased waste. Additionally, it supports economic growth by enhancing the value of crops and potentially increasing farmer income. The successful implementation and testing of the dryer, combined with the engagement of local stakeholders, highlight its potential for replication and positive impact in similar contexts.
Keywords: drying technology, postharvest loss reduction, solar dryers, sustainable agriculture
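The drying behavior reported above (a consistent drying period followed by a sharp decrease in moisture loss toward equilibrium) is commonly described with thin-layer models. A minimal sketch using the Lewis (Newton) model; the drying constant and moisture values below are purely hypothetical, not the fitted values from these trials:

```python
import math

def moisture_ratio(t_min: float, k: float) -> float:
    """Lewis (Newton) thin-layer drying model: MR = exp(-k * t)."""
    return math.exp(-k * t_min)

def moisture_content(t_min: float, m0: float, me: float, k: float) -> float:
    """Moisture content M(t) = Me + (M0 - Me) * MR(t), dry basis."""
    return me + (m0 - me) * moisture_ratio(t_min, k)

# Hypothetical parameters for a plantain slice (illustrative only):
# initial and equilibrium moisture (kg water / kg dry matter), rate in 1/min.
m0, me, k = 1.5, 0.08, 0.0015

for t in (0, 300, 1000, 3400):
    print(f"t = {t:4d} min  M = {moisture_content(t, m0, me, k):.3f}")
```

With these parameters the curve decays steeply early on and is essentially at the equilibrium moisture by 3,400 minutes, mirroring the shape of the reported drying curves.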
Blue Economy and Marine Mining
Authors: Fani Sakellariadou
Abstract:
The Blue Economy includes all marine-based and marine-related activities. They correspond to established, emerging, and as-yet-unborn ocean-based industries. Seabed mining is an emerging marine-based activity; its operations depend particularly on cutting-edge science and technology. The 21st century will face a crisis in resources as a consequence of the world’s population growth and the rising standard of living. The natural capital stored in the global ocean is decisive for its capacity to provide a wide range of sustainable ecosystem services. Seabed mineral deposits were identified as having a high potential for critical elements and base metals, which have a crucial role in the fast evolution of green technologies. The major categories of marine mineral deposits are deep-sea deposits, including cobalt-rich ferromanganese crusts, polymetallic nodules, phosphorites, and deep-sea muds, as well as shallow-water deposits, including marine placers. Seabed mining operations may take place within continental shelf areas of nation-states. In international waters, the International Seabed Authority (ISA) has entered into 15-year contracts for deep-seabed exploration with 21 contractors. These contracts are for polymetallic nodules (18 contracts), polymetallic sulfides (7 contracts), and cobalt-rich ferromanganese crusts (5 contracts). Exploration areas are located in the Clarion-Clipperton Zone, the Indian Ocean, the Mid Atlantic Ridge, the South Atlantic Ocean, and the Pacific Ocean. Potential environmental impacts of deep-sea mining include habitat alteration, sediment disturbance, plume discharge, release of toxic compounds, light and noise generation, and air emissions.
They could cause burial and smothering of benthic species, health problems for marine species, biodiversity loss, reduced photosynthetic activity, behavioral change and masking of acoustic communication in mammals and fish, bioaccumulation of heavy metals up the food web, decreased dissolved oxygen content, and climate change. An important concern related to deep-sea mining is our knowledge gap regarding deep-sea bio-communities. The ecological consequences that will be caused in the remote, unique, fragile, and little-understood deep-sea ecosystems and their inhabitants are still largely unknown. The blue economy conceptualizes oceans as developing spaces supplying socio-economic benefits for current and future generations while also protecting, supporting, and restoring biodiversity and ecological productivity. In that sense, people should apply holistic management and assess the impacts of marine mining on ecosystem services, including the categories of provisioning, regulating, supporting, and cultural services. The variety in environmental parameters, the range in sea depth, the diversity in the characteristics of marine species, and the possible proximity to other existing maritime industries mean that the impacts of marine mining on the ability of ecosystems to support people and nature vary widely. In conclusion, the use of the untapped potential of the global ocean demands a responsible and sustainable attitude. Moreover, there is a need to change our lifestyle and move beyond the philosophy of single-use. Living in a throw-away society based on a linear approach to resource consumption, humans are putting too much pressure on the natural environment. By applying modern, sustainable, and eco-friendly approaches according to the principles of the circular economy, substantial natural resource savings can be achieved. Acknowledgement: This work is part of the MAREE project, financially supported by the Division VI of IUPAC.
This work has been partly supported by the University of Piraeus Research Center.
Keywords: blue economy, deep-sea mining, ecosystem services, environmental impacts
Open Science Philosophy, Research and Innovation
Authors: C. Ardil
Abstract:
Open Science translates the understanding and application of various theories and practices in open science philosophy, systems, paradigms and epistemology. Open Science originates with the premise that universal scientific knowledge is a product of a collective scholarly and social collaboration involving all stakeholders and knowledge belongs to the global society. Scientific outputs generated by public research are a public good that should be available to all at no cost and without barriers or restrictions. Open Science has the potential to increase the quality, impact and benefits of science and to accelerate advancement of knowledge by making it more reliable, more efficient and accurate, better understandable by society and responsive to societal challenges, and has the potential to enable growth and innovation through reuse of scientific results by all stakeholders at all levels of society, and ultimately contribute to growth and competitiveness of global society. Open Science is a global movement to improve accessibility to and reusability of research practices and outputs. In its broadest definition, it encompasses open access to publications, open research data and methods, open source, open educational resources, open evaluation, and citizen science. The implementation of open science provides an excellent opportunity to renegotiate the social roles and responsibilities of publicly funded research and to rethink the science system as a whole. Open Science is the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods. 
Open Science represents a novel systematic approach to the scientific process, shifting from the standard practices of publishing research results in scientific publications towards sharing and using all available knowledge at an earlier stage in the research process, based on cooperative work and diffusing scholarly knowledge with no barriers and restrictions. Open Science refers to efforts to make the primary outputs of publicly funded research results (publications and the research data) publicly accessible in digital format with no limitations. Open Science is about extending the principles of openness to the whole research cycle, fostering sharing and collaboration as early as possible, thus entailing a systemic change to the way science and research are done. Open Science is the ongoing transition in how open research is carried out, disseminated, deployed, and transformed to make scholarly research more open, global, collaborative, creative, and closer to society. Open Science involves various movements aiming to remove the barriers to sharing any kind of output, resources, methods, or tools at any stage of the research process. Open Science embraces open access to publications, research data, source software, collaboration, peer review, notebooks, educational resources, monographs, citizen science, and research crowdfunding. The recognition and adoption of open science practices, including open science policies that increase open access to scientific literature and encourage data and code sharing, is increasing. Such open science policies are motivated by ethical, moral, or utilitarian arguments, such as the right to access digital research literature, science data accumulation, research indicators, transparency in academic practice, and reproducibility. Open science philosophy is adopted primarily to demonstrate the benefits of open science practices.
Researchers use open science applications to their own advantage in order to attract more offers, citations, media attention, potential collaborators, career opportunities, donations, and funding opportunities. In open science philosophy, open data findings are evidence that open science practices provide significant benefits to researchers in scientific research creation, collaboration, communication, and evaluation compared with more traditional closed science practices. Open science also raises concerns, such as the rigor of peer review, practical research matters such as financing and career development, and the sacrifice of author rights. Therefore, researchers are recommended to implement open science research within the framework of existing academic evaluation and incentives. As a result, open science research issues are addressed in the areas of publishing, financing, collaboration, resource management and sharing, career development, and the discussion of open science questions and conclusions.
Keywords: Open Science, Open Science Philosophy, Open Science Research, Open Science Data
Reassembling a Fragmented Border Landscape at Crossroads: Indigenous Rights, Rural Sustainability, Regional Integration and Post-Colonial Justice in Hong Kong
Authors: Chiu-Yin Leung
Abstract:
This research investigates a complex assemblage among indigenous identities, socio-political organization, and national apparatus in the border landscape of post-colonial Hong Kong. This former British colony designated a transient mode of governance in its New Territories, and particularly the northernmost borderland, in 1951-2012. With a discriminatory system of land provisions for the indigenous villagers, the place has inherited a distinctive village-based culture, historic monuments, and agrarian practices that persisted until the return of sovereignty to the People’s Republic of China. In its latest development imperatives under national strategic planning, the frontier area of Hong Kong has been identified as a strategic site for regional economic integration in South China, with cross-border projects of innovation and technology zones, mega-transport infrastructure, and inter-jurisdictional arrangements. Contemporary literature theorizes borders as the material and discursive production of territoriality, which manifests in state apparatus and the daily lives of citizens and condenses in the contested articulations of power, security, and citizenship. Drawing on the concept of assemblage, this paper attempts to trace how the border regime and infrastructure in Hong Kong are deeply ingrained in the everyday lived spaces of the local communities as well as in changing urban and regional strategies across different longitudinal moments. Through intensive ethnographic fieldwork among the borderland villages since 2008 and extensive analysis of colonial archives, new development plans, and spatial planning frameworks, the author navigates the genealogy of the border landscape in the Ta Kwu Ling frontier area and its implications as the milieu for new state space, covering heterogeneous fields particularly in indigenous rights, heritage preservation, rural sustainability, and regional economy.
Empirical evidence suggests an apparent bias towards indigenous power and colonial representation in classifying landscape values and conserving historical monuments. Squatter and farm tenants are often deprived of property rights, statutory participation, and livelihood options in the planning process. The postcolonial bureaucracies have great difficulties in mobilizing resources to catch up with the swift, political-first approach of their mainland counterparts. Meanwhile, the cultural heritage, lineage network, and memory landscape are not protected altogether with any holistic view or collaborative effort across the border. The enactment of land resumption and compensation schemes is furthermore disturbed by lineage-based customary law, technocratic bureaucracy, intra-community conflicts, and multi-scalar political mobilization. As many traces of colonial misfortune and tyranny have been whitewashed without proper management, the author argues that postcolonial justice is yet to be reconciled in this fragmented border landscape. The assemblage of the border in mainstream representation has tended to oversimplify local struggles as a collective myth and set up a wider production of schizophrenic experiences in the discussion of further economic integration between Hong Kong and other mainland cities in the Pearl River Delta Region. The research is expected to shed new light on the theorizing of border regions and postcolonialism beyond Eurocentric perspectives. In reassembling the borderland experiences with other arrays in state governance, village organization, and indigenous identities, the author also suggests an alternative epistemology for reconciling socio-spatial differences and opening up imaginaries for positive interventions.
Keywords: heritage conservation, indigenous communities, post-colonial borderland, regional development, rural sustainability
Regenerative Agriculture Standing at the Intersection of Design, Mycology, and Soil Fertility
Authors: Andrew Gennett
Abstract:
Designing for fungal development means embracing the symbiotic relationship between the living system and the built environment. The potential of mycelium post-colonization is explored for the fabrication of advanced pure mycelium products, going beyond the conventional methods of aggregating materials. Fruiting induction imparts desired material properties such as enhanced environmental resistance. This production approach allows for the simultaneous generation of multiple products while scaling up the raw materials supply suitable for architectural applications. The following work explores the integration of fungal environmental perception with the computational design of built fruiting chambers. Polyporales are classified by their porous reproductive tissues, supported by a wood-like context tissue covered by a hard waterproofing coat of hydrophobins. Persisting for years in the wild, these species exhibit material properties that would be highly desirable for moving beyond flat sheets of aerial mycelium, as in leather or bacon applications. Understanding the inherent environmental perception of fungi has become the basis for working with and inducing desired hyphal differentiation. Working within the native signal interpretation of a mycelium mass during fruiting induction provides the means to apply textures and color to the final finishing coat. A delicate interplay between meeting human-centered goals and designing around the natural processes of living systems represents a blend of art and science. Architecturally, physical simulations inform the model design of simple modular fruiting chambers that change as fungal growth progresses, while biological life science principles describe the internal computations occurring within the fungal hyphae. First, a form-filling phase of growth is controlled by the growth chamber environment. Second, an initiation phase of growth forms the final exterior finishing texture.
Hyphal densification induces cellular cascades, in turn producing the classical hardened cuticle, UV-protective molecules, and a waterproofing finish. Upon completion of the fruiting process, the fully colonized spent substrate holds considerable value and is not considered waste. Instead, it becomes a valuable resource in the next cycle of production scale-up. However, the acquisition of new substrate resources poses a critical question, particularly as these resources become increasingly scarce. From the environmental perspective of a regenerative design paradigm, the use of “agricultural waste” for architectural materials would be a continuation of the destructive practices established by the previous industrial regime. These residues from fields and forests serve a vital ecological role, protecting the soil surface against erosion while reducing evaporation and fostering a biologically diverse food web. Instead, urban centers have been identified as abundant sources of new substrate material. Diverting waste from secondary locations such as food processing centers, paper mills, and recycling facilities not only reduces landfill burden but also leverages the latent value of these waste streams as precious resources for mycelium cultivation. In conclusion, working with living systems through innovative built environments for fungal development provides the needed gain of function and resilience of mycelium products. The next generation of sustainable fungal products will go beyond the current binding process, with a focus on reducing the landfill burden of urban centers. In final consideration, biophilic material production builds toward an ecologically regenerative recycling cycle.
Keywords: regenerative agriculture, mycelium fabrication, growth chamber design, sustainable resource acquisition, fungal morphogenesis, soil fertility
A Comprehensive Study of Spread Models of Wildland Fires
Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran
Abstract:
These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions that affect ecosystems, communities, and the environment in several ways. Wildfires lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with their direct effects on forestry and agriculture, causes significant financial difficulties for the areas impacted. This research explores different forest fire spread models and presents a comprehensive review of various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation that is used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors like weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research explores the approaches, assumptions, and findings derived from various models. By using a comparison approach, a critical analysis is provided by identifying patterns, strengths, and weaknesses among these models. The purpose of the survey is to further wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights that are provided by synthesizing established information. 
Fire spread models provide insights into potential fire behavior, enabling authorities to make informed decisions about evacuation activities, allocating resources for fire-fighting efforts, and planning preventive actions. Wildfire spread models are also useful in post-wildfire mitigation strategies, as they help in assessing a fire's severity, determining high-risk regions for post-fire dangers, and forecasting soil erosion trends. The analysis highlights the importance of customized modeling approaches for various circumstances and promotes our understanding of the way forest fires spread. Some of the known models in this field are Rothermel’s wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, the cellular automata model, and others. The key characteristics that these models consider include weather (factors such as wind speed and direction), topography (factors like landscape elevation), and fuel availability (factors like types of vegetation), among others. The models discussed are physics-based, data-driven, or hybrid models, with some also utilizing ML techniques like attention-based neural networks to enhance model performance. In order to lessen the destructive effects of forest fires, this initiative aims to promote the development of more precise prediction tools and effective management techniques. The survey expands its scope to address the practical needs of numerous stakeholders. Access to enhanced early warning systems enables decision-makers to take prompt action. Emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.
Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling
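Among the model families surveyed, the cellular automata approach is the simplest to sketch. The toy grid below deliberately ignores weather, topography, and fuel heterogeneity (each burning cell deterministically ignites its four fuel neighbors and then burns out); real models weight the ignition probability by exactly those factors:

```python
FUEL, BURNING, BURNED = 0, 1, 2

def step(grid):
    """One cellular-automaton update: every burning cell ignites its
    4-neighboring fuel cells, then burns out. Deterministic spread;
    real models make ignition probabilistic in wind, slope, and fuel."""
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == BURNING:
                nxt[r][c] = BURNED
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == FUEL:
                        nxt[rr][cc] = BURNING
    return nxt

# Ignite the center of a 5x5 fuel bed and run until nothing is burning.
grid = [[FUEL] * 5 for _ in range(5)]
grid[2][2] = BURNING
steps = 0
while any(BURNING in row for row in grid):
    grid = step(grid)
    steps += 1
print(steps, sum(row.count(BURNED) for row in grid))  # → 5 25
```

The fire front expands one Manhattan ring per step, so the corners (distance 4 from the ignition point) burn out on the fifth step and the entire bed is consumed.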
Enhancing Disaster Resilience: Advanced Natural Hazard Assessment and Monitoring
Authors: Mariza Kaskara, Stella Girtsou, Maria Prodromou, Alexia Tsouni, Christodoulos Mettas, Stavroula Alatza, Kyriaki Fotiou, Marios Tzouvaras, Charalampos Kontoes, Diofantos Hadjimitsis
Abstract:
Natural hazard assessment and monitoring are crucial in managing the risks associated with fires, floods, and geohazards, particularly in regions prone to these natural disasters, such as Greece and Cyprus. Recent advancements in technology, developed by the BEYOND Center of Excellence of the National Observatory of Athens, have been successfully applied in Greece and are now set to be transferred to Cyprus. The implementation of these advanced technologies in Greece has significantly improved the country's ability to respond to these natural hazards. For wildfire risk assessment, a scalar wildfire occurrence risk index is created based on the predictions of machine learning models. Predicting fire danger is crucial for the sustainable management of forest fires, as it provides essential information for designing effective prevention measures and facilitating response planning for potential fire incidents. A reliable forecast of fire danger is a key component of integrated forest fire management and is heavily influenced by various factors that affect fire ignition and spread. The fire risk model is validated using the sensitivity and specificity metrics. For flood risk assessment, a multi-faceted approach is employed, including the application of remote sensing techniques, the collection and processing of data from the most recent population and building census, technical studies and field visits, as well as hydrological and hydraulic simulations. All input data are used to create precise flood hazard maps according to various flooding scenarios, along with detailed flood vulnerability and flood exposure maps, which together produce the flood risk map. Critical points are identified, and mitigation measures are proposed for the worst-case scenario; namely, refuge areas are defined and escape routes are designed. Flood risk maps can assist in raising awareness and save lives.
Validation is carried out through historical flood events using remote sensing data and records from the civil protection authorities. For geohazards monitoring (e.g., landslides, subsidence), Synthetic Aperture Radar (SAR) and optical satellite imagery are combined with geomorphological and meteorological data and other landslide/ground deformation contributing factors. To monitor critical infrastructures, including dams, advanced InSAR methodologies are used to identify surface movements through time. Monitoring these hazards provides valuable information for understanding processes and could lead to early warning systems to protect people and infrastructure. Validation is carried out through both geotechnical expert evaluations and visual inspections. The success of these systems in Greece has paved the way for their transfer to Cyprus to enhance Cyprus's capabilities in natural hazard assessment and monitoring. This transfer is being made through capacity-building activities, fostering continuous collaboration between Greek and Cypriot experts. Apart from the knowledge transfer, small demonstration actions are implemented to showcase the effectiveness of these technologies in real-world scenarios. In conclusion, the transfer of advanced natural hazard assessment technologies from Greece to Cyprus represents a significant step forward in enhancing the region's resilience to disasters. The EXCELSIOR project funds knowledge exchange, demonstration actions, and capacity-building activities and is committed to empowering Cyprus with the tools and expertise to effectively manage and mitigate the risks associated with these natural hazards. Acknowledgement: The authors acknowledge the 'EXCELSIOR': ERATOSTHENES: Excellence Research Centre for Earth Surveillance and Space-Based Monitoring of the Environment H2020 Widespread Teaming project.
Keywords: earth observation, monitoring, natural hazards, remote sensing
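The sensitivity and specificity used to validate the fire risk model can be computed directly from binary outcomes. A minimal sketch with hypothetical labels (not the project's data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN): fraction of actual fire days flagged.
    Specificity = TN/(TN+FP): fraction of no-fire days correctly cleared."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical validation labels: 1 = fire occurred, 0 = no fire.
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(actual, predicted)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

Reporting both metrics matters for rare events like fire occurrence, where a trivial "never fire" predictor scores perfect specificity but zero sensitivity.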
Recent Developments in E-waste Management in India
Authors: Rajkumar Ghosh, Bhabani Prasad Mukhopadhay, Ananya Mukhopadhyay, Harendra Nath Bhattacharya
Abstract:
This study investigates the global issue of electronic waste (e-waste), focusing on its prevalence in India and other regions. E-waste has emerged as a significant worldwide problem, with India contributing a substantial share of annual e-waste generation. The primary sources of e-waste in India are computer equipment and mobile phones. Many developed nations utilize India as a dumping ground for their e-waste, with major contributions from the United States, China, Europe, Taiwan, South Korea, and Japan. The study identifies Maharashtra, Tamil Nadu, Mumbai, and Delhi as prominent contributors to India's e-waste crisis. This issue is contextualized within the broader framework of the United Nations' 2030 Agenda for Sustainable Development, which encompasses 17 Sustainable Development Goals (SDGs) and 169 associated targets to address poverty, environmental preservation, and universal prosperity. The study underscores the interconnectedness of e-waste management with several SDGs, including health, clean water, economic growth, sustainable cities, responsible consumption, and ocean conservation. Central Pollution Control Board (CPCB) data reveals that e-waste generation surpasses that of plastic waste, increasing annually at a rate of 31%. However, only 20% of electronic waste is recycled through organized and regulated methods in underdeveloped nations. In Europe, efficient e-waste management stands at just 35%. E-waste pollution poses serious threats to soil, groundwater, and public health due to toxic components such as mercury, lead, bromine, and arsenic. Long-term exposure to these toxins, notably arsenic in microchips, has been linked to severe health issues, including cancer, neurological damage, and skin disorders. Lead exposure, particularly concerning for children, can result in brain damage, kidney problems, and blood disorders. 
The study highlights the problematic transboundary movement of e-waste, with approximately 352,474 metric tonnes of electronic waste illegally shipped from Europe to developing nations annually, mainly to Africa, including Nigeria, Ghana, and Tanzania. Effective e-waste management, underpinned by appropriate infrastructure, regulations, and policies, offers opportunities for job creation and aligns with the objectives of the 2030 Agenda for SDGs, especially in the realms of decent work, economic growth, and responsible production and consumption. E-waste comprises both hazardous pollutants and valuable secondary resources, making it a focal point for anthropogenic resource exploitation. The United Nations estimates that e-waste holds potential secondary raw materials worth around 55 billion Euros. The study also identifies numerous challenges in e-waste management, encompassing the sheer volume of e-waste, child labor, inadequate legislation, insufficient infrastructure, health concerns, lack of incentive schemes, limited awareness, e-waste imports, high costs associated with establishing recycling plants, and more. To mitigate these issues, the study offers several solutions, such as providing tax incentives for scrap dealers, implementing reward and reprimand systems for e-waste management compliance, offering training on e-waste handling, promoting responsible e-waste disposal, advancing recycling technologies, regulating e-waste imports, and ensuring the safe disposal of domestic e-waste. One proposed mechanism, Buy-Back programs, would compensate customers in cash when they deposit unwanted digital products, including portable electronic devices such as cell phones, computers, and tablets.
Addressing the e-waste predicament necessitates a multi-faceted approach involving government regulations, industry initiatives, public awareness campaigns, and international cooperation to minimize environmental and health repercussions while harnessing the economic potential of recycling and responsible management.
Keywords: e-waste management, sustainable development goal, e-waste disposal, recycling technology, buy-back policy
Procedia PDF Downloads 86