Search results for: traditional techniques
83 India's Geothermal Energy Landscape and Role of Geophysical Methods in Unravelling Untapped Reserves
Authors: Satya Narayan
Abstract:
India, a rapidly growing economy with a burgeoning population, grapples with the dual challenge of meeting rising energy demands and reducing its carbon footprint. Geothermal energy, an often overlooked and underutilized renewable source, holds immense potential for addressing this challenge. Geothermal resources offer a valuable, consistent, and sustainable energy source and may significantly contribute to India's energy mix. This paper discusses the importance of geothermal exploration in India, emphasizing its role in achieving sustainable energy production while mitigating environmental impacts. It also delves into the methodology employed to assess geothermal resource feasibility, including geophysical surveys and borehole drilling. The results and discussion sections highlight promising geothermal sites across India, illuminating the nation's vast geothermal potential. Geophysical surveying detects potential geothermal reservoirs, characterizes subsurface structures, maps temperature gradients, monitors fluid flow, and estimates key reservoir parameters. Globally, geothermal energy falls into high- and low-enthalpy categories, with India mainly having low-enthalpy resources, especially in hot springs. The northwestern Himalayan region boasts high-temperature geothermal resources due to geological factors. Promising sites, like Puga Valley, Chhumthang, and others, feature hot springs suitable for various applications. The Son-Narmada-Tapti lineament intersects regions rich in geological history, contributing to geothermal resources. Southern India, including the Godavari Valley, has thermal springs suitable for power generation. The Andaman-Nicobar region, linked to subduction and volcanic activity, holds high-temperature geothermal potential. Geophysical surveys, utilizing gravity, magnetic, seismic, magnetotelluric, and electrical resistivity techniques, offer vital information on subsurface conditions essential for detecting, evaluating, and exploiting geothermal resources.
Gravity and magnetic methods map the depth of the high-temperature mantle boundary and accurately determine the Curie depth. Electrical methods indicate the presence of subsurface fluids. Seismic surveys create detailed subsurface images, revealing faults and fractures and establishing possible connections to aquifers. Borehole drilling is crucial for assessing geothermal parameters at different depths. Detailed geochemical analysis and geophysical surveys in Dholera, Gujarat, reveal untapped geothermal potential in India, aligning with renewable energy goals. In conclusion, geophysical surveys and borehole drilling play a pivotal role in economically viable geothermal site selection and feasibility assessments. With ongoing exploration and innovative technology, these surveys effectively minimize drilling risks, optimize borehole placement, aid in environmental impact evaluations, and facilitate remote resource exploration. Their cost-effectiveness informs decisions regarding geothermal resource location and extent, ultimately promoting sustainable energy and reducing India's reliance on conventional fossil fuels.
Keywords: geothermal resources, geophysical methods, exploration, exploitation
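As a simple illustration of the kind of first-pass estimate such surveys support (not a calculation from the paper itself), the Curie depth can be roughly projected from a measured geothermal gradient, since it is the depth at which rock reaches the Curie point of magnetite (about 580 °C, a standard literature figure). The surface temperature and gradient below are assumed example values.

```python
# Illustrative sketch, not from the paper: rough Curie-depth estimate
# from a linear geothermal gradient. The 580 C Curie point of magnetite
# is a standard figure; all other numbers are assumed examples.

def curie_depth_km(surface_temp_c, gradient_c_per_km, curie_temp_c=580.0):
    """Depth (km) at which a linear gradient reaches the Curie temperature."""
    return (curie_temp_c - surface_temp_c) / gradient_c_per_km

# A steep gradient of 50 C/km is purely an assumed illustration of a
# high-enthalpy setting; normal continental crust is nearer 25-30 C/km.
depth = curie_depth_km(surface_temp_c=25.0, gradient_c_per_km=50.0)
print(f"Estimated Curie depth: {depth:.1f} km")  # 11.1 km
```

A shallow estimated Curie depth is one indicator, from magnetic data, of elevated heat flow worth following up with magnetotelluric or seismic work.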
Procedia PDF Downloads 87
82 Design and Construction of a Home-Based, Patient-Led, Therapeutic, Post-Stroke Recovery System Using Iterative Learning Control
Authors: Marco Frieslaar, Bing Chu, Eric Rogers
Abstract:
Stroke is a devastating illness that is the second biggest cause of death in the world (after heart disease). Where it does not kill, it leaves survivors with debilitating sensory and physical impairments that not only seriously harm their quality of life, but also cause a high incidence of severe depression. It is widely accepted that early intervention is essential for recovery, but current rehabilitation techniques largely favor hospital-based therapies, which offer restricted access, require expensive specialist equipment, and tend to side-step the emotional challenges. In addition, there is insufficient funding available to provide the long-term assistance that is required. As a consequence, recovery rates are poor. The relatively unexplored solution is to develop therapies that can be harnessed in the home and are formulated from technologies that already exist in everyday life. This would empower individuals to take control of their own improvement and provide choice in terms of when and where they feel best able to undertake their own healing. This research seeks to identify how effective post-stroke rehabilitation therapy can be applied to upper limb mobility within the physical context of a home rather than a hospital. This is being achieved through the design and construction of an automation scheme, based on iterative learning control and the Riener muscle model, that adapts to the user, reacts to their level of fatigue, and provides tangible physical recovery. It utilizes a SMART phone and laptop to construct an iterative learning control (ILC) system that monitors upper arm movement in three dimensions as a series of exercises are undertaken. The equipment generates functional electrical stimulation to assist in muscle activation and thus improve directional accuracy.
In addition, it monitors speed, accuracy, areas of motion weakness, and similar parameters to create a performance index that can be compared over time and extrapolated to establish an independent and objective assessment scheme, plus an approximate estimation of the predicted final outcome. To further extend its assessment capabilities, the software takes nerve conduction velocity readings between the shoulder and hand muscles. These measure the speed of neuron signal transfer along the arm so that, over time, an online indication of regeneration levels can be obtained. This will show whether or not sufficient training intensity is being achieved even before perceivable movement dexterity is observed. The device also provides the option to connect to other users via the internet, so that the patient can avoid feelings of isolation and can undertake movement exercises together with others in a similar position. This should not only encourage participation in rehabilitation but also create the potential for an emotional support network. It is intended that this approach will extend the availability of stroke recovery options, enable ease of access at a low cost, reduce susceptibility to depression and, through these endeavors, enhance the overall recovery success rate.
Keywords: home-based therapy, iterative learning control, Riener muscle model, SMART phone, stroke rehabilitation
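The core idea of iterative learning control is that the same exercise is repeated many times, so the controller can reuse last trial's tracking error to improve the next trial's stimulation input. A minimal sketch, assuming a toy first-order response model and a simple proportional (P-type) update law rather than the paper's Riener muscle model:

```python
# Illustrative sketch only: a proportional-type iterative learning
# control (ILC) update, u_{k+1}(t) = u_k(t) + L * e_k(t). The plant
# model and gains here are assumed toy values, not the paper's
# Riener muscle model or actual stimulation parameters.

def run_trial(u, a=0.2, b=1.0):
    """Toy first-order plant: y[t+1] = a*y[t] + b*u[t]; returns the outputs."""
    y, out = 0.0, []
    for ut in u:
        y = a * y + b * ut
        out.append(y)
    return out

def ilc_update(u, e, gain=0.9):
    """P-type ILC: next trial's input adds a scaled copy of this trial's error."""
    return [ui + gain * ei for ui, ei in zip(u, e)]

reference = [1.0] * 20           # desired movement profile (toy step)
u = [0.0] * 20                   # stimulation input for trial 0
for trial in range(20):
    y = run_trial(u)
    e = [r - yi for r, yi in zip(reference, y)]
    u = ilc_update(u, e)
# With these gains the trial-to-trial error map is a contraction, so
# tracking improves with every repetition of the exercise.
```

The same repetition-based learning is what lets the system tailor the stimulation to an individual user over successive exercise sessions.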
Procedia PDF Downloads 265
81 Enhancing Photocatalytic Activity of Oxygen Vacancies-Rich Tungsten Trioxide (WO₃) for Sustainable Energy Conversion and Water Purification
Authors: Satam Alotibi, Osama A. Hussein, Aziz H. Al-Shaibani, Nawaf A. Al-Aqeel, Abdellah Kaiba, Fatehia S. Alhakami, Mohammed Alyami, Talal F. Qahtan
Abstract:
The demand for sustainable and efficient energy conversion using solar energy has grown rapidly in recent years. In this pursuit, solar-to-chemical conversion has emerged as a promising approach, with oxygen vacancies-rich tungsten trioxide (WO₃) playing a crucial role. This study presents a method for synthesizing oxygen vacancies-rich WO₃, resulting in a significant enhancement of its photocatalytic activity and marking an important step towards sustainable energy solutions. Experimental results underscore the importance of oxygen vacancies in modifying the properties of WO₃. These vacancies introduce additional energy states within the material, leading to a reduction in the bandgap and increased light absorption, and act as electron traps, thereby suppressing radiative emission. Our focus lies in developing oxygen vacancies-rich WO₃, which demonstrates unparalleled potential for improved photocatalytic applications. The effectiveness of oxygen vacancies-rich WO₃ in solar-to-chemical conversion was showcased through rigorous assessments of its photocatalytic degradation performance. Sunlight irradiation was employed to evaluate the material's effectiveness in degrading organic pollutants in wastewater. The results unequivocally demonstrate the superior photocatalytic performance of oxygen vacancies-rich WO₃ compared to conventional WO₃ nanomaterials, establishing its efficacy in sustainable and efficient energy conversion. Furthermore, the synthesized material is utilized to fabricate films, which are subsequently employed in immobilized WO₃ and oxygen vacancies-rich WO₃ reactors for water purification under natural sunlight irradiation. This application offers a sustainable and efficient solution for water treatment, harnessing solar energy for effective decontamination. In addition to investigating the photocatalytic capabilities, we extensively analyze the structural and chemical properties of the synthesized material.
The synthesis process involves in situ thermal reduction of WO₃ nano-powder in a nitrogen environment, meticulously monitored using thermogravimetric analysis (TGA) to ensure precise control over the synthesis of oxygen vacancies-rich WO₃. Comprehensive characterization techniques such as UV-Vis spectroscopy, X-ray photoelectron spectroscopy (XPS), FTIR, Raman spectroscopy, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and selected area electron diffraction (SAED) provide deep insights into the material's optical properties, chemical composition, elemental states, structure, surface properties, and crystalline structure. This study represents a significant advancement in sustainable energy conversion through solar-to-chemical processes and water purification. By harnessing the unique properties of oxygen vacancies-rich WO₃, we not only enhance our understanding of energy conversion mechanisms but also pave the way for the development of highly efficient and environmentally friendly photocatalytic materials. The application of this material in water purification demonstrates its versatility and potential to address critical environmental challenges. These findings bring us closer to a sustainable energy future and cleaner water resources, laying a solid foundation for a more sustainable planet.
Keywords: sustainable energy conversion, solar-to-chemical conversion, oxygen vacancies-rich tungsten trioxide (WO₃), photocatalytic activity enhancement, water purification
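To make the bandgap-narrowing effect concrete (not a calculation from the study), a reduced bandgap shifts the optical absorption edge to longer wavelengths via the standard photon-energy relation λ(nm) ≈ 1239.84 / E(eV). The bandgap values below are assumed, typical literature figures (around 2.7 eV for stoichiometric WO₃), not measurements from this work.

```python
# Illustrative sketch, not from the study: converting a band gap to an
# optical absorption edge with lambda(nm) = 1239.84 / E(eV). The gap
# values are assumed literature-typical figures; oxygen vacancies are
# taken here to narrow the effective gap.

def absorption_edge_nm(bandgap_ev):
    """Wavelength corresponding to a given band-gap energy."""
    return 1239.84 / bandgap_ev

print(absorption_edge_nm(2.7))   # stoichiometric WO3: edge near 459 nm
print(absorption_edge_nm(2.4))   # vacancy-narrowed gap: edge near 517 nm
```

Pushing the edge deeper into the visible range is what lets the vacancy-rich material harvest more of the solar spectrum under natural sunlight.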
Procedia PDF Downloads 69
80 Geotechnical Challenges for the Use of Sand-sludge Mixtures in Covers for the Rehabilitation of Acid-Generating Mine Sites
Authors: Mamert Mbonimpa, Ousseynou Kanteye, Élysée Tshibangu Ngabu, Rachid Amrou, Abdelkabir Maqsoud, Tikou Belem
Abstract:
The management of mine wastes (waste rocks and tailings) containing sulphide minerals such as pyrite and pyrrhotite represents the main environmental challenge for the mining industry. Indeed, acid mine drainage (AMD) can be generated when these wastes are exposed to water and air. AMD is characterized by low pH and high concentrations of heavy metals, which are toxic to plants, animals, and humans. It affects the quality of the ecosystem through water and soil pollution. Different techniques involving soil materials can be used to control AMD generation, including impermeable covers (compacted clays) and oxygen barriers. The latter group includes covers with capillary barrier effects (CCBE), a multilayered cover that includes a moisture retention layer playing the role of an oxygen barrier. Once AMD is produced at a mine site, it must be treated so that the final effluent complies with regulations and can be discharged into the environment. Active neutralization with lime is one of the treatment methods used. This treatment produces sludge that is usually stored in sedimentation ponds. Other sludge management alternatives have been examined in recent years, including sludge co-disposal with tailings or waste rocks, disposal in underground mine excavations, and storage in technical landfill sites. Considering the ability of AMD neutralization sludge to maintain an alkaline to neutral pH for decades or even centuries, due to the excess alkalinity induced by residual lime within the sludge, valorization of sludge in specific applications could be an interesting management option. If done efficiently, the reuse of sludge could free up storage ponds and thus reduce the environmental impact. It should be noted that mixtures of sludge and soils could potentially constitute usable materials in CCBE for the rehabilitation of acid-generating mine sites, while sludge alone is not suitable for this purpose.
The high sludge water content (up to 300%), even after sedimentation, can, however, constitute a geotechnical challenge. Adding lime to the mixtures can reduce the water content and improve the geotechnical properties. The objective of this paper is to investigate the impact of the sludge content (30%, 40%, and 50%) in sand-sludge mixtures (SSM) on their hydrogeotechnical properties (compaction, shrinkage behaviour, saturated hydraulic conductivity, and water retention curve). The impact of lime addition (dosages from 2% to 6%) on the moisture content, dry density after compaction, and saturated hydraulic conductivity of SSM was also investigated. Results showed that adding sludge to sand significantly improves the saturated hydraulic conductivity and water retention capacity, but shrinkage increases with sludge content. The dry density after compaction of lime-treated SSM increases with the lime dosage but remains lower than the optimal dry density of the untreated mixtures. The saturated hydraulic conductivity of lime-treated SSM after 24 hours of curing decreases by 3 orders of magnitude. Considering the hydrogeotechnical properties obtained with these mixtures, it would be possible to design CCBE whose moisture retention layer is made of SSM. Physical laboratory models confirmed the performance of such CCBE.
Keywords: mine waste, AMD neutralization sludge, sand-sludge mixture, hydrogeotechnical properties, mine site reclamation, CCBE
Procedia PDF Downloads 57
79 Exploring Managerial Approaches towards Green Manufacturing: A Thematic Analysis
Authors: Hakimeh Masoudigavgani
Abstract:
Since manufacturing firms deplete non-renewable resources and pollute air, soil, and water in a greatly unsustainable manner, industrial activities and the production of products are considered key contributors to adverse environmental impacts. Hence, management strategies and approaches that involve an effective supply chain decision process in the manufacturing sector could be extremely significant to the application of environmental initiatives. Green manufacturing (GM) is one of these strategies; it minimises negative effects on the environment through reducing greenhouse gas emissions, waste, and the consumption of energy and natural resources. This paper aims to explore what greening methods and mechanisms could be applied in the manufacturing supply chain and what the outcomes of adopting these methods are in terms of abating environmental burdens. The study is interpretive research with an exploratory approach, using thematic analysis by coding text and breaking down and grouping the content of the collected literature into various themes and categories. It is found that a green supply chain could be attained through the execution of pre-production strategies including green building, eco-design, and green procurement, as well as a number of in-production and post-production strategies involving green manufacturing and green logistics. To achieve effective GM, the pre-production strategies are suggested to be employed. This paper defines GM as (1) the analysis of the ecological impacts generated by practices, products, production processes, and operational functions, and (2) the implementation of greening methods to reduce their damaging influences on the natural environment. Analysis means the assessing, monitoring, and auditing of practices in order to measure and pinpoint their harmful impacts.
Moreover, the greening methods involved within GM (arranged from the lowest to the highest level of environmental compliance) consist of:
• product stewardship (e.g. less use of toxic, non-renewable, and hazardous materials in the manufacture of the product; and stewardship of the environmental problems with regard to the product in all production, use, and end-of-life stages);
• process stewardship (e.g. controlling carbon emissions, energy and resource usage, transportation methods, and disposal; reengineering polluting processes; recycling waste materials generated in production);
• lean and clean production practices (e.g. elimination of waste, materials replacement, materials reduction, resource-efficient consumption, energy-efficient usage, emission reduction, managerial assessment, waste re-use);
• use of eco-industrial parks (e.g. a shared warehouse, a shared logistics management system, an energy co-generation plant, effluent treatment).
However, the focus of this paper is only on methods related to the in-production phase; both pre-production and post-production environmental innovations need further research. The methods outlined in this investigation may be taken into account by policy and decision makers, and the proposed future research directions and identified gaps can be addressed by scholars and researchers. The paper compares and contrasts a variety of viewpoints and enhances the body of knowledge by building a definition for GM through synthesising the literature and categorising the strategic concept of greening methods, drivers, barriers, and successful implementation tactics.
Keywords: green manufacturing (GM), product stewardship, process stewardship, clean production, eco-industrial parks (EIPs)
Procedia PDF Downloads 582
78 Two Houses in the Arabian Desert: Assessing the Built Work of RCR Architects in the UAE
Authors: Igor Peraza Curiel, Suzanne Strum
Abstract:
Today, when many foreign architects are receiving commissions in the United Arab Emirates, it is essential to analyze how their designs are influenced by the region's culture, environment, and building traditions. This study examines the approach to siting, geometry, construction methods, and material choices in two private homes for a family in Dubai, a project being constructed on adjacent sites by the acclaimed Spanish team of RCR Architects. Their third project in Dubai, the houses mark a turning point in the firm's design approach to the desert. The Pritzker Prize-winning architects of RCR gained renown for building works deeply responsive to the history, landscape, and customs of their hometown in a volcanic area of the Catalonia region of Spain. Key formative projects and their entry to practice in the UAE will be analyzed according to the concepts of place identity, the poetics of construction, and material imagination. The poetics of construction, a theoretical position with a long practical tradition, was revived by the British critic Kenneth Frampton. The idea of architecture as a constructional craft is related to the concepts of material imagination and place identity: phenomenological concerns with the creative engagement with local matter and topography that are at the very essence of RCR's way of designing, detailing, and making. Our study situates RCR within the challenges of building in the region, where Western forms and means have largely replaced the ingenious responsiveness of indigenous architecture to the climate and material scarcity. The dwellings, iterations of the same steel and concrete vaulting system, highlight the conceptual framework of RCR's design approach and offer a study in contemporary critical regionalism. The Kama House evokes Bedouin tents, while the Alwah House takes the form of desert dunes in response to the temporality of the winds. Metal mesh screens designed to capture the shifting sands will complete the forms.
The original research draws on interviews with the architects and unique documentation provided by them and collected by the authors during on-site visits. By examining the two houses in depth, this paper foregrounds a series of timely questions: 1) What is the impact of the local climatic, cultural, and material conditions on their project in the UAE? 2) How does this work further their experiences in the region? 3) How has RCR adapted their construction techniques as their work expands beyond familiar settings? The investigation seeks to understand how a design methodology developed for more than 20 years and enmeshed in the regional milieu of their hometown can transform as the architects encounter unique characteristics and values in the Middle East. By focusing on the contemporary interpretation of Arabic geometry and elements, the houses reveal the role of geometry, tectonics, and material specificity in the realization from conceptual sketches to built form. By emphasizing the importance of regional responsiveness, the dynamics of international construction practice, and detailing, this study highlights essential issues for professionals and students looking to practice in an increasingly global market.
Keywords: material imagination, regional responsiveness, place identity, poetics of construction
Procedia PDF Downloads 147
77 Miniaturizing the Volumetric Titration of Free Nitric Acid in U(VI) Solutions: On the Lookout for a More Sustainable Process Radioanalytical Chemistry through Titration-On-A-Chip
Authors: Jose Neri, Fabrice Canto, Alastair Magnaldo, Laurent Guillerme, Vincent Dugas
Abstract:
A miniaturized and automated approach for the volumetric titration of free nitric acid in U(VI) solutions is presented. Free acidity measurement refers to the quantification of acidity in solutions containing hydrolysable heavy metal ions such as U(VI), U(IV), or Pu(IV), without taking into account the acidity contribution from the hydrolysis of such metal ions. It is, in fact, an operation having an essential role in the control of the nuclear fuel recycling process. The main objectives behind the technical optimization of the current 'beaker' method were to reduce the amount of radioactive substance to be handled by the laboratory personnel, to ease the instrumentation adjustability within a glove-box environment, and to allow high-throughput analysis for conducting more cost-effective operations. The measurement technique is based on the concept of Taylor-Aris dispersion in order to create, inside a 200 μm × 5 cm circular cylindrical micro-channel, a linear concentration gradient in less than a second. The proposed analytical methodology relies on actinide complexation using a pH 5.6 sodium oxalate solution and subsequent alkalimetric titration of nitric acid with sodium hydroxide. The titration process is followed with a CCD camera for fluorescence detection; the neutralization boundary can be visualized in a detection range of 500 nm to 600 nm thanks to the addition of a pH-sensitive fluorophore. The operating principle of the developed device allows the active generation of linear concentration gradients using a single cylindrical micro-channel. This feature simplifies the fabrication and ease of use of the micro-device, as it does not need a complex micro-channel network or passive mixers to generate the chemical gradient.
Moreover, since the linear gradient is determined by the input pressure of the liquid reagents, it can be fully generated in well under one second, a more time-efficient process than in other source-sink passive diffusion devices. The resulting linear gradient generator device was therefore adapted to perform, for the first time, a volumetric titration on a chip, where the amount of reagents used is fixed by the total volume of the micro-channel, avoiding the substantial waste generated by other flow-based titration techniques. The associated analytical method is automated, and its linearity has been proven for the free acidity determination of U(VI) samples containing up to 0.5 M of actinide ion and nitric acid in a concentration range of 0.5 M to 3 M. In addition to automation, the developed analytical methodology and technique greatly improve on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousand-fold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. The developed device therefore represents a great step towards an easy-to-handle nuclear-related application, which in the short term could be used to improve laboratory safety as much as to reduce the environmental impact of the radioanalytical chain.
Keywords: free acidity, lab-on-a-chip, linear concentration gradient, Taylor-Aris dispersion, volumetric titration
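The physics behind the sub-second gradient can be sketched with the classical Taylor-Aris result for laminar flow in a circular tube, where the effective axial dispersion coefficient is K = D + a²U²/(48D). A minimal sketch, using the abstract's 200 μm channel diameter but assumed example values for the flow speed and molecular diffusivity (this is not the authors' calculation):

```python
# Illustrative sketch, not the authors' calculation: Taylor-Aris
# effective axial dispersion coefficient for laminar flow in a
# circular tube, K = D + (a^2 * U^2) / (48 * D). The channel radius
# follows from the abstract's 200 um diameter; the flow speed and
# molecular diffusivity are assumed example values.

def taylor_aris_dispersion(radius_m, mean_speed_m_s, diffusivity_m2_s):
    """Effective axial dispersion coefficient in a circular micro-channel."""
    return diffusivity_m2_s + (radius_m ** 2 * mean_speed_m_s ** 2) / (48.0 * diffusivity_m2_s)

a = 100e-6   # channel radius: half of the 200 um diameter
U = 1e-3     # assumed mean flow speed, m/s
D = 1e-9     # assumed small-molecule diffusivity in water, m^2/s
K = taylor_aris_dispersion(a, U, D)
# Shear spreads the solute along the channel far faster than molecular
# diffusion alone (K >> D), which smooths the reagent interface into a
# near-linear axial concentration gradient.
```

The design choice follows from this: because shear-enhanced dispersion scales with U², driving the reagents by pressure gives the chip direct, fast control over how quickly the gradient forms.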
Procedia PDF Downloads 388
76 Invisible to Invaluable - How Social Media is Helping Tackle Stigma and Discrimination Against Informal Waste Pickers of Bengaluru
Authors: Varinder Kaur Gambhir, Neema Gupta, Sonal Tickoo Chaudhuri
Abstract:
Bengaluru, a rapidly growing metropolis in India with a population of 12.5 million citizens, generates 5,757 metric tonnes of solid waste per day. Despite their invaluable contribution to waste management, society, and the economy, waste pickers face significant stigma, suspicion, and contempt, and are left with a sense of shame about their work. In this context, BBC Media Action was funded by the H&M Foundation to develop a three-year, multi-phase social media campaign to shift perceptions of waste picking and informal waste pickers amongst the Bengaluru population. Research has been used to inform project strategy and adaptation at all stages. Formative research to inform the campaign strategy used mixed methods, 14 focus group discussions followed by 406 online surveys, to explore people's knowledge of, and attitudes towards, waste pickers and to identify potential barriers and motivators to changing perceptions. The use of qualitative techniques like metaphor maps (using a bank of pictures rather than direct questions to understand mindsets) helped establish the invisibility of informal waste pickers, and the quantitative research enabled audience segmentation based on attitudes towards informal waste pickers. To pretest the campaign idea, eight I-GDs (individual interactions followed by group discussions) were conducted to allow interviewees to first freely express their feelings individually before discussing in a group. Robert Plutchik's 'wheel of emotions' was used to understand the audience's emotional response to the content. A robust monitoring and evaluation study is being conducted (baseline and first phase of monitoring already completed) using a rotating longitudinal panel of 1,800 social media users (exposed and unexposed to the campaign), recruited face to face and representative of the social media universe of Bengaluru city. In addition, qualitative in-depth interviews are being conducted after each phase to better understand the drivers of change.
The research methodology and ethical protocols for impact evaluation have been independently reviewed by an Institutional Review Board. Formative research revealed that while waste on the streets is visible and of concern to the public, informal waste pickers are virtually 'invisible' to most people in Bengaluru. Pretesting research revealed that the creative outputs evoked emotions like acceptance and gratitude towards waste pickers, suggesting that the content had the potential to encourage attitudinal change. After the first phase of the campaign, social media analytics show that #Invaluables content reached at least 2.6 million unique people (21% of the Bengaluru population) through Facebook and Instagram. Further, impact monitoring results show significant improvements in spontaneous awareness of different segments of informal waste pickers (such as sorters at scrap shops or dry waste collection centres: from 10% at baseline to 16% amongst those exposed, with no change amongst the unexposed), recognition that informal waste pickers help the environment (from 71% at baseline to 77% among those exposed, with no change among the unexposed), and greater discussion about informal waste pickers among those exposed (60%) as against those not exposed (49%). Using the insights from this research, the planned social media intervention is designed to increase the visibility of and appreciation for the work of waste pickers in Bengaluru, supporting a more inclusive society.
Keywords: awareness, discussion, discrimination, informal waste pickers, invisibility, social media campaign, waste management
Procedia PDF Downloads 108
75 'Sextually' Active: Teens, 'Sexting' and Gendered Double Standards in the Digital Age
Authors: Annalise Weckesser, Alex Wade, Clara Joergensen, Jerome Turner
Abstract:
Introduction: Digital mobile technologies afford Generation M a number of opportunities in terms of communication, creativity and connectivity in their social interactions. Yet these young people’s use of such technologies is often the source of moral panic with accordant social anxiety especially prevalent in media representations of teen ‘sexting,’ or the sending of sexually explicit images via smartphones. Thus far, most responses to youth sexting have largely been ineffective or unjust with adult authorities sometimes blaming victims of non-consensual sexting, using child pornography laws to paradoxically criminalise those they are designed to protect, and/or advising teenagers to simply abstain from the practice. Prevention strategies are further skewed, with sex education initiatives often targeted at girls, implying that they shoulder the responsibility of minimising the risks associated with sexting (e.g. revenge porn and sexual predation). Purpose of Study: Despite increasing public interest and concern about ‘teen sexting,’ there remains a dearth of research with young people regarding their experiences of navigating sex and relationships in the current digital media landscape. Furthermore, young people's views on sexting are rarely solicited in the policy and educational strategies aimed at them. To address this research-policy-education gap, an interdisciplinary team of four researchers (from anthropology, media, sociology and education) have undertaken a peer-to-peer research project to co-create a sexual health intervention. Methods: In the winter of 2015-2016, the research team conducted serial group interviews with four cohorts of students (aged 13 to 15) from a secondary school in the West Midlands, UK. To facilitate open dialogue, girls and boys were interviewed separately, and each group consisted of no more than four pupils. 
The team employed a range of participatory techniques to elicit young people's views on sexting, its consequences, and its interventions. A final focus group session was conducted with all 14 male and female participants to explore developing a peer-to-peer 'safe sexting' education intervention. Findings: This presentation will highlight the ongoing, 'old school' sexual double standards at work within this new digital frontier. In the sharing of 'nudes' (teens' preferred term to 'sexting') via social media apps (e.g. Snapchat and WhatsApp), girls felt sharing images was inherently risky and feared being blamed and 'slut-shamed.' In contrast, boys were seen to gain in social status if they accumulated nudes of female peers. Further, if boys had nudes of themselves shared without consent, they felt they were expected to simply 'tough it out.' The presentation will also explore what forms of support teens desire to help them in their day-to-day navigation of these digitally mediated, heteronormative performances of teen femininity and masculinity expected of them. Conclusion: This is the first research project within the UK conducted with, rather than about, teens on the phenomenon of sexting. It marks a timely and important contribution to the nascent but growing body of knowledge on gender, sexual politics, and the digital mobility of sexual images created by and circulated amongst young people.
Keywords: teens, sexting, gender, sexual politics
Procedia PDF Downloads 238
74 Surface Plasmon Resonance Imaging-Based Epigenetic Assay for Blood DNA Post-Traumatic Stress Disorder Biomarkers
Authors: Judy M. Obliosca, Olivia Vest, Sandra Poulos, Kelsi Smith, Tammy Ferguson, Abigail Powers Lott, Alicia K. Smith, Yang Xu, Christopher K. Tison
Abstract:
Post-Traumatic Stress Disorder (PTSD) is a mental health problem that people may develop after experiencing traumatic events such as combat, natural disasters, and major emotional challenges. Tragically, the number of military personnel with PTSD correlates directly with the number of veterans who attempt suicide, with the highest rate in the Army. Research has shown epigenetic risks in those who are prone to several psychiatric dysfunctions, particularly PTSD. Once initiated in response to trauma, epigenetic alterations, in particular DNA methylation in the form of 5-methylcytosine (5mC), alter chromatin structure and repress gene expression. Current methods to detect DNA methylation, such as bisulfite-based genomic sequencing techniques, are laborious, involve a massive analysis workflow, and still have high error rates. A faster and simpler detection method of high sensitivity and precision would be useful in a clinical setting to confirm potential PTSD etiologies, prevent other psychiatric disorders, and improve military health. A nano-enhanced Surface Plasmon Resonance imaging (SPRi)-based assay that simultaneously detects site-specific 5mC bases (termed PTSD bases) in methylated genes related to PTSD is being developed. The arrays on a sensing chip were first constructed for parallel detection of PTSD bases using synthetic and genomic DNA (gDNA) samples. For the gDNA sample extracted from the whole blood of a PTSD patient, the sample was first digested using specific restriction enzymes, and fragments were denatured to obtain single-stranded methylated target genes (ssDNA). The resulting mixture of ssDNA was then injected into the assay platform, where targets were captured by specific DNA aptamer probes previously immobilized on the surface of a sensing chip. The PTSD bases in the targets were detected by an anti-5-methylcytosine antibody (anti-5mC), and the resulting signals were then enhanced by the universal nanoenhancer.
Preliminary results showed successful detection of a PTSD base in a gDNA sample. Brighter spot images and higher delta values (control-subtracted reflectivity signal) relative to those of the control were observed. We also implemented the in-house surface activation system for detection and developed disposable SPRi chips. Multiplexed PTSD base detection of target methylated genes was conducted in blood DNA from PTSD patients of varying severity (asymptomatic and severe). This diagnostic capability being developed is a platform technology; upon successful implementation for PTSD, it could be reconfigured for the study of a wide variety of neurological disorders, such as traumatic brain injury, Alzheimer’s disease, schizophrenia, and Huntington's disease, and can be extended to the analysis of other sample matrices such as urine and saliva.
Keywords: epigenetic assay, DNA methylation, PTSD, whole blood, multiplexing
Procedia PDF Downloads 128
73 Multilocus Phylogenetic Approach Reveals Informative DNA Barcodes for Studying Evolution and Taxonomy of Fusarium Fungi
Authors: Alexander A. Stakheev, Larisa V. Samokhvalova, Sergey K. Zavriev
Abstract:
Fusarium fungi are among the most devastating plant pathogens distributed all over the world. Significant reduction of grain yield and quality caused by Fusarium leads to multi-billion dollar annual losses to world agricultural production. These organisms can also cause infections in immunocompromised persons and produce a wide range of mycotoxins, such as trichothecenes, fumonisins, and zearalenone, which are hazardous to human and animal health. Identification of Fusarium fungi based on the morphology of spores and spore-forming structures, colony color and appearance on specific culture media is often very complicated due to the high similarity of these features among closely related species. Modern Fusarium taxonomy increasingly uses data from crossing experiments (biological species concept) and genetic polymorphism analysis (phylogenetic species concept). A number of novel Fusarium sibling species have been established using DNA barcoding techniques. Species recognition is best made with the combined phylogeny of intron-rich protein-coding genes and ribosomal DNA sequences. However, the internal transcribed spacer (ITS), which is considered to be the universal DNA barcode for Fungi, is not suitable for the genus Fusarium because of its insufficient variability between closely related species and the presence of non-orthologous copies in the genome. Nowadays, the translation elongation factor 1 alpha (TEF1α) gene is the “gold standard” of Fusarium taxonomy, but the search for novel informative markers is still needed. In this study, we used two novel DNA markers, frataxin (FXN) and heat shock protein 90 (HSP90), to discover phylogenetic relationships between Fusarium species.
Multilocus phylogenetic analysis based on partial sequences of TEF1α, FXN, HSP90, as well as the intergenic spacer of ribosomal DNA (IGS), beta-tubulin (β-TUB) and phosphate permease (PHO) genes, has been conducted for 120 isolates of 19 Fusarium species from different climatic zones of Russia and neighboring countries using maximum likelihood (ML) and maximum parsimony (MP) algorithms. Our analyses revealed that the FXN and HSP90 genes can be considered informative phylogenetic markers, suitable for evolutionary and taxonomic studies of the genus Fusarium. It has been shown that the PHO gene possesses more variable (22%) and parsimony-informative (19%) characters than the other markers, including TEF1α (12% and 9%, respectively), when used to elucidate phylogenetic relationships between F. avenaceum and its closest relatives: F. tricinctum, F. acuminatum, F. torulosum. Application of the novel DNA barcodes confirmed that F. arthrosporioides does not represent a separate species but only a subspecies of F. avenaceum. Phylogeny based on partial PHO and FXN sequences revealed a separate cluster of four F. avenaceum strains which were closer to F. torulosum than to the major F. avenaceum clade. The strain F-846 from Moldova, morphologically identified as F. poae, formed a separate lineage in all the constructed dendrograms and could potentially be considered a separate species, but more information is needed to confirm this conclusion. Variable sites in PHO sequences were used, for the first time, to develop specific qPCR-based diagnostic assays for F. acuminatum and F. torulosum. This work was supported by the Russian Foundation for Basic Research (grant № 15-29-02527).
Keywords: DNA barcode, fusarium, identification, phylogenetics, taxonomy
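The "variable" and "parsimony-informative" percentages quoted above are standard alignment statistics. As a minimal sketch of how such counts are computed (the toy alignment and the `site_stats` helper are illustrative assumptions, not the authors' pipeline):

```python
from collections import Counter

def site_stats(alignment):
    """Count variable and parsimony-informative sites in a list of
    equal-length aligned sequences (gaps '-' ignored). A site is variable
    if more than one character state occurs; it is parsimony-informative
    if at least two states each occur in at least two sequences."""
    variable = informative = 0
    for col in range(len(alignment[0])):
        counts = Counter(seq[col] for seq in alignment if seq[col] != '-')
        if len(counts) > 1:
            variable += 1
            # at least two states shared by two or more sequences
            if sum(1 for c in counts.values() if c >= 2) >= 2:
                informative += 1
    return variable, informative

# Toy alignment: site 2 (C/C/T/T) is parsimony-informative,
# site 5 (A/C/A/G) is variable but holds only singleton differences.
seqs = ["ACGTA", "ACGTC", "ATGTA", "ATGTG"]
v, pi = site_stats(seqs)
print(f"{100 * v / len(seqs[0]):.0f}% variable, "
      f"{100 * pi / len(seqs[0]):.0f}% parsimony-informative")
```

Percentages of this kind, computed per marker over the full isolate set, are what allow the PHO, FXN and TEF1α loci to be ranked for phylogenetic informativeness.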
Procedia PDF Downloads 324
72 Learning from Dendrites: Improving the Point Neuron Model
Authors: Alexander Vandesompele, Joni Dambre
Abstract:
The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, dendritic structure and functionality are often overlooked in simulations of neural networks. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky-integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron an increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in an earlier study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity where synapses are potentiated or depressed by the preceding activity of neighbouring synapses. It is a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning.
We use Spike-Time-Dependent-Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same weights through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network. This causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. The five input neurons are first fired in a particular order. The membrane potentials are reset, and subsequently the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, its membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron’s capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even if the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. It is possible to learn synaptic strength with STDP, to make a neuron more sensitive to its input. Similarly, it is possible to learn dendritic relationships with STDP, to make the neuron more sensitive to spatiotemporal input sequences. Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
Keywords: dendritic computation, spiking neural networks, point neuron model
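The forward-versus-reversed experiment described above can be sketched as a toy model. The `lif_response` helper, the decay constants and the relation matrix below are illustrative assumptions (this is not the Bindsnet implementation): each spike's impact is scaled by a decaying trace of preceding activity at related synapses, so a neuron with a zero relation matrix reduces to the plain point-neuron case.

```python
import numpy as np

def lif_response(spike_schedule, n_syn, relation, w=1.0,
                 tau_m=20.0, tau_trace=30.0, n_steps=80):
    """Peak membrane potential of a leaky integrator driven by a spike
    schedule {time_step: synapse index}. Each spike's impact is scaled by
    1 + relation[i] . trace, where trace holds decaying records of recent
    activity at the other synapses (a toy pairwise short-term plasticity).
    relation = 0 recovers plain point-neuron behaviour."""
    v, peak = 0.0, 0.0
    trace = np.zeros(n_syn)
    for step in range(n_steps):
        v *= np.exp(-1.0 / tau_m)           # passive membrane decay
        trace *= np.exp(-1.0 / tau_trace)   # decay of per-synapse traces
        if step in spike_schedule:
            i = spike_schedule[step]
            v += w * (1.0 + relation[i] @ trace)  # synapse-dependent impact
            trace[i] += 1.0
        peak = max(peak, v)
    return peak

n = 5
relation = np.zeros((n, n))
for i in range(1, n):
    relation[i, i - 1] = 0.5  # each synapse potentiated by its predecessor

forward = {10 * i: i for i in range(n)}          # synapses fire 0,1,2,3,4
reverse = {10 * i: n - 1 - i for i in range(n)}  # same times, order reversed
plain = np.zeros((n, n))
```

With equal weights, the plain neuron's peak response is identical for both orders, while the dendritic variant responds more strongly to the forward order, mirroring the sequence discrimination described in the abstract.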
Procedia PDF Downloads 134
71 A Critical Evaluation of Occupational Safety and Health Management Systems' Implementation: Case of Mutare Urban Timber Processing Factories, Zimbabwe
Authors: Johanes Mandowa
Abstract:
The study evaluated the status of Occupational Safety and Health Management Systems’ (OSHMSs) implementation by Mutare urban timber processing factories. A descriptive cross-sectional survey method was utilized in the study. Questionnaires, interviews and direct observations were the techniques employed to extract primary data from the respondents. Secondary data were acquired from the OSH encyclopedia, OSH journals, newspaper articles, the internet, past research papers, the African Newsletter on OSH and NSSA On-guard magazines, among others. Analysis of the collected data was conducted using statistical and descriptive methods. Results revealed a disappointingly low uptake rate (16%) of OSH Management Systems by Mutare urban timber processing factories. On a comparative basis, low implementation levels were more pronounced in small timber processing factories than in large factories. The low uptake rate of OSH Management Systems revealed by the study validates the Government of Zimbabwe and its social partners’ observation that Zimbabwe's dismal OSH performance was largely due to non-implementation of safety systems at most workplaces. The results exhibited a relationship between the availability of a SHE practitioner in Mutare urban timber processing factories and OSHMS implementation. All respondents and interviewees agreed that OSH Management Systems are useful in curbing occupational injuries and diseases. It emerged from the study that the top barriers to implementation of safety systems are lack of adequate financial resources, lack of top management commitment and lack of OSHMS implementation expertise. Key motivators for OSHMS establishment were cited as provision of adequate resources (76%), strong employee involvement (64%) and strong senior management commitment and involvement (60%). Study results demonstrated that both OSHMS implementation barriers and motivators affect all Mutare urban timber processing factories irrespective of size.
The study recommends enactment of a law by the Ministry of Public Service, Labour and Social Welfare, in consultation with NSSA, to make the availability of an OSHMS and a qualified SHE practitioner mandatory at every workplace. Moreover, the enacted law should prescribe the minimum educational qualification required for one to practice as a SHE practitioner. The Ministry of Public Service, Labour and Social Welfare and NSSA should also devise incentives, such as reduced WCIF premiums for good OSH performance, to cushion Mutare urban timber processing factories from OSHMS implementation costs. The study recommends the incorporation of an OSH module in the academic curricula of all programmes offered at tertiary institutions so as to ensure that graduates who later assume influential management positions in Mutare urban timber processing factories are abreast of the necessity of OSHMSs in preventing occupational injuries and diseases. In the quest to further boost management’s awareness of the importance of OSHMSs, NSSA and SAZ are urged by the study to conduct periodic OSHMS awareness breakfast meetings targeting executive management. The Government of Zimbabwe, through the Ministry of Public Service, Labour and Social Welfare, should also engage the ILO Country Office for Zimbabwe to solicit ILO's technical assistance so as to enhance the effectiveness of NSSA's and SAZ's OSHMS promotional programmes.
Keywords: occupational safety health management system, national social security authority, standard association of Zimbabwe, Mutare urban timber processing factories, ministry of public service, labour and social welfare
Procedia PDF Downloads 340
70 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation
Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N.Lukashenko, Elena G. Gubitskaya
Abstract:
The estimation of accumulated radiation doses in people professionally exposed to ionizing radiation was performed using methods of biological (chromosomal aberration frequency in lymphocytes) and physical (radionuclide analysis in urine, whole-body counter, individual thermoluminescent dosimeters) dosimetry. A group of 84 category "A" employees was investigated after their work in the territory of the former Semipalatinsk test site (Kazakhstan). The dose rate in some funnels exceeds 40 μSv/h. After radionuclide determination in urine using radiochemical and whole-body counter methods, it was shown that the total effective dose of personnel internal exposure did not exceed 0.2 mSv/year, while the acceptable dose limit for staff is 20 mSv/year. The range of external radiation doses measured with individual thermoluminescent dosimeters was 0.3-1.406 µSv. The cytogenetic examination showed that the chromosomal aberration frequency in staff was 4.27±0.22%, which is significantly higher than in people from the unpolluted settlement of Tausugur (0.87±0.1%) (p ≤ 0.01) and citizens of Almaty (1.6±0.12%) (p ≤ 0.01). Chromosomal-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. Cytogenetic analysis of group radiosensitivity among the «professionals» by age, sex, ethnic group and epidemiological data revealed no significant differences between the compared values. Using various techniques based on the frequency of dicentrics and centric rings, the average cumulative radiation dose for the group was calculated to be 0.084-0.143 Gy. To perform comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the general frequency of chromosomal aberrations, obtained after irradiation of blood samples by gamma radiation at a dose rate of 0.1 Gy/min, were used.
Here, assuming individual variation of chromosomal aberration frequency (1–10%), the accumulated radiation dose varied from 0 to 0.3 Gy. The main problem in the interpretation of individual dosimetry results is the differing reaction of individuals to irradiation, i.e., radiosensitivity, which dictates the need to quantify this individual reaction and take it into account when calculating the received radiation dose. The entire examined contingent was assigned to groups based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals, at the lowest received dose in a year, showed the highest frequency of chromosomal aberrations (5.72%). Conversely, radioresistant individuals showed the lowest frequency of chromosomal aberrations (2.8%). The cohort in our research was distributed by radiosensitivity as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). The dispersion for radioresistant individuals is 2.3; for the group with medium radiosensitivity, 3.3; and for the radiosensitive group, 9. These data indicate the highest variation of the characteristic (reaction to radiation) in the group of radiosensitive individuals. People with medium radiosensitivity show a significant long-term correlation (0.66; n=48, β ≥ 0.999) between dose values determined from cytogenetic analysis and external radiation doses obtained with thermoluminescent dosimeters. Mathematical models relating the received radiation dose to the professionals' radiosensitivity level were proposed.
Keywords: biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity
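Cytogenetic dose reconstruction of this kind typically inverts a linear-quadratic calibration curve Y = c + αD + βD² fitted to irradiated blood samples, where Y is the dicentric plus centric-ring yield per cell and D the absorbed dose. A minimal sketch of the inversion follows; the coefficient values are illustrative placeholders, not the authors' calibration:

```python
import math

def dose_from_dicentrics(yield_obs, c=0.001, alpha=0.03, beta=0.06):
    """Estimate absorbed dose D (Gy) from the observed frequency of
    dicentrics + centric rings per cell by inverting the linear-quadratic
    calibration curve Y = c + alpha*D + beta*D**2, taking the positive
    root of the quadratic."""
    disc = alpha ** 2 + 4 * beta * (yield_obs - c)
    if disc < 0:
        raise ValueError("observed yield is below the background level c")
    return (-alpha + math.sqrt(disc)) / (2 * beta)

# Round trip: a dose of 0.1 Gy gives Y = 0.001 + 0.003 + 0.0006 = 0.0046,
# and inverting recovers the dose.
print(dose_from_dicentrics(0.0046))  # ~0.1 Gy
```

Applying such an inversion per individual, with the person's own aberration frequency, is what allows the biological dose estimates to be compared against the thermoluminescent dosimeter readings.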
Procedia PDF Downloads 185
69 Exploring the Dose-Response Association of Lifestyle Behaviors and Mental Health among High School Students in the US: A Secondary Analysis of 2021 Adolescent Behaviors and Experiences Survey Data
Authors: Layla Haidar, Shari Esquenazi-Karonika
Abstract:
Introduction: Mental health includes one’s emotional, psychological, and interpersonal well-being; it ranges from “good” to “poor” on a continuum. At the individual level, it affects how a person thinks, feels, and acts. Moreover, it determines how they cope with stress, relate to others, and interface with their surroundings. Research has shown that mental health is directly related to short- and long-term physical health (including chronic disease), health risk behaviors, education level, employment, and social relationships. As is the case with physical conditions like diabetes, heart disease, and cancer, mitigating the behavioral and genetic risks of debilitating mental health conditions like anxiety and depression can nurture a healthier quality of mental health throughout one’s life. In order to maximize the benefits of prevention, it is important to identify modifiable risks and develop protective habits earlier in life. Methods: The Adolescent Behaviors and Experiences Survey (ABES) dataset was used for this study. The ABES survey was administered to high school students (9th-12th grade) during January 2021-June 2021 by the Centers for Disease Control and Prevention (CDC). The data were analyzed to identify any associations between feelings of sadness, hopelessness, or increased suicidality among high school students and their participation on one or more sports teams and their average daily screen time. Data were analyzed using descriptive and multivariable analytic techniques. A multinomial logistic regression of each variable was conducted to examine whether there was an association, while controlling for grade level, sex, and race. Results: The findings from this study are insightful for administrators and policymakers who wish to address mounting concerns related to student mental health.
The study revealed that, compared to students who participated on no sports teams, students who participated on one or more sports teams showed a significantly increased risk of depression (p<0.05). Conversely, the rate of depression was significantly lower in students who consumed 5 or more hours of screen time per day, compared to those who consumed less than 1 hour per day (p<0.05). Conclusion: These findings highlight the importance of understanding the nuances of student participation on sports teams (e.g., physical exertion, social dynamics of the team, and the level of competitiveness within the sport). Likewise, the context of an individual’s screen time (e.g., social media, engaging in team-based video games, or watching television) can inform parental or school-based policies about screen time activity. Although physical activity has been proven to be important for the emotional and physical well-being of youth, playing on multiple teams could have negative consequences on the emotional state of high school students, potentially due to fatigue, overtraining, and injuries. Existing literature has highlighted the negative effects of screen time; however, further research needs to consider the type of screen-based consumption to better understand its effects on mental health.
Keywords: behavioral science, mental health, adolescents, prevention
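A multinomial (softmax) logistic regression of the kind used in this analysis can be sketched from scratch. The tiny gradient-descent fit below is a conceptual stand-in, not the authors' or the CDC's analysis code, and the data are synthetic (a single predictor standing in for daily screen-time hours; real covariates like grade level, sex, and race would be additional columns of X):

```python
import numpy as np

def fit_multinomial_logit(X, y, n_classes, lr=0.1, epochs=10000):
    """Fit softmax (multinomial logistic) regression by batch gradient
    descent on the cross-entropy loss:
    P(class k | x) = softmax(b_k + w_k . x)."""
    n = X.shape[0]
    Xb = np.hstack([np.ones((n, 1)), X])   # prepend intercept column
    W = np.zeros((Xb.shape[1], n_classes))
    Y = np.eye(n_classes)[y]               # one-hot targets
    for _ in range(epochs):
        logits = Xb @ W
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)
        W -= lr * Xb.T @ (P - Y) / n       # cross-entropy gradient step
    return W

def predict(W, X):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return np.argmax(Xb @ W, axis=1)

# Synthetic example: a 3-level outcome regressed on one predictor.
X = np.array([[0.0], [0.2], [2.0], [2.2], [4.0], [4.2]])
y = np.array([0, 0, 1, 1, 2, 2])
W = fit_multinomial_logit(X, y, n_classes=3)
```

In practice one would report the fitted coefficients as relative risk ratios per outcome category rather than raw weights, but the estimation machinery is the same.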
Procedia PDF Downloads 108
68 Re-Designing Community Foodscapes to Enhance Social Inclusion in Sustainable Urban Environments
Authors: Carles Martinez-Almoyna Gual, Jiwon Choi
Abstract:
Urban communities face risks of disintegration and segregation as a consequence of globalised migration processes towards urban environments. Linking social and cultural components with environmental and economic dimensions becomes the goal of all the disciplines that aim to shape more sustainable urban environments. Solutions require interdisciplinary approaches and the use of a complex array of tools. One of these tools is the implementation of urban farming, which provides a wide range of advantages for creating more inclusive spaces and integrated communities. Since food is strongly related to the values and identities of any cultural group, it can be used as a medium to promote social inclusion in the context of urban multicultural societies. By bringing people together at specific urban sites, food production can be integrated into multifunctional spaces while addressing social, economic and ecological goals. The goal of this research is to assess different approaches to urban agriculture by analysing three existing community gardens located in Newtown, a suburb of Wellington, New Zealand. As a context for developing research, Newtown offers different approaches to urban farming and is particularly valuable for observing current trends of socialization in diverse and multicultural societies. All three spaces are located on public land owned by Wellington City Council and confined to a small, complex and progressively denser urban area. The analysis focused on social, cultural and physical dimensions, combining community engagement with different techniques of spatial assessment. At the same time, a detailed investigation of each community garden was conducted with comparative analysis methodologies. This multidirectional setting of the analysis was established to extract both specific and typological knowledge from the case studies. Each site was analysed and categorised under three broad themes: people, space and food.
The analysis revealed that the three case studies had markedly different spatial settings, different approaches to food production and varying profiles of supportive communities. The main differences identified were demographics, values, objectives, internal organization, appropriation, and perception of the space. The community gardens were approached as case studies for developing design research. Following participatory design processes with the different communities, the knowledge gained from the analysis was used to propose changes in the physical environment. The end goal of the design research was to improve the capacity of the spaces to facilitate social inclusiveness. In order to generate tangible changes, a range of small, strategic and feasible spatial interventions was explored. The smallness of the proposed interventions facilitates implementation by reducing time frames, technical resources, funding needs, and legal processes, working within the community's own realm. These small interventions are expected to be implemented over time as part of an ongoing collaboration between the different communities, the university, and the local council. The applied research methodology showcases the capacity of universities to develop civic engagement by working with real communities that have concrete needs and face overall threats of disintegration and segregation.
Keywords: community gardening, landscape architecture, participatory design, placemaking, social inclusion
Procedia PDF Downloads 128
67 Open Science Philosophy, Research and Innovation
Authors: C. Ardil
Abstract:
Open Science translates the understanding and application of various theories and practices in open science philosophy, systems, paradigms and epistemology. Open Science originates with the premise that universal scientific knowledge is a product of a collective scholarly and social collaboration involving all stakeholders and knowledge belongs to the global society. Scientific outputs generated by public research are a public good that should be available to all at no cost and without barriers or restrictions. Open Science has the potential to increase the quality, impact and benefits of science and to accelerate advancement of knowledge by making it more reliable, more efficient and accurate, better understandable by society and responsive to societal challenges, and has the potential to enable growth and innovation through reuse of scientific results by all stakeholders at all levels of society, and ultimately contribute to growth and competitiveness of global society. Open Science is a global movement to improve accessibility to and reusability of research practices and outputs. In its broadest definition, it encompasses open access to publications, open research data and methods, open source, open educational resources, open evaluation, and citizen science. The implementation of open science provides an excellent opportunity to renegotiate the social roles and responsibilities of publicly funded research and to rethink the science system as a whole. Open Science is the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods. 
Open Science represents a novel systematic approach to the scientific process, shifting from the standard practices of publishing research results in scientific publications towards sharing and using all available knowledge at an earlier stage in the research process, based on cooperative work and diffusing scholarly knowledge with no barriers and restrictions. Open Science refers to efforts to make the primary outputs of publicly funded research (publications and research data) publicly accessible in digital format with no limitations. Open Science is about extending the principles of openness to the whole research cycle, fostering sharing and collaboration as early as possible, thus entailing a systemic change to the way science and research is done. Open Science is the ongoing transition in how open research is carried out, disseminated, deployed, and transformed to make scholarly research more open, global, collaborative, creative and closer to society. Open Science involves various movements aiming to remove the barriers for sharing any kind of output, resources, methods or tools, at any stage of the research process. Open Science embraces open access to publications, research data, source software, collaboration, peer review, notebooks, educational resources, monographs, citizen science, or research crowdfunding. The recognition and adoption of open science practices, including open science policies that increase open access to scientific literature and encourage data and code sharing, is increasing in the open science philosophy. Revolutionary open science policies are motivated by ethical, moral or utilitarian arguments, such as the right to access digital research literature for open source research or science data accumulation, research indicators, transparency in the field of academic practice, and reproducibility. Open science philosophy is adopted primarily to demonstrate the benefits of open science practices.
Researchers use open science practices to their own advantage: to gain more offers, increase citations, and attract media attention, potential collaborators, career opportunities, donations and funding. Open data findings provide evidence that open science practices bring significant benefits to researchers in research creation, collaboration, communication, and evaluation compared with more traditional closed science practices. Open science also raises concerns, such as the rigor of peer review, practical matters such as financing and career development, and the sacrifice of author rights. Therefore, researchers are advised to implement open science research within the framework of existing academic evaluation and incentives. As a result, open science research issues are addressed in the areas of publishing, financing, collaboration, resource management and sharing, career development, and the discussion of open science questions and conclusions.
Keywords: Open Science, Open Science Philosophy, Open Science Research, Open Science Data
Procedia PDF Downloads 133
66 Colloid-Based Biodetection at Aqueous Electrical Interfaces Using Fluidic Dielectrophoresis
Authors: Francesca Crivellari, Nicholas Mavrogiannis, Zachary Gagnon
Abstract:
Portable diagnostic methods have become increasingly important for a number of different purposes: point-of-care screening in developing nations, environmental contamination studies, bio/chemical warfare agent detection, and end-user use for commercial health monitoring. The cheapest and most portable methods currently available are paper-based: lateral flow and dipstick methods are widely available in drug stores for use in pregnancy detection and blood glucose monitoring. These tests are successful because they are cheap to produce, easy to use, and require minimally invasive sampling. While adequate for their intended uses, in the realm of blood-borne pathogens and numerous cancers these paper-based methods become unreliable, as they lack the nM/pM sensitivity currently achieved by clinical diagnostic methods. Clinical diagnostics, however, utilize techniques involving surface plasmon resonance (SPR) and enzyme-linked immunosorbent assays (ELISAs), which are expensive and unfeasible in terms of portability. To develop a better, competitive biosensor, we must reduce the cost of one or increase the sensitivity of the other. Electric fields are commonly utilized in microfluidic devices to manipulate particles, biomolecules, and cells. Applications in this area, however, are primarily limited to interfaces formed between immiscible fluids. Miscible, liquid-liquid interfaces are common in microfluidic devices and are easily reproduced with simple geometries. Here, we demonstrate the use of electric fields at liquid-liquid electrical interfaces, known as fluidic dielectrophoresis (fDEP), for biodetection in a microfluidic device. In this work, we apply an AC electric field across concurrent laminar streams with differing conductivities and permittivities to polarize the interface and induce a discernible, near-immediate, frequency-dependent interfacial tilt.
We design this aqueous electrical interface, which becomes the biosensing “substrate,” to be intelligent – it “moves” only when a target of interest is present. This motion requires neither labels nor expensive electrical equipment, so the biosensor is inexpensive and portable, yet still capable of sensitive detection. Nanoparticles, due to their high surface-area-to-volume ratio, are often incorporated to enhance detection capabilities of schemes like SPR and fluorimetric assays. Most studies currently investigate binding at an immobilized solid-liquid or solid-gas interface, where particles are adsorbed onto a planar surface, functionalized with a receptor to create a reactive substrate, and subsequently flushed with a fluid or gas with the relevant analyte. These typically involve many preparation and rinsing steps, and are susceptible to surface fouling. Our microfluidic device is continuously flowing and renewing the “substrate,” and is thus not subject to fouling. In this work, we demonstrate the ability to electrokinetically detect biomolecules binding to functionalized nanoparticles at liquid-liquid interfaces using fDEP. In biotin-streptavidin experiments, we report binding detection limits on the order of 1-10 pM, without amplifying signals or concentrating samples. We also demonstrate the ability to detect this interfacial motion, and thus the presence of binding, using impedance spectroscopy, allowing this scheme to become non-optical, in addition to being label-free.
Keywords: biodetection, dielectrophoresis, microfluidics, nanoparticles
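The frequency dependence of dielectrophoretic response is conventionally captured by the Clausius-Mossotti factor, whose real part changes sign at a crossover frequency. The sketch below is a generic DEP illustration (not the authors' liquid-liquid interface model); the permittivity and conductivity values are assumed for demonstration.

```python
import math

def cm_factor(f, eps_p, sig_p, eps_m, sig_m):
    """Real part of the Clausius-Mossotti factor at frequency f (Hz).
    eps_* are absolute permittivities (F/m), sig_* conductivities (S/m)."""
    w = 2 * math.pi * f
    ep = complex(eps_p, -sig_p / w)   # complex permittivity of particle
    em = complex(eps_m, -sig_m / w)   # complex permittivity of medium
    return ((ep - em) / (ep + 2 * em)).real

EPS0 = 8.854e-12
# Illustrative values (assumed): a conductive functionalized bead
# suspended in a less conductive, high-permittivity aqueous stream.
eps_p, sig_p = 2.5 * EPS0, 1e-2
eps_m, sig_m = 78 * EPS0, 1e-3

# Sweep frequency to locate the crossover where Re[K] changes sign.
freqs = [10 ** (3 + 0.01 * i) for i in range(600)]   # 1 kHz .. ~1 GHz
f_cross = next(f for f in freqs
               if cm_factor(f, eps_p, sig_p, eps_m, sig_m) < 0)
print(f"positive DEP below ~{f_cross:.3g} Hz, negative above")
```

At low frequency the factor is conductivity-dominated and positive here; at high frequency it is permittivity-dominated and negative, which is the sign flip that a frequency-dependent interfacial tilt exploits.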
Procedia PDF Downloads 388
65 Examining Three Psychosocial Factors of Tax Compliance in Self-Employed Individuals Using the MINDSPACE Framework - Evidence from Australia and Pakistan
Authors: Amna Tariq Shah
Abstract:
Amid the pandemic, the contemporary landscape has experienced accelerated growth in small business activities and an expanding digital marketplace, further exacerbating the issue of non-compliance among self-employed individuals through aggressive tax planning and evasion. This research seeks to address these challenges by developing strategic tax policies that promote voluntary compliance and improve taxpayer facilitation. The study employs the innovative MINDSPACE framework to examine three psychosocial factors—tax communication, tax literacy, and shaming—to optimize policy responses, address administrative shortcomings, and ensure adequate revenue collection for public goods and services. Preliminary findings suggest that incomprehensible communication from tax authorities drives individuals to seek alternative, potentially biased sources of tax information, thereby exacerbating non-compliance. Furthermore, the study reveals low tax literacy among Australian and Pakistani respondents, with many struggling to navigate complex tax processes and comprehend tax laws. Consequently, policy recommendations include simplifying tax return filing and enhancing pre-populated tax returns. In terms of shaming, the research indicates that Australians, being an individualistic society, may not respond well to shaming techniques due to privacy concerns. In contrast, Pakistanis, as a collectivistic society, may be more receptive to naming and shaming approaches. The study employs a mixed-method approach, utilizing interviews and surveys to analyze the issue in both jurisdictions. The use of mixed methods allows for a more comprehensive understanding of tax compliance behavior, combining the depth of qualitative insights with the generalizability of quantitative data, ultimately leading to more robust and well-informed policy recommendations. 
Examining evidence from two contrasting jurisdictions, a developed country (Australia) and a developing country (Pakistan), enhances the study's applicability, providing perspectives from two disparate contexts that offer insights from opposite ends of the economic, cultural, and social spectra. The non-comparative case study methodology offers valuable insights into human behavior, which can be applied to other jurisdictions as well. The application of the MINDSPACE framework in this research is particularly significant, as it introduces a novel approach to tax compliance behavior analysis. By integrating insights from behavioral economics, the framework enables a comprehensive understanding of the psychological and social factors influencing taxpayer decision-making, facilitating the development of targeted and effective policy interventions. This research carries substantial importance as it addresses critical challenges in tax compliance and administration, with far-reaching implications for revenue collection and the provision of public goods and services. By investigating the psychosocial factors that influence taxpayer behavior and utilizing the MINDSPACE framework, the study contributes invaluable insights to the field of tax policy. These insights can inform policymakers and tax administrators in developing more effective tax policies that enhance taxpayer facilitation, address administrative obstacles, promote a more equitable and efficient tax system, and foster voluntary compliance, ultimately strengthening the financial foundation of governments and communities.
Keywords: individual tax compliance behavior, psychosocial factors, tax non-compliance, tax policy
Procedia PDF Downloads 77
64 Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator
Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic
Abstract:
The brushless doubly-fed reluctance generator (BDFRG) is an emerging, medium-speed alternative to a conventional wound rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and similarly low failure rates of a typically 30% rated back-to-back power electronics converter in 2:1 speed ranges, but with the following important reliability and cost advantages over the DFIG: the maintenance-free operation afforded by its brushless structure, 50% synchronous speed with the same number of rotor poles (allowing the use of a more compact and more efficient two-stage gearbox instead of a vulnerable three-stage one), and superior grid integration properties including simpler protection for the low voltage ride through compliance of the fractional converter due to the comparatively higher leakage inductances and lower fault currents. Vector controlled pulse-width-modulated converters generally feature a much lower total harmonic distortion relative to hysteresis counterparts with variable switching rates and as such have been a predominant choice for BDFRG (and DFIG) wind turbines. Eliminating a shaft position sensor, which is often required for control implementation in this case, would be desirable to address the associated reliability issues. This fact has largely motivated the recent growing research on sensorless methods and the development of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to accurately identify by off-line testing. A model reference adaptive system (MRAS) based sensorless vector control scheme to be presented will overcome this shortcoming.
The true machine parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio on which the underlying rotor angular velocity and position MRAS observer relies. Such an observer configuration is more practical to implement and clearly preferable to the existing machine-parameter-dependent solutions, especially bearing in mind that with very little modification it can be adapted for commercial DFIGs, with immediately obvious further industrial benefits and prospects for this work. The excellent encoder-less controller performance with maximum power point tracking in the base speed region will be demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility.
Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion
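The MRAS principle named above (a reference model compared against a speed-tuned adjustable model, with a PI adaptation law driving their error to zero) can be caricatured in a scalar Python sketch. This is a generic phase-locked structure with assumed gains and time step, not the authors' parameter-free observer.

```python
import math

def mras_speed_estimate(true_speed, dt=1e-4, steps=20000, kp=50.0, ki=2000.0):
    """Minimal MRAS structure: the reference model is a measured angle
    rotating at the true speed; the adjustable model integrates the speed
    estimate; a PI law drives the cross-product (sine) error to zero."""
    w_hat = 0.0        # speed estimate (rad/s)
    theta_hat = 0.0    # adjustable-model angle
    integ = 0.0        # integral state of the adaptation law
    for k in range(steps):
        theta_ref = true_speed * dt * k      # reference model output
        e = math.sin(theta_ref - theta_hat)  # error via vector cross product
        integ += e * dt
        w_hat = kp * e + ki * integ          # PI adaptation law
        theta_hat += w_hat * dt              # adjustable model update
    return w_hat

w = mras_speed_estimate(100.0)   # true electrical speed 100 rad/s
print(f"estimated speed: {w:.2f} rad/s")
```

Because the adaptation law is type-2 (integral action on the error), the estimate pulls in and locks onto the reference speed without knowledge of the true value.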
Procedia PDF Downloads 62
63 Integrated Mathematical Modeling and Advanced Visualization of Magnetic Nanoparticle for Drug Delivery, Drug Release and Effects on Cancer Cell Treatment
Authors: Norma Binti Alias, Che Rahim Che The, Norfarizan Mohd Said, Sakinah Abdul Hanan, Akhtar Ali
Abstract:
This paper discusses the transportation of magnetic drug targeting through blood within vessels, tissues and cells. Three integrated mathematical models are discussed and analyzed for the concentration of drug and blood flow through magnetic nanoparticles. Cell therapy has brought advancements in the field of nanotechnology to fight against tumors. The systematic therapeutic effect on single cells can reduce the growth of cancer tissue. This nanoscale system can be measured and modeled by identifying some parameters and applying fundamental principles of mathematical modeling and simulation. The mathematical modeling of single cell growth depends on three types of cell densities: proliferative, quiescent and necrotic cells. The aim of this paper is to enhance the simulation of these three types of models. The first model represents the transport of drugs by coupled partial differential equations (PDEs) of 3D parabolic type in a cylindrical coordinate system. This model is integrated with non-Newtonian flow equations, treating blood as the medium for the transportation system, together with the magnetic force on the magnetic nanoparticles. The interaction between the magnetic force and the drug's magnetic properties produces induced currents, and the applied magnetic field yields forces which tend to slow the movement of blood and bring the drug to the cancer cells. Nanoscale devices allow the drug to leave the blood vessels, spread out through the tissue and access the cancer cells. The second model covers the transport of drug nanoparticles from the vascular system to a single cell. The treatment of the vascular system involves the identification of parameters such as magnetic nanoparticle targeted delivery, blood flow, momentum transport, density and viscosity of the drug and blood medium, intensity of the magnetic fields and the radius of the capillary.
Based on two discretization techniques, the finite difference method (FDM) and the finite element method (FEM), the set of integrated models is transformed into a series of grid points, yielding a large system of equations. The third model is a single cell density model involving three sets of first-order PDEs for proliferating, quiescent and necrotic cells changing over time and space in Cartesian coordinates, regulated by different rates of nutrient consumption. The model shows that proliferative and quiescent cell growth depends on certain parameter changes, with the necrotic cells emerging as the tumor core. Some numerical schemes for solving the system of equations are compared and analyzed. Simulation and computation of the discretized model are supported by Matlab and C programming languages on a single processing unit. Some numerical results and analyses of the algorithms are presented in informative tables, multiple graphs and multidimensional visualization. In conclusion, the integration of the three types of mathematical models and the comparison of numerical performance indicate a superior tool and analysis for solving the complete magnetic drug delivery system, which gives significant insight into the growth of the targeted cancer cells.
Keywords: mathematical modeling, visualization, PDE models, magnetic nanoparticle drug delivery model, drug release model, single cell effects, avascular tumor growth, numerical analysis
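As a minimal illustration of the FDM discretization mentioned above, the sketch below applies an explicit finite-difference (FTCS) scheme to a 1D diffusion equation and checks it against the analytic decay of a sine mode. The grid and diffusivity values are assumed; the paper's actual model is a coupled 3D system in cylindrical coordinates.

```python
import math

def diffuse_1d(c, D, dx, dt, steps):
    """Explicit (FTCS) finite-difference scheme for dc/dt = D d2c/dx2 with
    zero-concentration boundaries; stable while D*dt/dx**2 <= 0.5."""
    r = D * dt / dx ** 2
    assert r <= 0.5, "FTCS stability condition violated"
    for _ in range(steps):
        c = [0.0] + [c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
                     for i in range(1, len(c) - 1)] + [0.0]
    return c

# Illustrative setup (all values assumed): a sine initial concentration
# profile decays as exp(-D * pi^2 * t / L^2), giving an analytic check.
n, L, D = 51, 1.0, 1e-3
dx = L / (n - 1)
dt = 0.4 * dx ** 2 / D          # respects the stability bound
c0 = [math.sin(math.pi * i * dx / L) for i in range(n)]
steps = 500
c = diffuse_1d(c0, D, dx, dt, steps)
decay = math.exp(-D * math.pi ** 2 * steps * dt / L ** 2)
print(f"numeric peak {max(c):.4f} vs analytic {decay:.4f}")
```

The same stencil logic generalizes to the coupled parabolic systems in the abstract; only the dimensionality, coordinates and coupling terms change.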
Procedia PDF Downloads 428
62 Design and Fabrication of AI-Driven Kinetic Facades with Soft Robotics for Optimized Building Energy Performance
Authors: Mohammadreza Kashizadeh, Mohammadamin Hashemi
Abstract:
This paper explores a kinetic building facade designed for optimal energy capture and architectural expression. The system integrates photovoltaic panels with soft robotic actuators for precise solar tracking, resulting in enhanced electricity generation compared to static facades. The growing interest in dynamic building envelopes necessitates the exploration of such facade systems. Integrating photovoltaic (PV) panels as kinetic elements offers potential benefits in increased energy generation and regulation of energy flow within buildings. However, incorporating these technologies into mainstream architecture presents challenges due to the complexity of coordinating multiple systems. To address this, the design leverages soft robotic actuators, known for their compliance, resilience, and ease of integration. Additionally, the project investigates the potential for employing Large Language Models (LLMs) to streamline the design process. The research methodology involved design development, material selection, component fabrication, and system assembly. Grasshopper (GH) was employed within the digital design environment for parametric modeling and scripting logic, and an LLM was used experimentally to generate Python code for the creation of a random surface with user-defined parameters. Various techniques, including casting, three-dimensional (3D) printing, and laser cutting, were utilized to fabricate physical components. A modular assembly approach was adopted to facilitate installation and maintenance. A case study focusing on the application of this facade system to an existing library building at Polytechnic University of Milan is presented. The system is divided into sub-frames to optimize solar exposure while maintaining a visually appealing aesthetic. Preliminary structural analyses were conducted using Karamba3D to assess deflection behavior and axial loads within the cable net structure.
Additionally, Finite Element (FE) simulations were performed in Abaqus to evaluate the mechanical response of the soft robotic actuators under pneumatic pressure. To validate the design, a physical prototype was created using a mold adapted for a 3D printer's limitations. Casting Silicone Rubber Sil 15 was used for its flexibility and durability. The 3D-printed mold components were assembled, filled with the silicone mixture, and cured. After demolding, nodes and cables were 3D-printed and connected to form the structure, demonstrating the feasibility of the design. This work demonstrates the potential of soft robotics and Artificial Intelligence (AI) for advancements in sustainable building design and construction. The project successfully integrates these technologies to create a dynamic facade system that optimizes energy generation and architectural expression. While limitations exist, this approach paves the way for future advancements in energy-efficient facade design. Continued research efforts will focus on cost reduction, improved system performance, and broader applicability.
Keywords: artificial intelligence, energy efficiency, kinetic photovoltaics, pneumatic control, soft robotics, sustainable building
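The energy benefit of solar tracking over a static panel comes from keeping the panel normal aligned with the sun, so captured direct irradiance scales with the cosine of the incidence angle. A toy 2-D comparison follows; all elevation and tilt values are assumed, and diffuse light and actuation losses are ignored.

```python
import math

def incident_fraction(sun_elev_deg, panel_tilt_deg):
    """Fraction of direct irradiance captured by a panel: cosine of the
    angle between the sun vector and the panel normal (2-D, south-facing)."""
    angle = math.radians(abs((90 - sun_elev_deg) - panel_tilt_deg))
    return max(0.0, math.cos(angle))

# Sun elevation samples over an idealized day (assumed values)
elevations = [10, 25, 40, 55, 60, 55, 40, 25, 10]

fixed_tilt = 35                                   # static facade panel
fixed = sum(incident_fraction(e, fixed_tilt) for e in elevations)
# tracked panel keeps its normal pointed at the sun (tilt = 90 - elevation)
tracked = sum(incident_fraction(e, 90 - e) for e in elevations)
gain = (tracked - fixed) / fixed * 100
print(f"tracking captures {gain:.1f}% more direct irradiance in this sketch")
```

Even in this crude geometry the tracked configuration collects roughly 10% more direct irradiance over the day, which is the margin a kinetic facade trades against actuation complexity.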
Procedia PDF Downloads 35
61 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan
Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad
Abstract:
Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique which, in combination with cryo-electron microscopy (cryo-EM), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50 – 100 nm. This is a critical advantage as growing larger crystals, as required by X-ray crystallography methods, is often very difficult, time-consuming, and expensive. In most cases, specimens studied via the 3DED method are electron beam sensitive, which means there is a limitation on the maximum electron dose one can use to collect the required data for a high-resolution structure determination. Therefore, collecting data using a conventional scintillator-based fiber-coupled camera brings additional challenges. This is because of the inherent noise introduced during the electron-to-photon conversion in the scintillator and the transfer of light via the fibers to the sensor, which results in a poor signal-to-noise ratio and requires relatively high, commonly specimen-damaging electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated if a direct detection camera is used, which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (which utilizes DECTRIS hybrid-pixel technology), to address this problem.
The K3 is an electron counting detector optimized for low-dose applications (like structural biology cryo-EM), and Stela is also a counting electron detector but optimized for diffraction applications with high speed and high dynamic range. Lastly, data collection workflows, including crystal screening, microscope optics setup (for imaging and diffraction), stage height adjustment at each crystal position, and tomogram acquisition, are another challenge of the 3DED technique. Traditionally this has all been done manually or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher quality 3DED data enables structure determination with higher confidence, while automated workflows allow these to be completed considerably faster than before. Using multiple examples, this work will demonstrate how direct detection electron counting cameras enhance 3DED results (3 to better than 1 Angstrom) for protein and small molecule structure determination. We will also show how Latitude D software facilitates collecting such data in an integrated and fully automated user interface.
Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, DigitalMicrograph, proteins, small molecules
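The low-dose advantage of counting detectors can be illustrated with a toy signal-to-noise model: an ideal counter is shot-noise limited, while a scintillator chain multiplies the shot-noise variance and adds read noise. The noise_factor and read_noise values below are illustrative assumptions, not specifications of the K3 or Stela.

```python
import math

def snr_counting(dose):
    """Ideal counting detector: shot-noise limited, SNR = sqrt(N)."""
    return math.sqrt(dose)

def snr_scintillator(dose, noise_factor=1.6, read_noise=4.0):
    """Toy model of a fiber-coupled scintillator camera: electron-to-photon
    conversion inflates the shot-noise variance by noise_factor**2 and the
    sensor adds read noise (parameters are assumed, not a real camera's)."""
    return dose / math.sqrt((noise_factor ** 2) * dose + read_noise ** 2)

for dose in (5, 20, 100):   # electrons per pixel, low-dose regime
    print(dose, round(snr_counting(dose), 2), round(snr_scintillator(dose), 2))
```

The gap is largest at the lowest doses, which is exactly the regime imposed by beam-sensitive protein microcrystals.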
Procedia PDF Downloads 107
60 The Influence of Fashion Bloggers on the Pre-Purchase Decision for Online Fashion Products among Generation Y Female Malaysian Consumers
Authors: Mohd Zaimmudin Mohd Zain, Patsy Perry, Lee Quinn
Abstract:
This study explores how fashion consumers are influenced by fashion bloggers towards the pre-purchase decision for online fashion products in a non-Western context. Malaysians rank among the world’s most avid online shoppers, with apparel the third most popular purchase category. However, extant research on fashion blogging focuses on the developed Western market context. Numerous international fashion retailers have entered the Malaysian market, from luxury to fast fashion segments; however, Malaysian fashion consumers must balance religious and social norms for modesty with their dress style and adoption of fashion trends. Consumers increasingly mix and match Islamic and Western elements of dress to create new styles, enabling them to follow Western fashion trends whilst paying respect to social and religious norms. Social media have revolutionised the way that consumers can search for and find information about fashion products. For online fashion brands with no physical presence, social media provide a means of discovery for consumers. By allowing the creation and exchange of user-generated content (UGC) online, they provide a public forum that gives individual consumers their own voices, as well as access to product information that facilitates their purchase decisions. Social media empower consumers, and brands have important roles in facilitating conversations among consumers and themselves, to help consumers connect with them and one another. Fashion blogs have become important fashion information sources. By sharing their personal style and inspiring their followers with what they wear on popular social media platforms such as Instagram, fashion bloggers have become fashion opinion leaders. By creating UGC to spread useful information to their followers, they influence the pre-purchase decision.
Hence, successful Western fashion bloggers such as Chiara Ferragni may earn millions of US dollars every year, and some have created their own fashion ranges and beauty products, become judges in fashion reality shows, won awards, and collaborated with high street and luxury brands. As fashion blogging has become more established worldwide, increasing numbers of fashion bloggers have emerged from non-Western backgrounds to promote Islamic fashion styles, such as Hassanah El-Yacoubi and Dian Pelangi. This study adopts a qualitative approach using netnographic content analysis of consumer comments on two famous Malaysian fashion bloggers’ Instagram accounts during January-March 2016 and qualitative interviews with 16 Malaysian Generation Y fashion consumers during September-October 2016. Netnography adapts ethnographic techniques to the study of online communities or computer-mediated communications. Template analysis of the data involved coding comments according to the theoretical framework, which was developed from the literature review. Initial data analysis shows the strong influence of Malaysian fashion bloggers on their followers in terms of lifestyle and morals as well as fashion style. Followers were guided towards the mix and match trend of dress with Western and Islamic elements, for example, showing how vivid colours or accessories could be worked into an outfit whilst still respecting social and religious norms. The blogger’s Instagram account is a form of online community where followers can communicate and gain guidance and support from other followers, as well as from the blogger.
Keywords: fashion bloggers, Malaysia, qualitative, social media
Procedia PDF Downloads 219
59 Femicide: The Political and Social Blind Spot in the Legal and Welfare State of Germany
Authors: Kristina F. Wolff
Abstract:
Background: In the Federal Republic of Germany, violence against women is deeply embedded in society. Germany is, as of March 2020, the most populous member state of the European Union with 83.2 million inhabitants and, although more than half of its inhabitants are women, gender equality was not enshrined in the Basic Law until 1957. Women have only been allowed to enter paid employment without their husband's consent since 1977, and marital rape has been prosecutable only since 1997. While the lack of equality between men and women is named in the preamble of the Istanbul Convention as the cause of gender-specific, structural, traditional violence against women, Germany continues to slip in the latest Gender Equality Index. According to Police Crime Statistics (PCS), women are significantly more often victims of lethal violence emanating from men than vice versa. The PCS, which since 2015 has also collected gender-specific data on violent crimes, is kept by the Federal Criminal Police Office, but without taking into account criteria relevant for targeted prevention, such as the perpetrator's/killer's history of violence, weapon, motivation, etc. Institutions such as EIGE or the World Health Organization have for years asked Germany in vain for comparable data on violence against women in order to gain an overview or to develop cross-border synergies. The PCS are the only official data collection on violence against women. All players involved depend on this data set, which is published only in November of the following year and is thus already completely outdated at the time of publication. In order to combat German femicides causally, purposefully and efficiently, evidence-based data was urgently needed. Methodology: Beginning in January 2019, a database was set up that now tracks more than 600 German femicides, broken down by more than 100 crime-related individual criteria, which in turn go far beyond the official PCS.
These data are evaluated on the one hand by daily media research, and on the other hand by case-specific inquiries at the respective public prosecutor's offices and courts nationwide. This quantitative long-term study covers domestic violence as well as a variety of different types of gender-specific, lethal violence, including, for example, femicides committed by German citizens abroad. Additionally, alcohol/narcotic and/or drug abuse, infanticides and the gender aspect in the judiciary are also considered. Results: Since November 2020, evidence-based data from a scientific survey have been available for the first time in Germany, supplementing the rudimentary picture of reality provided by the PCS with a number of relevant parameters. The most important goal of the study is to identify "red flags" that enable general preventive awareness, serve increasingly precise hazard assessment in acute situations, and from which concrete instructions for action can be derived. Already at a very early stage of the study it could be proven that in more than half of all femicides with a sexual perpetrator/victim constellation there was an age difference of five years or more. Summary: Without reliable data and an understanding of the nature and extent, cause and effect, it is impossible to sustainably curb violence against girls and women, which increasingly often culminates in femicide. In Germany, valid data from a scientific survey has been available for the first time since November 2020, supplementing the rudimentary picture of reality provided by the official and, to date, sole crime statistics with several relevant parameters. The basic research provides insights into geo-concentration, monthly peaks and the modus operandi of male violent excesses.
A significant increase in child homicides in the course of femicides and/or child homicides as an instrument of violence against the mother could be proven, as well as an increased danger to those affected due to an age difference of five years or more. In view of the steadily increasing wave of violence against women, these study results are an eminent contribution to the preventive containment of German femicides.
Keywords: femicide, violence against women, gender-specific data, rule of law, Istanbul Convention, gender equality, gender-based violence
Procedia PDF Downloads 91
58 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems
Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana
Abstract:
Large-scale critical industrial scheduling problems are based on Resource-Constrained Project Scheduling Problems (RCPSP), which necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones), thus requiring practical solutions (i.e., modular and computationally efficient, with feasible solutions). To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that addresses the issues exhibited in the delivery of complex projects. With three interlinked entities (projects, tasks, resources), each with its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can be easily integrated with other optimization problems, already existing industrial tools and unique constraints as required by the use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), planning of a future NPP maintenance operation, and application in the defense industry to supply chain and factory relocation. In the first use case, the solution, in addition to the resources’ availability and tasks’ logical relationships, also integrates several project-specific constraints for outage management, like handling of resource incompatibility, updating of task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer device. This time-effective simulation corresponds with the nature of the problem and the requirement of running several scenarios (30-40 simulations) before finalizing the schedules.
The second use case is a factory relocation project where production lines must be moved to a new site while ensuring the continuity of their production. This generates the challenge of merging job shop scheduling and the RCPSP with location constraints. Our solution allows the automation of the production tasks while considering the expected production rate. The simulation algorithm manages the use and movement of resources and products to respect a given relocation scenario. The last use case concerns a future maintenance operation in an NPP. The project contains complex and hard constraints, like Finish-Start precedence relationships (i.e., successor tasks have to start immediately after predecessors while respecting all constraints), shareable coactivity for managing workspaces, and requirements of a specific state of "cyclic" resources (they can have multiple possible states, with only one active at a time) to perform tasks (which can require unique combinations of several cyclic resources). Our solution satisfies the requirement of minimizing the state changes of cyclic resources coupled with makespan minimization, and solves an instance with 80 cyclic resources and 50 incompatibilities between levels in less than a minute. In conclusion, we propose a fast and feasible modular approach to various industrial scheduling problems, validated by domain experts and compatible with existing industrial tools. This approach can be further enhanced by the use of machine learning techniques on historically repeated tasks to gain further insights for delay risk mitigation measures.
Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP
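A greedy heuristic of the kind described, picking the highest-priority eligible task at each step and placing it at the earliest resource-feasible time, can be sketched for a single-resource RCPSP. The longest-duration priority rule below stands in for the paper's dynamic cost function, and the task data are invented.

```python
def greedy_schedule(tasks, capacity):
    """Serial greedy heuristic for a single-resource RCPSP sketch.
    tasks: {name: (duration, demand, [predecessors])}. At each step the
    highest-priority eligible task is placed at the earliest time that
    respects both precedence and the resource capacity."""
    start, usage = {}, {}          # usage[t] = resource units busy at time t
    done = set()
    while len(done) < len(tasks):
        eligible = [n for n in tasks if n not in done
                    and all(p in done for p in tasks[n][2])]
        name = max(eligible, key=lambda n: tasks[n][0])   # priority rule
        dur, dem, preds = tasks[name]
        t = max([start[p] + tasks[p][0] for p in preds], default=0)
        while any(usage.get(t + i, 0) + dem > capacity for i in range(dur)):
            t += 1                 # slide right until capacity-feasible
        for i in range(dur):
            usage[t + i] = usage.get(t + i, 0) + dem
        start[name] = t
        done.add(name)
    return start

tasks = {"A": (3, 2, []), "B": (2, 2, []), "C": (2, 1, ["A"]), "D": (4, 2, ["B"])}
sched = greedy_schedule(tasks, capacity=3)
makespan = max(s + tasks[n][0] for n, s in sched.items())
print(sched, "makespan:", makespan)
```

Swapping the priority rule for a situational dynamic cost, as the abstract describes, changes only the `key` function; the placement mechanics stay the same.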
Procedia PDF Downloads 201
57 Xen45 Gel Implant in Open Angle Glaucoma: Efficacy, Safety and Predictors of Outcome
Authors: Fossarello Maurizio, Mattana Giorgio, Tatti Filippo
Abstract:
The most widely performed surgical procedure in Open-Angle Glaucoma (OAG) is trabeculectomy. Although this filtering procedure is extremely effective, surgical failures and postoperative complications are reported. Due to its invasive nature and possible complications, trabeculectomy is usually reserved, in practice, for patients who are refractory to medical and laser therapy. Recently, a number of micro-invasive surgical techniques (MIGS: Micro-Invasive Glaucoma Surgery) have been introduced into clinical practice. They meet the criteria of a micro-incisional approach, minimal tissue damage, short surgical time, reliable IOP reduction, an extremely high safety profile and rapid post-operative recovery. The Xen45 Gel Implant (Allergan, Dublin, Ireland) is one of the MIGS alternatives, and consists of a porcine gelatin tube designed to create an aqueous flow from the anterior chamber to the subconjunctival space, bypassing the resistance of the trabecular meshwork. In this study we report the results of this technique as a favorable option in the treatment of OAG for its benefits in terms of efficacy and safety, either alone or in combination with cataract surgery. This is a retrospective, single-center study conducted in consecutive OAG patients who underwent Xen45 Gel Stent implantation alone or in combination with phacoemulsification, from October 2018 to June 2019. The primary endpoint of the study was to evaluate the reduction of both IOP and the number of antiglaucoma medications at 12 months. The secondary endpoint was to correlate filtering bleb morphology, evaluated by means of anterior segment OCT, with efficacy in IOP lowering and eventual further procedure requirements. Data were recorded in Microsoft Excel and study analysis was performed using Microsoft Excel and SPSS (IBM). Mean values with standard deviations were calculated for IOPs and the number of antiglaucoma medications at all points.
The Kolmogorov-Smirnov test showed that IOP followed a normal distribution at all time points; therefore, the paired Student's t-test was used to compare baseline and postoperative mean IOP. The correlation between postoperative Day 1 IOP and Month 12 IOP was evaluated using the Pearson coefficient. Thirty-six eyes of 36 patients were evaluated. Compared to baseline, mean IOP and the mean number of antiglaucoma medications significantly decreased from 27.33 ± 7.67 mmHg to 16.3 ± 2.89 mmHg (38.8% reduction) and from 2.64 ± 1.39 to 0.42 ± 0.8 (84% reduction), respectively, at 12 months after surgery (both p < 0.001). According to bleb morphology, eyes were divided into a uniform group (n=8, 22.2%), a subconjunctival separation group (n=5, 13.9%), a microcystic multiform group (n=9, 25%) and a multiple internal layer group (n=14, 38.9%). There was no significant difference in IOP between the 4 groups at the month 12 follow-up visit. Adverse events included bleb function decrease (n=14, 38.9%), hypotony (n=8, 22.2%) and choroidal detachment (n=2, 5.6%). All eyes presenting bleb flattening underwent needling and MMC injection. The highest percentage of patients requiring secondary needling was in the uniform group (75%), with a significant difference between the groups (p=0.03). The Xen45 gel stent, either alone or in combination with phacoemulsification, provided a significant lowering of both IOP and medical antiglaucoma treatment and an elevated safety profile.
Keywords: anterior segment OCT, bleb morphology, micro-invasive glaucoma surgery, open angle glaucoma, Xen45 gel implant
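The statistical workflow described above (a normality check, a paired comparison of baseline vs. 12-month IOP, and a correlation between Day 1 and Month 12 IOP) can be sketched in Python with SciPy. The IOP values below are simulated placeholders matching the reported summary statistics, not the study's data.

```python
# Sketch of the abstract's statistical workflow: Kolmogorov-Smirnov
# normality test, paired Student's t-test, and Pearson correlation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline_iop = rng.normal(27.3, 7.7, size=36)         # mmHg, simulated
month12_iop = rng.normal(16.3, 2.9, size=36)          # mmHg, simulated
day1_iop = month12_iop + rng.normal(0, 2.0, size=36)  # simulated Day 1

# 1) Kolmogorov-Smirnov test against a normal distribution fitted
#    to the sample mean and standard deviation
ks_stat, ks_p = stats.kstest(
    baseline_iop, "norm",
    args=(baseline_iop.mean(), baseline_iop.std(ddof=1)),
)

# 2) Paired Student's t-test (the same eyes measured at both times)
t_stat, t_p = stats.ttest_rel(baseline_iop, month12_iop)

# 3) Pearson correlation between Day 1 and Month 12 IOP
r, r_p = stats.pearsonr(day1_iop, month12_iop)
```

With the paired design, each eye serves as its own control, which is why `ttest_rel` rather than an independent-samples test is the appropriate choice here.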
Procedia PDF Downloads 142
56 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs
Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu
Abstract:
This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive feature-based speech recognition domain. Leveraging the legacy tool 'xkl' and integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs), the study presents a comprehensive enhancement to the 'xkl' legacy software. This integration incorporates re-assigned spectrogram methodologies, enabling detailed acoustic analysis. The proposed model, combining CNNs and RNNs, demonstrates high precision and robustness in landmark detection. The addition of re-assigned spectrogram fusion within the 'xkl' software particularly improves the precision of vowel formant estimation, yielding a substantial performance gain in landmark detection over conventional methods. In the deep learning component, the combined CNNs and RNNs are equipped with specialized temporal embeddings, self-attention mechanisms, and positional embeddings, allowing the model to capture intricate dependencies within Italian speech vowels and making it highly adaptable in the distinctive feature domain. Furthermore, an advanced temporal modeling approach employs Bayesian temporal encoding, refining the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology.
Upon rigorous testing on a database (LaMIT) of speech recorded in a silent room by four Italian native speakers, the landmark detector demonstrates exceptional performance, achieving a 95% true detection rate and a 10% false detection rate. A majority of missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform, establishing the feasibility of employing a landmark detector as a front end in a speech recognition system. The integration of re-assigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding marks a significant advancement in Italian speech vowel landmark detection, offering accuracy, adaptability, and sophistication at the intersection of deep learning and distinctive feature-based speech recognition. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels. The integration of these techniques establishes a foundation for future advancements in speech signal processing, emphasizing the potential of the proposed model in practical applications across various domains requiring robust speech recognition systems.
Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network
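The CNN + RNN idea described above can be illustrated with a minimal NumPy sketch (not the authors' model): a convolution-style stage extracts a local spectral feature per frame, a simple recurrence integrates it over time, and frames whose activation exceeds a threshold are flagged as landmark candidates. All names and values here are illustrative assumptions.

```python
# Toy landmark detector: per-frame spectral feature ("CNN" stage)
# followed by a leaky temporal integrator ("RNN" stage).
import numpy as np

def detect_landmarks(spectrogram, kernel, threshold=0.5, decay=0.9):
    """spectrogram: (frames, bins); kernel: (bins,) spectral weights."""
    # "CNN" stage: dot product of each frame with a spectral kernel
    features = spectrogram @ kernel                   # shape (frames,)
    # "RNN" stage: leaky integration of the features over time
    state, activations = 0.0, []
    for f in features:
        state = decay * state + (1 - decay) * f
        activations.append(state)
    activations = np.array(activations)
    return np.flatnonzero(activations > threshold)    # landmark frames

# A "vowel" region of high energy mid-utterance should yield landmarks
spec = np.zeros((100, 8))
spec[40:60] = 1.0                 # frames 40-59 carry high energy
kernel = np.ones(8) / 8           # uniform spectral weighting
frames = detect_landmarks(spec, kernel)
```

The recurrence is what lets detections depend on temporal context rather than on isolated frames, which is the motivation for combining the two network families in the paper.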
Procedia PDF Downloads 65
55 Pulmonary Disease Identification Using Machine Learning and Deep Learning Techniques
Authors: Chandu Rathnayake, Isuri Anuradha
Abstract:
Early detection and accurate diagnosis of lung diseases play a crucial role in improving patient prognosis. However, conventional diagnostic methods rely heavily on subjective symptom assessments and medical imaging, often causing delays in diagnosis and treatment. To overcome this challenge, we propose a novel lung disease prediction system that integrates patient symptoms and X-ray images to provide a comprehensive and reliable diagnosis. In this project, we developed a mobile application specifically designed for detecting lung diseases. Our application leverages both patient symptoms and X-ray images to facilitate diagnosis. By combining these two sources of information, it delivers a more accurate and comprehensive assessment of the patient's condition, minimizing the risk of misdiagnosis. Our primary aim is to create a user-friendly and accessible tool, which is particularly important given the current circumstances, where many patients face limitations in visiting healthcare facilities. To achieve this, we employ several state-of-the-art algorithms. First, the Decision Tree algorithm is used for efficient symptom-based classification: it analyzes patient symptoms and builds a tree-like model to predict the presence of specific lung diseases. Second, we employ the Random Forest algorithm, which enhances predictive power by aggregating multiple decision trees; this ensemble technique improves the accuracy and robustness of the diagnosis. Furthermore, we incorporate a deep learning model using a Convolutional Neural Network (CNN) with the pre-trained ResNet50 model. CNNs are well suited to image analysis and feature extraction. By training the CNN on a large dataset of X-ray images, it learns to identify patterns and features indicative of lung diseases. The ResNet50 architecture, known for its excellent performance in image recognition tasks, enhances the efficiency and accuracy of our deep learning model.
By combining the outputs of the decision tree-based algorithms and the deep learning model, our mobile application generates a comprehensive lung disease prediction. The application provides users with an intuitive interface to input their symptoms and upload X-ray images for analysis. The prediction generated by the system offers valuable insights into the likelihood of various lung diseases, enabling individuals to take appropriate action and seek timely medical attention. Our proposed mobile application has significant potential to address the rising prevalence of lung diseases, particularly among young individuals with smoking addictions. By providing a quick and user-friendly approach to assessing lung health, our application empowers individuals to monitor their well-being conveniently. This solution also offers immense value in the context of limited access to healthcare facilities, enabling timely detection and intervention. In conclusion, our research presents a comprehensive lung disease prediction system that combines patient symptoms and X-ray images using advanced algorithms. By developing a mobile application, we provide an accessible tool for individuals to assess their lung health conveniently. This solution has the potential to make a significant impact on the early detection and management of lung diseases, benefiting both patients and healthcare providers.
Keywords: CNN, random forest, decision tree, machine learning, deep learning
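The symptom-classification stage described above can be sketched with scikit-learn: a Decision Tree and a Random Forest trained on binary symptom vectors, with their predicted probabilities averaged into a combined score. The symptom names, data, and labels below are synthetic placeholders, not a clinical dataset, and the imaging (ResNet50 CNN) stage is omitted.

```python
# Symptom-based classification with a Decision Tree and a Random Forest,
# combined by averaging predicted probabilities (synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# 200 synthetic patients, 4 binary symptoms:
# [cough, fever, dyspnea, chest pain]
X = rng.integers(0, 2, size=(200, 4))
# toy rule: "disease" when cough and dyspnea co-occur, plus 5% label noise
y = ((X[:, 0] & X[:, 2]) ^ (rng.random(200) < 0.05)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# combine the two symptom models by averaging predicted probabilities
patient = np.array([[1, 0, 1, 0]])  # cough and dyspnea present
p = (tree.predict_proba(patient)[0, 1]
     + forest.predict_proba(patient)[0, 1]) / 2
```

Averaging probabilities is one simple way to fuse the two symptom models; the abstract does not specify how the symptom and imaging outputs are weighted, so any combination rule here is an assumption.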
Procedia PDF Downloads 74
54 Transitioning Towards a Circular Economy in the Textile Industry: Approaches to Address Environmental Challenges
Authors: Atefeh Salehipoor
Abstract:
Textiles play a vital role in human life, particularly in the form of clothing. However, the alarming rate at which textiles end up in landfills presents a significant environmental risk. With approximately one garbage truck per second being filled with discarded textiles, urgent measures are required to mitigate this trend. Governments and responsible organizations are calling on various stakeholders to shift from a linear economy to a circular economy model in the textile industry. This article highlights several key approaches that can be undertaken to address this pressing issue, including the creation of renewable raw material sources, rethinking production processes, maximizing the use and reuse of textile products, implementing reproduction and recycling strategies, exploring redistribution to new markets, and finding innovative means to extend the lifespan of textiles. The rapid accumulation of textiles in landfills poses a significant threat to the environment, and this article explores the urgent need for the textile industry to transition from a linear economy model to a circular one. The linear model, characterized by the creation, use, and disposal of textiles, is unsustainable in the long term; by adopting a circular economy approach, the industry can minimize waste, reduce environmental impact, and promote sustainable practices. Approaches to Address Environmental Challenges: 1. Creation of Renewable Raw Material Sources: Exploring and promoting the use of renewable and sustainable raw materials, such as organic cotton, hemp, and recycled fibers, can significantly reduce the environmental footprint of textile production. 2. Rethinking Production Processes: Implementing cleaner production techniques, optimizing resource utilization, and minimizing waste generation are crucial steps in reducing the environmental impact of textile manufacturing. 3. Maximizing Use and Reuse of Textile Products: Encouraging consumers to prolong the lifespan of textile products through proper care, maintenance, and repair services can reduce the frequency of disposal and promote a culture of sustainability. 4. Reproduction and Recycling Strategies: Investing in innovative technologies and infrastructure to enable efficient reproduction and recycling of textiles can close the loop and minimize waste generation. 5. Redistribution of Textiles to New Markets: Exploring opportunities to redistribute textiles to new and parallel markets, such as resale platforms, can extend their lifecycle and prevent premature disposal. 6. Devising Means to Extend Textile Lifespan: Encouraging design practices that prioritize durability, versatility, and timeless aesthetics can contribute to prolonging the lifespan of textiles. Conclusion: The textile industry must urgently transition from a linear economy to a circular economy model to mitigate the adverse environmental impact caused by textile waste. By implementing the outlined approaches, such as sourcing renewable raw materials, rethinking production processes, promoting reuse and recycling, exploring new markets, and extending the lifespan of textiles, stakeholders can work together to create a more sustainable and environmentally friendly textile industry. These measures require collective action and collaboration between governments, organizations, manufacturers, and consumers to drive positive change and safeguard the planet for future generations.
Keywords: textiles, circular economy, environmental challenges, renewable raw materials, production processes, reuse, recycling, redistribution, textile lifespan extension
Procedia PDF Downloads 87