Search results for: distributed sensor networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5815


205 In-Situ Formation of Particle Reinforced Aluminium Matrix Composites by Laser Powder Bed Fusion of Fe₂O₃/AlSi12 Powder Mixture Using Consecutive Laser Melting+Remelting Strategy

Authors: Qimin Shi, Yi Sun, Constantinus Politis, Shoufeng Yang

Abstract:

In-situ preparation of particle-reinforced aluminium matrix composites (PRAMCs) by laser powder bed fusion (LPBF) additive manufacturing is a promising strategy to strengthen traditional Al-based alloys. The laser-driven thermite reaction is a practical mechanism for in-situ synthesis of PRAMCs. However, introducing oxygen by adding Fe₂O₃ makes the powder mixture highly prone to porosity and Al₂O₃ film formation during LPBF, making it challenging to produce dense Al-based materials. Therefore, this work develops a processing strategy combining consecutive high-energy laser melting scans and low-energy laser remelting scans to prepare PRAMCs from a Fe₂O₃/AlSi12 powder mixture. The powder mixture consists of 5 wt% Fe₂O₃ with the remainder AlSi12 powder. The addition of 5 wt% Fe₂O₃ aims to achieve balanced strength and ductility. A high relative density (98.2 ± 0.55%) was successfully obtained by optimizing the laser melting (Emelting) and laser remelting (Eremelting) surface energy densities to Emelting = 35 J/mm² and Eremelting = 5 J/mm². Results further reveal the necessity of increasing Emelting to improve the spreading/wetting of the liquid metal by breaking up the Al₂O₃ films surrounding the molten pools; however, the high-energy laser melting produced considerable porosity, including H₂-, O₂- and keyhole-induced pores. The subsequent low-energy laser remelting could close the resulting internal pores, backfill open gaps and smooth the solidified surfaces. As a result, the material was densified by repeating laser melting and laser remelting layer by layer. Despite the two laser scanning passes, the microstructure still shows fine cellular Si networks with Al grains inside (grain size of about 370 nm) and in-situ nano-precipitates (Al₂O₃, Si, and Al-Fe(-Si) intermetallics). Finally, the fine microstructure, nano-structured dispersion strengthening, and high-level densification strengthened the in-situ PRAMCs, reaching a yield strength of 426 ± 4 MPa and a tensile strength of 473 ± 6 MPa. Furthermore, the results are expected to provide valuable information for processing other powder mixtures with a strong tendency toward porosity/oxide-film formation, given the demonstrated contribution of the laser melting/remelting strategy to densifying the material and obtaining good mechanical properties during LPBF.
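
The abstract reports the optimized surface energy densities (Emelting = 35 J/mm², Eremelting = 5 J/mm²) but does not spell out how they are computed; a common definition in LPBF work is E = P/(v·h), i.e., laser power divided by scan speed times hatch spacing. The sketch below uses that assumed definition with purely hypothetical laser settings.

```python
# Minimal sketch: areal (surface) energy density as commonly defined in LPBF work,
# E = P / (v * h). The laser settings below are hypothetical illustrations only; the
# abstract reports the optimized targets E_melting = 35 J/mm^2 and E_remelting = 5 J/mm^2
# but not the underlying power, speed, and hatch spacing.

def surface_energy_density(power_w: float, speed_mm_s: float, hatch_mm: float) -> float:
    """Return areal energy density in J/mm^2."""
    return power_w / (speed_mm_s * hatch_mm)

# Hypothetical melting scan: 350 W, 100 mm/s, 0.10 mm hatch -> 35 J/mm^2
print(surface_energy_density(350, 100, 0.10))
# Hypothetical remelting scan: 100 W, 200 mm/s, 0.10 mm hatch -> 5 J/mm^2
print(surface_energy_density(100, 200, 0.10))
```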

Keywords: densification, laser powder bed fusion, metal matrix composites, microstructures, mechanical properties

Procedia PDF Downloads 160
204 Stability Study of Hydrogel Based on Sodium Alginate/Poly (Vinyl Alcohol) with Aloe Vera Extract for Wound Dressing Application

Authors: Klaudia Pluta, Katarzyna Bialik-Wąs, Dagmara Malina, Mateusz Barczewski

Abstract:

Hydrogel networks, due to their unique properties, are highly attractive materials for wound dressing. The three-dimensional structure of hydrogels provides tissues with optimal moisture, which supports the wound healing process. Moreover, a characteristic feature of hydrogels is their absorption capacity, which allows them to take up wound exudates. For the fabrication of biomedical hydrogels, a combination of natural polymers ensuring biocompatibility and synthetic ones that provide adequate mechanical strength is often used. Sodium alginate (SA) is one of the polymers widely used in wound dressing materials because it exhibits excellent biocompatibility and biodegradability. However, due to their poor mechanical strength, alginate-based hydrogel materials are often enhanced by the addition of another polymer such as poly(vinyl alcohol) (PVA). This paper concentrates on the preparation of a sodium alginate/poly(vinyl alcohol) hydrogel system incorporating Aloe vera extract and glycerin as a wound healing material, with a particular focus on the role of composition in structure, thermal properties, and stability. Briefly, the hydrogel preparation is based on the chemical cross-linking method using poly(ethylene glycol) diacrylate (PEGDA, Mn = 700 g/mol) as a crosslinking agent and ammonium persulfate as an initiator. In vitro degradation tests of SA/PVA/AV hydrogels were carried out in Phosphate-Buffered Saline (pH 7.4) as well as in distilled water. Hydrogel samples were first cut into half-gram pieces (in triplicate) and immersed in the immersion fluid. All specimens were then incubated at 37°C, and the pH and conductivity values were measured at time intervals. The post-incubation fluids were analyzed using SEC/GPC to check the content of oligomers. The separation was carried out at 35°C on a poly(hydroxy methacrylate) column (dimensions 300 x 8 mm). A 0.1 M NaCl solution at a flow rate of 0.65 ml/min was used as the mobile phase. Three injections with a volume of 50 µl were made for each sample. The thermogravimetric data of the prepared hydrogels were collected using a Netzsch TG 209 F1 Libra apparatus. Samples with masses of about 10 mg were weighed separately into Al₂O₃ crucibles and heated from 30°C to 900°C at a scanning rate of 10 °C·min⁻¹ under a nitrogen atmosphere. Based on the conducted research, a fast and simple method was developed to produce a potential wound dressing material containing sodium alginate, poly(vinyl alcohol) and Aloe vera extract. As a result, transparent and flexible SA/PVA/AV hydrogels were obtained. The degradation experiments indicated that most of the samples immersed in PBS as well as in distilled water were not degraded throughout the whole incubation time.

Keywords: hydrogels, wound dressings, sodium alginate, poly(vinyl alcohol)

Procedia PDF Downloads 168
203 Mining Scientific Literature to Discover Potential Research Data Sources: An Exploratory Study in the Field of Haemato-Oncology

Authors: A. Anastasiou, K. S. Tingay

Abstract:

Background: Discovering suitable datasets is an important part of health research, particularly for projects working with clinical data from patients organized in cohorts (cohort data), but with the proliferation of so many national and international initiatives, it is becoming increasingly difficult for research teams to locate real world datasets that are most relevant to their project objectives. We present a method for identifying healthcare institutes in the European Union (EU) which may hold haemato-oncology (HO) data. A key enabler of this research was the bibInsight platform, a scientometric data management and analysis system developed by the authors at Swansea University. Method: A PubMed search was conducted using HO clinical terms taken from previous work. The resulting XML file was processed using the bibInsight platform, linking affiliations to the Global Research Identifier Database (GRID). GRID is an international, standardized list of institutions, including the city and country in which the institution exists, as well as a category of the main business type, e.g., Academic, Healthcare, Government, Company. Countries were limited to the 28 current EU members, and institute type to 'Healthcare'. An article was considered valid if at least one author was affiliated with an EU-based healthcare institute. Results: The PubMed search produced 21,310 articles with 9,885 distinct affiliations matched in GRID. Of these affiliations, 760 were from EU countries, and 390 of these were healthcare institutes. One affiliation was excluded as being a veterinary hospital. Two EU countries did not have any publications in our analysis dataset. The results were analysed by country and by individual healthcare institute. Networks both within the EU and internationally show institutional collaborations, which may suggest a willingness to share data for research purposes. Geographical mapping can ensure that data has broad population coverage. Collaborations with industry or government may exclude healthcare institutes that may have embargos or additional costs associated with data access. Conclusions: Data reuse is becoming increasingly important both for ensuring the validity of results and for economy of available resources. The ability to identify potential, specific data sources from over twenty thousand articles in less than an hour could assist in improving knowledge of, and access to, data sources. As our method does not yet establish whether these healthcare institutes hold data or merely publish on the topic, future work will involve text mining of data-specific concordant terms to identify numbers of participants, demographics, study methodologies, and sub-topics of interest.
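
The bibInsight platform itself is not publicly documented in the abstract, so the sketch below is only a simplified stand-in for the described pipeline: parse a PubMed XML export, look up each affiliation string against a GRID-style table, and keep articles with at least one EU-based healthcare affiliation. The file names, lookup table columns, and naive substring-matching rule are illustrative assumptions.

```python
# Simplified stand-in for the described pipeline (not the bibInsight platform):
# parse a PubMed XML export, map affiliation strings to a GRID-style lookup table,
# and keep articles with at least one EU-based healthcare affiliation.
# File names and the substring-matching rule are illustrative assumptions.
import csv
import xml.etree.ElementTree as ET

EU28 = {"Austria", "Belgium", "Bulgaria", "Croatia", "Cyprus", "Czechia", "Denmark",
        "Estonia", "Finland", "France", "Germany", "Greece", "Hungary", "Ireland",
        "Italy", "Latvia", "Lithuania", "Luxembourg", "Malta", "Netherlands", "Poland",
        "Portugal", "Romania", "Slovakia", "Slovenia", "Spain", "Sweden", "United Kingdom"}

# Hypothetical GRID extract with columns: name, country, type
with open("grid_institutes.csv", newline="", encoding="utf-8") as fh:
    grid = list(csv.DictReader(fh))

def match_institute(affiliation: str):
    """Very naive matching: first GRID institute name found inside the affiliation string."""
    for row in grid:
        if row["name"].lower() in affiliation.lower():
            return row
    return None

tree = ET.parse("pubmed_export.xml")          # hypothetical PubMed XML export
valid_articles = []
for article in tree.iter("PubmedArticle"):
    hits = []
    for aff in article.iter("Affiliation"):
        row = match_institute(aff.text or "")
        if row and row["country"] in EU28 and row["type"] == "Healthcare":
            hits.append(row["name"])
    if hits:                                   # at least one EU healthcare affiliation
        pmid = article.findtext(".//PMID")
        valid_articles.append((pmid, sorted(set(hits))))

print(f"{len(valid_articles)} articles with EU healthcare affiliations")
```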

Keywords: data reuse, data discovery, data linkage, journal articles, text mining

Procedia PDF Downloads 118
202 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and in industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach since it accurately simulates flows down to the smallest dissipative scales, i.e., the Kolmogorov scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes on an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially with a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into files (pre-files). Integer data are packed into the pre-files in a stream binary format so that they are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files; for Lustre, it uses parallel MPI I/O so that each MPI rank acquires its information from the file in parallel. In the case of GPFS, on each computational node a single MPI rank reads data from a file generated specifically for that node and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory’s Mira (GPFS), the National Center for Supercomputing Applications’ Blue Waters (Lustre), San Diego Supercomputer Center’s Comet (Lustre), and UIC’s Extreme (Lustre). The tests showed that one file per node is suited for GPFS and parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations such as matrix-matrix and matrix-vector products for the calculation of the solution at every time step. For this, the code can make use of either its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed scalable and efficient parallel performance of the code.
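
The DSEM code itself is not public, so the following mpi4py sketch only illustrates the two start-up read strategies described above: collective MPI I/O where every rank reads its own slice of a global pre-file (the Lustre case), and a node-local pattern where one rank per node reads a node-specific file and forwards slices to its neighbours with non-blocking point-to-point messages (the GPFS case). Record sizes and file names are hypothetical.

```python
# Illustrative mpi4py sketch of the two start-up read strategies described above;
# this is not the DSEM implementation. Record sizes and file names are hypothetical.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
RECORD_INTS = 1024                       # hypothetical number of integers per rank
nbytes = RECORD_INTS * np.dtype(np.int32).itemsize

# --- Strategy 1 (Lustre): every rank reads its own slice with collective MPI I/O.
buf = np.empty(RECORD_INTS, dtype=np.int32)
fh = MPI.File.Open(comm, "prefile_global.bin", MPI.MODE_RDONLY)
fh.Read_at_all(rank * nbytes, buf)       # offset derived from the rank id
fh.Close()

# --- Strategy 2 (GPFS): one rank per node reads a node-specific file and forwards
# slices to the other ranks on the same node with non-blocking point-to-point sends,
# so messages never cross the cluster switches.
node = comm.Split_type(MPI.COMM_TYPE_SHARED, key=rank)
nrank, nsize = node.Get_rank(), node.Get_size()
local = np.empty(RECORD_INTS, dtype=np.int32)
if nrank == 0:
    # Hypothetical node file assumed to hold nsize * RECORD_INTS integers.
    data = np.fromfile(f"prefile_node{rank}.bin", dtype=np.int32)
    reqs = [node.Isend(data[i * RECORD_INTS:(i + 1) * RECORD_INTS], dest=i)
            for i in range(1, nsize)]
    local[:] = data[:RECORD_INTS]
    MPI.Request.Waitall(reqs)
else:
    node.Irecv(local, source=0).Wait()

print(f"rank {rank}: startup data loaded ({local.size} integers)")
```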

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 136
201 Transport Hubs as Loci of Multi-Layer Ecosystems of Innovation: Case Study of Airports

Authors: Carolyn Hatch, Laurent Simon

Abstract:

Urban mobility and the transportation industry are undergoing a transformation, shifting from an auto production-consumption model that has dominated since the early 20th century towards new forms of personal and shared multi-modality [1]. This is shaped by key forces such as climate change, which has induced a shift in production and consumption patterns and efforts to decarbonize and improve transport services through, for instance, the integration of vehicle automation, electrification and mobility sharing [2]. Advanced innovation practices and platforms for experimentation and validation of new mobility products and services that are increasingly complex and multi-stakeholder-oriented are shaping this new world of mobility. Transportation hubs, such as airports, are emblematic of these disruptive forces playing out in the mobility industry. Airports are emerging as the core of innovation ecosystems on and around contemporary mobility issues, and are increasingly recognized as complex public/private nodes operating in many societal dimensions [3,4]. These include urban development, sustainability transitions, digital experimentation, customer experience, infrastructure development and data exploitation (for instance, airports generate massive and often untapped data flows, with significant potential for use, commercialization and social benefit). Yet airport innovation practices have not been well documented in the innovation literature. This paper addresses this gap by proposing a model of airport innovation that aims to equip airport stakeholders to respond to these new and complex innovation needs in practice. The methodology involves: 1 – a literature review bringing together key research and theory on airport innovation management, open innovation and innovation ecosystems in order to evaluate airport practices through an innovation lens; 2 – an international benchmarking of leading airports and their innovation practices, including such examples as Aéroports de Paris, Schiphol in Amsterdam, Changi in Singapore, and others; and 3 – semi-structured interviews with airport managers on key aspects of organizational practice, facilitated through a close partnership with the Airports Council International (ACI), a major stakeholder in this research project. Preliminary results find that the most successful airports are those that have shifted to a multi-stakeholder, platform ecosystem model of innovation. The recent entrance of new actors in airports (Google, Amazon, Accor, Vinci, Airbnb and others) has forced the opening of organizational boundaries to share and exchange knowledge with a broader set of ecosystem players. This has also led to new forms of governance and intermediation by airport actors to connect complex, highly distributed knowledge, along with new kinds of inter-organizational collaboration, co-creation and collective ideation processes. Leading airports in the case study have demonstrated a unique capacity to force traditionally siloed activities to “think together”, “explore together” and “act together”, to share data, contribute expertise and pioneer new governance approaches and collaborative practices. In so doing, they have successfully integrated these many disruptive change pathways and forced their implementation and coordination towards innovative mobility outcomes, with positive societal, environmental and economic impacts. This research has implications for: 1 – innovation theory; 2 – urban and transport policy; and 3 – organizational practice, within the mobility industry and across the economy.

Keywords: airport management, ecosystem, innovation, mobility, platform, transport hubs

Procedia PDF Downloads 184
200 Roadmap to a Bottom-Up Approach Creating Meaningful Contributions to Surgery in Low-Income Settings

Authors: Eva Degraeuwe, Margo Vandenheede, Nicholas Rennie, Jolien Braem, Miryam Serry, Frederik Berrevoet, Piet Pattyn, Wouter Willaert, InciSioN Belgium Consortium

Abstract:

Background: Worldwide, five billion people lack access to safe and affordable surgical care. An additional 1.27 million surgeons, anesthesiologists, and obstetricians (SAO) are needed by 2030 to meet the target of 20 per 100,000 population and to reach the goal of the Lancet Commission on Global Surgery. A well-informed future generation exposed early on to the current challenges in global surgery (GS) is necessary to ensure a sustainable future. Methods: InciSioN, the International Student Surgical Network, is a non-profit organization by and for students, residents, and fellows in over 80 countries. InciSioN Belgium, one of the prominent national working groups, has made substantial progress and collaborated with other networks to fill the educational gap, stimulate advocacy efforts, and increase interaction with the international network. This report describes a roadmap to achieve sustainable development and education within GS, using the example of InciSioN Belgium. Results: Since the establishment of the organization’s branch in 2019, it has hosted an educational workshop for first-year residents in surgery, engaging over 2,500 participants, and established a recurring directing board of 15 members. In 2020-2021, InciSioN Ghent organized three workshops combining educational and interactive sessions for future prime advocates and surgical candidates. InciSioN Belgium has set up a strong formal coalition with the Belgian Medical Students’ Association (BeMSA), with its own standing committee, reaching over 3,000 medical students annually. In 2021-2022, InciSioN Belgium broadened to a multidisciplinary approach, including dentistry and nursing students and graduates in workshops and research projects, leading to a 450% increase in membership and exposure. This roadmap sets strategic goals and mechanisms for the GS community to achieve sustained nationwide improvements in GS research and education focused on future SAOs, in order to achieve the GS sustainable development goals. In the coming year, expansion is directed toward formal integration of GS into the medical curriculum and increased international advocacy, whilst inspiring SAOs to engage in GS in Belgium. Conclusion: The development and implementation of durable change for GS are necessary. The student organization InciSioN Belgium is growing and hopes to close the colossal gap in GS and inspire the growth of other branches while sharing the know-how of a student organization.

Keywords: advocacy, education, global surgery, InciSioN, student network

Procedia PDF Downloads 176
199 Charcoal Traditional Production in Portugal: Contribution to the Quantification of Air Pollutant Emissions

Authors: Cátia Gonçalves, Teresa Nunes, Inês Pina, Ana Vicente, C. Alves, Felix Charvet, Daniel Neves, A. Matos

Abstract:

The production of charcoal relies on rudimentary technologies using traditional brick kilns. Charcoal is produced under pyrolysis conditions: breaking down the chemical structure of biomass under high temperature in the absence of air. The amounts of the pyrolysis products (charcoal, pyroligneous extract, and flue gas) depend on various parameters, including temperature, time, pressure, kiln design, and wood characteristics such as the moisture content. This activity is recognized for its inefficiency and high pollution levels, but it is poorly characterized. It is widely distributed and a vital economic activity in certain regions of Portugal, playing a relevant role in the management of woody residues. The location of the units determines the biomass used for charcoal production. The Portalegre district, in the Alto Alentejo region (Portugal), is a good example: an essentially rural area with a predominantly farming, agricultural, and forestry profile and significant charcoal production activity. In this district, a recent inventory identifies almost 50 charcoal production units, equivalent to more than 450 kilns, of which 80% appear to be in operation. A field campaign was designed with the objective of determining the composition of the emissions released during a charcoal production cycle. A total of 30 samples of particulate matter and 20 gas samples in Tedlar bags were collected. Particulate and gas sampling were performed in parallel, two in the morning and two in the afternoon, alternating the inlet heads (PM₁₀ and PM₂.₅) of the particulate sampler. The gas and particulate samples were collected in the plume as close as possible to the chimney emission point. The biomass (dry basis) used in the carbonization process was a mixture of cork oak (77 wt.%), holm oak (7 wt.%), stumps (11 wt.%), and charred wood (5 wt.%) from previous carbonization processes. A cylindrical batch kiln (80 m³), 4.5 m in diameter and 5 m in height, was used in this study. The composition of the gases was determined by gas chromatography, while the particulate samples (PM₁₀, PM₂.₅) were subjected to different analytical techniques (thermo-optical transmission technique, ion chromatography, HPAE-PAD, and GC-MS after solvent extraction) after prior gravimetric determination, to study their organic and inorganic constituents. The charcoal production cycle presents widely varying operating conditions, which will be reflected in the composition of gases and particles produced and emitted throughout the process. The concentrations of PM₁₀ and PM₂.₅ in the plume ranged between 0.003 and 0.293 g m⁻³ and between 0.004 and 0.292 g m⁻³, respectively. On average, total carbon, inorganic ions, and sugars accounted for 65% and 56%, 2.8% and 2.3%, and 1.27% and 1.21% of PM₁₀ and PM₂.₅, respectively. The organic fraction studied so far includes more than 30 aliphatic compounds and 20 PAHs. The particulate matter emission factors for charcoal production in the traditional kiln were 33 g/kg and 27 g/kg of wood (dry basis) for PM₁₀ and PM₂.₅, respectively. With the data obtained in this study, it is possible to fill the gap in information about the environmental impact of traditional charcoal production in Portugal. Acknowledgment: The authors thank FCT – Portuguese Science Foundation, I.P., and the Ministry of Science, Technology and Higher Education of Portugal for financial support within the scope of the projects CHARCLEAN (PCIF/GVB/0179/2017) and CESAM (UIDP/50017/2020 + UIDB/50017/2020).
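
The abstract does not detail how the emission factors were derived; one generic route, sketched below with entirely hypothetical numbers, is to integrate the measured plume concentration over the sampled flue-gas volume and normalise by the mass of dry wood carbonised.

```python
# Generic emission-factor arithmetic with hypothetical numbers; the abstract does not
# state the actual derivation used in the study (e.g., whether a carbon balance was applied).
def emission_factor_g_per_kg(conc_g_m3: float, flue_gas_m3: float, wood_kg_db: float) -> float:
    """EF = emitted PM mass / mass of dry wood charged (g per kg, dry basis)."""
    return conc_g_m3 * flue_gas_m3 / wood_kg_db

# Hypothetical cycle: mean PM10 of 0.15 g/m3, 2.0e6 m3 of flue gas, 10,000 kg of dry wood
print(round(emission_factor_g_per_kg(0.15, 2.0e6, 10_000), 1), "g/kg (dry basis)")
```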

Keywords: brick kilns, charcoal, emission factors, PAHs, total carbon

Procedia PDF Downloads 149
198 Effect of a Chatbot-Assisted Adoption of Self-Regulated Spaced Practice on Students' Vocabulary Acquisition and Cognitive Load

Authors: Ngoc-Nguyen Nguyen, Hsiu-Ling Chen, Thanh-Truc Lai Huynh

Abstract:

In foreign language learning, vocabulary acquisition has consistently posed challenges to learners, especially those at lower levels. Conventional approaches often fail both to promote vocabulary learning and to ensure engaging experiences. The emergence of mobile learning, particularly the integration of chatbot systems, has offered alternative ways to facilitate this practice. Chatbots have proven effective in educational contexts by offering interactive learning experiences in a constructivist manner. These tools have attracted attention in the field of mobile-assisted language learning (MALL) in recent years. This research is conducted in an English for Specific Purposes (ESP) course at the A2 level of the CEFR, designed for non-English majors. Participants are first-year Vietnamese students aged 18 to 20 at a university. This quasi-experimental study follows a pretest-posttest control group design over five weeks, with two classes randomly assigned as the experimental and control groups. The experimental group engages in chatbot-assisted spaced practice with SRL components, while the control group uses the same spaced practice without SRL. The two classes are taught by the same lecturer. Data are collected through pre- and post-tests, cognitive load surveys, and semi-structured interviews. The combination of self-regulated learning (SRL) and distributed practice, grounded in the spacing effect, forms the basis of the present study. SRL elements, which concern goal setting and strategy planning, are integrated into the system. The spaced practice method, similar to that used in widely recognized learning platforms such as Duolingo and Anki flashcards, spreads learning out over multiple sessions. This study’s design features quizzes that progressively increase in difficulty. These quizzes target both the Recognition-Recall and Comprehension-Use dimensions for comprehensive vocabulary acquisition. The mobile-based chatbot system is built using Golang, an open-source programming language developed by Google. It follows a structured flow that guides learners through a series of four quizzes in each week of teacher-led learning. The quizzes start with less cognitively demanding tasks, such as multiple-choice questions, before moving on to more complex exercises. The integration of SRL elements allows students to self-evaluate the difficulty level of vocabulary items, predict the scores they will achieve, and choose appropriate strategies. This research is part one of a two-part project. The initial findings will inform the development of an upgraded chatbot system in part two, where adaptive features in response to the integration of SRL components will be introduced. The research objectives are to assess the effectiveness of the chatbot-assisted approach, based on the combination of spaced practice and SRL, in improving vocabulary acquisition and managing cognitive load, as well as to understand students' perceptions of this learning tool. The insights from this study will contribute to the growing body of research on mobile-assisted language learning and offer practical implications for integrating chatbot systems with spaced practice into educational settings to enhance vocabulary learning.
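
The authors' chatbot is implemented in Golang and its exact scheduling rules are not given in the abstract; the Python sketch below only illustrates the underlying spacing idea: each vocabulary item is re-queued after an expanding interval, with the interval reset when the learner answers incorrectly. The interval lengths and difficulty tiers are illustrative assumptions.

```python
# Illustrative spaced-practice scheduler (not the authors' Golang chatbot):
# each item is re-queued after an expanding interval; a wrong answer resets it.
# Interval lengths and difficulty tiers are assumptions made for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

INTERVALS = [timedelta(days=1), timedelta(days=3), timedelta(days=7), timedelta(days=14)]

@dataclass
class VocabItem:
    word: str
    difficulty: str = "recognition"     # e.g. recognition -> recall -> comprehension -> use
    level: int = 0                      # index into INTERVALS
    due: datetime = field(default_factory=datetime.utcnow)

def review(item: VocabItem, correct: bool, now: Optional[datetime] = None) -> VocabItem:
    """Advance the spacing level on a correct answer, reset it on a mistake."""
    now = now or datetime.utcnow()
    item.level = min(item.level + 1, len(INTERVALS) - 1) if correct else 0
    item.due = now + INTERVALS[item.level]
    return item

item = VocabItem("invoice")
review(item, correct=True)              # level 0 -> 1, next review due in ~3 days
print(item.level, item.due)
```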

Keywords: mobile learning, mobile-assisted language learning, MALL, chatbots, vocabulary learning, spaced practice, spacing effect, self-regulated learning, SRL, self-regulation, EFL, cognitive load

Procedia PDF Downloads 25
197 Exploration into Bio Inspired Computing Based on Spintronic Energy Efficiency Principles and Neuromorphic Speed Pathways

Authors: Anirudh Lahiri

Abstract:

Neuromorphic computing, inspired by the intricate operations of biological neural networks, offers a revolutionary approach to overcoming the limitations of traditional computing architectures. This research proposes the integration of spintronics with neuromorphic systems, aiming to enhance computational performance, scalability, and energy efficiency. Traditional computing systems, based on the Von Neumann architecture, struggle with scalability and efficiency due to the segregation of memory and processing functions. In contrast, the human brain exemplifies high efficiency and adaptability, processing vast amounts of information with minimal energy consumption. This project explores the use of spintronics, which utilizes the electron's spin rather than its charge, to create more energy-efficient computing systems. Spintronic devices, such as magnetic tunnel junctions (MTJs) manipulated through spin-transfer torque (STT) and spin-orbit torque (SOT), offer a promising pathway to reducing power consumption and enhancing the speed of data processing. The integration of these devices within a neuromorphic framework aims to replicate the efficiency and adaptability of biological systems. The research is structured into three phases: an exhaustive literature review to build a theoretical foundation, laboratory experiments to test and optimize the theoretical models, and iterative refinements based on experimental results to finalize the system. The initial phase focuses on understanding the current state of neuromorphic and spintronic technologies. The second phase involves practical experimentation with spintronic devices and the development of neuromorphic systems that mimic synaptic plasticity and other biological processes. The final phase focuses on refining the systems based on feedback from the testing phase and preparing the findings for publication. The expected contributions of this research are twofold. Firstly, it aims to significantly reduce the energy consumption of computational systems while maintaining or increasing processing speed, addressing a critical need in the field of computing. Secondly, it seeks to enhance the learning capabilities of neuromorphic systems, allowing them to adapt more dynamically to changing environmental inputs, thus better mimicking the human brain's functionality. The integration of spintronics with neuromorphic computing could revolutionize how computational systems are designed, making them more efficient, faster, and more adaptable. This research aligns with the ongoing pursuit of energy-efficient and scalable computing solutions, marking a significant step forward in the field of computational technology.

Keywords: material science, biological engineering, mechanical engineering, neuromorphic computing, spintronics, energy efficiency, computational scalability, synaptic plasticity

Procedia PDF Downloads 51
196 Enhanced Furfural Extraction from Aqueous Media Using Neoteric Hydrophobic Solvents

Authors: Ahmad S. Darwish, Tarek Lemaoui, Hanifa Taher, Inas M. AlNashef, Fawzi Banat

Abstract:

This research reports a systematic top-down approach for designing neoteric hydrophobic solvents – particularly deep eutectic solvents (DES) and ionic liquids (IL) – as furfural extractants from aqueous media for sustainable biomass conversion. The first stage of the framework entailed screening 32 neoteric solvents to determine their efficacy against toluene as the application’s conventional benchmark for comparison. The selection criteria for the best solvents encompassed not only their efficiency in extracting furfural but also low viscosity and minimal toxicity levels. Additionally, for the DESs, their natural origins, availability, and biodegradability were also taken into account. From the screening pool, two neoteric solvents were selected: thymol:decanoic acid 1:1 (Thy:DecA) and trihexyltetradecylphosphonium bis(trifluoromethylsulfonyl)imide [P₁₄,₆,₆,₆][NTf₂]. These solvents outperformed the toluene benchmark, achieving efficiencies of 94.1% and 97.1%, respectively, compared to toluene’s 81.2%, while also possessing the desired properties. These solvents were then characterized thoroughly in terms of their physical properties, thermal properties, critical properties, and cross-contamination solubilities. The selected neoteric solvents were then extensively tested under various operating conditions and exhibited exceptionally stable performance, maintaining high efficiency across a broad range of temperatures (15–100 °C), pH levels (1–13), and furfural concentrations (0.1–2.0 wt%) with a remarkable equilibrium time of only 2 minutes and, most notably, high efficiencies even at low solvent-to-feed ratios. The durability of the neoteric solvents was also validated over multiple extraction-regeneration cycles, with limited leachability to the aqueous phase (≈0.1%). Moreover, the extraction performance of the solvents was then modeled through machine learning, specifically multiple non-linear regression (MNLR) and artificial neural networks (ANN). The models demonstrated high accuracy, indicated by their low absolute average relative deviations, with values of 2.74% and 2.28% for Thy:DecA and [P₁₄,₆,₆,₆][NTf₂], respectively, using MNLR, and 0.10% for Thy:DecA and 0.41% for [P₁₄,₆,₆,₆][NTf₂] using ANN, highlighting the significantly enhanced predictive accuracy of the ANN. The neoteric solvents presented herein offer noteworthy advantages over traditional organic solvents, including their high efficiency in both extraction and regeneration processes and their stability and minimal leachability, making them particularly suitable for applications involving aqueous media. Moreover, these solvents are more environmentally friendly, incorporating renewable and sustainable components like thymol and decanoic acid. The exceptional efficacy of the newly developed neoteric solvents signifies a major advancement, providing a green and sustainable alternative for furfural production from biowaste.
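
The modelling details beyond MNLR/ANN are not given, so the sketch below is only a generic illustration of the ANN route: fit a small multilayer-perceptron regressor to extraction efficiency as a function of temperature, pH and furfural concentration, and score it with the absolute average relative deviation (AARD) quoted in the abstract. The training data here are synthetic, not the study's measurements.

```python
# Generic illustration of the ANN modelling route (synthetic data, not the study's):
# regress extraction efficiency on temperature, pH and furfural concentration,
# then score the model with the absolute average relative deviation (AARD).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.uniform(15, 100, n),      # temperature, deg C
    rng.uniform(1, 13, n),        # pH
    rng.uniform(0.1, 2.0, n),     # furfural concentration, wt%
])
# Synthetic "efficiency" surface in percent, just to make the example runnable.
y = 90 + 0.03 * X[:, 0] - 0.2 * (X[:, 1] - 7) ** 2 + 1.5 * X[:, 2] + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                                   random_state=0)).fit(X_tr, y_tr)

aard = 100 * np.mean(np.abs((model.predict(X_te) - y_te) / y_te))
print(f"AARD = {aard:.2f} %")
```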

Keywords: sustainable biomass conversion, furfural extraction, ionic liquids, deep eutectic solvents

Procedia PDF Downloads 73
195 Intrigues of Brand Activism versus Brand Antagonism in Rival Online Football Brand Communities: The Case of the Top Two Premier Football Clubs in Ghana

Authors: Joshua Doe, George Amoako

Abstract:

Purpose: In an increasingly digital world, the realm of sports fandom has extended its borders, creating a vibrant ecosystem of online communities centered around football clubs. This study ventures into the intricate interplay of motivations that drive football fans to respond to brand activism and its profound implications for brand antagonism and engagement among two of Ghana's most revered premier football clubs. Methods: A sample of 459 fervent fans from the two rival clubs was engaged through self-administered questionnaires distributed via social media and online platforms. Data were analysed using PLS-SEM. Findings: The tapestry of motivations that weave through these online football communities is as diverse as the fans themselves. It becomes apparent that fans are propelled by a spectrum of incentives. They seek education, yearn for information, revel in entertainment, embrace socialization, and fortify their self-esteem through their interactions within these digital spaces. Yet, it is the nuanced distinction in these motivations that shapes the trajectory of brand antagonism and engagement. Surprisingly, the study reveals a remarkable pattern. Football fans, despite their fierce rivalries, do not engage in brand antagonism based on educational pursuits, information-seeking endeavors, or socialization. Instead, it is motivations rooted in entertainment and self-esteem that serve as the fertile grounds for brand antagonism. Paradoxically, it is these very motivations, coupled with the desire for socialization, that nurture brand engagement, manifesting as active support and advocacy for the chosen club brand. Originality: Our research charts new waters by extending the boundaries of existing theories in the field. The Technology Acceptance Model, Uses and Gratifications Theory, and Social Identity Theory all find new dimensions within the context of online brand community engagement. This not only deepens our understanding of the multifaceted world of online football fandom but also invites us to explore the implications these insights carry within the digital realm. Contribution to Practice: For marketers, our findings offer a treasure trove of actionable insights. They beckon the development of targeted content strategies that resonate with fan motivations. The implementation of brand advocacy programs, fostering opportunities for socialization, and the effective management of brand antagonism emerge as pivotal strategies. Furthermore, the utilization of data-driven insights is poised to refine consumer engagement strategies and strengthen brand affinity. Future Studies: We advocate for longitudinal, cross-cultural, and qualitative studies that could shed further light on this topic. Comparative analyses across different types of online brand communities, an exploration of the role of brand community leaders, and inquiries into the factors that contribute to brand community dissolution all beckon the research community. Furthermore, understanding motivation-specific antagonistic behaviors and the intricate relationship between information-seeking and engagement presents exciting avenues for further exploration. This study unfurls a vibrant tapestry of fan motivations, brand activism, and rivalry within online football communities. It extends a hand to scholars and marketers alike, inviting them to embark on a journey through this captivating digital realm, where passion, rivalry, and engagement harmonize to shape the world of sports fandom as we know it.

Keywords: online brand engagement, football fans, brand antagonism, motivations

Procedia PDF Downloads 67
194 Assessing the Utility of Unmanned Aerial Vehicle-Borne Hyperspectral Image and Photogrammetry Derived 3D Data for Wetland Species Distribution Quick Mapping

Authors: Qiaosi Li, Frankie Kwan Kit Wong, Tung Fung

Abstract:

Lightweight unmanned aerial vehicles (UAVs) carrying novel sensors offer a low-cost approach for data acquisition in complex environments. This study established a framework for applying a UAV system to quick mapping of complex environments and assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for the classification of 13 species in the wetland area of the Mai Po Inner Deep Bay Ramsar Site, Hong Kong. The study area was part of a shallow bay with flat terrain, and the major species included reedbed and four mangroves: Kandelia obovata, Aegiceras corniculatum, Acrostichum aureum and Acanthus ilicifolius. Other species included various graminaceous plants, arbor, shrub and the invasive species Mikania micrantha. In particular, the invasive species climbed up to the mangrove canopy, causing damage and morphological change which might increase the difficulty of distinguishing species. Hyperspectral images were acquired by a Headwall Nano sensor with a spectral range from 400 nm to 1000 nm and 0.06 m spatial resolution. A sequence of multi-view RGB images was captured with 0.02 m spatial resolution and 75% overlap. The hyperspectral imagery was corrected for radiometric and geometric distortion, while the high-resolution RGB images were matched to generate maximally dense point clouds. Further, a 5 cm grid digital surface model (DSM) was derived from the dense point clouds. Multiple feature reduction methods were compared to identify the most efficient method and to explore the significant spectral bands for distinguishing different species. The examined methods included stepwise discriminant analysis (DA), support vector machine (SVM) and minimum noise fraction (MNF) transformation. Subsequently, spectral subsets composed of the first 20 most important bands extracted by SVM, DA and MNF, and multi-source subsets adding the DSM to the 20 spectral bands, served as inputs to the maximum likelihood classifier (MLC) and the SVM classifier to compare the classification results. Classification results showed that the feature reduction methods, from best to worst, were MNF transformation, DA and SVM. The MNF transformation accuracy was even higher than that of the all-bands input. The selected bands frequently lay along the green peak, red edge and near infrared. Additionally, DA found that the chlorophyll absorption red band and the yellow band were also important for species classification. In terms of 3D data, the DSM enhanced the discriminant capacity among low plants, arbor and mangrove. Meanwhile, the DSM largely reduced misclassification due to the shadow effect and inter-species morphological variation. With respect to the classifier, the nonparametric SVM outperformed MLC for high-dimension and multi-source data in this study. The SVM classifier tended to produce higher overall accuracy and reduce scattered patches, although it cost more time than MLC. The best result was obtained by combining the MNF components and the DSM in the SVM classifier. This study offers a precise species distribution survey solution for inaccessible wetland areas with low cost in time and labour. In addition, findings relevant to the positive effect of the DSM as well as spectral feature identification indicate that the utility of UAV-borne hyperspectral and photogrammetry-derived 3D data is promising for further research on wetland species, such as bio-parameter modelling and biological invasion monitoring.
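
The exact processing chain (Headwall software, MNF implementation, etc.) is not reproduced here; the sketch below only illustrates the final classification step described above, under the assumption that the 20 selected MNF components and the co-registered DSM have already been exported as per-pixel feature arrays. It stacks the DSM onto the spectral features and compares an RBF-kernel SVM with quadratic discriminant analysis used as a Gaussian maximum-likelihood stand-in for the MLC; file names and array shapes are assumptions.

```python
# Illustration of the final classification step only, assuming the 20 selected MNF
# components and a co-registered DSM have already been exported as per-pixel arrays
# (file names and shapes are assumptions). QDA is used as a Gaussian maximum-likelihood
# stand-in for the MLC; the SVM uses an RBF kernel as is typical in remote sensing.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

mnf = np.load("mnf_components.npy")      # (n_pixels, 20) selected MNF components
dsm = np.load("dsm_height.npy")          # (n_pixels,)    surface height from the DSM
labels = np.load("species_labels.npy")   # (n_pixels,)    13 species classes

X = np.column_stack([mnf, dsm])          # multi-source subset: spectrum + structure
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, stratify=labels, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale")).fit(X_tr, y_tr)
mlc = make_pipeline(StandardScaler(), QuadraticDiscriminantAnalysis()).fit(X_tr, y_tr)

print("SVM overall accuracy:", accuracy_score(y_te, svm.predict(X_te)))
print("Gaussian ML (QDA) accuracy:", accuracy_score(y_te, mlc.predict(X_te)))
```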

Keywords: digital surface model (DSM), feature reduction, hyperspectral, photogrammetric point cloud, species mapping, unmanned aerial vehicle (UAV)

Procedia PDF Downloads 262
193 Self-Medication with Antibiotics, Evidence of Factors Influencing the Practice in Low and Middle-Income Countries: A Systematic Scoping Review

Authors: Neusa Fernanda Torres, Buyisile Chibi, Lyn E. Middleton, Vernon P. Solomon, Tivani P. Mashamba-Thompson

Abstract:

Background: Self-medication with antibiotics (SMA) is a global concern, with a higher incidence in low and middle-income countries (LMICs). Despite intense worldwide efforts to control and promote the rational use of antibiotics, continuing SMA practices systematically expose individuals and communities to the risk of antibiotic resistance and other undesirable antibiotic side effects. Moreover, it increases health systems' costs of acquiring more powerful antibiotics to treat resistant infections. This review thus maps evidence on the factors influencing self-medication with antibiotics in these settings. Methods: The search strategy for this review involved electronic databases including PubMed, Web of Knowledge, Science Direct, EBSCOhost (PubMed, CINAHL with Full Text, Health Source - Consumer Edition, MEDLINE), Google Scholar, BioMed Central and the World Health Organization library, using the search terms ‘self-medication’, ‘antibiotics’, ‘factors’ and ‘reasons’. Our search included studies published from 2007 to 2017. Thematic analysis was performed to identify the patterns of evidence on SMA in LMICs. The mixed methods appraisal tool (MMAT) version 2011 was employed to assess the quality of the included primary studies. Results: Fifteen studies met the inclusion criteria. Studies included populations from rural (46.4%), urban (33.6%) and combined (20%) settings in the following LMICs: Guatemala (2 studies), India (2), Indonesia (2), Kenya (1), Laos (1), Nepal (1), Nigeria (2), Pakistan (2), Sri Lanka (1), and Yemen (1). The total sample size of all 15 included studies was 7,676 participants. The findings of the review show a high prevalence of SMA ranging from 8.1% to 93%. Accessibility, affordability, and the conditions of health facilities (long waiting times, quality of services and workers), as well as poor health-seeking behavior and lack of information, are factors that influence SMA in LMICs. Antibiotics such as amoxicillin, metronidazole, amoxicillin/clavulanic acid, ampicillin, ciprofloxacin, azithromycin, penicillin, and tetracycline were the most frequently used for SMA. The major sources of antibiotics included pharmacies, drug stores, leftover drugs, family/friends and old prescriptions. Sore throat, common cold, cough with mucus, headache, toothache, flu-like symptoms, pain relief, fever, runny nose, upper respiratory tract infections, urinary symptoms and urinary tract infections were the conditions most commonly managed with SMA. Conclusion: Although the information on factors influencing SMA in LMICs is unevenly distributed, the available information revealed the existence of research evidence on antibiotic self-medication in some LMICs. SMA practices are influenced by socio-cultural determinants of health and are frequently associated with poor dispensing and prescribing practices, deficient health-seeking behavior and, consequently, inappropriate drug use. Therefore, there is still a need to conduct further studies (qualitative, quantitative and randomized controlled trials) on factors and reasons for SMA to correctly address this public health problem in LMICs.

Keywords: antibiotics, factors, reasons, self-medication, low and middle-income countries (LMICs)

Procedia PDF Downloads 219
192 Leadership and Entrepreneurship in Higher Education: Fostering Innovation and Sustainability

Authors: Naziema Begum Jappie

Abstract:

Leadership and entrepreneurship in higher education have become critical components in navigating the evolving landscape of academia in the 21st century. This abstract explores the multifaceted relationship between leadership and entrepreneurship within the realm of higher education, emphasizing their roles in fostering innovation and sustainability. Higher education institutions, often characterized as slow-moving and resistant to change, are facing unprecedented challenges. Globalization, rapid technological advancements, changing student demographics, and financial constraints necessitate a reimagining of traditional models. Leadership in higher education must embrace entrepreneurial thinking to effectively address these challenges. Entrepreneurship in higher education involves cultivating a culture of innovation, risk-taking, and adaptability. Visionary leaders who promote entrepreneurship within their institutions empower faculty and staff to think creatively, seek new opportunities, and engage with external partners. These entrepreneurial efforts lead to the development of novel programs, research initiatives, and sustainable revenue streams. Innovation in curriculum and pedagogy is a central aspect of leadership and entrepreneurship in higher education. Forward-thinking leaders encourage faculty to experiment with teaching methods and technology, fostering a dynamic learning environment that prepares students for an ever-changing job market. Entrepreneurial leadership also facilitates the creation of interdisciplinary programs that address emerging fields and societal challenges. Collaboration is key to entrepreneurship in higher education. Leaders must establish partnerships with industry, government, and non-profit organizations to enhance research opportunities, secure funding, and provide real-world experiences for students. Entrepreneurial leaders leverage their institutions' resources to build networks that extend beyond campus boundaries, strengthening their positions in the global knowledge economy. Financial sustainability is a pressing concern for higher education institutions. Entrepreneurial leadership involves diversifying revenue streams through innovative fundraising campaigns, partnerships, and alternative educational models. Leaders who embrace entrepreneurship are better equipped to navigate budget constraints and ensure the long-term viability of their institutions. In conclusion, leadership and entrepreneurship are intertwined elements essential to the continued relevance and success of higher education institutions. Visionary leaders who champion entrepreneurship foster innovation, enhance the student experience, and secure the financial future of their institutions. As academia continues to evolve, leadership and entrepreneurship will remain indispensable tools in shaping the future of higher education. This abstract underscores the importance of these concepts and their potential to drive positive change within the higher education landscape.

Keywords: entrepreneurship, higher education, innovation, leadership

Procedia PDF Downloads 74
191 Deciphering Tumor Stroma Interactions in Retinoblastoma

Authors: Rajeswari Raguraman, Sowmya Parameswaran, Krishnakumar Subramanian, Jagat Kanwar, Rupinder Kanwar

Abstract:

Background: The tumor microenvironment has been implicated in several cancers in regulating cell growth, invasion and metastasis, ultimately influencing the outcome of therapy. The tumor stroma consists of multiple cell types that are in constant cross-talk with the tumor cells, favouring a pro-tumorigenic environment. Not much is known about the existence of a tumor microenvironment in the pediatric intraocular malignancy retinoblastoma (RB). In the present study, we aim to understand the multiple stromal cellular subtypes and tumor-stroma interactions in RB tumors. Materials and Methods: Immunohistochemistry for the stromal cell markers CD31, CD68, alpha-smooth muscle actin (α-SMA), vimentin and glial fibrillary acidic protein (GFAP) was performed on formalin-fixed, paraffin-embedded tissue sections of RB (n=12). The differential expression of the stromal target molecules fibroblast activation protein (FAP), tenascin-C (TNC), osteopontin (SPP1), bone marrow stromal antigen 2 (BST2), and stromal-derived factors 2 and 4 (SDF2 and SDF4) in primary RB tumors (n=20) and normal retina (n=5) was studied by quantitative reverse transcriptase polymerase chain reaction (qRT-PCR) and Western blotting. The differential expression was correlated with the histopathological features of RB. The interaction between RB cell lines (Weri-Rb-1, NCC-RbC-51) and bone marrow stromal cells (BMSC) was also studied using direct co-culture and indirect co-culture methods. The functional effect of the co-culture methods on the RB cells was evaluated by invasion and proliferation assays. Global gene expression was studied using the Affymetrix 3’ IVT microarray. Pathway prediction was performed using KEGG, and the key molecules were validated using qRT-PCR. Results: Immunohistochemistry revealed the presence of several stromal cell types, such as endothelial cells (CD31+; Vim+/-), macrophages (CD68+; Vim+/-), fibroblasts (Vim+; CD31-; CD68-), myofibroblasts (α-SMA+/Vim+) and invading retinal astrocytes/differentiated retinal glia (GFAP+; Vim+). A characteristic distribution of these stromal cell types was observed in the tumor microenvironment, with endothelial cells predominantly seen in blood vessels and macrophages near actively proliferating tumor or necrotic areas. Retinal astrocytes and glia were predominant near the optic nerve regions in invasive tumors, with sparse distribution in tumor foci. Fibroblasts were widely distributed, with rare evidence of myofibroblasts in the tumor. Both gene and protein expression revealed statistically significant (P<0.05) up-regulation of FAP, TNC and BST2 in primary RB tumors compared to the normal retina. Co-culture of BMSC with RB cells promoted invasion and proliferation of RB cells in the direct and indirect contact methods, respectively. Direct co-culture of RB cell lines with BMSC resulted in gene expression changes in the ECM-receptor interaction, focal adhesion, IL-8 and TGF-β signaling pathways associated with cancer. In contrast, various metabolic pathways such as glucose, fructose and amino acid metabolism were significantly altered under the indirect co-culture condition. Conclusion: The study suggests that the close interaction between RB cells and the stroma might be involved in RB tumor invasion and progression, which is likely to be mediated by ECM-receptor interactions and secretory factors. Targeting the tumor stroma would be an attractive option for redesigning treatment strategies for RB.
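
The abstract does not state how the qRT-PCR data were quantified; the widely used 2^(−ΔΔCt) relative-expression calculation, shown below with entirely hypothetical Ct values for FAP, would look roughly like this.

```python
# Hypothetical 2^(-ddCt) relative-expression calculation for a qRT-PCR comparison
# (tumor vs normal retina, normalised to a housekeeping gene). The Ct values are
# invented; the abstract does not state which quantification method was actually used.
import numpy as np
from scipy import stats

# Hypothetical Ct values (target gene FAP and a housekeeping gene) for 5 samples each.
ct_target_tumor  = np.array([24.1, 23.8, 24.5, 23.9, 24.2])
ct_house_tumor   = np.array([18.0, 18.2, 18.1, 17.9, 18.0])
ct_target_normal = np.array([27.6, 27.9, 27.4, 27.8, 27.7])
ct_house_normal  = np.array([18.1, 18.0, 18.2, 18.0, 18.1])

dct_tumor  = ct_target_tumor - ct_house_tumor         # delta Ct per tumor sample
dct_normal = ct_target_normal - ct_house_normal       # delta Ct per normal sample
ddct = dct_tumor - dct_normal.mean()                  # delta-delta Ct vs normal mean
fold_change = 2.0 ** (-ddct)                          # relative expression per sample

t, p = stats.ttest_ind(dct_tumor, dct_normal)         # significance tested on delta Ct
print(f"mean fold change = {fold_change.mean():.1f}, p = {p:.4f}")
```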

Keywords: gene expression profiles, retinoblastoma, stromal cells, tumor microenvironment

Procedia PDF Downloads 386
190 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective

Authors: Pardis Moslemzadeh Tehrani

Abstract:

Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives, and it is being embraced by a wide range of sectors. Smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as "smart" contracts because of the underlying technology that allows contracting parties to agree on terms expressed in computer code that defines machine-readable instructions for computers to follow under specific conditions. The transaction is executed automatically if the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to include the financial, insurance, and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Many issues are linked with managing supply chains from the planning and coordination stages; due to their complexity, these can be implemented in smart contracts on a blockchain. Manufacturing delays and limited third-party supplies of product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation circumstances (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). Products are handled by several providers before reaching customers in modern economic systems. Information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process. Information travels more effectively when individuals are eliminated from the equation. The use of blockchain technology could be a viable solution to these coordination issues. In blockchains, smart contracts allow for the rapid transmission of production data, logistical data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to applying AI-blockchain technology to supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because there has been little research done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research will be on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover unforeseen supply chain challenges.
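
Production smart contracts would be written for a blockchain runtime (for example, Solidity on Ethereum) and executed on-chain; the short Python simulation below, with all names and thresholds hypothetical, only illustrates the behaviour described above: machine-readable terms that release or refund an escrowed payment automatically once delivery and temperature conditions are checked.

```python
# Plain-Python simulation of the condition-triggered behaviour described above;
# a real supply-chain smart contract would be written for a blockchain runtime
# (e.g. Solidity on Ethereum). Thresholds and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class ShipmentContract:
    payment: float                 # amount escrowed by the buyer
    max_temp_c: float = 8.0        # cold-chain requirement agreed in the terms
    settled: bool = False

    def settle(self, delivered: bool, temperature_log_c: list) -> str:
        """Release or refund the escrowed payment automatically from the agreed terms."""
        if self.settled:
            return "already settled"
        self.settled = True
        if delivered and max(temperature_log_c) <= self.max_temp_c:
            return f"release {self.payment:.2f} to supplier"
        return f"refund {self.payment:.2f} to buyer"

contract = ShipmentContract(payment=10_000.0)
print(contract.settle(delivered=True, temperature_log_c=[4.2, 5.1, 6.0]))   # conditions met
```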

Keywords: blockchain, supply chain, IoT, smart contract

Procedia PDF Downloads 132
189 DeepNIC a Method to Transform Each Tabular Variable into an Independant Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data. But for tabular data, it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL be the absolute tool for data classification? All current solutions consist of repositioning the variables in a 2D matrix using their correlation proximity. In doing so, they obtain an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision trees, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which disobeys Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in three dimensions: performance, complexity and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 super parameters used in the Neurops. By varying these 2 super parameters, we obtain a 2x2 matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR. The total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels. The intensity of the pixels is proportional to the probability of the associated NIC. The color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison across several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
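
The ROP trees, Neurops and NIC computation are not described in enough detail to reproduce; the sketch below only illustrates the generic final step: mapping a per-variable matrix of NIC-like probabilities to grey levels and passing the resulting image through a basic CNN. The image size, the random placeholder probabilities and the tiny network are assumptions, not the authors' DeepNIC implementation.

```python
# Illustration of the generic final step only: a per-variable matrix of NIC-like
# probabilities mapped to grey levels and passed through a basic CNN. The probability
# values here are random placeholders; the image size and network are assumptions,
# not the authors' DeepNIC implementation.
import torch
import torch.nn as nn

H = W = 128                                    # assumed size (the paper reports >= 1166x1167)
probs = torch.rand(H, W)                       # placeholder NIC probabilities in [0, 1]
image = probs.unsqueeze(0).unsqueeze(0)        # grey level = probability; shape (1, 1, H, W)

class BasicCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

logits = BasicCNN()(image)                     # per-variable image -> class scores
print(logits.shape)                            # torch.Size([1, 2])
```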

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 133
188 Financial Analysis of the Foreign Direct Investment in Mexico

Authors: Juan Peña Aguilar, Lilia Villasana, Rodrigo Valencia, Alberto Pastrana, Martin Vivanco, Juan Peña C

Abstract:

Each year a growing number of companies enter Mexico in search of domestic market share. These activities, including retail stores, long-distance and local telephony, raw materials and energy, and particularly the financial sector, have managed to significantly increase their weight in FDI flows into Mexico. However, it should be considered whether these FDI trends are positive for the Mexican economy, whether these activities increase Mexican exports in the medium term, and what their share is in GDP, gross fixed capital formation and employment. In general, it is stressed that these activities have, by far, been unable to generate significant linkages with the rest of the economy, a process that has not been favored by competitiveness policies, which have remained neutral or horizontal towards these activities. Since the nineties, foreign direct investment (FDI) has shown remarkable dynamism, both internationally and in Latin America and Mexico. Mexico was the most important recipient of FDI in Latin America during 1990-1995 and was later displaced by Brazil; FDI increased from levels below 1% of GDP during the eighties to around 3% of GDP during the nineties. Its impact has been significant not only from a macroeconomic perspective; it has also allowed the generation of a new industrial production structure and organization, in parallel with a significant modernization of a segment of the economy. The case of Mexico is also particularly interesting and relevant because, until 1993, FDI had focused on the purchase of state assets during the privatization process. This paper aims to present FDI flows in Mexico and to analyze the different business strategies that have been shaped and encouraged by FDI. On the one hand, it briefly discusses regulatory issues and the source and recipient sectors of FDI. Furthermore, the paper presents in more detail the impacts and changes generated by the contribution of FDI to the Mexican economy; it also examines the macroeconomic context and the later legislative changes that resulted in the current regulations around FDI in Mexico, including aspects of the North American Free Trade Agreement (NAFTA). It is worth noting that foreign investment cannot be considered only from the perspective of the receiving economic units. Instead, these flows also reflect the strategic interests of transnational corporations (TNCs) and other companies seeking access to markets and increased competitiveness of their production and global distribution networks, among other reasons. Similarly, it is important to note that foreign investment in its various forms is critically dependent on historical and temporal aspects. Thus, the same functionality can vary significantly depending on the specific characteristics of both the recipient units and the sources of FDI, including macroeconomic, institutional, industrial-organization, and social aspects, among others.

Keywords: foreign direct investment (FDI), competitiveness, neoliberal regime, globalization, gross domestic product (GDP), NAFTA, macroeconomic

Procedia PDF Downloads 452
187 Budgetary Performance Model for Managing Pavement Maintenance

Authors: Vivek Hokam, Vishrut Landge

Abstract:

An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to various constraints such as budget, manpower and equipment, it is not possible to carry out maintenance on all the needy industrial road sections within a given planning period. A rational and systematic priority scheme needs to be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking list of sections for maintenance based on several factors. In priority setting, difficult decisions have to be made in selecting sections for maintenance: is it more important to repair a section in poor functional condition, which includes an uncomfortable ride, or one in poor structural condition, i.e., a section in danger of becoming structurally unsound? It would seem, therefore, that any rational priority-setting approach must consider the relative importance of the functional and structural condition of the section. Existing maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model suited to the limited budget provisions for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions. The optimum decision is one that meets a specified objective of management, subject to various constraints and restrictions. The objective here is mainly the minimization of the maintenance cost of roads in an industrial area. In order to determine the objective function for the analysis of the distress model, it is necessary to fit realistic data into the formulation. Each type of repair is quantified over a number of stretches by considering 1000 m as one stretch. The road section considered in this study is 3750 m long. These quantities enter the objective function, which maximizes the number of repairs carried out within a stretch. The distresses observed in this section are potholes, surface cracks, rutting and ravelling. The distress data are measured manually by observing each distress level over each 1000 m stretch. The maintenance and rehabilitation measures currently followed are based on subjective judgments. Hence, there is a need to adopt a scientific approach in order to effectively use the limited resources. It is also necessary to determine pavement performance and deterioration prediction relationships more accurately, together with the economic benefits of road networks with respect to vehicle operating cost. The road network infrastructure should deliver the best results that can be expected from the available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model considering overloading is discussed.
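As a hedged sketch of how such a budget-constrained repair-selection problem can be posed as a linear program (the unit costs, budget and distress quantities below are illustrative, not the paper's data), the following Python code maximizes the number of repaired distress units on the 3750 m section subject to a budget ceiling.

```python
from scipy.optimize import linprog

# Decision variables: units of each repair carried out
# [potholes, surface cracks, rutting, ravelling]
unit_cost = [120.0, 40.0, 90.0, 60.0]       # illustrative cost per repaired unit
quantity  = [35, 120, 50, 80]               # illustrative distress units observed
budget    = 8000.0

# linprog minimizes, so maximize total repairs by minimizing their negative.
c = [-1.0, -1.0, -1.0, -1.0]
A_ub = [unit_cost]                          # total spending <= budget
b_ub = [budget]
bounds = [(0, q) for q in quantity]         # cannot repair more than observed

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)                      # repairs per distress type, total repaired
```

The same structure accepts additional rows in A_ub for manpower or equipment limits, which is how the constraints mentioned in the abstract would enter the model.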

Keywords: budget, maintenance, deterioration, priority

Procedia PDF Downloads 209
186 Contentious Politics during a Period of Transition to Democracy from an Authoritarian Regime: The Spanish Cycle of Protest of November 1975-December 1978

Authors: Juan Sanmartín Bastida

Abstract:

When a country experiences a period of transition from authoritarianism to democracy, involving an earlier process of political liberalization and a later process of democratization, a cycle of protest usually breaks out, as there is a reciprocal influence between that kind of political change and the frequency and scale of social protest events. That is what happened in Spain during the first years of its transition to democracy from the Francoist authoritarian regime, roughly between November 1975 and December 1978. Thus, the object of this study is to show and explain how that cycle of protest started, developed, and finished in relation to such a political change, and to offer specific information about the main features of all protest cycles: the social movements that arose during that period, the number of protest events by month, the forms of collective action that were utilized, the groups of challengers that engaged in contentious politics, the reaction of the authorities to the actions and claims of those groups, etc. The study of this cycle of protest, using the primary sources and analytical tools that characterize research on protest cycles, will make a contribution to the field of contentious politics and its phenomenon of cycles of contention, and more broadly to the political and social history of contemporary Spain. The cycle of protest and the process of political liberalization of the authoritarian regime began around the same time, but the former concluded long before the process of democratization was completed in 1982. The ascending phase of the cycle, and therefore the process of liberalization, started with the death of Francisco Franco and the proclamation of Juan Carlos I as King of Spain in November 1975; the peak of the cycle was around the first months of 1977; the descending phase started after the first general election of June 1977; and the level of protest stabilized in the last months of 1978, a year that finished with a referendum in which the Spanish people approved the current democratic constitution. It was then that the cycle of protest can be considered to have come to an end. The primary sources are the news reports of protest events and social movements in the three main Spanish newspapers of the time, other written or audiovisual documents, and in-depth interviews; the analytical tools are the political opportunities that encourage social protest, the available repertoire of contention, the organizations and networks that brought together people with the same claims and allowed them to engage in contentious politics, and the interpretative frames that justify, dignify and motivate their collective action. These are the four main factors that explain the beginning, development and ending of the cycle of protest, and therefore the accompanying social movements and events of collective action. Among those four factors, the political opportunities (their opening, exploitation, and closure) proved to be the most decisive.

Keywords: contentious politics, cycles of protest, political opportunities, social movements, Spanish transition to democracy

Procedia PDF Downloads 142
185 The Role of the Corporate Social Responsibility in Poverty Reduction

Authors: M. Verde, G. Falzarano

Abstract:

The paper examines the connection between corporate social responsibility (CSR), the capability approach and poverty reduction; in particular, local employment development (LED) by way of CSR initiatives. The joint action of LED/CSR results in a win-win situation, not only for the enterprises but also for all the stakeholders involved; in this regard, subsidiarity and coordination between national and regional/local authorities are central to a socially-oriented market economy. In the first section, CSR is analysed on the basis of its social function in the fight against poverty, understood as 'capabilities deprivation'. In the central part, attention is focused on the relationship between CSR and LED, i.e., on the role of enterprises in fostering capabilities development (namely, employment). In the last part, the potential solutions are presented, stressing the possible combinations. The benchmark is the enterprise as an economic and a social institution: business should not be concerned merely with profit, but should pay more attention to its sustainable impact and social contribution. How could this be possible? The answer is CSR. The impact of CSR on poverty reduction is still little explored. Companies help to reduce poverty through economic contribution, human rights and social inclusion; hence, business becomes an 'agent of development' in the fight against 'inequality'. The starting point is the pyramid of social responsibility, where ethical and philanthropic responsibilities involve programmes and actions aimed at the personal development of individuals, improving the human standard of living in all its forms, including poverty, which arises when people do not have a choice between different 'life options', ranging from level of education to employment. At this point, CSR comes into play and works on two dimensions, poverty reduction and poverty prevention, by means of a series of initiatives: first of all, job creation and the reduction of precarious work. Empowerment of local actors, financial support and the combination of top-down and bottom-up initiatives are some of CSR's areas of activity. Several positive effects occur on individual levels of education, access to capital, individual health status, empowerment of youth and women, and access to social networks, and it has been observed that these effects depend on the type of CSR strategy. Indeed, CSR programmes should take into account fundamental criteria, such as transparency, information about benefits, a coordination unit among institutions and clearer guidelines. In this way, the advantages to corporate reputation and to the community translate, inter alia, into better job matching on the labour market. It is important to underline that success depends on the specific measures adopted in the areas in question, adapted to local needs in light of general principles and indices; therefore, the concrete commitment of all the stakeholders involved is decisive in order to achieve the goals. The enterprise would thus represent a concrete contribution to the pursuit of sustainable development and to the dissemination of social and well-being awareness.

Keywords: capability approach, local employment development, poverty, social inclusion

Procedia PDF Downloads 145
184 Tailoring Workspaces for Generation Z: Harmonizing Teamwork, Privacy, and Connectivity

Authors: Maayan Nakash

Abstract:

The modern workplace is undergoing a revolution, with Generation Z (Gen-Z) at the forefront of this transformative shift. However, empirical investigations specifically targeting the workplace preferences of this generation remain limited. Through direct examination of their tendencies via a survey approach, this study offers vital insights for aligning organizational policies and practices. The results presented in this paper are part of a comprehensive study that explored Gen Z's viewpoints on various employment market aspects that are likely to decisively influence the design of future work environments. Data were collected via an online survey distributed among a cohort of 461 individuals from Gen-Z, born between the mid-1990s and 2010, consisting of 241 males (52.28%) and 220 females (47.72%). Responses were gauged using Likert scale statements that probed preferences for teamwork versus individual work, virtual versus in-person interactions, and open versus private workspaces. Descriptive statistics and inferential analyses were conducted to pinpoint key patterns. We discovered that a high proportion of respondents (81.99%, n=378) exhibited a preference for teamwork over individual work. Correspondingly, the data indicate strong support for the recognition of team-based tasks as a tool contributing to personal and professional development. In terms of communication, the majority of respondents (61.38%) either disagreed (n=154) or slightly agreed (n=129) with the exclusive reliance on virtual interactions with their organizational peers. This finding underscores that despite technological progress, digital natives place significant value on physical interaction and non-mediated communication. Moreover, we understand that they also value a quiet and private work environment, clearly preferring it over open and shared workspaces. Considering that Gen-Z does not necessarily experience high levels of stress within social frameworks in the workplace, this can be attributed to a desire for a space that allows for focused engagement with work tasks. A One-Sample Chi-Square Test was performed on the observed distribution of respondents' reactions to each examined statement. The results showed statistically significant deviations from a uniform distribution (p<.001), indicating that the response patterns did not occur by chance and that there were meaningful tendencies in the participants' responses. The findings expand the theoretical knowledge base on human resources in the dynamics of a multi-generational workforce, illuminating the values, approaches, and expectations of Gen-Z. Practically, the results may lead organizations to equip themselves with tools to create policies tailored to Gen-Z in the context of workspaces and social needs, which could potentially foster a fertile environment and aid in attracting and retaining young talent. Future studies might include investigating potential mitigating factors, such as cultural influences or individual personality traits, which could further clarify the nuances in Gen-Z's work style preferences. Longitudinal studies tracking changes in these preferences as the generation matures may also yield valuable insights. Ultimately, as the landscape of the workforce continues to evolve, ongoing investigations into the unique characteristics and aspirations of emerging generations remain essential for nurturing harmonious, productive, and future-ready organizational environments.
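The one-sample chi-square test reported above can be reproduced in outline with SciPy; the observed counts below come from the teamwork item (378 of 461 respondents preferring teamwork), and the expected counts default to the uniform distribution used as the null hypothesis in the abstract.

```python
from scipy.stats import chisquare

observed = [378, 461 - 378]          # prefer teamwork vs. do not (n = 461)
stat, p = chisquare(observed)        # expected counts default to a uniform split
print(f"chi2 = {stat:.1f}, p = {p:.3g}")   # p << .001, matching the reported result
```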

Keywords: workplace, future of work, generation Z, digital natives, human resources management

Procedia PDF Downloads 55
183 Cultural Identity and Self-Censorship in Social Media: A Qualitative Case Study

Authors: Nastaran Khoshsabk

Abstract:

The evolution of communication through the Internet has influenced the shaping and reshaping of the self-presentation of social media users. Online communities both connect people and give voice to the voiceless, allowing them to present themselves nationally and globally. People all around the world are experiencing censorship in different aspects of their lives. Censorship can be externally imposed because of political situations, or it can be self-imposed. Social media users choose the content they want to share and decide about the online audiences with whom they want to share this content. Most social media networks, such as Facebook, enable their users to be selective about the shared content and its availability to other people. However, sometimes instead of targeting a specific audience, users self-censor or decide not to share various forms of information. These decisions are of particular importance in countries such as Iran, where the Internet is not an arena of free self-presentation and people are encouraged to stay away from political participation in the country and from acting against Islamic values. Facebook and some other social media tools are blocked in countries such as Iran. This project investigates the importance of social media in the lives of Iranians to explore how they present themselves and construct their digital selves. The notion of cultural identity is applied in this research to explore the educational and informative role of social media in the identity formation and cultural representation of Facebook users. This study explores the self-censorship of Iranian adult Facebook users through their online self-representation and communication on the Internet. The data in this qualitative multiple case study have been collected through individual synchronous online interviews with the researcher's Facebook friends and through the analysis of the participants' Facebook profiles and activities over a period of six months. The data are analysed with an emphasis on the identity formation of participants through the recognition of the underlying themes. The exploration of the online interviews is based on participants' personal accounts of self-censorship and cultural understanding through the use of social media. The derived codes and themes have been categorised considering censorship and the place of culture in the representation of self. Participants were asked to explain their views about censorship and conservatism in their use of social media. They reported their thoughts about deciding which content to share on Facebook and which to self-censor, and their reasons for these decisions. The codes and themes have also been categorised considering censorship and its role in the representation of an idealised self. The 'actual self' was shown to be hidden by individuals for different reasons, such as its influence on their social status, academic achievements and job opportunities. It is hoped that this research will have implications for educational contexts in countries that are experiencing social media filtering, by offering an increased understanding of the importance of online communities, which can provide an environment in which to talk and learn about social taboos and the construction of adult identity in virtual environments and through cultural self-presentation.

Keywords: cultural identity, identity formation, online communities, self-censorship

Procedia PDF Downloads 240
182 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

There exists a plethora of methods in the scientific literature that tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc. and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority being classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism – LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, of which we mention the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results, which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
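The evolved formulas in the paper are built as C# LINQ expression trees; since no code is given here, the sketch below uses a Python analogue to show the same idea: a tree with operator nodes and variable/constant leaves, evaluated against applicant properties and flattened by pre-order traversal (the representation on which the described mutation and crossover operators act). The node encoding, the example formula and the decision threshold are assumptions for illustration only.

```python
import math

# A node is ("add"/"sub"/"mul"/"div", left, right), ("sin", child),
# ("var", index) or ("const", value).
def evaluate(node, x):
    kind = node[0]
    if kind == "var":
        return x[node[1]]
    if kind == "const":
        return node[1]
    if kind == "sin":
        return math.sin(evaluate(node[1], x))
    a, b = evaluate(node[1], x), evaluate(node[2], x)
    return {"add": a + b, "sub": a - b, "mul": a * b,
            "div": a / b if b else 0.0}[kind]     # protected division

def preorder(node, out=None):
    # Flattened view of the tree, the form on which mutation/crossover operate.
    out = [] if out is None else out
    out.append(node[0])
    for child in node[1:]:
        if isinstance(child, tuple):
            preorder(child, out)
    return out

# Hypothetical evolved formula: score = age * 0.03 - loan_duration * 0.01
tree = ("sub", ("mul", ("var", 0), ("const", 0.03)),
               ("mul", ("var", 1), ("const", 0.01)))
applicant = [42, 24]                              # [age, loan duration in months]
label = "GOOD" if evaluate(tree, applicant) > 0 else "BAD"
print(label, preorder(tree))
```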

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 121
181 Developing a Cloud Intelligence-Based Energy Management Architecture Facilitated with Embedded Edge Analytics for Energy Conservation in Demand-Side Management

Authors: Yu-Hsiu Lin, Wen-Chun Lin, Yen-Chang Cheng, Chia-Ju Yeh, Yu-Chuan Chen, Tai-You Li

Abstract:

Demand-Side Management (DSM) has the potential to reduce the electricity costs and carbon emissions associated with electricity use in modern society. A home Energy Management System (EMS), commonly used by residential consumers in the downstream sector of a smart grid to monitor, control, and optimize the energy efficiency of domestic appliances, is a system of computer-aided functionalities serving as an energy audit for residential DSM. Implementing fault detection and classification for the domestic appliances monitored, controlled, and optimized is one of the most important steps towards preventive maintenance, such as residential air conditioning and heating preventative maintenance, in residential/industrial DSM. In this study, a cloud intelligence-based green EMS built on an Internet of Things (IoT) technology stack for residential DSM is developed. In the EMS, Arduino MEGA Ethernet communication-based smart sockets, which incorporate a Real Time Clock chip to keep track of the current time as timestamps via the Network Time Protocol, are designed and implemented to read load phenomena reflected in the sensed voltage and current signals. Also, a Network-Attached Storage device providing data access to a heterogeneous group of IoT clients via Hypertext Transfer Protocol (HTTP) methods is configured as the data store for parsed sensor readings. Lastly, a desktop computer with a WAMP software bundle (the Microsoft® Windows operating system, Apache HTTP Server, the MySQL relational database management system, and the PHP programming language) serves as a data science analytics engine providing a dynamic web app/RESTful web service for the residential DSM with globally advanced Artificial Intelligence (AI)/Computational Intelligence delivered over the Internet. Here, the Java Virtual Machine, an abstract computing machine, enables the desktop computer to run Java programs, and a mash-up of Java, the R language, and Python is well suited and configured for AI in this study. To send real-time push notifications to IoT clients, the desktop computer implements Google-maintained Firebase Cloud Messaging to engage IoT clients across Android/iOS devices and provide a mobile notification service for residential/industrial DSM. In this study, in order to realize edge intelligence, whereby edge devices avoid network latency and the dependence on Internet connectivity for the Internet of Services, support secure access to data stores, and provide immediate analytical and real-time actionable insights at the edge of the network, we upgrade the designed and implemented smart sockets to embedded AI Arduino ones (called embedded AIduino). For the realization of edge analytics by the proposed embedded AIduino for data analytics, an Arduino Ethernet shield (WizNet W5100) with a micro SD card connector is adopted and used. The SD library is included for reading parsed data from and writing parsed data to an SD card. An Artificial Neural Network library for the Arduino MEGA, ArduinoANN, is imported and used for the locally embedded AI implementation. The embedded AIduino in this study can be developed for further applications in manufacturing-industry energy management and sustainable energy management, wherein, as an example of sustainable energy management, rotating machinery diagnostics identify energy loss from gross misalignment and unbalance of rotating machines in power plants.
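A minimal sketch of the data path described above (not the system's actual API): a smart socket, or a gateway acting for it, posting a time-stamped voltage/current reading over HTTP to the storage/analytics tier. The endpoint URL and JSON field names are hypothetical assumptions for the example.

```python
import json
import urllib.request
from datetime import datetime, timezone

def post_reading(voltage_v: float, current_a: float,
                 url: str = "http://192.168.1.10/dsm/readings") -> int:
    """Send one time-stamped load reading to the data store; returns the HTTP status."""
    payload = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # stands in for the RTC/NTP timestamp
        "voltage_v": voltage_v,
        "current_a": current_a,
        "power_w": voltage_v * current_a,
    }
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

# Example call (requires a listening endpoint): post_reading(229.8, 0.42)
```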

Keywords: demand-side management, edge intelligence, energy management system, fault detection and classification

Procedia PDF Downloads 255
180 Linguistic Insights Improve Semantic Technology in Medical Research and Patient Self-Management Contexts

Authors: William Michael Short

Abstract:

‘Semantic Web’ technologies such as the Unified Medical Language System Metathesaurus, SNOMED-CT, and MeSH have been touted as transformational for the way users access online medical and health information, enabling both the automated analysis of natural-language data and the integration of heterogeneous health-related resources distributed across the Internet through the use of standardized terminologies that capture concepts and relationships between concepts that are expressed differently across datasets. However, the approaches that have so far characterized ‘semantic bioinformatics’ have not yet fulfilled the promise of the Semantic Web for medical and health information retrieval applications. This paper argues within the perspective of cognitive linguistics and cognitive anthropology that four features of human meaning-making must be taken into account before the potential of semantic technologies can be realized for this domain. First, many semantic technologies operate exclusively at the level of the word. However, texts convey meanings in ways beyond lexical semantics. For example, transitivity patterns (distributions of active or passive voice) and modality patterns (configurations of modal constituents like may, might, could, would, should) convey experiential and epistemic meanings that are not captured by single words. Language users also naturally associate stretches of text with discrete meanings, so that whole sentences can be ascribed senses similar to the senses of words (so-called ‘discourse topics’). Second, natural language processing systems tend to operate according to the principle of ‘one token, one tag’. For instance, occurrences of the word sound must be disambiguated for part of speech: in context, is sound a noun or a verb or an adjective? In syntactic analysis, deterministic annotation methods may be acceptable. But because natural language utterances are typically characterized by polyvalency and ambiguities of all kinds (including intentional ambiguities), such methods leave the meanings of texts highly impoverished. Third, ontologies tend to be disconnected from everyday language use and so struggle in cases where single concepts are captured through complex lexicalizations that involve profile shifts or other embodied representations. More problematically, concept graphs tend to capture ‘expert’ technical models rather than ‘folk’ models of knowledge and so may not match users’ common-sense intuitions about the organization of concepts in prototypical structures rather than Aristotelian categories. Fourth, and finally, most ontologies do not recognize the pervasively figurative character of human language. However, since the time of Galen the widespread use of metaphor in the linguistic usage of both medical professionals and lay persons has been recognized. In particular, metaphor is a well-documented linguistic tool for communicating experiences of pain. Because semantic medical knowledge-bases are designed to help capture variations within technical vocabularies – rather than the kinds of conventionalized figurative semantics that practitioners as well as patients actually utilize in clinical description and diagnosis – they fail to capture this dimension of linguistic usage.
The failure of semantic technologies in these respects degrades the efficiency and efficacy not only of medical research, where information retrieval inefficiencies can lead to direct financial costs to organizations, but also of care provision, especially in contexts of patients’ self-management of complex medical conditions.
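As a toy illustration of the 'beyond the word' point (not part of the paper), the snippet below profiles the modality pattern of a short clinical-style sentence by counting the modal constituents listed above; a real system would of course need parsing and sense disambiguation rather than simple token matching.

```python
import re
from collections import Counter

MODALS = {"may", "might", "could", "would", "should"}

def modality_profile(text: str) -> Counter:
    # Crude token match; deliberately ignores negation, scope and syntax.
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t in MODALS)

note = "The pain could be referred and might worsen; the patient should rest."
print(modality_profile(note))   # Counter({'could': 1, 'might': 1, 'should': 1})
```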

Keywords: ambiguity, bioinformatics, language, meaning, metaphor, ontology, semantic web, semantics

Procedia PDF Downloads 134
179 Fabrication of Highly Stable Low-Density Self-Assembled Monolayers by Thiol-yne Click Reaction

Authors: Leila Safazadeh, Brad Berron

Abstract:

Self-assembled monolayers have tremendous impact in interfacial science, due to the unique opportunity they offer to tailor surface properties. Low-density self-assembled monolayers are an emerging class of monolayers where the environment-interfacing portion of the adsorbate has a greater level of conformational freedom when compared to traditional monolayer chemistries. This greater range of motion and increased spacing between surface-bound molecules offers new opportunities in tailoring adsorption phenomena in sensing systems. In particular, we expect low-density surfaces to offer a unique opportunity to intercalate surface-bound ligands into the secondary structure of proteins and other macromolecules. Additionally, as many conventional sensing surfaces are built upon gold surfaces (SPR or QCM), these surfaces must be compatible with gold substrates. Here, we present the first stable method of generating low-density self-assembled monolayer surfaces on gold for the analysis of their interactions with protein targets. Our approach is based on the 2:1 addition of thiol-yne chemistry to develop new classes of y-shaped adsorbates on gold, where the environment-interfacing group is spaced laterally from neighboring chemical groups. This technique involves an initial deposition of a crystalline monolayer of 1,10-decanedithiol on the gold substrate, followed by grafting of a low-packed monolayer through a photoinitiated thiol-yne reaction in the presence of light. Orthogonality of the thiol-yne chemistry (commonly referred to as a click chemistry) allows for the preparation of low-density monolayers with a variety of functional groups. To date, carboxyl, amine, alcohol, and alkyl-terminated monolayers have been prepared using this core technology. Results from surface characterization techniques such as FTIR, contact angle goniometry and electrochemical impedance spectroscopy confirm the proposed low chain-chain interactions of the environment-interfacing groups. Reductive desorption measurements suggest a higher stability for the click-LDMs compared to traditional SAMs, along with an equivalent packing density at the substrate interface, which confirms the proposed stability of the monolayer-gold interface. In addition, contact angle measurements change in the presence of an applied potential, supporting our description of a surface structure which allows the alkyl chains to freely orient themselves in response to different environments. We are studying the differences in protein adsorption phenomena between well-packed and our loosely packed surfaces, and we expect these data will be ready to present at the GRC meeting. This work aims to contribute to biotechnology science in the following manner: Molecularly imprinted polymers are a promising recognition mode with several advantages over natural antibodies in the recognition of small molecules. However, because of their bulk polymer structure, they are poorly suited for the rapid diffusion desired for recognition of proteins and other macromolecules. Molecularly imprinted monolayers are an emerging class of materials where the surface is imprinted, and there is not a bulk material to impede mass transfer. Further, the short distance between the binding site and the signal transduction material improves many modes of detection. My dissertation project is to develop a new chemistry for protein-imprinted self-assembled monolayers on gold, for incorporation into SPR sensors.
Our unique contribution is the spatial imprinting not only of physical cues (seen in current imprinted monolayer techniques), but also of complementary chemical cues. This is accomplished through a photo-click grafting of preassembled ligands around a protein template. This conference is important for my development as a graduate student, broadening my appreciation of sensor development beyond surface chemistry.

Keywords: low-density self-assembled monolayers, thiol-yne click reaction, molecular imprinting

Procedia PDF Downloads 228
178 Separation of Lanthanide Ions from Mineral Waste with Functionalized Pillar[5]Arenes: Synthesis, Physicochemical Characterization and Molecular Dynamics Studies

Authors: Ariesny Vera, Rodrigo Montecinos

Abstract:

The rare-earth elements (REEs), or rare-earth metals (REMs), correspond to seventeen chemical elements composed of the fifteen lanthanoids, as well as scandium and yttrium. The lanthanoids correspond to lanthanum and the f-block elements, from cerium to lutetium. Scandium and yttrium are considered rare-earth elements because they have ionic radii similar to the lighter f-block elements. These elements were called rare earths because they are simply more difficult to extract and separate individually than most metals; generally, they do not accumulate in minerals, are rarely found in easily mined ores and are often unfavorably distributed in common ores/minerals. REEs show unique chemical and physical properties in comparison to the other metals in the periodic table. Nowadays, these physicochemical properties are utilized in a wide range of synthetic, catalytic, electronic, medicinal, and military applications. Because of their applications, the global demand for rare-earth metals is becoming progressively more important in the transition to a self-sustaining society and a greener economy. However, due to the difficult separation between lanthanoid ions and the high cost and pollution of these processes, scientists seek to develop a method that combines selectivity and quantitative separation of lanthanoids from the leaching liquor while being more economical and environmentally friendly. This motivation has favored the design and development of more efficient and environmentally friendly cation extractors with the incorporation of compounds such as ionic liquids, polymer inclusion membranes (PIMs) and supramolecular systems. Supramolecular chemistry focuses on the development of host-guest systems, in which a host molecule can recognize and bind a certain guest molecule or ion. Normally, the formation of a host-guest complex involves non-covalent interactions. Additionally, host-guest interactions can be influenced, among other effects, by the structural nature of the host and guests. The different macrocyclic hosts for lanthanoid species that have been studied are crown ethers, cyclodextrins, cucurbiturils, calixarenes and pillararenes. Among all the factors that can influence and affect lanthanoid(III) coordination, perhaps the most basic is the systematic control using macrocyclic substituents that promote selective coordination. In this sense, the macrocyclic pillar[n]arenes (P[n]As) are relatively easy to functionalize and have a more π-rich cavity than other host molecules. This gives P[n]As a negative electrostatic potential in the cavity, which would be responsible for the selectivity of these compounds towards cations. Furthermore, the cavity size, the linker, and the functional groups of the polar headgroups could be modified in order to control the association of lanthanoid cations. In this sense, different P[n]A systems, specifically derivatives of the pentamer P[5]A functionalized with amide, amine, phosphate and sulfate groups, have been designed in terms of experimental synthesis and molecular dynamics, and the interaction between these P[5]As and lanthanoid ions such as La³+, Eu³+ and Lu³+ has been studied by physicochemical characterization using 1H-NMR, ITC and, in the case of the Eu³+ systems, fluorescence. The molecular dynamics study of these systems was carried out in hexane as solvent, also taking into account the lanthanoid ions mentioned above and the respective comparative studies between the different ions.

Keywords: lanthanoids, macrocycles, pillar[n]arenes, rare-earth metal extraction, supramolecular chemistry, supramolecular complexes

Procedia PDF Downloads 79
177 Upgrade of Value Chains and the Effect on Resilience of Russia’s Coal Industry and Receiving Regions on the Path of Energy Transition

Authors: Sergey Nikitenko, Vladimir Klishin, Yury Malakhov, Elena Goosen

Abstract:

The transition to renewable energy sources (solar, wind, bioenergy, etc.) and the launch of alternative energy generation have weakened the role of coal as a source of energy. The Paris Agreement and the obligations assumed by many nations to reduce CO₂ emissions in an orderly way, by means of technological modernization and climate change adaptation, have reduced coal demand yet further. This paper aims to assess the current resilience of the coal industry to stress and to define prospects for coal production optimization using high technologies, in line with global challenges and the requirements of the energy transition. Our research is based on the resilience concept adapted to the coal industry. It is proposed to divide the coal sector into segments depending on the prevailing value chains (VC). Four representative models of VC are identified in the coal sector. The most promising lines of upgrading VC in the coal industry include: (i) elongation of VC owing to the introduction of clean technologies for coal conversion and utilization; (ii) creation of parallel VC by means of waste management; (iii) branching of VC (conversion of a company's VC into a production network). The upgrade effectiveness is governed in many ways by the applicability of advanced coal processing technologies, the usability of waste, the expandability of production, entrance to non-rival markets and the localization of new segments of VC in the receiving regions. It is also important that upgrading VC by forming agile high-tech inter-industry production networks within the framework of operating surface and underground mines can reduce the social, economic and ecological risks associated with the closure of coal mines. One such promising route of VC upgrade is the application of methanotrophic bacteria to produce protein to be used as feedstuff in fish, poultry and cattle breeding, or in the production of enzymes, lipids, sterols, antioxidants, pigments and polysaccharides. Closed mines can use recovered methane as a clean energy source. There exist methods of methane utilization from uncontrolled sources, including the preliminary treatment and recovery of methane from the air-and-methane mixture, or the decomposition of methane into hydrogen and acetylene. The separated hydrogen is used in hydrogen fuel cells to generate power to feed the methane utilization process and to supply external consumers. Despite the recent paradigm of carbon-free energy generation, it is possible to preserve the coal mining industry using a differentiated approach to upgrading value chains, based on flexible technologies and with regard to the specificity of mining companies.

Keywords: resilience, resilience concept, resilience indicator, resilience in the Russian coal industry, value chains

Procedia PDF Downloads 109
176 High Efficiency Double-Band Printed Rectenna Model for Energy Harvesting

Authors: Rakelane A. Mendes, Sandro T. M. Goncalves, Raphaella L. R. Silva

Abstract:

The concepts of energy harvesting and wireless energy transfer have been widely discussed in recent times. There are several ways to create autonomous systems for collecting ambient energy, such as solar, vibratory, thermal, electromagnetic and radiofrequency (RF), among others. In the case of RF, it is possible to collect up to 100 μW/cm². To collect and/or transfer energy in RF systems, a device called a rectenna is used, which is defined as the junction of an antenna and a rectifier circuit. The rectenna presented in this work is resonant at the frequencies of 1.8 GHz and 2.45 GHz. Frequencies in the 1.8 GHz band are part of the GSM/LTE band. GSM (Global System for Mobile Communication) is a mobile telephony frequency band, also called second-generation mobile networks (2G); it came to standardize mobile telephony worldwide and was originally developed for voice traffic. LTE (Long Term Evolution), or fourth generation (4G), emerged to meet the demand for wireless access to services such as Internet access, online games, VoIP and video conferencing. The 2.45 GHz frequency is part of the ISM (Industrial, Scientific and Medical) frequency band; this band is internationally reserved for industrial, scientific and medical development with no need for licensing, and its only restrictions are related to maximum power transfer and bandwidth, which must be kept within certain limits (in Brazil the bandwidth is 2.4-2.4835 GHz). The rectenna presented in this work was designed to achieve an efficiency above 50% for an input power of -15 dBm. It is known that for wireless energy capture systems the signal power is very low and varies greatly; for this reason, this ultra-low input power was chosen. The rectenna was built using the low-cost FR4 (Flame Retardant) substrate; the antenna selected is a microstrip antenna, consisting of a meandered dipole, which was optimized using CST Studio software. This antenna has high efficiency, high gain and high directivity. Gain is the ability of an antenna to capture more or less efficiently the signals transmitted by another antenna and/or station. Directivity is the ability of an antenna to better capture energy in a certain direction. The rectifier circuit used has a series topology and was optimized using Keysight's ADS software. The rectifier circuit is the most complex part of the rectenna, since it includes the diode, which is a non-linear component. The chosen diode is the SMS 7630 Schottky diode, which presents a low barrier voltage (between 135 and 240 mV) and a wider band compared to other types of diodes; these attributes make it well suited to this type of application. An inductor and a capacitor are also used in the rectifier circuit; they form part of its input and output filters. The inductor has the function of decreasing the dispersion effect on the efficiency of the rectifier circuit. The capacitor has the function of eliminating the AC component of the rectified signal and reducing the output ripple.
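A quick worked check of the quoted operating point (a sketch using only the figures stated above): -15 dBm corresponds to about 31.6 µW of RF input, so an RF-to-DC efficiency above 50% implies roughly 16 µW of DC output.

```python
# Convert the quoted input power and efficiency into absolute terms.
p_in_dbm = -15.0
efficiency = 0.50                       # "above 50%" -> lower bound

p_in_mw = 10 ** (p_in_dbm / 10)         # dBm -> mW
p_out_uw = p_in_mw * 1000 * efficiency  # mW -> uW, then apply efficiency

print(f"input: {p_in_mw * 1000:.1f} uW, DC output >= {p_out_uw:.1f} uW")
# -> input: 31.6 uW, DC output >= 15.8 uW
```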

Keywords: dipole antenna, double-band, high efficiency, rectenna

Procedia PDF Downloads 127