Search results for: mobile Ad Hoc networks
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4277

197 Mining Scientific Literature to Discover Potential Research Data Sources: An Exploratory Study in the Field of Haemato-Oncology

Authors: A. Anastasiou, K. S. Tingay

Abstract:

Background: Discovering suitable datasets is an important part of health research, particularly for projects working with clinical data from patients organized in cohorts (cohort data), but with the proliferation of so many national and international initiatives, it is becoming increasingly difficult for research teams to locate real world datasets that are most relevant to their project objectives. We present a method for identifying healthcare institutes in the European Union (EU) which may hold haemato-oncology (HO) data. A key enabler of this research was the bibInsight platform, a scientometric data management and analysis system developed by the authors at Swansea University. Method: A PubMed search was conducted using HO clinical terms taken from previous work. The resulting XML file was processed using the bibInsight platform, linking affiliations to the Global Research Identifier Database (GRID). GRID is an international, standardized list of institutions, including the city and country in which the institution exists, as well as a category of the main business type, e.g., Academic, Healthcare, Government, Company. Countries were limited to the 28 current EU members, and institute type to 'Healthcare'. An article was considered valid if at least one author was affiliated with an EU-based healthcare institute. Results: The PubMed search produced 21,310 articles, consisting of 9,885 distinct affiliations with correspondence in GRID. Of these articles, 760 were from EU countries, and 390 of these were healthcare institutes. One affiliation was excluded as being a veterinary hospital. Two EU countries did not have any publications in our analysis dataset. The results were analysed by country and by individual healthcare institute. Networks both within the EU and internationally show institutional collaborations, which may suggest a willingness to share data for research purposes. Geographical mapping can ensure that data has broad population coverage. Collaborations with industry or government may exclude healthcare institutes that may have embargos or additional costs associated with data access. Conclusions: Data reuse is becoming increasingly important both for ensuring the validity of results, and economy of available resources. The ability to identify potential, specific data sources from over twenty thousand articles in less than an hour could assist in improving knowledge of, and access to, data sources. As our method has not yet specified if these healthcare institutes are holding data, or merely publishing on that topic, future work will involve text mining of data-specific concordant terms to identify numbers of participants, demographics, study methodologies, and sub-topics of interest.
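
For readers who want to reproduce the filtering step outside the bibInsight platform, the sketch below illustrates the described pipeline in Python: parse a PubMed XML export, resolve each affiliation string against a GRID-style lookup table, and keep articles with at least one author at an EU healthcare institute. The GRID file columns, the exact-match lookup, and the shortened country list are illustrative assumptions; the actual platform performs affiliation-to-GRID linking with its own logic.

```python
import csv
import xml.etree.ElementTree as ET

# truncated for brevity; the study used the 28 EU member states at the time
EU_COUNTRIES = {"Germany", "France", "Spain", "Italy", "Netherlands", "Sweden"}

def load_grid_index(path):
    """Hypothetical GRID extract: one row per institute with columns name, country, type."""
    index = {}
    with open(path, newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            index[row["name"].lower()] = (row["country"], row["type"])
    return index

def eu_healthcare_articles(pubmed_xml_path, grid_index):
    """Return PMIDs of articles with at least one author at an EU healthcare institute."""
    hits = []
    tree = ET.parse(pubmed_xml_path)
    for article in tree.getroot().iter("PubmedArticle"):   # standard PubMed export element
        pmid = article.findtext(".//PMID")
        for affiliation in article.iter("Affiliation"):    # free-text affiliation strings
            record = grid_index.get((affiliation.text or "").strip().lower())
            if record and record[0] in EU_COUNTRIES and record[1] == "Healthcare":
                hits.append(pmid)
                break                                      # one valid affiliation is enough
    return hits
```

In practice, linking free-text affiliations to GRID requires fuzzy matching rather than the exact lookup shown here; the sketch only conveys the structure of the filter.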

Keywords: data reuse, data discovery, data linkage, journal articles, text mining

Procedia PDF Downloads 115
196 Roadmap to a Bottom-Up Approach Creating Meaningful Contributions to Surgery in Low-Income Settings

Authors: Eva Degraeuwe, Margo Vandenheede, Nicholas Rennie, Jolien Braem, Miryam Serry, Frederik Berrevoet, Piet Pattyn, Wouter Willaert, InciSioN Belgium Consortium

Abstract:

Background: Worldwide, five billion people lack access to safe and affordable surgical care. An additional 1.27 million surgeons, anesthesiologists, and obstetricians (SAO) are needed by 2030 to meet the target of 20 per 100,000 population and to reach the goal of the Lancet Commission on Global Surgery. A well-informed future generation, exposed early on to the current challenges in global surgery (GS), is necessary to ensure a sustainable future. Methods: InciSioN, the International Student Surgical Network, is a non-profit organization by and for students, residents, and fellows in over 80 countries. InciSioN Belgium, one of the prominent national working groups, has made vast progress and has collaborated with other networks to fill the educational gap, stimulate advocacy efforts, and increase interactions with the international network. This report describes a roadmap to achieve sustainable development and education within GS, with InciSioN Belgium as the example. Results: Since the establishment of the organization’s branch in 2019, it has hosted an educational workshop for first-year residents in surgery, engaging over 2,500 participants, and established a recurring directing board of 15 members. In 2020-2021, InciSioN Ghent organized three workshops combining educational and interactive sessions for future prime advocates and surgical candidates. InciSioN Belgium has set up a strong formal coalition with the Belgian Medical Students’ Association (BeMSA), with its own standing committee, reaching over 3,000 medical students annually. In 2021-2022, InciSioN Belgium broadened to a multidisciplinary approach, including dentistry and nursing students and graduates within workshops and research projects, leading to a 450% increase in members and exposure. This roadmap sets strategic goals and mechanisms for the GS community to achieve nationwide, sustained improvements in GS research and education focused on future SAOs, in order to achieve the GS sustainable development goals. In the coming year, expansion is directed towards a formal integration of GS into the medical curriculum and increased international advocacy, whilst inspiring SAOs to integrate into GS in Belgium. Conclusion: The development and implementation of durable change for GS are necessary. The student organization InciSioN Belgium is growing and hopes to close the colossal gap in GS and inspire the growth of other branches while sharing the know-how of a student organization.

Keywords: advocacy, education, global surgery, InciSioN, student network

Procedia PDF Downloads 174
195 An E-Maintenance IoT Sensor Node Designed for Fleets of Diverse Heavy-Duty Vehicles

Authors: George Charkoftakis, Panagiotis Liosatos, Nicolas-Alexander Tatlas, Dimitrios Goustouridis, Stelios M. Potirakis

Abstract:

E-maintenance is a relatively new concept, generally referring to maintenance management by monitoring assets over the Internet. One of the key links in the chain of an e-maintenance system is data acquisition and transmission. Specifically for the case of a fleet of heavy-duty vehicles, where the main challenge is the diversity of the vehicles and of vehicle-embedded self-diagnostic/reporting technologies, the design of the data acquisition and transmission unit is a demanding task. This is clear if one takes into account that a heavy-vehicle fleet may range from vehicles with only a limited number of analog sensors monitored by dashboard indicator lights and gauges to vehicles with a plethora of sensors monitored by a vehicle computer producing digital reports. The present work proposes an adaptable internet of things (IoT) sensor node that is capable of addressing this challenge. The proposed sensor node architecture is based on the increasingly popular single-board computer plus expansion boards approach. In the proposed solution, the expansion boards undertake the tasks of position identification by means of a global navigation satellite system (GNSS), cellular connectivity by means of a 3G/long-term evolution (LTE) modem, connectivity to on-board diagnostics (OBD), and connectivity to analog and digital sensors by means of a novel expansion board design. Specifically, the latter provides eight analog plus three digital sensor channels, as well as one on-board temperature/relative humidity sensor. The specific device offers a number of adaptability features based on appropriate zero-ohm resistor placement and appropriate value selection for a limited number of passive components. For example, although in the standard configuration four voltage analog channels with constant voltage sources for the power supply of the corresponding sensors are available, up to two of these voltage channels can be converted to provide power to the connected sensors by means of corresponding constant current source circuits, whereas all parameters of the analog sensor power supply and matching circuits are fully configurable, offering the advantage of covering a wide variety of industrial sensors. Note that a key feature of the proposed sensor node, ensuring the reliable operation of the connected sensors, is the appropriate supply of external power to the connected sensors and their proper matching to the IoT sensor node. In standard mode, the IoT sensor node communicates with the data center through 3G/LTE, transmitting all digital/digitized sensor data, the IoT device identity, and position. Moreover, the proposed IoT sensor node offers WiFi connectivity to mobile devices (smartphones, tablets) equipped with an appropriate application for the manual registration of vehicle- and driver-specific information, and these data are also forwarded to the data center. All control and communication tasks of the IoT sensor node are performed by dedicated firmware, programmed in a high-level language (Python) on top of a modern operating system (Linux). Acknowledgment: This research has been co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH—CREATE—INNOVATE (project code: T1EDK-01359, IntelligentLogger).
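
As an illustration of the standard operating mode described above, the following minimal Python sketch shows one possible acquisition and transmission loop: sample the analog/digital channels and OBD parameters, attach the device identity and GNSS position, and push the record to the data center over the cellular link. The driver functions, the endpoint URL, and the plain-HTTPS transport are hypothetical stand-ins, not the project's actual firmware.

```python
import json
import time

import requests  # transport shown as plain HTTPS for illustration only

DEVICE_ID = "node-001"                          # hypothetical device identity
DATA_CENTER_URL = "https://example.org/ingest"  # placeholder endpoint

def read_analog(channel):
    """Stand-in for the expansion board driver (8 analog channels)."""
    return 0.0

def read_digital(channel):
    """Stand-in for the expansion board driver (3 digital channels)."""
    return 0

def read_obd():
    """Stand-in for the OBD interface to the vehicle computer."""
    return {"rpm": 0, "coolant_temp": 0}

def read_gnss():
    """Stand-in for the GNSS expansion board; returns (latitude, longitude)."""
    return (0.0, 0.0)

def acquisition_loop(period_s=10):
    """Sample all channels, attach identity and position, and push to the data center."""
    while True:
        lat, lon = read_gnss()
        record = {
            "device": DEVICE_ID,
            "timestamp": time.time(),
            "position": {"lat": lat, "lon": lon},
            "analog": {ch: read_analog(ch) for ch in range(8)},
            "digital": {ch: read_digital(ch) for ch in range(3)},
            "obd": read_obd(),
        }
        requests.post(DATA_CENTER_URL, data=json.dumps(record),
                      headers={"Content-Type": "application/json"}, timeout=30)
        time.sleep(period_s)
```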

Keywords: IoT sensor nodes, e-maintenance, single-board computers, sensor expansion boards, on-board diagnostics

Procedia PDF Downloads 154
194 Exploration into Bio-Inspired Computing Based on Spintronic Energy Efficiency Principles and Neuromorphic Speed Pathways

Authors: Anirudh Lahiri

Abstract:

Neuromorphic computing, inspired by the intricate operations of biological neural networks, offers a revolutionary approach to overcoming the limitations of traditional computing architectures. This research proposes the integration of spintronics with neuromorphic systems, aiming to enhance computational performance, scalability, and energy efficiency. Traditional computing systems, based on the Von Neumann architecture, struggle with scalability and efficiency due to the segregation of memory and processing functions. In contrast, the human brain exemplifies high efficiency and adaptability, processing vast amounts of information with minimal energy consumption. This project explores the use of spintronics, which utilizes the electron's spin rather than its charge, to create more energy-efficient computing systems. Spintronic devices, such as magnetic tunnel junctions (MTJs) manipulated through spin-transfer torque (STT) and spin-orbit torque (SOT), offer a promising pathway to reducing power consumption and enhancing the speed of data processing. The integration of these devices within a neuromorphic framework aims to replicate the efficiency and adaptability of biological systems. The research is structured into three phases: an exhaustive literature review to build a theoretical foundation, laboratory experiments to test and optimize the theoretical models, and iterative refinements based on experimental results to finalize the system. The initial phase focuses on understanding the current state of neuromorphic and spintronic technologies. The second phase involves practical experimentation with spintronic devices and the development of neuromorphic systems that mimic synaptic plasticity and other biological processes. The final phase focuses on refining the systems based on feedback from the testing phase and preparing the findings for publication. The expected contributions of this research are twofold. Firstly, it aims to significantly reduce the energy consumption of computational systems while maintaining or increasing processing speed, addressing a critical need in the field of computing. Secondly, it seeks to enhance the learning capabilities of neuromorphic systems, allowing them to adapt more dynamically to changing environmental inputs, thus better mimicking the human brain's functionality. The integration of spintronics with neuromorphic computing could revolutionize how computational systems are designed, making them more efficient, faster, and more adaptable. This research aligns with the ongoing pursuit of energy-efficient and scalable computing solutions, marking a significant step forward in the field of computational technology.

Keywords: material science, biological engineering, mechanical engineering, neuromorphic computing, spintronics, energy efficiency, computational scalability, synaptic plasticity

Procedia PDF Downloads 43
193 Enhanced Furfural Extraction from Aqueous Media Using Neoteric Hydrophobic Solvents

Authors: Ahmad S. Darwish, Tarek Lemaoui, Hanifa Taher, Inas M. AlNashef, Fawzi Banat

Abstract:

This research reports a systematic top-down approach for designing neoteric hydrophobic solvents – particularly deep eutectic solvents (DES) and ionic liquids (IL) – as furfural extractants from aqueous media for the application of sustainable biomass conversion. The first stage of the framework entailed screening 32 neoteric solvents to determine their efficacy against toluene as the application’s conventional benchmark for comparison. The selection criteria for the best solvents encompassed not only their efficiency in extracting furfural but also low viscosity and minimal toxicity levels. Additionally, for the DESs, their natural origins, availability, and biodegradability were also taken into account. From the screening pool, two neoteric solvents were selected: thymol:decanoic acid 1:1 (Thy:DecA) and trihexyltetradecyl phosphonium bis(trifluoromethylsulfonyl) imide [P₁₄,₆,₆,₆][NTf₂]. These solvents outperformed the toluene benchmark, achieving efficiencies of 94.1% and 97.1%, respectively, compared to toluene’s 81.2%, while also possessing the desired properties. These solvents were then characterized thoroughly in terms of their physical properties, thermal properties, critical properties, and cross-contamination solubilities. The selected neoteric solvents were then extensively tested under various operating conditions, and exceptionally stable performance was exhibited, maintaining high efficiency across a broad range of temperatures (15–100 °C), pH levels (1–13), and furfural concentrations (0.1–2.0 wt%) with a remarkable equilibrium time of only 2 minutes; most notably, high efficiencies were demonstrated even at low solvent-to-feed ratios. The durability of the neoteric solvents was also validated over multiple extraction-regeneration cycles, with limited leachability to the aqueous phase (≈0.1%). Moreover, the extraction performance of the solvents was modeled through machine learning, specifically multiple non-linear regression (MNLR) and artificial neural networks (ANN). The models demonstrated high accuracy, indicated by their low average absolute relative deviations, with values of 2.74% and 2.28% for Thy:DecA and [P₁₄,₆,₆,₆][NTf₂], respectively, using MNLR, and 0.10% for Thy:DecA and 0.41% for [P₁₄,₆,₆,₆][NTf₂] using ANN, highlighting the significantly enhanced predictive accuracy of the ANN. The neoteric solvents presented herein offer noteworthy advantages over traditional organic solvents, including their high efficiency in both extraction and regeneration processes and their stability and minimal leachability, making them particularly suitable for applications involving aqueous media. Moreover, these solvents are more environmentally friendly, incorporating renewable and sustainable components like thymol and decanoic acid. This exceptional efficacy of the newly developed neoteric solvents represents a significant advancement, providing a green and sustainable alternative for furfural production from biowaste.
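
The machine-learning modelling step can be illustrated with a short sketch. The Python code below fits a small feed-forward ANN to synthetic (temperature, pH, furfural concentration) data and scores it with the average absolute relative deviation (AARD) metric quoted above; the numbers generated here are illustrative and are not the study's measured extraction data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# illustrative operating conditions: temperature (deg C), pH, furfural concentration (wt%)
X = np.column_stack([
    rng.uniform(15, 100, 200),
    rng.uniform(1, 13, 200),
    rng.uniform(0.1, 2.0, 200),
])
# toy extraction efficiency (%) used only to make the example runnable
y = 90 + 5 * np.tanh((X[:, 0] - 50) / 30) - 0.2 * np.abs(X[:, 1] - 7) + rng.normal(0, 0.5, 200)

model = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0).fit(X, y)
pred = model.predict(X)
aard = 100 * np.mean(np.abs(pred - y) / y)   # average absolute relative deviation (%)
print(f"AARD = {aard:.2f}%")
```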

Keywords: sustainable biomass conversion, furfural extraction, ionic liquids, deep eutectic solvents

Procedia PDF Downloads 70
192 Acoustic Energy Harvesting Using Polyvinylidene Fluoride (PVDF) and PVDF-ZnO Piezoelectric Polymer

Authors: S. M. Giripunje, Mohit Kumar

Abstract:

Acoustic energy, which exists in our everyday life and environment, has been overlooked as a green energy that can be extracted, generated, and consumed without any significant negative impact on the environment. The harvested energy can be used to enable new technology like wireless sensor networks. Technological developments in the realization of truly autonomous MEMS devices and energy storage systems have made acoustic energy harvesting (AEH) an increasingly viable technology. AEH is the process of converting high and continuous acoustic waves from the environment into electrical energy by using an acoustic transducer or resonator. AEH is not as popular as other types of energy harvesting methods since sound waves have a lower energy density and such energy can only be harvested in very noisy environments. However, the energy requirements of certain applications are correspondingly low, and there is also a need to monitor noise in order to reduce noise pollution. So the ability to reclaim acoustic energy and store it in a usable electrical form enables a novel means of supplying power to relatively low-power devices. A quarter-wavelength straight-tube acoustic resonator is introduced as an acoustic energy harvester, with polyvinylidene fluoride (PVDF) and PVDF doped with ZnO nanoparticles piezoelectric cantilever beams placed inside the resonator. When the resonator is excited by an incident acoustic wave at its first acoustic eigenfrequency, an amplified acoustic resonant standing wave develops inside the resonator. The acoustic pressure gradient of the amplified standing wave then drives the vibration motion of the PVDF piezoelectric beams, generating electricity due to the direct piezoelectric effect. In order to maximize the amount of harvested energy, each PVDF and PVDF-ZnO piezoelectric beam has been designed to have the same structural eigenfrequency as the acoustic eigenfrequency of the resonator. With a single PVDF beam placed inside the resonator, the harvested voltage and power become maximum near the open inlet of the resonator tube, where the largest acoustic pressure gradient vibrates the PVDF beam. As the beam is moved towards the closed end of the resonator tube, the voltage and power gradually decrease due to the decreased acoustic pressure gradient. Multiple PVDF and PVDF-ZnO piezoelectric beams have been placed inside the resonator in two different configurations: aligned and zigzag. With the zigzag configuration, which provides a more open path for acoustic air-particle motion, significant increases in the harvested voltage and power have been observed. Due to the interruption of acoustic air-particle motion caused by the beams, it is found that placing PVDF beams near the closed tube end is not beneficial. The total output voltage of the piezoelectric beams increases linearly as the incident sound pressure increases. This study therefore reveals that the proposed technique for harvesting sound wave energy has great potential for converting free energy into useful energy.
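
The tuning step described above, matching the structural eigenfrequency of each beam to the acoustic eigenfrequency of the resonator, can be sketched with two textbook formulas: f1 = c / (4L) for a quarter-wavelength tube (one open, one closed end) and the first bending mode of an Euler-Bernoulli cantilever. The dimensions and material values in the example are illustrative, not the experimental setup's exact figures.

```python
import math

def tube_first_eigenfrequency(length_m, speed_of_sound=343.0):
    """Quarter-wavelength resonator (open at one end, closed at the other): f1 = c / (4 L)."""
    return speed_of_sound / (4.0 * length_m)

def cantilever_first_eigenfrequency(length_m, thickness_m, youngs_modulus, density):
    """Euler-Bernoulli cantilever of rectangular section: f1 = (lambda1^2 / 2 pi) sqrt(E I / (rho A L^4))."""
    lam1 = 1.8751
    # for a rectangular cross-section I / A = t^2 / 12, so the beam width cancels out
    return (lam1 ** 2 / (2 * math.pi)) * math.sqrt(
        youngs_modulus * thickness_m ** 2 / (12 * density * length_m ** 4)
    )

print(tube_first_eigenfrequency(0.5))                          # ~171 Hz for a 0.5 m tube
print(cantilever_first_eigenfrequency(0.03, 1e-4, 3e9, 1780))  # PVDF-like values, illustrative only
```

In a design loop, the beam length or thickness would be adjusted until the two printed frequencies coincide; a real harvester beam with electrodes and a tip mass would need a more detailed model.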

Keywords: acoustic energy, acoustic resonator, energy harvester, eigenfrequency, polyvinylidene fluoride (PVDF)

Procedia PDF Downloads 385
191 Effect of Black Cumin (Nigella sativa) Extract on Damaged Brain Cells

Authors: Batul Kagalwala

Abstract:

The nervous system is made up of complex, delicate structures such as the spinal cord, peripheral nerves and the brain. These are prone to various types of injury, ranging from neurodegenerative diseases to trauma, leading to diseases like Parkinson's, Alzheimer's, multiple sclerosis, amyotrophic lateral sclerosis (ALS), multiple system atrophy, etc. Unfortunately, because of the complicated structure of the nervous system, spontaneous regeneration, repair and healing are seldom seen, due to which brain damage, peripheral nerve damage and paralysis from spinal cord injury are often permanent and incapacitating. Hence, an innovative and standardized approach is required for the advanced treatment of neurological injury. Nigella sativa (N. sativa), an annual flowering plant native to regions of southern Europe and Asia, has been suggested to have neuroprotective and anti-seizure properties. Neuroregeneration is found to occur in damaged cells when treated with an extract of N. sativa. Due to its proven health benefits, many experiments are being conducted to extract all the benefits from the plant. The flowers are delicate and are usually pale blue and white in color, with small black seeds. These seeds are the source of active components such as 30–40% fixed oils, 0.5–1.5% essential oils, and pharmacologically active components including thymoquinone (TQ), dithymoquinone (DTQ) and nigellin. In traditional medicine, this herb was identified to have healing properties and was extensively used in the Middle East and Far East for treating conditions such as headache, back pain, asthma, infections, dysentery, hypertension, obesity and gastrointestinal problems. Literature studies have confirmed that the extract of N. sativa seeds and TQ have inhibitory effects on inducible nitric oxide synthase and the production of nitric oxide, as well as anti-inflammatory and anticancer activities. An experimental investigation will be conducted to understand which ingredient of N. sativa causes neuroregeneration and underlies its healing properties. An aqueous/alcoholic extract of N. sativa will be made; seed oil has also been used by researchers to prepare such extracts. For the alcoholic extracts, the seeds need to be powdered and soaked in alcohol for a period of time, and the alcohol must then be evaporated using a rotary evaporator. For aqueous extracts, the powder must be dissolved in distilled water to obtain a pure extract. The extract will serve as the mobile phase, while a suitable stationary phase (a substance that is a good adsorbent, e.g., silica gel, alumina, cellulose) will be selected. The different ingredients of N. sativa will be separated using High Performance Liquid Chromatography (HPLC) for treating damaged cells. Damaged brain cells will be treated with the compounds individually and in different combinations of two or three compounds for different intervals of time. The most suitable compound, or combination of compounds, for the regeneration of cells will be determined using the DOE (design of experiments) methodology. Later, the responsible gene will also be determined and, using the Polymerase Chain Reaction (PCR), replicated in a plasmid vector. This plasmid vector shall be inserted in the brain of the organism used and replicated within it. The gene insertion can also be done by the gene gun method: the gene in question can be coated onto a tungsten micro-bullet and bombarded into the area of interest, and gene replication and coding shall be studied. Whether or not the gene replicates in the organism will also be examined.

Keywords: black cumin, brain cells, damage, extract, neuroregeneration, PCR, plasmids, vectors

Procedia PDF Downloads 657
190 Using ANN in Emergency Reconstruction Projects Post Disaster

Authors: Rasha Waheeb, Bjorn Andersen, Rafa Shakir

Abstract:

Purpose: The purpose of this study is to avoid the delays that occur in emergency reconstruction projects, especially in post-disaster circumstances, whether natural or man-made, given their particular national and humanitarian importance. We present theoretical and practical concepts for project management in the construction industry that deal with a range of global and local trials. This study aimed to identify the factors that most effectively cause delay in construction projects in Iraq, affecting time, cost and the specified quality, and to find the best solutions to address delays by setting parameters to restore balance. Thirty projects in different areas of construction were selected as the sample for this study. Design/methodology/approach: This study discusses the reconstruction strategies and the delay in time and cost caused by different delay factors in selected projects in Iraq (Baghdad as a case study). A case study approach was adopted, with thirty construction projects of different types and sizes selected from the Baghdad region. Project participants from the case projects provided data through a data collection instrument distributed as a survey. A mixed approach and methods were applied in this study. Mathematical data analysis was used to construct models to predict delay in the time and cost of projects before they start. Artificial neural network (ANN) analysis was selected as the mathematical approach. These models are mainly intended to help decision makers in construction projects find solutions to these delays before they cause any inefficiency in the project being implemented, and to tackle the obstacles thoroughly in order to develop this industry in Iraq. This approach was practiced using the data collected through the survey and questionnaire. Findings: The most important delay factors identified as leading to schedule overruns were contractor failure, redesign of plans and change orders, security issues, selection of low-price bids, weather factors, and owner failures. Some of these are quite in line with findings from similar studies in other countries/regions, but some are unique to the Iraqi project sample, such as security issues and low-price bid selection. Originality/value: ANN analysis was selected because ANNs have rarely been used in project management and had never been used in Iraq to find solutions to problems in the construction industry. This methodology can also be used for complicated problems when there is no straightforward interpretation or solution; in some cases statistical analysis was conducted, and in some cases the problem did not follow a linear equation or there was a weak correlation, so we suggest using ANNs because they are suited to nonlinear problems and to finding the relationship between input and output data, which proved really supportive.
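
A minimal sketch of the ANN approach is given below, using synthetic factor ratings rather than the 30-project Baghdad dataset: a small feed-forward network maps delay-factor scores for a project to its expected time and cost overruns, so a prediction can be made before construction starts. The factor list, the 1-5 rating scale, and the network size are illustrative assumptions, not the paper's actual model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# six delay factors rated 1-5: contractor failure, redesign/change orders, security issues,
# low-price bid selection, weather, owner failure
X = rng.integers(1, 6, size=(30, 6)).astype(float)

# toy targets used only to make the example runnable (percent overruns)
weights = np.array([0.9, 0.6, 0.8, 0.5, 0.3, 0.7])
time_overrun = X @ weights * 4 + rng.normal(0, 3, 30)
cost_overrun = X @ weights * 3 + rng.normal(0, 2, 30)
Y = np.column_stack([time_overrun, cost_overrun])

ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=8000, random_state=0).fit(X, Y)
new_project = [[4, 3, 5, 2, 1, 3]]        # factor ratings for a planned project
print(ann.predict(new_project))           # predicted [time %, cost %] overrun
```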

Keywords: construction projects, delay factors, emergency reconstruction, innovation ANN, post disasters, project management

Procedia PDF Downloads 165
189 Amphiphilic Compounds as Potential Non-Toxic Antifouling Agents: A Study of Biofilm Formation Assessed by Micro-titer Assays with Marine Bacteria and Eco-toxicological Effect on Marine Algae

Authors: D. Malouch, M. Berchel, C. Dreanno, S. Stachowski-Haberkorn, P-A. Jaffres

Abstract:

Biofilm is a predominant lifestyle chosen by bacteria. Whether it is developed on an immersed surface or as a mobile biofilm known as flocs, the bacteria within this form of life show properties different from their planktonic counterparts. Within the biofilm, the self-formed matrix of Extracellular Polymeric Substances (EPS) offers hydration, resource capture, enhanced resistance to antimicrobial agents, and allows cell communication. Biofouling is a complex natural phenomenon that involves biological, physical and chemical properties related to the environment, the submerged surface and the living organisms involved. Bio-colonization of artificial structures can cause various economic and environmental impacts. The increase in costs associated with the over-consumption of fuel by biocolonized vessels has been widely studied. Measurement drift from submerged sensors, as well as obstructions in heat exchangers and deterioration of offshore structures, are major difficulties that industries are dealing with. Therefore, surfaces that inhibit biocolonization are required in different areas (water treatment, marine paints, etc.), and many efforts have been devoted to producing efficient and eco-compatible antifouling agents. The different steps of surface fouling are widely described in the literature. Studying the biofilm and its stages provides a better understanding of how to elaborate more efficient antifouling strategies. Several approaches are currently applied, such as the use of biocide antifouling paints (mainly with copper derivatives) and super-hydrophobic coatings. While these two processes are proving to be the most effective, they are not entirely satisfactory, especially in a context of changing legislation. Nowadays, the challenge is to prevent biofouling with non-biocide compounds, offering a cost-effective solution but with no toxic effects on marine organisms. Since the micro-fouling phase plays an important role in the regulation of the following steps of biofilm formation, it is desirable to reduce or delay the biofouling of a given surface by inhibiting micro-fouling at its early stages. In our recent works, we reported that some amphiphilic compounds exhibited bacteriostatic or bactericidal properties at a concentration that did not affect eukaryotic cells. These remarkable properties invited us to assess this type of bio-inspired phospholipid to prevent the colonization of surfaces by marine bacteria. Of note, other studies reported that amphiphilic compounds interact with bacteria, leading to a reduction of their development. An amphiphilic compound is a molecule consisting of a hydrophobic domain and a polar head (ionic or non-ionic). These compounds appear to have interesting antifouling properties: some ionic compounds have shown antimicrobial activity, and zwitterions can reduce the nonspecific adsorption of proteins. Herein, we investigate the potential of amphiphilic compounds as inhibitors of bacterial growth and marine biofilm formation. The aim of this study is to compare the efficacy of four synthetic phospholipids featuring a cationic charge (BSV36, KLN47) or a zwitterionic polar-head group (SL386, MB2871) in preventing microfouling by marine bacteria. We also study the toxicity of these compounds in order to identify the most promising compound, which must combine high anti-adhesive properties with low cytotoxicity towards two links representative of coastal marine food webs: phytoplankton and oyster larvae.

Keywords: amphiphilic phospholipids, bacterial biofilm, marine microfouling, non-toxic antifouling

Procedia PDF Downloads 147
188 Leadership and Entrepreneurship in Higher Education: Fostering Innovation and Sustainability

Authors: Naziema Begum Jappie

Abstract:

Leadership and entrepreneurship in higher education have become critical components in navigating the evolving landscape of academia in the 21st century. This abstract explores the multifaceted relationship between leadership and entrepreneurship within the realm of higher education, emphasizing their roles in fostering innovation and sustainability. Higher education institutions, often characterized as slow-moving and resistant to change, are facing unprecedented challenges. Globalization, rapid technological advancements, changing student demographics, and financial constraints necessitate a reimagining of traditional models. Leadership in higher education must embrace entrepreneurial thinking to effectively address these challenges. Entrepreneurship in higher education involves cultivating a culture of innovation, risk-taking, and adaptability. Visionary leaders who promote entrepreneurship within their institutions empower faculty and staff to think creatively, seek new opportunities, and engage with external partners. These entrepreneurial efforts lead to the development of novel programs, research initiatives, and sustainable revenue streams. Innovation in curriculum and pedagogy is a central aspect of leadership and entrepreneurship in higher education. Forward-thinking leaders encourage faculty to experiment with teaching methods and technology, fostering a dynamic learning environment that prepares students for an ever-changing job market. Entrepreneurial leadership also facilitates the creation of interdisciplinary programs that address emerging fields and societal challenges. Collaboration is key to entrepreneurship in higher education. Leaders must establish partnerships with industry, government, and non-profit organizations to enhance research opportunities, secure funding, and provide real-world experiences for students. Entrepreneurial leaders leverage their institutions' resources to build networks that extend beyond campus boundaries, strengthening their positions in the global knowledge economy. Financial sustainability is a pressing concern for higher education institutions. Entrepreneurial leadership involves diversifying revenue streams through innovative fundraising campaigns, partnerships, and alternative educational models. Leaders who embrace entrepreneurship are better equipped to navigate budget constraints and ensure the long-term viability of their institutions. In conclusion, leadership and entrepreneurship are intertwined elements essential to the continued relevance and success of higher education institutions. Visionary leaders who champion entrepreneurship foster innovation, enhance the student experience, and secure the financial future of their institutions. As academia continues to evolve, leadership and entrepreneurship will remain indispensable tools in shaping the future of higher education. This abstract underscores the importance of these concepts and their potential to drive positive change within the higher education landscape.

Keywords: entrepreneurship, higher education, innovation, leadership

Procedia PDF Downloads 68
187 Evolution of Plio/Pleistocene Sedimentary Processes in Patraikos Gulf, Offshore Western Greece

Authors: E. K. Tripsanas, D. Spanos, I. Oikonomopoulos, K. Stathopoulou, A. S. Abdelsamad, A. Pagoulatos

Abstract:

Patraikos Gulf is located offshore western Greece and is limited to the west by the Zante, Cephalonia, and Lefkas islands. The Plio/Pleistocene sequence is characterized by two depocenters, the east and west Patraikos basins, separated from each other by a prominent sill. This study is based on Plio/Pleistocene seismic stratigraphy analysis of a newly acquired 3D PSDM (pre-stack depth migration) seismic survey in the west Patraikos Basin and a few 2D seismic profiles throughout the entire Patraikos Gulf. The eastern Patraikos Basin, although completely buried today beneath water depths of less than 100 m, was a deep basin during the Pliocene (> 2 km of Pliocene-Pleistocene sediments) and appears to have gathered most of the Achelous River discharges. The west Patraikos Basin was shallower (< 1300 m of Pliocene-Pleistocene sediments) and characterized by a hummocky relief due to thrust-belt tectonics and Miocene to Pleistocene halokinetic processes. The transition from the Miocene to the Pliocene is expressed by a widespread erosional unconformity with evidence of fluvial drainage patterns. This indicates that the west Patraikos Basin was subaerially exposed during the Messinian Salinity Crisis. Continuous to semi-continuous, parallel reflections in the lower, early- to mid-Pliocene seismic packet provide evidence that the re-connection of the Mediterranean Sea with the Atlantic Ocean during the Zanclean resulted in the flooding of the west Patraikos Basin and the domination of hemipelagic sedimentation interrupted by occasional gravity flows. This is evident in amplitude and semblance horizon slices, which clearly show the presence of long-running, meandering submarine channels sourced from the southeast (northwest Peloponnese) and north. The long-running nature of the submarine channels suggests mobile, efficient turbidity currents, probably due to the participation of a sufficient amount of clay minerals in their suspended load. The upper seismic section in the study area mainly consists of several successions of clinoforms, interpreted as progradational delta complexes of the Achelous River. This sudden change from marine to shallow-marine sedimentary processes is attributed to climatic changes and eustatic perturbations from the late Pliocene onwards (~2.6 Ma) and/or a switch of the Achelous River from the east Patraikos Basin to the west Patraikos Basin. The deltaic seismic unit consists of four delta complexes. The first two complexes result in the infill of topographic depressions and the smoothing of an initially hummocky bathymetry. The distribution of the upper two delta complexes is controlled by compensational stacking. Amplitude and semblance horizon slices depict the development of several almost straight and short (a few km long) distributary submarine channels at the delta slopes and proximal prodeltaic plains, with lobate sand-sheet deposits at their mouths. Such channels are interpreted to result from low-efficiency turbidity currents with a low content of clay minerals. Such a differentiation in the nature of the gravity flows is attributed to a switch of the sediment supply from clay-rich sediments, derived from the draining of flysch formations of the Ionian and Gavrovo zones, to sediments derived from the draining of clay-poor carbonate formations of the Gavrovo zone through the Achelous River.

Keywords: sequence stratigraphy, basin analysis, river deltas, submarine channels

Procedia PDF Downloads 322
186 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective

Authors: Pardis Moslemzadeh Tehrani

Abstract:

Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives, and it is being embraced by a wide range of sectors. Smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as "smart" contracts because of the underlying technology that allows contracting parties to agree on terms expressed in computer code that defines machine-readable instructions for computers to follow under specific situations. The transmission happens automatically if the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to include the financial, insurance, and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design, and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Many issues are linked with managing supply chains from the planning and coordination stages, which can be implemented in a smart contract in a blockchain due to their complexity. Manufacturing delays and limited third-party amounts of product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation circumstances (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). Products are handled by several providers before reaching customers in modern economic systems. Information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process. Information travels more effectively when individuals are eliminated from the equation. The usage of blockchain technology could be a viable solution to these coordination issues. In blockchains, smart contracts allow for the rapid transmission of production data, logistical data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the use of AI-blockchain technology to supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because there has been little research done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research will be on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover all unforeseen supply chain challenges.
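
To make the condition-triggered behaviour described above concrete, the following Python sketch (off-chain pseudocode rather than Solidity or any specific blockchain platform) models a temperature-controlled shipment: payment to the carrier is released automatically only if delivery is confirmed and the logged temperatures stayed within the agreed range. The parties, thresholds, and settlement actions are purely illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ShipmentContract:
    buyer: str
    shipper: str
    payment: float
    max_temp_c: float = 8.0                 # agreed cold-chain condition (illustrative)
    temperature_log: list = field(default_factory=list)
    delivered: bool = False
    settled: bool = False

    def record_temperature(self, reading_c: float):
        self.temperature_log.append(reading_c)
        self._try_settle()

    def confirm_delivery(self):
        self.delivered = True
        self._try_settle()

    def _try_settle(self):
        """Executes automatically once the coded conditions are met."""
        if self.settled or not self.delivered:
            return
        if all(t <= self.max_temp_c for t in self.temperature_log):
            print(f"Release {self.payment} from {self.buyer} to {self.shipper}")
        else:
            print(f"Refund {self.payment} to {self.buyer}: cold chain breached")
        self.settled = True

contract = ShipmentContract(buyer="Retailer A", shipper="Carrier B", payment=10_000.0)
for reading in (4.5, 5.1, 6.0):
    contract.record_temperature(reading)
contract.confirm_delivery()
```

On an actual blockchain, this logic would live in on-chain contract code and the temperature readings would arrive through oracles; the sketch only shows the "terms as machine-readable conditions" idea the abstract refers to.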

Keywords: blockchain, supply chain, IoT, smart contract

Procedia PDF Downloads 126
185 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL become the absolute tool for data classification? All current solutions consist of repositioning the variables in a 2x2 matrix using their correlation proximity; in doing so, they obtain an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which disobeys Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in three dimensions: performance, complexity and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on two super-parameters used in the Neurops. By varying these two super-parameters, we obtain a 2x2 matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR; the total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels. The intensity of the pixels is proportional to the probability of the associated NIC, and the color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the GSE22513 public data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison across several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
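
A minimal, illustrative sketch of the core idea (not the authors' DeepNIC implementation) is shown below: per-variable NIC probabilities are mapped to grey levels and tiled into a single image that a standard CNN could ingest. The grid sizes, the number of NIC maps, and the tiling layout are assumptions chosen for brevity; the real DeepNIC images are at least 1166x1167 pixels and also encode NIC identity as colour.

```python
import numpy as np

def nic_maps_to_image(nic_maps, tile_px=64):
    """nic_maps: list of 2-D arrays of probabilities in [0, 1], one per NIC (or NIC combination)."""
    tiles = []
    for p in nic_maps:
        p = np.asarray(p, dtype=float)
        # enlarge each probability cell into a tile_px x tile_px block of grey levels
        tile = np.kron(p, np.ones((tile_px, tile_px)))
        tiles.append(np.clip(tile * 255.0, 0, 255).astype(np.uint8))
    # stack the NIC tiles vertically into one greyscale image for this variable
    return np.vstack(tiles)

# toy example: 10 NICs, each evaluated over a 2 x 2 grid of the two Neurop super-parameters
rng = np.random.default_rng(0)
image = nic_maps_to_image([rng.random((2, 2)) for _ in range(10)])
print(image.shape)   # (1280, 128) with the toy sizes above
```

One such image per variable would then be fed to an ordinary image classifier (any basic CNN framework), which is the point of the method: the tabular column becomes a picture.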

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 125
184 Financial Analysis of Foreign Direct Investment in Mexico

Authors: Juan Peña Aguilar, Lilia Villasana, Rodrigo Valencia, Alberto Pastrana, Martin Vivanco, Juan Peña C

Abstract:

Each year a growing number of companies enter Mexico in search of a share of the domestic market. These activities, including retail stores, long-distance and local telephony, raw materials and energy, and particularly the financial sector, have managed to significantly increase their weight in FDI flows into Mexico. However, it should be considered whether these FDI trends are positive for the Mexican economy, and whether these activities increase Mexican exports in the medium term and their share in GDP, gross fixed capital formation and employment. In general, it is stressed that these activities have, by far, been unable to generate significant linkages with the rest of the economy, a process that has not been favoured by competitiveness policies aimed at these activities, which have been neutral or horizontal. Since the nineties, foreign direct investment (FDI) has shown a remarkable dynamism, both internationally and in Latin America and Mexico. Mexico was the most important recipient of FDI in Latin America during 1990-1995 and has since been displaced by Brazil; FDI increased from levels below 1% of GDP during the eighties to around 3% of GDP during the nineties. Its impact has been significant not only from a macroeconomic perspective; it has also allowed the generation of a new industrial production structure and organization, parallel to a significant modernization of a segment of the economy. The case of Mexico is also particularly interesting and relevant because, until 1993, FDI had been focused on the purchase of state assets during the privatization process. This paper aims to present FDI flows in Mexico and analyze the different business strategies that have been touched and encouraged by FDI. On the one hand, it briefly discusses regulatory issues and the source and recipient sectors of FDI. Furthermore, the paper presents in more detail the impacts and changes that FDI has generated in the Mexican economy, and examines the macroeconomic context and the later legislative changes that resulted in the current regulations around FDI in Mexico, including aspects of the North American Free Trade Agreement (NAFTA). It is worth noting that foreign investment cannot only be considered from the perspective of the receiving economic units. Instead, these flows also reflect the strategic interests of transnational corporations (TNCs) and other companies seeking access to markets and increased competitiveness of their production and global distribution networks, among other reasons. Similarly, it is important to note that foreign investment in its various forms is critically dependent on historical and temporal aspects. Thus, the same functionality can vary significantly depending on the specific characteristics of both recipient units and sources of FDI, including macroeconomic, institutional, industrial organization, and social aspects, among others.

Keywords: foreign direct investment (FDI), competitiveness, neoliberal regime, globalization, gross domestic product (GDP), NAFTA, macroeconomic

Procedia PDF Downloads 450
183 Budgetary Performance Model for Managing Pavement Maintenance

Authors: Vivek Hokam, Vishrut Landge

Abstract:

An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to various constraints such as budget, manpower and equipment, it is not possible to carry out maintenance on all the needy industrial road sections within a given planning period. A rational and systematic priority scheme needs to be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking list of sections for maintenance based on several factors. In priority setting, difficult decisions are required for the selection of sections for maintenance. It is more important to repair a section with poor functional conditions, which include an uncomfortable ride, or poor structural conditions, i.e., sections that are in danger of becoming structurally unsound. It would seem, therefore, that any rational priority-setting approach must consider the relative importance of the functional and structural condition of the section. Maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model suited to the limited budget provisions for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions. The optimum decision is one that meets a specified objective of management, subject to various constraints and restrictions. The objective is mainly the minimization of the maintenance cost of roads in an industrial area. In order to determine the objective function for the analysis of the distress model, it is necessary to fit realistic data into the formulation. Each type of repair is quantified over a number of stretches by considering 1,000 m as one stretch. The stretch considered in this study is 3,750 m long. These quantities enter an objective function that maximizes the number of repairs in a stretch. The distresses observed in this stretch are potholes, surface cracks, rutting and ravelling. The distress data are measured manually by observing each distress level on a stretch of 1,000 m. The maintenance and rehabilitation measures currently followed are based on subjective judgments; hence, there is a need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine pavement performance and deterioration prediction relationships more accurately, together with the economic benefits to road networks with respect to vehicle operating costs. The road network infrastructure should deliver the best results expected from the available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model considering overloading is discussed.
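
The budget-constrained selection described above can be illustrated with a small linear program. The sketch below uses illustrative unit costs and distress quantities rather than the paper's field data: it maximises the number of repaired distress units (potholes, surface cracks, rutting, ravelling) on a stretch subject to the available maintenance budget.

```python
from scipy.optimize import linprog

distress_qty = [40, 120, 60, 80]        # measured units of each distress on the stretch (assumed)
unit_cost    = [50.0, 8.0, 20.0, 12.0]  # repair cost per unit of each distress (assumed)
budget       = 3000.0

# linprog minimises, so maximise total repairs by minimising their negative
res = linprog(
    c=[-1.0] * 4,
    A_ub=[unit_cost],                          # total repair cost must stay within the budget
    b_ub=[budget],
    bounds=list(zip([0] * 4, distress_qty)),   # cannot repair more than what is observed
    method="highs",
)
print(res.x, -res.fun)   # repair quantity per distress type and the total number of repairs
```

A weighted objective (e.g., weighting potholes more heavily than ravelling) or additional constraints per functional class can be added in the same framework without changing its structure.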

Keywords: budget, maintenance, deterioration, priority

Procedia PDF Downloads 207
182 Contentious Politics during a Period of Transition to Democracy from an Authoritarian Regime: The Spanish Cycle of Protest of November 1975-December 1978

Authors: Juan Sanmartín Bastida

Abstract:

When a country experiences a period of transition from authoritarianism to democracy, involving an earlier process of political liberalization and a later process of democratization, a cycle of protest usually breaks out, as there is a reciprocal influence between that kind of political change and the frequency and scale of social protest events. That is what happened in Spain during the first years of its transition to democracy from the Francoist authoritarian regime, roughly between November 1975 and December 1978. Thus, the object of this study is to show and explain how that cycle of protest started, developed, and finished in relation to such a political change, and to offer specific information about the main features of all protest cycles: the social movements that arose during that period, the number of protest events by month, the forms of collective action that were utilized, the groups of challengers that engaged in contentious politics, the reaction of the authorities to the actions and claims of those groups, etc. The study of this cycle of protest, using the primary sources and analytical tools that characterize the model of research on protest cycles, will make a contribution to the field of contentious politics and its phenomenon of cycles of contention, and more broadly to the political and social history of contemporary Spain. The cycle of protest and the process of political liberalization of the authoritarian regime began around the same time, but the former concluded long before the process of democratization was completed in 1982. The ascending phase of the cycle, and therefore the process of liberalization, started with the death of Francisco Franco and the proclamation of Juan Carlos I as King of Spain in November 1975; the peak of the cycle was around the first months of 1977; the descending phase started after the first general election of June 1977; and the level of protest stabilized in the last months of 1978, a year that finished with a referendum in which the Spanish people approved the current democratic constitution. It was then that the cycle of protest can be considered to have come to an end. The primary sources are the news reports of protest events and social movements in the three main Spanish newspapers of the time, other written or audiovisual documents, and in-depth interviews; and the analytical tools are the political opportunities that encourage social protest, the available repertoire of contention, the organizations and networks that brought together people with the same claims and allowed them to engage in contentious politics, and the interpretative frames that justify, dignify and motivate their collective action. These are the main four factors that explain the beginning, development and ending of the cycle of protest, and therefore the accompanying social movements and events of collective action. Among those four factors, the political opportunities (their opening, exploitation, and closure) proved to be the most decisive.

Keywords: contentious politics, cycles of protest, political opportunities, social movements, Spanish transition to democracy

Procedia PDF Downloads 138
181 The Role of the Corporate Social Responsibility in Poverty Reduction

Authors: M. Verde, G. Falzarano

Abstract:

The paper examines the connection between corporate social responsibility (CSR), the capability approach and poverty reduction; in particular, local employment development (LED) by way of CSR initiatives. The joint action of LED/CSR results in a win-win situation, not only for the enterprises but also for all the stakeholders involved; in this regard, subsidiarity and coordination between national and regional/local authorities are central to a socially oriented market economy. In the first section, CSR is analysed on the basis of its social function in the fight against poverty, understood as a 'capabilities deprivation'. In the central part, the attention is focused on the relationship between CSR and LED, and thus on the role of enterprises in fostering capabilities development (employment). In the last part, the potential solutions are presented, stressing the possible combinations. The benchmark is the enterprise as an economic and a social institution: business should not be concerned merely with profit, but should pay more attention to its sustainable impact and social contribution. In which way could this be possible? The answer is CSR. The impact of CSR on poverty reduction is still little explored. Companies help to reduce poverty through economic contribution, human rights and social inclusion; hence, the business becomes an 'agent of development' in the fight against 'inequality'. The starting point is the pyramid of social responsibility, where ethical and philanthropic responsibilities involve programmes and actions aimed at the personal development of individuals, improving the human standard of living in all its forms, including poverty, when people do not have a choice between different 'life options', ranging from level of education to employment. At this point, CSR comes into play and works on two dimensions, poverty reduction and poverty prevention, by means of a series of initiatives: first of all, job creation and the reduction of precarious work. Empowerment of local actors, financial support, and the combination of top-down and bottom-up initiatives are some of CSR's areas of activity. Several positive effects occur on individual levels of education, access to capital, individual health status, empowerment of youth and women, and access to social networks, and it was observed that these effects depend on the type of CSR strategy. Indeed, CSR programmes should take into account fundamental criteria such as transparency, information about benefits, a coordination unit among institutions and clearer guidelines. In this way, the advantages to corporate reputation and to the community translate into better job matching on the labour market, inter alia. It is important to underline that success depends on the specific measures for the areas in question, adapting them to local needs in light of general principles and indices; therefore, the concrete commitment of all the stakeholders involved is decisive in order to achieve the goals. The enterprise would thus represent a concrete contribution to the pursuit of sustainable development and to the dissemination of social and well-being awareness.

Keywords: capability approach, local employment development, poverty, social inclusion

Procedia PDF Downloads 138
180 Cultural Identity and Self-Censorship in Social Media: A Qualitative Case Study

Authors: Nastaran Khoshsabk

Abstract:

The evolution of communication through the Internet has influenced the shaping and reshaping of the self-presentation of social media users. Online communities both connect people and give voice to the voiceless, allowing them to present themselves nationally and globally. People all around the world experience censorship in different aspects of their lives. Censorship can be externally imposed because of political situations, or it can be self-imposed. Social media users choose the content they want to share and decide about the online audiences with whom they want to share this content. Most social media networks, such as Facebook, enable their users to be selective about the shared content and its availability to other people. However, sometimes instead of targeting a specific audience, users self-censor or decide not to share various forms of information. These decisions are of particular importance in countries such as Iran, where the Internet is not an arena of free self-presentation and people are encouraged to stay away from political participation and from acting against Islamic values. Facebook and some other social media tools are blocked in countries such as Iran. This project investigates the importance of social media in the lives of Iranians to explore how they present themselves and construct their digital selves. The notion of cultural identity is applied in this research to explore the educational and informative role of social media in the identity formation and cultural representation of Facebook users. The study explores the self-censorship of Iranian adult Facebook users through their online self-representation and communication on the Internet. The data in this qualitative multiple case study have been collected through individual synchronous online interviews with the researcher's Facebook friends and through the analysis of the participants' Facebook profiles and activities over a period of six months. The data are analysed with an emphasis on the identity formation of participants through the recognition of the underlying themes. The exploration of the online interviews is based on participants' personal accounts of self-censorship and cultural understanding through using social media. The derived codes and themes have been categorised with respect to censorship and the place of culture in the representation of self. Participants were asked to explain their views about censorship and conservatism through using social media. They reported their thoughts about deciding which content to share on Facebook and which to self-censor, and their reasons behind these decisions. The codes and themes have been categorised considering censorship and its role in the representation of an idealised self. The 'actual self' was shown to be hidden by individuals for different reasons, such as its influence on their social status, academic achievements and job opportunities. It is hoped that this research will have implications for educational contexts in countries experiencing social media filtering by offering an increased understanding of the importance of online communities, which can provide an environment in which to talk and learn about social taboos and in which adults construct their identity virtually and through cultural self-presentation.

Keywords: cultural identity, identity formation, online communities, self-censorship

Procedia PDF Downloads 237
179 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

There exists a plethora of methods in the scientific literature which tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority being classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. It is, however, the “BAD” clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted losses for loan providers. We add to this context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism: LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, among which are the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under the curve, partial Gini index, H-measure, Brier score, and the Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results, which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
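The authors' implementation relies on C# LINQ expression trees; the following is a minimal Python sketch of the same idea, evolving arithmetic expression trees whose leaves are client properties or constants and scoring candidates by classification accuracy. The feature names, operator set, population size, and the use of mutation only (crossover is omitted for brevity) are illustrative assumptions, not the authors' configuration.

```python
import math
import operator
import random

# Protected operators and a couple of unary functions, as in typical GP setups.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul,
       "/": lambda a, b: a / b if abs(b) > 1e-9 else 1.0}
FUNCS = {"sin": math.sin, "abs": abs}
FEATURES = ["age", "loan_duration", "income", "num_credits"]  # hypothetical client properties


def random_tree(depth=3):
    """Grow a random expression tree; leaves are features or constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(FEATURES) if random.random() < 0.7 else round(random.uniform(-2, 2), 2)
    if random.random() < 0.25:
        return (random.choice(list(FUNCS)), random_tree(depth - 1))
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))


def evaluate(tree, client):
    """Evaluate a tree for one client (a dict mapping feature name to value)."""
    if isinstance(tree, str):
        return client[tree]
    if isinstance(tree, (int, float)):
        return tree
    if len(tree) == 2:
        return FUNCS[tree[0]](evaluate(tree[1], client))
    return OPS[tree[0]](evaluate(tree[1], client), evaluate(tree[2], client))


def fitness(tree, data):
    """Fraction of clients classified correctly; a score above zero is read as 'BAD'."""
    return sum((evaluate(tree, x) > 0) == y for x, y in data) / len(data)


def mutate(tree, depth=2):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if isinstance(tree, (str, int, float)) or random.random() < 0.3:
        return random_tree(depth)
    idx = random.randrange(1, len(tree))
    return tree[:idx] + (mutate(tree[idx], depth),) + tree[idx + 1:]


# Toy evolutionary loop over a synthetic dataset of (client, is_bad) pairs.
data = [({"age": random.randint(20, 70), "loan_duration": random.randint(6, 60),
          "income": random.uniform(1, 10), "num_credits": random.randint(1, 5)},
         random.random() < 0.3) for _ in range(200)]
population = [random_tree() for _ in range(50)]
for generation in range(20):
    population.sort(key=lambda t: fitness(t, data), reverse=True)
    population = population[:25] + [mutate(random.choice(population[:25])) for _ in range(25)]
best = max(population, key=lambda t: fitness(t, data))
print("best fitness:", fitness(best, data), "formula:", best)
```

Because the evolved individual is itself a readable expression, pruning variables that never appear in the best tree gives the dimensionality-reduction side effect described in the abstract.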

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 117
178 Sinhala Sign Language to Grammatically Correct Sentences using NLP

Authors: Anjalika Fernando, Banuka Athuraliya

Abstract:

This paper presents a comprehensive approach for converting Sinhala Sign Language (SSL) into grammatically correct sentences using Natural Language Processing (NLP) techniques in real time. While previous studies have explored various aspects of SSL translation, the research gap lies in the absence of grammar checking for SSL. This work aims to bridge this gap by proposing a two-stage methodology that leverages deep learning models to detect signs and translate them into coherent sentences, ensuring grammatical accuracy. The first stage of the approach involves the utilization of a Long Short-Term Memory (LSTM) deep learning model to recognize and interpret SSL signs. By training the LSTM model on a dataset of SSL gestures, it learns to accurately classify and translate these signs into textual representations. The LSTM model achieves a commendable accuracy rate of 94%, demonstrating its effectiveness in accurately recognizing and translating SSL gestures. Building upon the successful recognition and translation of SSL signs, the second stage of the methodology focuses on improving the grammatical correctness of the translated sentences. The project employs a Neural Machine Translation (NMT) architecture, consisting of an encoder and decoder with LSTM components, to enhance the syntactic structure of the generated sentences. By training the NMT model on a parallel corpus of grammatically incorrect Sinhala sentences and their corresponding correct versions, it learns to generate coherent and grammatically accurate sentences. The NMT model achieves an impressive accuracy rate of 98%, affirming its capability to produce linguistically sound translations. The proposed approach offers significant contributions to the field of SSL translation and grammar correction. By addressing the critical issue of grammar checking, it enhances the usability and reliability of SSL translation systems, facilitating effective communication between hearing-impaired users and non-sign-language users. Furthermore, the integration of deep learning techniques, such as LSTM and NMT, ensures the accuracy and robustness of the translation process. This research holds great potential for practical applications, including educational platforms, accessibility tools, and communication aids for the hearing-impaired. It also lays the foundation for future advancements in SSL translation systems, fostering inclusive and equal opportunities for the deaf community. Future work includes expanding the existing datasets to further improve the accuracy and generalization of the SSL translation system. Additionally, the development of a dedicated mobile application would enhance the accessibility and convenience of SSL translation on handheld devices. Efforts will also be made to enhance the current application for educational purposes, enabling individuals to learn and practice SSL more effectively. Another area of future exploration involves enabling two-way communication, allowing seamless interaction between sign-language users and non-sign-language users. In conclusion, this paper presents a novel approach for converting Sinhala Sign Language gestures into grammatically correct sentences using NLP techniques in real time. The two-stage methodology, comprising an LSTM model for sign detection and translation and an NMT model for grammar correction, achieves high accuracy rates of 94% and 98%, respectively. By addressing the lack of grammar checking in existing SSL translation research, this work contributes significantly to the development of more accurate and reliable SSL translation systems, thereby fostering effective communication and inclusivity for the hearing-impaired community.
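The following Keras sketch illustrates the shape of the two-stage architecture described above: an LSTM classifier that maps a sequence of keypoint frames to a sign label, and an LSTM encoder-decoder (NMT-style) that maps a token sequence to a grammatically corrected sentence. Input shapes, vocabulary sizes, and hyperparameters are assumptions for illustration and are not taken from the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Stage 1: LSTM classifier mapping a sequence of hand/pose keypoints to a sign label.
num_signs = 50                   # hypothetical number of SSL signs
timesteps, features = 30, 126    # e.g. 30 frames of 126 keypoint coordinates (assumed)
sign_model = keras.Sequential([
    layers.Input(shape=(timesteps, features)),
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(64),
    layers.Dense(num_signs, activation="softmax"),
])
sign_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])

# Stage 2: LSTM encoder-decoder (NMT-style) mapping a raw gloss sequence to a
# grammatically corrected Sinhala sentence (token IDs in, token IDs out).
vocab_in, vocab_out, embed_dim, latent_dim = 5000, 5000, 128, 256

enc_inputs = layers.Input(shape=(None,))
enc_emb = layers.Embedding(vocab_in, embed_dim)(enc_inputs)
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(enc_emb)

dec_inputs = layers.Input(shape=(None,))
dec_emb = layers.Embedding(vocab_out, embed_dim)(dec_inputs)
dec_outputs, _, _ = layers.LSTM(latent_dim, return_sequences=True,
                                return_state=True)(dec_emb, initial_state=[state_h, state_c])
dec_outputs = layers.Dense(vocab_out, activation="softmax")(dec_outputs)

nmt_model = keras.Model([enc_inputs, dec_inputs], dec_outputs)
nmt_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

In this layout, the output of the first model feeds the encoder of the second, which is the pipeline shape the abstract describes; the actual tokenization, dataset, and training schedule are not specified here.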

Keywords: Sinhala sign language, sign language, NLP, LSTM, NMT

Procedia PDF Downloads 104
177 Upgrade of Value Chains and the Effect on Resilience of Russia’s Coal Industry and Receiving Regions on the Path of Energy Transition

Authors: Sergey Nikitenko, Vladimir Klishin, Yury Malakhov, Elena Goosen

Abstract:

The transition to renewable energy sources (solar, wind, bioenergy, etc.) and the launch of alternative energy generation have weakened the role of coal as a source of energy. The Paris Agreement and the obligations assumed by many nations to reduce CO₂ emissions in an orderly manner by means of technological modernization and climate change adaptation have reduced coal demand even further. This paper aims to assess the current resilience of the coal industry to stress and to define prospects for optimizing coal production using high technologies, in line with global challenges and the requirements of the energy transition. Our research is based on the resilience concept adapted to the coal industry. It is proposed to divide the coal sector into segments depending on the prevailing value chains (VC). Four representative models of VC are identified in the coal sector. The most promising lines of upgrading VC in the coal industry include: • elongation of VC owing to the introduction of clean technologies of coal conversion and utilization; • creation of parallel VC by means of waste management; • branching of VC (conversion of a company's VC into a production network). The effectiveness of the upgrade is governed in many ways by the applicability of advanced coal processing technologies, the usability of waste, the expandability of production, entry into non-rival markets, and the localization of new VC segments in the receiving regions. It is also important that upgrading VC by forming agile, high-tech, inter-industry production networks within the framework of operating surface and underground mines can reduce the social, economic and ecological risks associated with the closure of coal mines. One such promising route of VC upgrade is the application of methanotrophic bacteria to produce protein to be used as feedstuff in fish, poultry and cattle breeding, or in the production of ferments, lipoids, sterols, antioxidants, pigments and polysaccharides. Closed mines can use recovered methane as a clean energy source. Methods exist for utilizing methane from uncontrolled sources, including preliminary treatment and recovery of methane from the air-methane mixture, or decomposition of methane into hydrogen and acetylene. The separated hydrogen is used in hydrogen fuel cells to generate power to feed the process of methane utilization and to supply external consumers. Despite the recent paradigm of carbon-free energy generation, it is possible to preserve the coal mining industry using a differentiated approach to upgrading value chains, based on flexible technologies and taking into account the specificity of individual mining companies.

Keywords: resilience, resilience concept, resilience indicator, resilience in the Russian coal industry, value chains

Procedia PDF Downloads 107
176 Story of Per-: The Radial Network of One Lithuanian Prefix

Authors: Samanta Kietytė

Abstract:

The object of this study is the set of verbal derivatives formed with the Lithuanian prefix per-. The prefix under examination can be classified as prepositional, having descended from the preposition per and thereby sharing the same prototypical meaning, the denotation of movement OVER. The two frequently co-occur within sentences (1). The aim of this paper is to conduct a semantic analysis of the prefix per- and to propose a possible radial network of its meanings; in essence, the aim is to identify the interrelationships existing between its meanings. 1) Jis peršoko per tvorą/ 3SG.NOM.M jump.PST.3 over fence.ACC.SG. /ʻHe jumped over the fenceʼ. The foundation of this work lies in the methodological and theoretical framework of cognitive linguistics. The prototypical meaning of prefixes consistently embodies spatial dimensions that can be described through image schemas. This entails the identification of the trajector, the landmark, and the relation between them in the situation described by the prefixed verb. The meanings of linguistic units are not perceived as arbitrary; rather, they are interconnected through semantic motivation. According to this perspective, a single meaning within a linguistic unit is considered prototypical, while additional meanings descend (not necessarily directly) from it. For example, one of the meanings of per-, TRANSFER (2), is derived from the prototypical meaning OVER. 2) Prašau persiųsti vadovo laišką man./ Ask.PRS.1 forward.INF manager.GEN.SG email.ACC.SG 1.SG.DAT/ ʻPlease forward the manager‘s email to meʼ. Certain semantic relations are explained by conceptual metaphor and metonymy theory. For instance, when a prefixed verb has the meaning WIN (3), it is related to the prototypical meaning: the prefixed verb describes situations of winning in various ways, and since in the prototypical meaning the trajector moves higher than the landmark, winning is metaphorically perceived as being higher. 3) Sūnus peraugo tėvą./ Son.NOM.SG outgrow.PST.3 father.ACC.SG/ ʻThe son has outgrown the fatherʼ. The data utilized for this study were collected from the 2014 grammatically annotated corpus "Lithuanian Web (LithuanianWaC v2)", consisting of 63,645,700 words. Given that the corpus is lemmatized and grammatically annotated, a list of 793 items was obtained using the wordlist function, searching for verbs beginning with per. The list included not only prefixed verbs but also other verbs whose roots begin with the same letter sequence as the prefix. In addition, words with misspellings or missing diacritical marks, as well as items resulting from lemmatization errors, were rejected, leaving a total of 475 derivatives for further analysis. The semantic analysis revealed that there are 12 distinct meanings of the prefix per-. The spatial meanings were extracted by determining what the trajector is, what the landmark is, and what the relation between them is. The connection between non-spatial meanings and spatial ones occurs through semantic motivation, established by identifying elements that correspond to the trajector and the landmark. The analysis reveals that there are no strict boundaries among these meanings; instead, they form a continuum encompassing a central core and a periphery, with an internal structure in which some derivatives are more prototypical of a particular meaning than others.
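As a rough illustration of the filtering step described above, the sketch below assumes the corpus wordlist has been exported to a plain-text file with one lemma per line; the file name and the rejection heuristics are illustrative assumptions, and manual inspection remains necessary to exclude non-prefixed verbs and lemmatization errors.

```python
import re

DIACRITICS = "ąčęėįšųūž"                       # lowercase Lithuanian diacritics
VALID = re.compile(rf"^[a-z{DIACRITICS}]+$")   # reject digits, punctuation and other stray characters

# Hypothetical export of the corpus wordlist, one lemma per line.
with open("per_verbs_wordlist.txt", encoding="utf-8") as f:
    candidates = [line.strip().lower() for line in f if line.strip()]

# Keep only well-formed lemmas that begin with the prefix under study.
per_verbs = [w for w in candidates if w.startswith("per") and VALID.match(w)]

# Manual inspection is still needed to discard verbs whose roots merely begin
# with the letters 'per', misspellings lacking diacritics, and lemmatization errors.
print(len(per_verbs), "candidate derivatives kept for semantic analysis")
```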

Keywords: word-formation, cognitive semantics, metaphor, radial networks, prototype theory, prefix

Procedia PDF Downloads 77
175 The Cultural Shift in Pre-owned Fashion as Sustainable Consumerism in Vietnam

Authors: Lam Hong Lan

Abstract:

The textile industry is said to be the second-largest polluter, responsible for 92 million tonnes of waste annually. There is an urgent need to practice the circular economy and to increase use and reuse around the world. By its nature, the pre-owned fashion business is considered part of the circular economy, as it helps to eliminate waste and keep products in circulation. In Vietnam, second-hand clothes and accessories used to be associated with a 'cheap image' that carried 'old energy'. This perception has shifted, especially amongst the younger generation. The Vietnamese consumer is spending more on products and services that increase self-esteem. The same consumer is moving away from a collectivist social identity towards a 'me, not we' outlook as they look for ways to express their individual identity. Pre-owned fashion is one of their solutions, as it offers value for money, can create a unique personal style for the wearer, and links with sustainability. The design of this study is based on second-hand shopping motivation theory. A semi-structured online survey was conducted with 100 consumers from one pre-owned clothing community and one pre-owned e-commerce site in Vietnam. The findings show that, in contrast to older Vietnamese consumers (55+ years old) who, in a previous study, generally associated pre-owned fashion with a 'low-cost', 'cheap image' that carried 'old energy', young customers (20-30 years old) actively promoted their pre-owned fashion items to the public via outlets' social platforms and their own social media. This cultural shift comes from the impact of global and local discourse around sustainable fashion and from the growth of digital platforms in the pre-owned fashion business over the last five years, which has generally supported wider interest in pre-owned fashion in Vietnam. It can be summarised in three areas: (1) Global and local celebrity influencers. A number of celebrities have been photographed wearing vintage items in music videos, photoshoots or at red carpet events. (2) E-commerce and intermediaries. International e-commerce sites (e.g., Vinted, TheRealReal) and/or local apps (e.g., Re.Loved) can influence attitudes and behaviors towards pre-owned consumption. (3) Eco-awareness. The increased online coverage of climate change and environmental pollution has encouraged customers to adopt a more eco-friendly approach to their wardrobes. While sustainable biomaterials and designs are still navigating their way into sustainability, sustainable consumerism via pre-owned fashion seems to be an immediate solution to lengthen the clothing lifecycle. This study has found that young consumers are primarily seeking value for money and/or a unique personal style from pre-owned/vintage fashion while using these purchases to promote their own 'eco-awareness' via their social media networks. This is a useful indication for fashion designers to keep in mind in their design process and for fashion enterprises when choosing business models that do not overproduce fashion items.

Keywords: cultural shift, pre-owned fashion, sustainable consumption, sustainable fashion

Procedia PDF Downloads 83
174 Women’s Experience of Managing Pre-Existing Lymphoedema during Pregnancy and the Early Postnatal Period

Authors: Kim Toyer, Belinda Thompson, Louise Koelmeyer

Abstract:

Lymphoedema is a chronic condition caused by dysfunction of the lymphatic system, which limits the drainage of fluid and tissue waste from the interstitial space of the affected body part. The normal physiological changes in pregnancy cause an increased load on a normal lymphatic system, which can result in a transient lymphatic overload (oedema). The interaction between lymphoedema and pregnancy oedema is unclear. Women with pre-existing lymphoedema require accurate information and additional strategies to manage their lymphoedema during pregnancy. Currently, no resources are available to guide women or their healthcare providers with accurate advice and additional management strategies for coping with lymphoedema during pregnancy until they have recovered postnatally. This study explored the experiences of Australian women with pre-existing lymphoedema during recent pregnancy and the early postnatal period to determine how their usual lymphoedema management strategies were adapted and what their additional or unmet needs were. Interactions with their obstetric care providers, the hospital maternity services, and usual lymphoedema therapy services were detailed. Participants were sourced from several Australian lymphoedema community groups, including therapist networks. Opportunistic sampling is appropriate for exploring this topic in a small target population, as lymphoedema in women of childbearing age is uncommon and prevalence data are unavailable. Inclusion criteria were: aged over 18 years, diagnosed with primary or secondary lymphoedema of the arm or leg, pregnant within the preceding ten years (since 2012), and pregnancy and postnatal care received in Australia. Exclusion criteria were a diagnosis of lipedema and inability to read or understand a reasonable level of English. A mixed-method qualitative design was used in two phases. This involved an online survey (REDCap platform) of the participants, followed by online semi-structured interviews or focus groups that provided the transcript data for inductive thematic analysis to gain an in-depth understanding of the issues raised. Women with well-managed pre-existing lymphoedema coped well with the additional oedema load of pregnancy; however, those with limited access to quality conservative care prior to pregnancy were found to be significantly impacted by pregnancy, with many reporting deterioration of their chronic lymphoedema. Misinformation and a lack of support increased fear and apprehension in planning and enjoying their pregnancy experience. Collaboration between maternity and lymphoedema therapy services did not occur, despite study participants suggesting it. Helpful resources and unmet needs were identified in the recent Australian context to inform further research and the development of resources to assist women with lymphoedema who are considering pregnancy or are pregnant, and their supporters, including health care providers.

Keywords: lymphoedema, management strategies, pregnancy, qualitative

Procedia PDF Downloads 85
173 Continuous Professional Development of Teachers: Implementation Mechanisms in the Republic of Kazakhstan Based on the Professional Standard 'Teacher'

Authors: Yelena Agranovich, Larissa Ageyeva, Aigul Syzdykbayeva, Violetta Tyan

Abstract:

The modernization of the education system in the Republic of Kazakhstan is aimed at improving the quality of teacher training and enhancing key competencies among teachers. The current professional standard ‘Teacher’ defines the general characteristics of teachers’ activities, key competencies, and criteria according to relevant qualification categories structured on the principle of progression, thereby enabling Continuous Professional Development (CPD). The essence of CPD lies in the constant integration of new knowledge and skills that help teachers adapt to changes in the education system, in technologies, and teaching methods. This developmental process enables teachers to stay updated on recent scientific achievements, innovations, and modern pedagogical practices. Continuous learning helps teachers remain flexible and open to new developments, creating conditions for improving educational quality and fostering students' personal growth. This study aims to address the following objectives: analysis of international CPD practices, identification of conceptual foundations, and investigation of CPD implementation mechanisms in Kazakhstan. The core principles of CPD are identified as longitudinality, systematicity, and fragmentation. CPD implementation is based on various theoretical approaches: axiological, systemic, competency-based, activity-based, and learner-centered. The study analyzes leading models of teacher CPD, with a target sample that includes countries such as Australia, Japan, South Korea, England, Singapore, Sweden, Finland, and Kazakhstan. The research methods include analysis (comparative, historical, content analysis, systematic), case studies of CPD models, and synthesis and systematization of scientific data. As research results, the mechanisms for CPD implementation in Kazakhstan will be identified, along with further perspectives on transforming resources within the teacher professional development system. In comparing CPD models from various countries, it is noted that teacher CPD in the Republic of Kazakhstan: (1) is implemented through educational programs, professional development courses, teacher certification, professional networks, in-school professional development, self-education, and self-assessment; (2) includes the development of pedagogical values and competencies (tolerance, inclusivity, communication, critical thinking, creativity, reflection, etc.); (3) is carried out based on traditional forms (professional development courses, retraining) and informal forms (self-learning, self-development, experience sharing and exchange). Further research will focus on creating a digital ecosystem for teacher CPD, based on an educational platform that facilitates individualized professional development pathways for teachers (competency diagnostics, course selection, and a methodological system of course and post-course support for teachers).

Keywords: continuous professional development, CPD models, professional development, professional upgrading, teacher, teacher training

Procedia PDF Downloads 12
172 Consensual A-Monogamous Relationships: Challenges and Ways of Coping

Authors: Tal Braverman Uriel, Tal Litvak Hirsch

Abstract:

Background and Objectives: Little or only partial emphasis has been placed on exploring the complexity of consensual non-monogamous relationships. The term "polyamory" refers to consensual non-monogamy, and it is defined as having emotional and/or sexual relations simultaneously with two or more people, with the consent and knowledge of all the partners concerned. Managing multiple romantic relationships with different people evokes more emotions, leads to more emotional conflicts arising from different interests, and demands practical strategies. An individual's transition from a monogamous lifestyle to a consensual non-monogamous lifestyle yields new challenges, accompanied by stress, uncertainty, and question marks, as do other life-changing events, such as divorce or the transition to parenthood. The study examines both the process of transition and adaptation to a consensually non-monogamous relationship and the coping mechanisms involved in the daily conduct of this lifestyle. The research focuses on understanding the consequences, challenges, and coping methods from a personal, marital, and familial point of view, drawing on 40 middle-aged individuals (20 men and 20 women, aged 40-60). The research sheds light on a way of life that has not been previously studied in Israel and is still considered unacceptable. Theories of crisis (e.g., Folkman and Lazarus) were applied, and as a result, a deeper understanding of the subject was reached, all while focusing on multiple aspects of dealing with stress. The basic research question examines the consequences of entering a polyamorous life from a personal point of view as an individual, partner, and parent, and the ways of coping with these consequences. Method: The research is conducted with a narrative qualitative approach in the interpretive paradigm, including semi-structured in-depth interviews. The method of analysis is thematic. Results: The findings indicate that in most cases, an individual's motivation to open the relationship is mainly a longing for better sexuality and for an added layer of excitement in their lives. Most of the interviewees were assisted in the process by their spouses, as well as by social networks and podcasts on the subject. Some also found therapeutic professionals in the field helpful. It also clearly emerged that even among those who experienced acute emotional crises with the primary partner or painful separations from secondary partners, all believed polyamory to be the appropriate way of life for them. Finally, a key resource for managing tension and stress is the ability to share and communicate with the primary partner. Conclusions: The study points to the challenges and benefits of a non-monogamous lifestyle, as well as the use of coping mechanisms and resources that are consistent with existing theory and research on life changes. The study indicates the need to expand the research canvas in the future in the context of parenting and the consequences for children.

Keywords: a-monogamy, consent, family, stress, tension

Procedia PDF Downloads 76
171 Changing the Landscape of Fungal Genomics: New Trends

Authors: Igor V. Grigoriev

Abstract:

Understanding the biological processes encoded in fungi is instrumental in addressing the future food, feed, and energy demands of the growing human population. Genomics is a powerful and quickly evolving tool for understanding these processes. The Fungal Genomics Program of the US Department of Energy Joint Genome Institute (JGI) partners with researchers around the world to explore fungi in several large-scale genomics projects, changing the fungal genomics landscape. The key trends of these changes include: (i) the rapidly increasing scale of sequencing and analysis, (ii) developing approaches to go beyond culturable fungi and explore fungal 'dark matter', or unculturables, and (iii) functional genomics and multi-omics data integration. The power of comparative genomics has recently been demonstrated in several JGI projects targeting mycorrhizae, plant pathogens, wood decay fungi, and sugar-fermenting yeasts. The largest JGI project, '1000 Fungal Genomes', aims to explore the diversity across the Fungal Tree of Life in order to better understand fungal evolution and to build a catalogue of genes, enzymes, and pathways for biotechnological applications. At this point, at least 65% of over 700 known families have one or more reference genomes sequenced, enabling metagenomics studies of microbial communities and their interactions with plants. For many of the remaining families, no representative species are available from culture collections. To sequence the genomes of unculturable fungi, two approaches have been developed: (a) sequencing DNA from the fruiting bodies of 'macro' fungi and (b) single-cell genomics using fungal spores. The latter has been tested using zoospores from the early diverging fungi and has resulted in several near-complete genomes from underexplored branches of the Fungal Tree, including the first genomes of Zoopagomycotina. A genome sequence serves as a reference for transcriptomics studies, the first step towards functional genomics. In the JGI fungal mini-ENCODE project, transcriptomes of the model fungus Neurospora crassa grown on a spectrum of carbon sources have been collected to build regulatory gene networks. Epigenomics is another tool for understanding gene regulation, and recently introduced single-molecule sequencing platforms not only provide better genome assemblies but can also detect DNA modifications. For example, the 6mC methylome was surveyed across many diverse fungi, and the highest levels of 6mC methylation among Eukaryota have been reported. Finally, data production at such a scale requires data integration to enable efficient analysis. Over 700 fungal genomes and other -omes have been integrated into the JGI MycoCosm portal and equipped with comparative genomics tools, enabling researchers to address a broad spectrum of biological questions and applications for bioenergy and biotechnology.

Keywords: fungal genomics, single cell genomics, DNA methylation, comparative genomics

Procedia PDF Downloads 208
170 Comparison of GIS-Based Soil Erosion Susceptibility Models Using Support Vector Machine, Binary Logistic Regression and Artificial Neural Network in the Southwest Amazon Region

Authors: Elaine Lima Da Fonseca, Eliomar Pereira Da Silva Filho

Abstract:

The modeling of areas susceptible to soil loss by hydro-erosive processes is a simplified representation of reality intended to predict future behavior from the observation and interaction of a set of geoenvironmental factors. The models of potential areas for soil loss will be obtained through binary logistic regression, artificial neural networks, and support vector machines. The choice of the municipality of Colorado do Oeste, in the south of the western Amazon, is motivated by soil degradation caused by anthropogenic activities, such as agriculture, road construction, overgrazing, and deforestation, and by its environmental and socioeconomic configuration. Initially, a soil erosion inventory map will be constructed through various field investigations, including the use of remotely piloted aircraft, orbital imagery, and the PLANAFLORO/RO database. One hundred sampling units with the presence of erosion will be selected based on the assumptions indicated in the literature, and, to complement the dichotomous analysis, 100 units with no erosion will be randomly designated. The next step will be the selection of the predictive parameters that exert, jointly, directly, or indirectly, some influence on the mechanism of occurrence of soil erosion events. The chosen predictors are altitude, slope gradient (declivity), aspect or orientation of the slope, curvature of the slope, composite topographic index, stream power index, lineament density, normalized difference vegetation index, drainage density, lithology, soil type, erosivity, and ground surface temperature. After evaluating the relative contribution of each predictor variable, the erosion susceptibility model will be applied to the municipality of Colorado do Oeste, Rondônia, using the SPSS Statistics 26 software. Evaluation of the model will be based on the Cox & Snell R², the Nagelkerke R², the Hosmer-Lemeshow test, the log-likelihood value, and the Wald test, in addition to analysis of the confusion matrix, the ROC curve, and the cumulative gain chart, according to the model specification. The validation of the synthesis map of potential soil erosion risk resulting from the models will be carried out by means of Kappa indices, accuracy, and sensitivity, as well as by field verification of the classes of susceptibility to erosion using drone photogrammetry. Thus, it is expected to obtain a map of the following classes of susceptibility to erosion: very low, low, moderate, high, and very high, which may constitute a screening tool to identify areas where more detailed investigations need to be carried out, enabling a more efficient application of resources.
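A minimal scikit-learn sketch of the planned three-way model comparison is given below. The study itself uses SPSS and field-verified inventories; here the predictors and the presence/absence labels are random placeholders, and only the evaluation workflow (fit each classifier, then report AUC, Kappa, and accuracy on held-out samples) is illustrated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 100 erosion and 100 non-erosion sampling units, each described by a handful of the
# geoenvironmental predictors listed above (values here are random placeholders).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))     # e.g. altitude, slope, aspect, NDVI, drainage density, CTI (assumed)
y = np.concatenate([np.ones(100), np.zeros(100)])   # 1 = erosion present, 0 = absent

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    stratify=y, random_state=42)

models = {
    "binary logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "artificial neural network": make_pipeline(StandardScaler(),
                                               MLPClassifier(hidden_layer_sizes=(16,),
                                                             max_iter=2000, random_state=0)),
    "support vector machine": make_pipeline(StandardScaler(), SVC(probability=True)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    prob = model.predict_proba(X_test)[:, 1]      # susceptibility score in [0, 1]
    pred = (prob >= 0.5).astype(int)
    print(f"{name}: AUC={roc_auc_score(y_test, prob):.2f} "
          f"kappa={cohen_kappa_score(y_test, pred):.2f} "
          f"accuracy={accuracy_score(y_test, pred):.2f}")
```

The continuous susceptibility scores could then be binned into the five ordinal classes mentioned above before mapping and field validation.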

Keywords: modeling, susceptibility to erosion, artificial intelligence, Amazon

Procedia PDF Downloads 66
169 Existential Affordances and Psychopathology: A Gibsonian Analysis of Dissociative Identity Disorder

Authors: S. Alina Wang

Abstract:

A Gibsonian approach is used to understand the existential dimensions of the human ecological niche. This existential-Gibsonian framework is then applied to rethinking Hacking's historical analysis of multiple personality disorder. The research culminates in a generalized account of psychiatric illness through an enactivist lens. In conclusion, reflections on the implications of this account for approaches to psychiatric treatment are offered. J.J. Gibson's theory of affordances centered on affordances of the sensorimotor variety, which guide basic behaviors relative to organisms' vital needs and physiological capacities (1979). Later theorists, notably Neisser (1988) and Rietveld (2014), expanded the theory of affordances to account for uniquely human activities relative to the emotional, intersubjective, cultural, and narrative aspects of the human ecological niche. This research shows that these affordances are structured by what Haugeland (1998) calls existential commitments, a notion that draws on Heidegger's concept of Dasein (1927) and Merleau-Ponty's account of existential freedom (1945). These commitments organize the existential affordances that fill an individual's environment and guide their thoughts, emotions, and behaviors. This system of a priori existential commitments and a posteriori affordances is called existential enactivism. For humans, affordances do not only elicit motor responses and appear as objects with instrumental significance. Affordances also, and possibly primarily, determine so-called affective and cognitive activities and structure the wide range of kinds of significance (e.g., instrumental, aesthetic, ethical) that objects in the world can have. Existential enactivism is then applied to understanding the psychiatric phenomenon of multiple personality disorder (the precursor of the current diagnosis of dissociative identity disorder). A reinterpretation of Hacking's (1998) insights into the history of this particular disorder and of his generalizations on the constructed nature of most psychiatric illness is undertaken. Enactivist approaches sensitive to existential phenomenology can provide a deeper understanding of these matters. Conceptualizing psychiatric illness as strictly a disorder in the head (whether parsed as a disorder of brain chemicals or of meaning-making capacities encoded in psychological modules) is incomplete. Rather, psychiatric illness must also be understood as a disorder in the world, that is, in the interconnected networks of existential affordances that regulate one's emotional, intersubjective, and narrative capacities. All of this suggests that an adequate account of psychiatric illness must involve (1) the affordances that are the sources of existential hindrance, (2) the existential commitments structuring these affordances, and (3) the conditions of these existential commitments. Approaches to the treatment of psychiatric illness would be more effective if they centered on interrupting the normalized behaviors corresponding to affordances identified as sources of hindrance, developing new existential commitments, and practicing new behaviors that erect affordances relative to these reformed commitments.

Keywords: affordance, enaction, phenomenology, psychiatry, psychopathology

Procedia PDF Downloads 137
168 Verification of Geophysical Investigation during Subsea Tunnelling in Qatar

Authors: Gary Peach, Furqan Hameed

Abstract:

The Musaimeer outfall tunnel is one of the longest storm water tunnels in the world, with a total length of 10.15 km. The tunnel will accommodate surface and rain water received from the drainage networks of 270 km of urban areas in southern Doha, with a pumping capacity of 19.7 m³/sec. The tunnel is excavated by a Tunnel Boring Machine (TBM) through the Rus Formation, Midra Shales, and Simsima Limestone. Water inflows at high pressure, complex mixed ground, and weaker ground strata prone to karstification, with vertical and lateral fractures connected to the sea bed, were also encountered during mining. In addition to the pre-tender geotechnical investigations, the Contractor carried out a supplementary offshore geophysical investigation in order to fine-tune the existing results of the geophysical and geotechnical investigations. Electrical resistivity tomography (ERT) and seismic reflection surveys were carried out. The offshore geophysical survey was performed, and interpretations of rock mass conditions were made to provide an overall picture of underground conditions along the tunnel alignment. This allowed the critical tunnelling areas and cutter head interventions to be planned accordingly. Karstification was monitored with a non-intrusive radar system installed on the TBM. The Boring Electric Ahead Monitoring (BEAM) system was installed at the cutter head and was able to predict the rock mass up to three tunnel diameters ahead of the cutter head. The BEAM system was provided with an online facility for real-time monitoring of rock mass conditions, which were then correlated with the rock mass conditions predicted during the interpretation phase of the offshore geophysical surveys. Further correlation was carried out using samples of the rock mass taken from tunnel face inspections and excavated material produced by the TBM. The BEAM data were continuously monitored to check the variations in resistivity and percentage frequency effect (PFE) of the ground. This system provided information about rock mass conditions, potential karst risk, and the potential for water inflow. The BEAM system was found to be more than 50% accurate in picking up the difficult ground conditions and faults predicted in the geotechnical interpretative report before the start of tunnelling operations. Upon completion of the project, it was concluded that the combined use of different geophysical investigation results allows the execution stage to be carried out with greater confidence and less geotechnical risk. The approach used for predicting rock mass conditions in the Geotechnical Interpretative Report (GIR), the seismic reflection surveys, and the electrical resistivity tomography (ERT) surveys was concluded to be reliable, as the same rock mass conditions were encountered during tunnelling operations.
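As a simple illustration of the kind of real-time check described above, the sketch below flags a warning whenever the latest resistivity reading drops sharply below a rolling baseline or the percentage frequency effect (PFE) departs from it, conditions that may indicate karst voids or water-bearing ground ahead of the cutter head. The thresholds, window size, and reading format are assumptions and do not reflect BEAM's actual processing.

```python
from collections import deque
from statistics import mean

WINDOW = 20                 # number of recent readings kept as a rolling baseline (assumed)
RESISTIVITY_DROP = 0.5      # warn if resistivity falls below 50% of the baseline (assumed)
PFE_DEVIATION = 5.0         # warn if PFE deviates more than 5 points from the baseline (assumed)


def monitor(readings):
    """readings: iterable of (chainage_m, resistivity_ohm_m, pfe_percent) tuples."""
    res_hist, pfe_hist = deque(maxlen=WINDOW), deque(maxlen=WINDOW)
    for chainage, resistivity, pfe in readings:
        if len(res_hist) == WINDOW:
            res_base, pfe_base = mean(res_hist), mean(pfe_hist)
            if resistivity < RESISTIVITY_DROP * res_base:
                print(f"{chainage:.1f} m: resistivity drop "
                      f"({resistivity:.0f} vs baseline {res_base:.0f} ohm.m) - possible water inflow")
            if abs(pfe - pfe_base) > PFE_DEVIATION:
                print(f"{chainage:.1f} m: PFE anomaly ({pfe:.1f}% vs baseline {pfe_base:.1f}%) "
                      "- possible karst feature ahead")
        res_hist.append(resistivity)
        pfe_hist.append(pfe)


# Example usage: monitor(stream_of_readings), where readings arrive from the logger in advance order.
```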

Keywords: tunnel boring machine (TBM), subsea, karstification, seismic reflection survey

Procedia PDF Downloads 244