Search results for: sequence evolution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2990

110 Algorithmic Obligations: Proactive Liability for AI-Generated Content and Copyright Compliance

Authors: Aleksandra Czubek

Abstract:

As AI systems increasingly shape content creation, existing copyright frameworks face significant challenges in determining liability for AI-generated outputs. Current legal discussions largely focus on who bears responsibility for infringing works, be it developers, users, or entities benefiting from AI outputs. This paper introduces a novel concept of algorithmic obligations, proposing that AI developers be subject to proactive duties that ensure their models prevent copyright infringement before it occurs. Building on principles of obligations law traditionally applied to human actors, the paper suggests a shift from reactive enforcement to proactive legal requirements. AI developers would be legally mandated to incorporate copyright-aware mechanisms within their systems, turning optional safeguards into enforceable standards. These obligations could vary in implementation across international, EU, UK, and U.S. legal frameworks, creating a multi-jurisdictional approach to copyright compliance. This paper explores how the EU’s existing copyright framework, exemplified by the Copyright Directive (2019/790), could evolve to impose a duty of foresight on AI developers, compelling them to embed mechanisms that prevent infringing outputs. By drawing parallels to GDPR’s “data protection by design,” a similar principle could be applied to copyright law, where AI models are designed to minimize copyright risks. In the UK, post-Brexit text and data mining exemptions are seen as pro-innovation but pose risks to copyright protections. This paper proposes a balanced approach, introducing algorithmic obligations to complement these exemptions. AI systems benefiting from text and data mining provisions should integrate safeguards that flag potential copyright violations in real time, ensuring both innovation and protection. In the U.S., where copyright law focuses on human-centric works, this paper suggests an evolution toward algorithmic due diligence. 
AI developers would have a duty similar to product liability, ensuring that their systems do not produce infringing outputs, even if the outputs themselves cannot be copyrighted. This framework introduces a shift from post-infringement remedies to preventive legal structures, where developers actively mitigate risks. The paper also breaks new ground by addressing obligations surrounding the training data of large language models (LLMs). Currently, training data is often treated under exceptions such as the EU’s text and data mining provisions or U.S. fair use. However, this paper proposes a proactive framework where developers are obligated to verify and document the legal status of their training data, ensuring it is licensed or otherwise cleared for use. In conclusion, this paper advocates for an obligations-centered model that shifts AI-related copyright law from reactive litigation to proactive design. By holding AI developers to a heightened standard of care, this approach aims to prevent infringement at its source, addressing both the outputs of AI systems and the training processes that underlie them.

Keywords: ip, technology, copyright, data, infringement, comparative analysis

109 The Direct and Indirect Effects of Buddhism on Fertility Rates in General and in Specific Socioeconomic Circumstances of Women

Authors: Szerena Vajkovszki

Abstract:

Our worldwide aging society, especially in developed countries, including EU members, raises complex sociological and economic issues and challenges. As declining fertility is a key driver of this trend, numerous studies have attempted to identify, describe, measure and interpret factors contributing to the fertility rate, of which relatively few have examined the impact of religion. More than a dozen influential factors affecting the birth rate have been identified in the scientific literature; religious beliefs, traditions, and cultural norms were examined first, with a special focus on abortion and forms of birth control. Nevertheless, where religion is concerned, not only these topics are crucial to fertility but many others as well. Among the many religious guidelines, two major categories can be separated: direct and indirect. The aim of this research was to understand which are the most influential contributing religious factors, both those already identified (family values, gender-related behaviors, religious sentiments) and those not yet identified. Beyond identifying these direct and indirect factors, it is also important to understand to what extent and how they influence fertility, which requires a wider, interdisciplinary perspective. As previous studies have shown, religion also influences health, mental state, well-being, working activity and many other components that are in turn related to fertility rates. All these components are interrelated; hence, direct and indirect religious effects can only be well understood if all the relevant fields and their interactions are mapped. With the help of semi-structured, open-ended interviews conducted in different countries, it was shown that Buddhism indeed has significant direct and indirect effects on fertility, confirming the initial hypothesis.
However, because the interviews showed only an overall positive effect, the results can serve merely as a general understanding of how Buddhism affects fertility. The evolution of Buddhism’s direct and indirect influence may vary across nations and circumstances according to their specific environmental attributes. Depending on local patterns, with special regard to women’s position and role in society, indirect influences in particular could diverge. It is therefore advisable to investigate further for a deeper and clearer understanding of how Buddhism functions in different socioeconomic circumstances. For this purpose, a specific and detailed analysis was developed from recent related research on women’s position (including family roles and economic activity) in Hungary, with the intention of building a comprehensive view of the crucial socioeconomic factors influencing fertility. Further interviews and investigations are to be done in order to form a complete picture of Buddhism’s direct and indirect effects on fertility in Hungary, so as to support recommendations and social policies aimed at higher fertility rates. The present research could serve as a general starting point or a common basis for further country-specific investigations.

Keywords: Buddhism, children, fertility, gender roles, religion, women

108 Applying Napoleoni's 'Shell-State' Concept to Jihadist Organisations' Rise in Mali, Nigeria and Syria/Iraq, 2011-2015

Authors: Francesco Saverio Angiò

Abstract:

The Islamic State of Iraq and the Levant/Syria (ISIL/S), Al-Qaeda in the Islamic Maghreb (AQIM) and the People Committed to the Propagation of the Prophet's Teachings and Jihad, also known as ‘Boko Haram’ (BH), have fought successfully against the governments of Syria and Iraq, Mali, and Nigeria, respectively. According to Napoleoni, the ‘shell-state’ concept can explain the economic dimension and the financing model of the ISIL insurgency. However, she argues that AQIM and BH did not properly plan their financial models; consequently, her idea would not apply to these groups. Nevertheless, AQIM and BH’s economic performance and their (short-lived) territorialisation suggest that their financing models respond to a well-defined strategy, which they were able to adapt to new circumstances. Therefore, Napoleoni’s idea of the ‘shell-state’ can be applied to all three jihadist armed groups. In the last five years, together with other similar entities, ISIL/S, AQIM and BH have been fighting against governments with insurgent tactics and acts of terrorism, conquering and ruling quasi-states: physical spaces they presented as legitimate territorial entities, legitimised by a puritan version of Islamic law. In these territories, they have exploited traditional local economic networks. In addition, they have contributed to the development of legal and illegal transnational business activities. They have also established a justice system and created an administrative structure to supply services. Napoleoni’s ‘shell-state’ can describe the evolution of ISIL/S, AQIM and BH, which have switched from insurgencies to proto- or quasi-state entities enjoying a significant share of power over territories and populations. Napoleoni first developed and applied the ‘shell-state’ concept to describe the nature of groups such as the Palestine Liberation Organisation (PLO), before using it to explain the expansion of ISIL.
However, her original conceptualisation emphasises the economic dimension of the rise of the insurgency, focusing on the ‘business’ model and the insurgents’ financing-management skills, which permit them to turn into an organisation. Yet the idea of groups that use, coordinate and seize territorial economic activities (while encouraging new criminal ones) can also be applied to the administrative, social, infrastructural, legal and military levels of their insurgency, since these contribute to transforming the insurgency to the same extent the economic dimension does. In addition, in Napoleoni’s view, the ‘shell-state’ prism is valid for understanding the ISIL/S phenomenon because the group has carefully planned its financial steps. Napoleoni affirmed that ISIL/S carries out activities in order to promote its conversion from a group relying on external sponsors to an entity that can penetrate and condition local economies. By contrast, the ‘shell-state’ could supposedly not be applied to AQIM or BH, which act more like smugglers. Nevertheless, despite their failure to control territories as ISIL has been able to do, AQIM and BH have responded strategically to their economic circumstances and have defined specific dynamics to ensure a stable flow of funds. Therefore, Napoleoni’s theory is applicable.

Keywords: shell-state, jihadist insurgency, proto or quasi-state entity, economic planning, strategic financing

107 Finite Element Method (FEM) Simulation, Design and 3D Print of a Novel Highly Integrated PV-TEG Device with Improved Solar Energy Harvest Efficiency

Authors: Jaden Lu, Olivia Lu

Abstract:

Despite the remarkable advancement of solar cell technology, the challenge of optimizing total solar energy harvest efficiency persists, primarily due to significant heat loss. This excess heat not only diminishes solar panel output efficiency but also curtails its operational lifespan. A promising approach to address this issue is the conversion of surplus heat into electricity. In recent years, there is growing interest in the use of thermoelectric generators (TEG) as a potential solution. The integration of efficient TEG devices holds the promise of augmenting overall energy harvest efficiency while prolonging the longevity of solar panels. While certain research groups have proposed the integration of solar cells and TEG devices, a substantial gap between conceptualization and practical implementation remains, largely attributed to low thermal energy conversion efficiency of TEG devices. To bridge this gap and meet the requisites of practical application, a feasible strategy involves the incorporation of a substantial number of p-n junctions within a confined unit volume. However, the manufacturing of high-density TEG p-n junctions presents a formidable challenge. The prevalent solution often leads to large device sizes to accommodate enough p-n junctions, consequently complicating integration with solar cells. Recently, the adoption of 3D printing technology has emerged as a promising solution to address this challenge by fabricating high-density p-n arrays. Despite this, further developmental efforts are necessary. Presently, the primary focus is on the 3D printing of vertically layered TEG devices, wherein p-n junction density remains constrained by spatial limitations and the constraints of 3D printing techniques. This study proposes a novel device configuration featuring horizontally arrayed p-n junctions of Bi2Te3. The structural design of the device is subjected to simulation through the Finite Element Method (FEM) within COMSOL Multiphysics software. 
Various device configurations are simulated to identify the optimal device structure. Based on the simulation results, a new TEG device is fabricated using selective laser melting (SLM) 3D printing technology. Fusion 360 facilitates the translation of the COMSOL device structure into a 3D print file. The horizontal design offers a unique advantage, enabling the fabrication of densely packed, three-dimensional p-n junction arrays. The fabrication process entails printing a single row of horizontal p-n junctions with the SLM technique in one layer. Successive rows of p-n junction arrays are then printed within the same layer, interconnected by thermally conductive copper. This sequence is replicated across multiple layers, separated by thermally insulating glass. The result is a highly compact three-dimensional TEG device with a high density of p-n junctions. The fabricated TEG device is then attached to the bottom of the solar cell with thermal glue. The whole device is characterized, with output data closely matching the COMSOL simulation results. Future research will encompass the refinement of thermoelectric materials, including the advancement of high-resolution 3D printing techniques tailored to diverse thermoelectric materials, along with the optimization of material microstructures such as porosity and doping. The objective is an optimal, highly integrated PV-TEG device that can substantially increase solar energy harvest efficiency.
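The series-connected p-n junction design described above can be illustrated with a back-of-envelope thermoelectric model. The sketch below is not the paper's COMSOL simulation; it only shows the standard first-order relations (open-circuit voltage proportional to junction count and temperature difference, maximum power at matched load), and all numeric values are illustrative assumptions.

```python
# Hedged first-order model of a thermoelectric generator (TEG):
# open-circuit voltage and matched-load power for N series p-n junctions.
# All numbers below are illustrative assumptions, not the paper's data.

def teg_open_circuit_voltage(n_junctions, seebeck_pn, delta_t):
    """V_oc = N * S_pn * dT, where S_pn is the combined p-n Seebeck coefficient (V/K)."""
    return n_junctions * seebeck_pn * delta_t

def teg_matched_load_power(v_oc, internal_resistance):
    """Maximum power transfer into a matched load: P = V_oc^2 / (4 * R_internal)."""
    return v_oc ** 2 / (4.0 * internal_resistance)

# Example: 200 junctions, an assumed 400 uV/K combined Seebeck coefficient,
# and a 30 K gradient across the device with 5 ohm internal resistance.
v_oc = teg_open_circuit_voltage(200, 400e-6, 30.0)   # -> 2.4 V
p_max = teg_matched_load_power(v_oc, 5.0)            # -> 0.288 W
```

Such a relation makes clear why packing more junctions into a confined volume, as the horizontal array design aims to do, directly raises the attainable output voltage.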

Keywords: thermoelectric, finite element method, 3d print, energy conversion

106 Thermal Ageing of a 316 Nb Stainless Steel: From Mechanical and Microstructural Analyses to Thermal Ageing Models for Long Time Prediction

Authors: Julien Monnier, Isabelle Mouton, Francois Buy, Adrien Michel, Sylvain Ringeval, Joel Malaplate, Caroline Toffolon, Bernard Marini, Audrey Lechartier

Abstract:

Chosen for the design and assembly of massive components in the nuclear industry, the 316 Nb austenitic stainless steel (also called 316 Nb) suits this function well thanks to its mechanical, heat and corrosion resistance properties. However, these properties might change during the steel's life due to thermal ageing, which causes changes within its microstructure. Our main purpose is to determine whether 316 Nb will keep its mechanical properties after exposure to industrial temperatures (around 300 °C) over a long period of time (under 10 years). 316 Nb is composed of different phases: austenite as the main phase, niobium carbides, and ferrite remaining from the ferrite-to-austenite transformation during processing. Our purpose is to understand the effects of thermal ageing on the material's microstructure and properties and to propose a model predicting the evolution of 316 Nb properties as a function of temperature and time. To do so, based on the Fe-Cr and 316 Nb phase diagrams, we studied the thermal ageing of 316 Nb steel alloys (1 vol% of ferrite) and welds (10 vol% of ferrite) at various temperatures (350, 400, and 450 °C) and ageing times (from 1 to 10,000 hours). Higher temperatures were chosen to shorten the thermal treatments by exploiting the kinetic effect of temperature on 316 Nb ageing without modifying the reaction mechanisms. Our results from early ageing times show no effect on the steel's global properties linked to austenite stability, but an increase in ferrite hardness during thermal ageing has been observed. It has been shown that austenite's crystalline structure (fcc) grants it thermal stability, whereas the ferrite's crystalline structure (bcc) favours iron-chromium demixing and the formation of iron-rich and chromium-rich phases within the ferrite. Observations of thermal ageing effects on the ferrite's microstructure were necessary to understand the changes caused by the thermal treatment.
Analyses were performed using techniques such as Atom Probe Tomography (APT) and Differential Scanning Calorimetry (DSC). Demixing of the alloy's elements, leading to the formation of iron-rich (α phase, bcc structure), chromium-rich (α’ phase, bcc structure), and nickel-rich (fcc structure) phases within the ferrite, has been observed and associated with the increase in the ferrite's hardness. APT results provide information about the phases' volume fractions and compositions, allowing us to associate hardness measurements with the volume fractions of the different phases and to set up a way to calculate the growth rate of α’ and nickel-rich particles as a function of temperature. The same methodology has been applied to the DSC results, which allowed us to measure the enthalpy of α’ phase dissolution between 500 and 600 °C. To summarize, we started from mechanical and macroscopic measurements and explained the results through a microstructural study. The data obtained have been matched against CALPHAD model predictions and used to improve these calculations and employ them to predict the change in 316 Nb properties during the industrial process.
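The rationale for ageing at 350 to 450 °C to predict behaviour at around 300 °C is the usual Arrhenius acceleration argument: if the mechanism is unchanged, a higher temperature compresses the timescale by a computable factor. The sketch below shows that standard calculation; the activation energy used is an illustrative assumption, not a value from the study.

```python
import math

# Hedged sketch of Arrhenius-based accelerated-ageing time compression.
# AF = exp(Ea/k * (1/T_service - 1/T_test)); an equivalent service time
# is the test time multiplied by AF. Ea below is an assumed placeholder.

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_acceleration_factor(ea_ev, t_service_c, t_test_c):
    """Acceleration factor of a test at t_test_c relative to service at t_service_c."""
    t_service_k = t_service_c + 273.15
    t_test_k = t_test_c + 273.15
    return math.exp(ea_ev / K_B_EV * (1.0 / t_service_k - 1.0 / t_test_k))

# Example with an assumed 2 eV activation energy: ageing at 400 C vs 300 C service.
af = arrhenius_acceleration_factor(2.0, 300.0, 400.0)
# 1,000 h in the 400 C furnace then stands in for roughly 1,000 * af hours at 300 C,
# provided the demixing mechanism is the same at both temperatures.
```

The caveat in the comment matters: the whole approach rests on the study's stated assumption that raising the temperature changes kinetics but not the reaction mechanism.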

Keywords: stainless steel characterization, atom probe tomography APT, vickers hardness, differential scanning calorimetry DSC, thermal ageing

105 Unleashing the Power of Cerebrospinal System for a Better Computer Architecture

Authors: Lakshmi N. Reddi, Akanksha Varma Sagi

Abstract:

Biomimetic studies are now well developed, deriving inspiration from natural processes in the objective world to develop novel technologies. Recent studies are diverse in nature, making their categorization quite challenging. Based on an exhaustive survey, we developed categorizations based either on the essential elements of nature - air, water, land, fire, and space - or on form/shape, functionality, and process. Studies as diverse as aircraft wings inspired by bird wings, a self-cleaning coating inspired by a lotus petal, wetsuits inspired by beaver fur, and search algorithms inspired by arboreal ant path networks lend themselves to these categorizations. Our categorizations of biomimetic studies allowed us to define a different dimension of biomimetics. This new dimension is not restricted to inspiration from the objective world. It is based on the premise that the biological processes observed in the objective world find their reflections in our human bodies in a variety of ways. For example, the lungs provide the most efficient example of liquid-gas phase exchange, the heart exemplifies a very efficient pumping and circulatory system, and the kidneys epitomize a most effective cleaning system. The main focus of this paper is to bring out the magnificence of the cerebrospinal system (CSS) insofar as it relates to current computer architecture. In particular, the paper uses four key measures to analyze the differences between the CSS and human-engineered computational systems: adaptability, sustainability, energy efficiency, and resilience. We found that the cerebrospinal system reveals some important challenges in the development and evolution of our current computer architectures.
In particular, the myriad ways in which the CSS is integrated with other systems and processes (circulatory, respiratory, etc.) offer useful insights into how human-engineered computational systems could be made more sustainable, energy-efficient, resilient, and adaptable. In our paper, we highlight the energy consumption differences between the CSS and current computational designs. Apart from the obvious differences in the materials used, the systemic nature of how the CSS functions provides clues for enhancing the life-cycles of our current computational systems. The rapid formation of and changes in the physiology of dendritic spines, and their synaptic plasticity causing memory changes (e.g., long-term potentiation and long-term depression), allowed us to formulate differences in the adaptability and resilience of the CSS. In addition, the CSS is sustained by the integrative functions of various organs, and its robustness comes from its interdependence with the circulatory system. The paper documents and analyzes quantifiable differences between the two in terms of the four measures. Our analyses point out possibilities for the development of computational systems that are more adaptable, sustainable, energy-efficient, and resilient. The paper concludes with potential approaches for technological advancement through the creation of more interconnected and interdependent systems that replicate the effective operation of the cerebrospinal system.

Keywords: cerebrospinal system, computer architecture, adaptability, sustainability, resilience, energy efficiency

104 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning

Authors: Ali Kazemi

Abstract:

The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated vulnerability to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, enabling immediate action to prevent financial loss. The data used in this study comprised several features. First, the monetary value of each transaction: this is a crucial feature, as fraudulent transactions may follow different amount distributions than legitimate ones. Second, timestamps indicating when transactions occurred: analyzing transactions' temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night). Third, the sector or category of the merchant where the transaction occurred, such as retail, groceries, or online services: specific categories may be more prone to fraud. Finally, the type of payment used (e.g., credit, debit, online payment systems): different payment methods carry varying levels of fraud risk. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies.
The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.
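The isolation-forest side of the pipeline described above can be sketched in a few lines with scikit-learn. The snippet below uses synthetic transaction amounts rather than the study's anonymized dataset, and a single feature for clarity; the contamination value is an illustrative assumption.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hedged sketch of the unsupervised flagging step: an isolation forest
# scores transactions and marks the most easily isolated ones as anomalies.
# The data below is synthetic, for illustration only.
rng = np.random.default_rng(42)
amounts = rng.normal(loc=50.0, scale=10.0, size=(99, 1))  # typical transactions
amounts = np.vstack([amounts, [[5000.0]]])                # one injected fraud-like outlier

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(amounts)   # -1 = anomaly, 1 = normal

flagged = np.where(labels == -1)[0]   # indices of transactions to review
```

In a real-time deployment, `fit` would run on historical data and `predict` on each incoming transaction, which is what allows the immediate action the abstract describes.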

Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis

103 Interpretable Deep Learning Models for Medical Condition Identification

Authors: Dongping Fang, Lian Duan, Xiaojing Yuan, Mike Xu, Allyn Klunder, Kevin Tan, Suiting Cao, Yeqing Ji

Abstract:

Accurate prediction of a medical condition with clear clinical evidence is a long-sought goal in the medical management and health insurance fields. Although great progress has been made with machine learning algorithms, the medical community remains, to a certain degree, suspicious of models' accuracy and interpretability. This paper presents an innovative hierarchical attention deep learning model that achieves good prediction and clear interpretability that can be easily understood by medical professionals. The deep learning model uses a hierarchical attention structure that matches the medical history data structure naturally and reflects the member’s encounter (date of service) sequence. The model's attention structure consists of three levels: (1) attention on the medical code types (diagnosis codes, procedure codes, lab test results, and prescription drugs), (2) attention on the sequential medical encounters within a type, and (3) attention on the medical codes within an encounter and type. The model is applied to predict the occurrence of stage 3 chronic kidney disease (CKD3), using three years of medical history of Medicare Advantage (MA) members from a top health insurance company. The model takes members’ medical events, both claims and electronic medical record (EMR) data, as input, makes a prediction of CKD3, and calculates the contribution of individual events to the predicted outcome. The model outcome can be easily explained with the clinical evidence identified by the model algorithm. For example: Member A had 36 medical encounters in the past three years: multiple office visits, lab tests, and medications. The model predicts that member A has a high risk of CKD3, with the following strongly contributing clinical events: multiple high ‘Creatinine in Serum or Plasma’ tests and multiple low kidney-function ‘Glomerular filtration rate’ tests. Among the abnormal lab tests, more recent results contributed more to the prediction.
The model also indicates that regular office visits, no abnormal findings in medical examinations, and taking proper medications decreased the CKD3 risk. Member B had 104 medical encounters in the past three years and was predicted to have a low risk of CKD3, because the model did not identify diagnoses, procedures, or medications related to kidney disease, and many lab test results, including ‘Glomerular filtration rate’, were within the normal range. The model accurately predicts members A and B and provides interpretable clinical evidence that is validated by clinicians. Without extra effort, the interpretation is generated directly from the model and presented together with the occurrence date. Our model uses the medical data in its rawest format, without any further aggregation, transformation, or mapping. This greatly simplifies the data preparation process, mitigates the chance of error, and eliminates the post-modeling work needed for traditional model explanation. To our knowledge, this is the first paper on an interpretable deep learning model that uses a 3-level attention structure, sources both EMR and claims data, includes all four types of medical data, covers the entire Medicare population of a large insurance company, and, more importantly, directly generates model interpretation to support user decisions. In the future, we plan to enrich the model input by adding patients’ demographics and information from free-text physician notes.
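One level of the hierarchical attention described above can be sketched with plain numpy: codes within an encounter are pooled into an encounter vector via softmax attention, and the attention weights themselves are the per-event "contribution" scores used for interpretation. The embeddings and query below are toy values, not the paper's learned parameters.

```python
import numpy as np

# Hedged numpy sketch of one attention level from the hierarchy above.

def softmax(x):
    e = np.exp(x - x.max())   # subtract max for numerical stability
    return e / e.sum()

def attention_pool(code_vectors, context):
    """Pool code embeddings (n_codes, d) into one vector via a context query (d,).

    Returns the pooled vector and the attention weights, which double as
    per-code contribution scores for interpretation.
    """
    scores = code_vectors @ context   # relevance of each code to the query
    weights = softmax(scores)         # normalized contributions, sum to 1
    return weights @ code_vectors, weights

# Toy encounter with 3 medical-code embeddings (dimension d = 4).
codes = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.9, 0.1, 0.0, 0.0]])
query = np.array([1.0, 0.0, 0.0, 0.0])
encounter_vec, w = attention_pool(codes, query)
# Codes aligned with the query receive the largest weights, which is what
# lets the model point at specific clinical evidence (e.g., a lab test).
```

Stacking this pooling step three times, over codes, then encounters, then code types, yields the 3-level structure the abstract describes.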

Keywords: deep learning, interpretability, attention, big data, medical conditions

102 Agri-Food Transparency and Traceability: A Marketing Tool to Satisfy Consumer Awareness Needs

Authors: Angelo Corallo, Maria Elena Latino, Marta Menegoli

Abstract:

The link between man and food plays a central role in the social and economic system, where cultural and multidisciplinary aspects intertwine: food is not only nutrition, but also communication, culture, politics, environment, science, ethics, fashion. This multi-dimensionality has many implications for the food economy. In recent years, the consumer has become more conscious about food choices, bringing a consistent change in consumption models. This change concerns several aspects: awareness of food system issues, socially and environmentally conscious decision-making, and food choices based on characteristics other than nutritional ones, i.e., the origin of food, how it is produced, and who produces it. In this frame, consumption choices and the interests of the citizen become intertwined. The figure of the ‘Citizen Consumer’ is born: an individual responsible and ethically motivated enough to change his lifestyle in pursuit of sustainable consumption. At the same time, branding, which once guaranteed product quality, is now being questioned. In order to meet these needs, Agri-Food companies are developing specific product lines that follow two main philosophies: ‘Back to basics’ and ‘Less is more’. However, the demand for ethical behavior does not yet seem to find an adequate offer on the market, most likely due to a lack of attention to the communication strategy used, which is very often based on market logic and rarely on ethical logic. The label in its classic ‘clean labeling’ concept can no longer be the only instrument through which product information is conveyed; its evolution towards a ‘clear label’ concept is necessary to embrace ethical and transparent values and to advance the ongoing democratization of the Food System.
The implementation of a voluntary traceability path, relying on the technological models of the Internet of Things and Industry 4.0, would enable the Agri-Food Supply Chain to collect data that, if properly treated, could satisfy the information needs of consumers. A change of approach is therefore proposed: Agri-Food traceability is no longer intended as a tool for responding to the legislator, but rather as a promotional tool for presenting the company transparently and thereby reaching the market segment of food citizens. The use of mobile technology can further facilitate this information transfer. However, to guarantee maximum efficiency, an appropriate communication model based on ethical communication principles should be used, one that aims to overcome the pipeline communication model and offer the listener a new way of presenting the food product, based on real data collected through traceability processes. The Citizen Consumer is thus placed at the center of the new communication model, in which he has the opportunity to choose what to know and how. The new label creates a virtual access point capable of presenting the product from different points of view, following personal interests and offering several content modalities to suit different situations and uses.
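The kind of IoT-collected record that could back such a ‘clear label’ can be sketched as a simple data structure serialized for a consumer-facing view. The field names and values below are illustrative assumptions, not a standard traceability schema.

```python
from dataclasses import dataclass, asdict
import json

# Hedged sketch of a traceability event an IoT-instrumented supply chain
# might collect and expose through a "clear label" access point (e.g., a QR
# code). Field names are assumptions for illustration only.

@dataclass
class TraceEvent:
    batch_id: str
    stage: str             # e.g. "harvest", "processing", "transport"
    timestamp: str         # ISO 8601 timestamp
    location: str
    sensor_readings: dict  # e.g. temperature or humidity from IoT devices

events = [
    TraceEvent("LOT-001", "harvest", "2024-05-01T06:30:00Z", "Apulia, IT", {"temp_c": 18.5}),
    TraceEvent("LOT-001", "transport", "2024-05-02T09:00:00Z", "Bologna, IT", {"temp_c": 4.1}),
]

# The consumer-facing view is then a serialization choice: the same records
# can be rendered differently depending on what the reader chooses to see.
label_payload = json.dumps([asdict(e) for e in events], indent=2)
```

Separating the collected records from their presentation is what lets the label offer "several content modalities" over the same underlying traceability data.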

Keywords: agri food traceability, agri-food transparency, clear label, food system, internet of things

101 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU

Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais

Abstract:

Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), opacity and unfairness 'sins' must be considered for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be exponentiated. A hetero-personalized identity can be imposed on the individual(s) affected. Also, autonomous CWA sometimes lacks transparency when using black-box models. However, for this intended purpose, human analysts 'on-the-loop' might not be the best remedy consumers are looking for in credit. This study seeks to explore the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of the EU Directive 2023/2225, of 18 October, which will go into effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of AIA's Art. 2. Consequently, engineering the law of consumers' CWA is imperative. Generally, the proposed MAS framework consists of several layers arranged in a specific sequence, as follows: firstly, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose Neural Network model is trained using k-fold Cross Validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step-Agents; and, lastly, the Oversight Layer has a 'Bottom Stop' for analysts to intervene in a timely manner. 
From the analysis, one can see that a vital component of this software is the XAI layer. It acts as a transparent curtain over the AI's decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as share of wallet, among others, to search for more advantageous solutions to incomplete evaluation appraisals based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, when put into service, credit analysts no longer exert full control over the data-driven entities programmers have given 'birth' to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be vilified inherently. The issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.
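The SHAP-style attributions mentioned above rest on Shapley values from cooperative game theory. As an illustrative sketch only (not the paper's implementation), the following Python computes exact Shapley attributions by brute-force coalition enumeration for a toy linear scoring function; the feature names, weights, and baseline are entirely hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for model f at point x.

    Features absent from a coalition are replaced by their baseline values;
    phi[i] is feature i's average marginal contribution over all coalitions."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear credit-scoring model; weights are illustrative only
# (e.g. income, debt ratio, payment history).
weights = [0.4, -0.2, 0.3]
score = lambda v: sum(w * f for w, f in zip(weights, v))

applicant = [1.0, 0.8, 0.5]
population_mean = [0.5, 0.5, 0.5]
print(shapley_values(score, applicant, population_mean))  # ~ [0.2, -0.06, 0.0]
```

For a linear model the attributions reduce to weight times deviation from the baseline, which makes the brute-force result easy to check; real SHAP implementations approximate this sum efficiently for non-linear models.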

Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking

Procedia PDF Downloads 34
100 Analysis of Flow Dynamics of Heated and Cooled Pylon Upstream to the Cavity past Supersonic Flow with Wall Heating and Cooling

Authors: Vishnu Asokan, Zaid M. Paloba

Abstract:

Flow over cavities is an important area of research due to the significant change in flow physics caused by cavity aspect ratio, free-stream Mach number and the nature of the upstream boundary layer approaching the cavity leading edge. Cavity flow finds application in aircraft wheel wells, weapons bays, the combustion chambers of scramjet engines, etc. These flows are highly unsteady, compressible and turbulent, and they involve mass entrainment coupled with acoustic phenomena. The variation of flow dynamics in an angled cavity with a heated and cooled pylon upstream of the cavity, with spatial combinations of heat flux addition to and removal from the wall, is studied numerically. The goal of the study is to investigate the effect of energy addition to, and removal from, the cavity walls and the pylon on cavity flow dynamics. Preliminary steady-state numerical simulations on inclined cavities with heat addition have shown that wall pressure profiles, as well as the recirculation, are influenced by heat transfer to the compressible fluid medium. Such a hybrid control of cavity flow dynamics, in the form of heat transfer and pylon geometry, can open up greater opportunities for enhancing the mixing and flame-holding requirements of supersonic combustors. The addition of a pylon upstream of the cavity reduces the acoustic oscillations emanating from the geometry. A numerical unsteady analysis of supersonic flow past cavities exposed to cavity wall heating and cooling, with a heated and cooled pylon, helps to get a clear idea about oscillation suppression in the cavity. A cavity of L/D = 4 and aft wall angle of 22 degrees, with an upstream pylon of h/D=1.5 mm and a wall angle of 29 degrees, exposed to a supersonic flow of Mach number 2 and heat fluxes of 40 W/cm² and -40 W/cm², is modeled for the above study. In the preliminary study, the domain is modeled and validated numerically with the SST k-ω turbulence model using an implicit HLLC scheme. Both qualitative and quantitative flow data are extracted and analyzed using advanced CFD tools. 
Flow visualization is done using the numerical Schlieren method, since the fluid medium gives the density variation. Heat flux addition to the wall increases the secondary vortex size in the cavity, while removal of energy leads to a reduction in vortex size. The flow-field turbulence seems to increase at higher heat flux. The shear layer thickness increases as the heat flux increases. The steady-state analysis shows that the wall pressure varies as the heat flux increases. The shift in frequency seen in the unsteady wall pressure analysis is an interesting observation of this study. The time-averaged skin friction seems to reduce at higher heat flux due to the variation in the viscosity of the fluid inside the cavity.
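A numerical Schlieren image, as used above, visualizes the magnitude of the density gradient of the computed flow field. A minimal NumPy sketch of one common formulation (an exponentially scaled gradient magnitude) is shown below; the synthetic tanh density field stands in for a CFD solution, and the scaling constant k is an illustrative assumption.

```python
import numpy as np

def numerical_schlieren(rho, dx=1.0, dy=1.0, k=10.0):
    """Schlieren-like image S = exp(-k * |grad rho| / max|grad rho|).

    Dark pixels (values near exp(-k)) mark strong density gradients
    such as shocks and shear layers."""
    drho_dy, drho_dx = np.gradient(rho, dy, dx)
    grad_mag = np.hypot(drho_dx, drho_dy)
    gmax = grad_mag.max()
    if gmax == 0:
        return np.ones_like(rho)
    return np.exp(-k * grad_mag / gmax)

# Synthetic density field with a sharp jump mimicking a shock at x = 0.5
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(0.0, 1.0, 100)
X, Y = np.meshgrid(x, y)
rho = 1.0 + 0.5 * np.tanh((X - 0.5) / 0.02)
image = numerical_schlieren(rho, dx=x[1] - x[0], dy=y[1] - y[0])
```

The resulting array can be rendered as a grayscale image; the darkest band appears along the steepest density jump, which is why the technique highlights shocks and shear layers in compressible cavity flows.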

Keywords: energy addition, frequency shift, Numerical Schlieren, shear layer, vortex evolution

Procedia PDF Downloads 143
99 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer

Authors: Binder Hans

Abstract:

Cancer is no longer seen as solely a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning, which can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as genome, transcriptome and epigenome is indispensable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to discover the whole-genome mutational, transcriptome and epigenome landscapes of cancer specimens and to uncover cancer genesis, progression and heterogeneity. Basic challenges and tasks arise 'beyond sequencing' because of the large size of the data, their complexity, and the need to search for hidden structures in the data, for knowledge mining to discover biological function, and for systems-biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOM) represent one interesting option to tackle these bioinformatics tasks. The SOM method enables recognizing complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options. 
Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the discovery of complex diseases such as gliomas, melanomas and colon cancer at the molecular level. As an important new challenge, we address the combined portrayal of different omics data such as genome-wide genomic, transcriptomic and methylomic data. The integrative-omics portrayal approach is based on joint training on the data, and it provides separate personalized data portraits for each patient and data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
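At its core, a self-organizing map repeatedly pulls a grid of prototype vectors toward the data samples, with a Gaussian neighborhood that shrinks over training, so that nearby grid units come to represent similar profiles. The sketch below is a minimal NumPy illustration on toy data standing in for omics profiles; the grid size, decay schedules, and cluster structure are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: online training with a Gaussian
    neighborhood and exponentially decaying learning rate and radius."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.stack([yy, xx], axis=-1).astype(float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)
        sigma = sigma0 * np.exp(-t / epochs)
        for x in rng.permutation(data):
            # best-matching unit: grid cell whose prototype is closest to x
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # pull the whole map toward x, weighted by grid distance to the BMU
            dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            nb = np.exp(-dist2 / (2.0 * sigma ** 2))
            weights += lr * nb[..., None] * (x - weights)
    return weights

# Toy "expression" data: two clusters standing in for molecular subtypes
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(-2.0, 0.3, (50, 5)),
                  rng.normal(2.0, 0.3, (50, 5))])
som = train_som(data)
```

After training, each grid unit's prototype can be colored by its values to produce the kind of "portrait" image described above, with the two subtypes occupying different map regions.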

Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas

Procedia PDF Downloads 148
98 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles

Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo

Abstract:

Non-Cooperative Target Identification has become a key research domain in the defense industry since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images in which the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. Accordingly, to face this problem, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, namely the test set, with the profiles included in a pre-loaded database, namely the training set. The classification is improved by using Singular Value Decomposition since it allows each aircraft to be modeled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, hence reducing unwanted information such as noise. Singular Value Decomposition permits defining a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. This way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two metrics based on Singular Value Decomposition, F1 and F2, are employed in the identification process. In the case of F2, the angle is weighted, since the top vectors set the importance of the contribution to the formation of a target signal; F1, on the contrary, simply considers the unweighted angle. 
In order to have a wide database of radar signatures and to evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft at defined trajectories taken from an actual measurement. Taking into account the nature of the datasets, the main drawback of using simulated profiles instead of actual measured profiles is that the former implies an ideal identification scenario, since measured profiles suffer from noise, clutter and other unwanted information while simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so to assess the feasibility of the approach, the addition of noise has been considered before the creation of the test set. The identification results applying the unweighted and weighted metrics are analysed to demonstrate which algorithm provides the best robustness against noise in a possible real scenario. To confirm the validity of the methodology, identification experiments on profiles coming from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance is improved when weighting is applied. Future experiments with larger sets are expected to be conducted with the aim of finally using actual profiles as test sets in a real hostile situation.
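The subspace-angle classification described above can be sketched in a few lines of NumPy. This is a hedged illustration of the general SVD approach rather than the paper's F1/F2 metrics: each target's training profiles define a dominant signal subspace, and a test set is assigned to the target whose subspace forms the smallest principal angle with it. The synthetic targets, dimensions, and rank are assumptions.

```python
import numpy as np

def signal_subspace(profiles, rank):
    """Orthonormal basis of the dominant signal subspace of a matrix of
    range profiles (profiles stacked as columns), via SVD."""
    U, s, _ = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :rank]  # noise subspace (remaining columns) is discarded

def subspace_angle(U1, U2):
    """Smallest principal angle (radians) between two subspaces with
    orthonormal bases U1 and U2."""
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return np.arccos(np.clip(s.max(), -1.0, 1.0))

def identify(test_profiles, training_subspaces, rank=3):
    """Assign the test set to the target minimizing the subspace angle."""
    Ut = signal_subspace(test_profiles, rank)
    angles = {name: subspace_angle(Ut, Utr)
              for name, Utr in training_subspaces.items()}
    return min(angles, key=angles.get)

# Two synthetic targets ("A", "B" are hypothetical labels), each spanning
# its own 3-dimensional subspace of a 64-sample range profile.
rng = np.random.default_rng(0)
base_A = rng.normal(size=(64, 3))
base_B = rng.normal(size=(64, 3))
train = {"A": signal_subspace(base_A @ rng.normal(size=(3, 40)), 3),
         "B": signal_subspace(base_B @ rng.normal(size=(3, 40)), 3)}
test = base_A @ rng.normal(size=(3, 10)) + 0.05 * rng.normal(size=(64, 10))
print(identify(test, train))
```

Because random low-dimensional subspaces of a high-dimensional space are nearly orthogonal, noisy profiles drawn from target A's subspace yield a much smaller angle to A than to B, which is the intuition behind the noise robustness claimed for the method.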

Keywords: HRRP, NCTI, simulated/synthetic database, SVD

Procedia PDF Downloads 354
97 Evaluation of Sustained Improvement in Trauma Education Approaches for the College of Emergency Nursing Australasia Trauma Nursing Program

Authors: Pauline Calleja, Brooke Alexander

Abstract:

In 2010, the College of Emergency Nursing Australasia (CENA) undertook sole administration of the Trauma Nursing Program (TNP) across Australia. The original TNP was developed from recommendations by the Review of Trauma and Emergency Services-Victoria. While participant and faculty feedback about the program was positive, issues were identified that are common to industry training programs in Australia. These issues included didactic approaches, with many lectures and little interaction or activity for participants. Participants were not necessarily encouraged to undertake deep learning due to the teaching and learning principles underpinning the course; thus, participants described having to learn by rote and gaining only a surface understanding of principles that were not always applied to their working context. In Australia, a trauma or emergency nurse may work in variable contexts that impact on practice, especially where resources influence the scope and capacity of hospitals to provide trauma care. In 2011, a program review was undertaken, resulting in major changes to the curriculum, teaching, learning and assessment approaches. The aim was to improve learning, with a greater emphasis on pre-program preparation for participants, the learning environment, and clinically applicable, contextualized outcomes for participants. Previously, if participants wished to undertake assessment, they were given a take-home examination. The assessment had poor uptake and return rates, and it provided no rigor since it was not invigilated. A new assessment structure was enacted, with an invigilated examination during course hours. These changes were implemented in early 2012 with great improvement in both faculty and participant satisfaction. This presentation reports on a comparison of participant evaluations collected from courses post-implementation, in 2012 and in 2015, to evaluate whether the positive changes were sustained. 
Methods: Descriptive statistics were applied in analyzing evaluations. Since all questions had more than 20% of cells with a count of <5, Fisher's Exact Test was used to identify significance (p < 0.05) between groups. Results: A total of fourteen group evaluations were included in this analysis: seven CENA TNP groups from 2012 and seven from 2015 (randomly chosen). A total of 173 participant evaluations were collated (n = 81 from 2012 and n = 92 from 2015). All course evaluations were anonymous, and nine of the original 14 questions were applicable for this evaluation. All questions were rated by participants on a five-point Likert scale. While all items showed improvement from 2012 to 2015, significant improvement was noted in two items: the content being delivered in a way that met participant learning needs, and satisfaction with the length and pace of the program. Evaluation of written comments supports these results. Discussion: The aim of redeveloping the CENA TNP was to improve learning and satisfaction for participants. These results demonstrate that the initial improvements in 2012 were maintained and, in two essential areas, significantly improved upon. Changes that increased participant engagement, support and the contextualization of course materials were essential for the CENA TNP's evolution.
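For reference, Fisher's Exact Test on a 2×2 table sums the hypergeometric probabilities of all tables with the same margins that are no more likely than the observed one, which is why it remains valid for the small expected cell counts described above. A standard-library-only Python sketch follows, with illustrative counts (not the study's data): top-box Likert ratings for one item in two cohorts.

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Conditions on the row and column totals and sums the probabilities of
    every table no more likely than the observed one."""
    (a, b), (c, d) = table
    row1, col1, n = a + b, a + c, a + b + c + d

    def p(k):  # hypergeometric probability of k in the top-left cell
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = p(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p(k) for k in range(lo, hi + 1) if p(k) <= p_obs * (1 + 1e-9))

# Illustrative counts only: "agree"/"other" responses in 2012 vs 2015
print(fisher_exact_two_sided([[3, 1], [1, 3]]))  # two-sided p ~ 0.486
```

With such small cells the p-value is far from significance, illustrating why fourteen group evaluations with pooled counts, rather than individual items, were needed to detect the reported differences.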

Keywords: emergency nursing education, industry training programs, teaching and learning, trauma education

Procedia PDF Downloads 270
96 Logic of Appearance vs Explanatory Logic: A Systemic Functional Linguistics Approach to the Evolution of Communicative Strategies in the European Union Institutional Discourse

Authors: Antonio Piga

Abstract:

The issue of European cultural identity has become a prominent topic of discussion among political actors in the wake of the unsuccessful referenda held in France and the Netherlands in May and June 2005. The 'period of reflection' announced by the European Council at the end of June 2005 provided an opportunity for the implementation of several initiatives and programmes designed to 'bridge the gap' between the EU institutions and their citizens. Specific programmes were designed with the objective of enhancing the European Commission's external communication of its activities. Subsequently, further plans for democracy, debate, and dialogue were devised with the objective of fostering open and extensive discourse between EU institutions and citizens. Further documentation on communication policy emphasised the necessity of developing linguistic techniques to re-engage disenchanted or uninformed citizens with the European project. It was observed that the European Union is perceived as a 'faceless' entity, which is attributed to the absence of a distinct public identity vis-à-vis its institutions. This contribution presents an analysis of a collection of informative publications about the European Union, entitled "Europe on the Move". This collection of booklets provides comprehensive information about the European Union, including its historical origins, core values, and historical development, as well as its achievements, strategic objectives, policies, and operational procedures. The theoretical framework adopted for the longitudinal linguistic analysis of EU discourse is that of Systemic Functional Linguistics (SFL). In more detail, this study considers two basic systems of relations between clauses: firstly, the degree of interdependency (or taxis) and, secondly, the logico-semantic relation of expansion. 
The former refers to the structural markers of grammatical relations between clauses within sentences, namely paratactic, hypotactic and embedded relations. The latter pertains to the various logico-semantic relationships existing between the primary and secondary members of the clause nexus. These relationships concern how the secondary clause expands the primary clause, which may be achieved by (a) elaborating it, (b) extending it or (c) enhancing it. This study examines the impact of the European Commission's post-referendum communication methods on the portrayal of Europe, its role in facilitating the EU institutional process, and its articulation of a specific EU identity linked to distinct values. The research reveals that the language employed by the EU is evidently grounded in an explanatory logic, elucidating the rationale behind its institutionalised acts. Nevertheless, the minimal use of hypotaxis in the post-referendum booklets, coupled with the inconsistent yet increasing ratio of parataxis to hypotaxis, may suggest a potential shift towards a logic of appearance, characterised by a predominant reliance on coordination and on additive and elaborative logico-semantic relations.

Keywords: systemic functional linguistics, logic of appearance, explanatory logic, interdependency, logico-semantic relation

Procedia PDF Downloads 6
95 Inclusion Advances of Disabled People in Higher Education: Possible Alignment with the Brazilian Statute of the Person with Disabilities

Authors: Maria Cristina Tommaso, Maria Das Graças L. Silva, Carlos Jose Pacheco

Abstract:

Have the advances in Brazilian legislation reflected, or been consonant with, the inclusion of PwD in higher education? In 1990, the World Declaration on Education for All, a document organized by the United Nations Educational, Scientific and Cultural Organization (UNESCO), stated that the basic learning needs of people with disabilities, as they were called, required special attention. Since then, legislation in signatory countries such as Brazil has made considerable progress in guaranteeing, in a gradual and increasing manner, the rights of persons with disabilities to education. Principles, policies, and practices of special educational needs were created and have guided action at the regional, national and international levels on the structure of action in Special Education, such as administration, recruitment of educators and community involvement. Brazilian Education Law No. 3.284 of 2003 ensures the inclusion of people with disabilities in Brazilian higher education institutions, and in 2015 Law 13,146/2015 - the Brazilian Law on the Inclusion of Persons with Disabilities (Statute of the Person with Disabilities) - regulated the inclusion of PwD through the guarantee of their rights. This study analyses data related to the inclusion of people with disabilities in higher education in the southern region of Rio de Janeiro State, Brazil, during the period between 2008 and 2018, based on its correlation with the changes in Brazilian legislation over the last ten years to which PwD inclusion processes in Brazilian higher education systems were subjected. The region studied is composed of sixteen cities, and this research refers to the largest one, Volta Redonda, which represents 25 percent of the total regional population. The data on the PwD admission process were collected at the Volta Redonda University Center, which accounts for 35 percent of higher education students in this territorial area. 
The research methodology analyzed the changes occurring in the legislation on the inclusion of people with disabilities in higher education over the last ten years and their impacts on the samples of this study during the period between 2008 and 2018. An expressive increase in the number of PwD students was verified, from two in 2008 to 190 in 2018. The conclusions from the data are presented in quantitative terms, and the aim of this study was to verify the effectiveness of PwD inclusion in higher education, giving visibility to this social group. This study verified that fundamental human rights guarantees have a strong relation to the advances of legislation and to the State as a guarantor of the rights of people with disabilities, and they must be considered a means of consolidating equality of educational opportunities. The recognition of full rights and the inclusion of people with disabilities require the efforts of those who have decision-making power. This study aimed to demonstrate that legislative evolution is an effective instrument in the social integration of people with disabilities. The study confirms the fundamental role of the state in guaranteeing human rights and demonstrates that legislation not only protects the interests of vulnerable social groups but can also, and this is perhaps its main mission, change behavior patterns and provoke the social transformation necessary to reduce inequality of opportunity.

Keywords: higher education, inclusion, legislation, people with disability

Procedia PDF Downloads 152
94 The New Contemporary Cross-Cultural Buddhist Woman and Her Attitude and Perception toward Motherhood

Authors: Szerena Vajkovszki

Abstract:

Among the relatively large volume of literature, the role and perception of women in Buddhism have been examined from various perspectives, such as theology, history, anthropology, and feminism. When Buddhism spread to the West, women had a major role in its adaptation and development. The meeting of different cultures and social structures bore the fruit of a necessity to change. As Buddhism gained attention in the West, it produced a Buddhist feminist identity across national and ethnic boundaries; globalization thus produced the contemporary cross-cultural Buddhist woman. The aim of the research is to find out the new role of such a Buddhist woman in aging societies; more precisely, to understand what effect this contemporary Buddhist religion may have, directly or indirectly, on fertility. Our worldwide aging society, especially in developed countries, including members of the EU, raises sophisticated sociological and economic issues and challenges to be met. As declining fertility has an outstanding influence underlying this trend, numerous studies have attempted to identify, describe, measure and interpret contributing factors of the fertility rate, out of which relatively few revealed the impact of religion. Among many religious guidelines, we can separate two major categories: direct and indirect. The aim of this research was to understand which are the most crucial identified factors (family values, gender-related behaviors, religious sentiments) and the not-yet-identified most influential contemporary Buddhist religious factors. Beyond identifying these direct or indirect factors, it is also important to understand to what extent and how they influence fertility, which requires a wider, interdisciplinary perspective. As shown by previous studies, religion also has an influential role in health, mental state, well-being, working activity and many other components that are also related to fertility rates. 
All these components are interrelated; hence, direct and indirect religious effects can only be well understood if we map all the necessary fields and their interactions. With the help of semi-structured open interviews taking place in different countries, it was shown that Buddhism indeed has significant direct and indirect effects on fertility; hence, the initial hypothesis was supported. Although the interviews showed an overall positive effect, the results can only serve as a general understanding of how Buddhism affects fertility. The evolution of Buddhism's direct and indirect influence may vary across nations and circumstances according to their specific environmental attributes. According to local patterns, with special regard to women's position and role in society, indirect influences in particular could show diversification. It is therefore advisable to investigate further for a deeper and clearer understanding of how Buddhism functions in different socioeconomic circumstances. For example, in Hungary, after the period of secularization, more and more people tended to be attracted toward transcendent values, which could be an explanation for the rising number of Buddhists in the country. The present research could serve as a general starting point or a common basis for further specific national investigations of how contemporary Buddhism affects fertility.

Keywords: contemporary Buddhism, cross-cultural woman, fertility, gender roles, religion

Procedia PDF Downloads 153
93 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering  

Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi

Abstract:

In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA (cfDNA). Accordingly, identifying the tissue of origin of these DNA fragments from the plasma can result in more accurate and faster disease diagnosis and precise treatment protocols. Open chromatin regions (OCRs) are important epigenetic features of DNA that reflect the cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. There have been several studies in the area of cancer liquid biopsy that integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to substantial cost and time. To overcome these limitations, the idea of predicting OCRs from whole genome sequencing (WGS) is of particular importance. In this regard, we propose a computational approach to predict open chromatin regions, an important epigenetic feature, from cell-free DNA whole genome sequencing data. To fulfill this objective, local sequencing depth is fed to our proposed algorithm, and the most probable open chromatin regions are predicted from the whole genome sequencing data. Our method integrates a signal processing approach with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph-cut optimization by linear programming, and clustering. 
To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions related to human blood samples from the ATAC-DB database. The percentage of overlap between the predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. As is evident, OCRs are mostly located in the transcription start sites (TSS) of genes. In this regard, we compared the concordance between the predicted OCRs and the human gene TSS regions obtained from refTSS, and it showed proper accordance of around 52.04% with all genes and ~78% with the housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to the existence of several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, which led to a tool named OCRDetector, with some restrictions such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and the consideration of multiple features. In contrast, we implemented graph signal clustering based on a single depth feature in an unsupervised learning manner, which resulted in faster performance and decent accuracy. Overall, we tried to investigate the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analysis.
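The signal-processing steps listed in the abstract (count normalization, DFT conversion, clustering) can be sketched as follows. This is a simplified stand-in, not the authors' algorithm: it omits the graph construction and the LP graph cut, uses an arbitrary low-pass cutoff, and assumes that open (nucleosome-depleted) regions correspond to locally reduced cfDNA coverage.

```python
import numpy as np

def predict_ocr(depth, keep_fraction=0.05, iters=20):
    """Toy OCR predictor from a 1-D local sequencing depth track:
    z-normalize counts, low-pass filter via the Discrete Fourier Transform,
    then split bins into two clusters with 1-D 2-means."""
    x = np.asarray(depth, float)
    x = (x - x.mean()) / (x.std() + 1e-12)        # count normalization
    X = np.fft.rfft(x)
    cutoff = max(1, int(keep_fraction * len(X)))
    X[cutoff:] = 0                                 # keep only slow trends
    smooth = np.fft.irfft(X, n=len(x))
    c = np.array([smooth.min(), smooth.max()])     # 2-means init
    for _ in range(iters):
        labels = np.abs(smooth[:, None] - c).argmin(1)
        for k in (0, 1):
            if (labels == k).any():
                c[k] = smooth[labels == k].mean()
    # assumption: the lower-coverage cluster is the "open" one
    return labels == c.argmin()

# Synthetic depth track with two coverage dips standing in for OCRs
rng = np.random.default_rng(0)
depth = np.ones(1000) + 0.05 * rng.normal(size=1000)
depth[200:260] -= 0.5
depth[600:660] -= 0.5
mask = predict_ocr(depth)
```

On this synthetic track the two dips are recovered as OCR+ bins; in the real setting the graph-cut step would additionally enforce spatial coherence along the genome.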

Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering

Procedia PDF Downloads 150
92 Contactless Heart Rate Measurement System based on FMCW Radar and LSTM for Automotive Applications

Authors: Asma Omri, Iheb Sifaoui, Sofiane Sayahi, Hichem Besbes

Abstract:

Future vehicle systems demand advanced capabilities, notably in-cabin life detection and driver monitoring systems, with a particular emphasis on drowsiness detection. To meet these requirements, several techniques employ artificial intelligence methods based on real-time vital sign measurements. In parallel, Frequency-Modulated Continuous-Wave (FMCW) radar technology has garnered considerable attention in the domains of healthcare and biomedical engineering for non-invasive vital sign monitoring. FMCW radar offers a multitude of advantages, including its non-intrusive nature, continuous monitoring capacity, and its ability to penetrate through clothing. In this paper, we propose a system utilizing the AWR6843AOP radar from Texas Instruments (TI) to extract precise vital sign information. The radar allows us to estimate Ballistocardiogram (BCG) signals, which capture the mechanical movements of the body, particularly the ballistic forces generated by heartbeats and respiration. These signals are rich sources of information about the cardiac cycle, rendering them suitable for heart rate estimation. The process begins with real-time subject positioning, followed by clutter removal, computation of Doppler phase differences, and the use of various filtering methods to accurately capture subtle physiological movements. To address the challenges associated with FMCW radar-based vital sign monitoring, including motion artifacts due to subjects' movement or radar micro-vibrations, Long Short-Term Memory (LSTM) networks are implemented. LSTM's adaptability to different heart rate patterns and ability to handle real-time data make it suitable for continuous monitoring applications. 
Several crucial steps were taken, including feature extraction (involving amplitude, time intervals, and signal morphology), sequence modeling, heart rate estimation through the analysis of detected cardiac cycles and their temporal relationships, and performance evaluation using metrics such as Root Mean Square Error (RMSE) and correlation with reference heart rate measurements. For dataset construction and LSTM training, a comprehensive data collection system was established, integrating the AWR6843AOP radar, a heart rate belt, and a smart watch for ground truth measurements. Rigorous synchronization of these devices ensured data accuracy. Twenty participants engaged in various scenarios, encompassing indoor conditions and real-world conditions within a moving vehicle equipped with the radar system. Both static and dynamic subject conditions were considered. Heart rate estimation through the LSTM outperforms traditional signal processing techniques that rely on filtering, Fast Fourier Transform (FFT), and thresholding. It delivers an average accuracy of approximately 91% with an RMSE of 1.01 beats per minute (bpm). In conclusion, this paper underscores the promising potential of FMCW radar technology integrated with artificial intelligence algorithms in the context of automotive applications. This innovation not only enhances road safety but also paves the way for integration into the automotive ecosystem to improve driver well-being and overall vehicular safety.
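The performance evaluation step above (RMSE and correlation against the belt/watch references) can be sketched as follows. This is a minimal illustration with made-up numbers, not the paper's data or code:

```python
import numpy as np

def evaluate_hr(estimates_bpm, reference_bpm):
    """Compare radar-derived heart-rate estimates against ground truth.

    Returns RMSE (in bpm) and Pearson correlation, the two metrics
    named in the abstract's evaluation protocol.
    """
    est = np.asarray(estimates_bpm, dtype=float)
    ref = np.asarray(reference_bpm, dtype=float)
    rmse = np.sqrt(np.mean((est - ref) ** 2))   # root mean square error
    corr = np.corrcoef(est, ref)[0, 1]          # Pearson correlation
    return rmse, corr

# Toy example: LSTM estimates within ~1 bpm of the belt reference
rmse, corr = evaluate_hr([72.5, 80.1, 65.8], [72.0, 81.0, 66.0])
```

Synchronized timestamps between radar, belt, and watch (as described above) are what make this pointwise comparison meaningful.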

Keywords: ballistocardiogram, FMCW Radar, vital sign monitoring, LSTM

Procedia PDF Downloads 72
91 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing

Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan

Abstract:

This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), whose factors are extracted from a matrix of world indices by principal component analysis (PCA); the model is then applied to option pricing. The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state.
The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them by order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and against the standard Black-Scholes model. We show that our model outperforms the MN-GARCHX benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence.
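The factor extraction step described above can be sketched as follows; a minimal PCA on synthetic index returns, not the paper's data or calibration code:

```python
import numpy as np

def pca_factors(returns, n_factors=3):
    """Extract uncorrelated common factors from a matrix of index
    returns (rows = days, columns = world indices), sorted by
    explained variance, as in the GARCHX calibration step above.
    """
    X = returns - returns.mean(axis=0)           # demean each index
    cov = np.cov(X, rowvar=False)                # cross-asset covariance
    eigval, eigvec = np.linalg.eigh(cov)         # eigh: ascending order
    order = np.argsort(eigval)[::-1][:n_factors] # keep largest components
    loadings = eigvec[:, order]                  # principal directions
    factors = X @ loadings                       # factor time series
    return factors, eigval[order]

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(500, 8))   # 8 synthetic indices
factors, variances = pca_factors(returns)
```

By construction the resulting factor series are mutually uncorrelated, which is the property the abstract relies on when feeding them into the GARCHX variance equation.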

Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium

Procedia PDF Downloads 297
90 Addressing Sustainable Development Goals in Palestine: Conflict, Sustainability, and Human Rights

Authors: Nowfiya Humayoon

Abstract:

The Sustainable Development Goals (SDGs) were launched by the UN in 2015 as a global initiative aimed at eradicating poverty, safeguarding the environment, and promoting peace and prosperity, with a target year of 2030. The SDGs are vital for achieving global peace, prosperity, and sustainability. As for all nations of the world, these goals are crucial for Palestine, but they are challenging to pursue due to the ongoing crisis. Effective action toward achieving each SDG in Palestine has been severely challenged from the beginning by political instability, limited access to resources, international aid constraints, economic blockade, and related factors. In the context of the ongoing conflict, there are severe violations of international humanitarian law, including the targeting of civilians, the use of excessive force, and the blocking of humanitarian aid, which have led to significant civilian casualties, suffering, and deaths. Therefore, addressing the SDGs is imperative for ensuring human rights, combating violations and fostering sustainability. Methodology: The study adopts a historical, analytical and quantitative approach to evaluate the impact of the ongoing conflict on the SDGs in Palestine, with a focus on sustainability and human rights. It examines historical documents, reports of international and regional organizations, recent journal and newspaper articles, and other relevant literature to trace the evolution and on-ground realities of the conflict and its effects. Quantitative data are collected by analyzing statistical reports from government agencies, non-governmental organizations (NGOs) and international bodies. Databases from the World Bank, the United Nations and the World Health Organization are utilized. Various health and economic indicators on mortality rates, infant mortality rates and income levels are also gathered.
Major Findings: The study reveals profound challenges in achieving the Sustainable Development Goals (SDGs) in Palestine: economic blockades and restricted access to resources have left a substantial portion of the population living below the poverty line; overburdened healthcare facilities struggle to cope with demand amid shortages of medical supplies; educational systems are disrupted, with many schools destroyed or repurposed and children facing significant barriers to accessing quality education; and infrastructure is damaged, with restricted access to clean water and sanitation services and limited access to reliable energy sources. Conclusion: The ongoing crisis in Palestine has drastically affected progress towards the Sustainable Development Goals (SDGs), causing innumerable crises. Violations of international humanitarian law have caused substantial suffering and loss of life. Immediate and coordinated global action is crucial to address these challenges in order to uphold humanitarian values and promote sustainable development in the region.

Keywords: genocide, human rights, occupation, sustainable development goals

Procedia PDF Downloads 14
89 Polarization as a Proxy of Misinformation Spreading

Authors: Michela Del Vicario, Walter Quattrociocchi, Antonio Scala, Ana Lucía Schmidt, Fabiana Zollo

Abstract:

Information, rumors, and debates may shape and heavily impact public opinion. In recent years, several concerns have been expressed about social influence on the Internet and the outcomes that online debates might have on real-world processes. Indeed, on online social networks users tend to select information that is coherent with their system of beliefs and to form groups of like-minded people, i.e., echo chambers, where they reinforce and polarize their opinions. In this way, the potential benefits of exposure to different points of view may be reduced dramatically, and individuals' views may become more and more extreme. Such a context fosters the spreading of misinformation, which has always represented a socio-political and economic risk. The persistence of unsubstantiated rumors, e.g., the hypothetical and hazardous link between vaccines and autism, suggests that social media do have the power to misinform, manipulate, or control public opinion. For example, current approaches such as debunking efforts or algorithm-driven solutions based on the reputation of the source seem to prove ineffective against collective superstition. Indeed, experimental evidence shows that confirmatory information gets accepted even when it contains deliberately false claims, while dissenting information is mainly ignored, influences users' emotions negatively, and may even increase group polarization. Moreover, confirmation bias has been shown to play a pivotal role in information cascades, raising serious warnings about the efficacy of current debunking efforts. Nevertheless, mitigation strategies have to be adopted. To generalize the problem and better understand the social dynamics behind information spreading, in this work we rely on a tight quantitative analysis to investigate the behavior of more than 300 million users with respect to news consumption on Facebook over a time span of six years (2010-2015).
Through a massive analysis of 920 news outlet pages, we are able to characterize the anatomy of news consumption on a global and international scale. We show that users tend to focus on a limited set of pages (selective exposure), eliciting a sharp and polarized community structure among news outlets. Moreover, we find similar patterns around the Brexit debate (the British referendum to leave the European Union), where we observe the spontaneous emergence of two well segregated and polarized groups of users around news outlets. Our findings provide interesting insights into the determinants of polarization and the evolution of core narratives in online debating. Our main aim is to understand and map the information space on online social media by identifying non-trivial proxies for the early detection of massive informational cascades. Furthermore, by combining users' traces, we are finally able to draft the main concepts and beliefs of the core narrative of an echo chamber and its related perceptions.
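A per-user polarization proxy of the kind used in this line of work can be sketched as follows. The community labels "A" and "B" are hypothetical placeholders for the two segregated groups of news outlets, not the paper's actual labels or data:

```python
from collections import Counter

def user_polarization(likes):
    """Polarization score in [-1, 1] for one user's likes, where each
    like is tagged with the outlet community it belongs to ("A" or "B").
    Scores near +1 or -1 indicate activity confined to one echo chamber;
    scores near 0 indicate exposure to both communities.
    """
    counts = Counter(likes)
    a, b = counts.get("A", 0), counts.get("B", 0)
    if a + b == 0:
        return 0.0                      # no labelled activity
    return (a - b) / (a + b)

# A user who likes almost only community-A outlets is highly polarized
score = user_polarization(["A"] * 9 + ["B"])
```

Aggregating such scores over millions of users is what reveals the sharply bimodal (polarized) distribution the abstract describes.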

Keywords: information spreading, misinformation, narratives, online social networks, polarization

Procedia PDF Downloads 288
88 Human Behavioral Assessment to Derive Land-Use for Sustenance of River in India

Authors: Juhi Sah

Abstract:

Habitat is characterized by the inter-dependency of environmental elements. An anthropocentric development approach is increasing our vulnerability to natural hazards. Hence, man-made interventions should show a higher level of sensitivity towards natural settings. Sensitivity towards the environment can be assessed through the behavior of the stakeholders involved. This led to the establishment of a hypothesis: there exists a legitimate relationship between the behavioral sciences, land use evolution and environmental conservation in the planning process. An attempt has been made to establish this relationship by reviewing the existing body of knowledge and case examples pertaining to the three disciplines under inquiry. Recognizing the scarce and deteriorating nature of the earth's fresh-water reserves, and to experiment with the above concept, a case study of a growing urban center's river floodplain in a developing economy, India, is selected. Cases of urban flooding in Chennai, Delhi and other mega cities of India impose a high risk on the unauthorized settlements on the floodplains of the rivers. The issue addressed here is the encroachment of floodplains, approached through psychological enlightenment and behavioral modification through knowledge building. The reaction of an individual or society can be compared to a cognitive process. This study documents the behavior and perceptions of all stakeholders with respect to their immediate natural environment (a water body), and produces various land uses suitable along a river in an urban settlement as per different stakeholders' perceptions. To assess and induce morally responsible behavior in a community (small scale or large scale), tools of psychological inquiry are used for qualitative analysis. The analysis will deal with varied data sets from two sectors, namely the river and its geology, and land use planning and regulation.
A distinctive pattern in built-up growth, river ecology degradation, and human behavior has been identified by handling a large quantity of data from diverse sectors, with comments on the availability of relevant data and its implications. Along the whole river stretch, the condition and usage of its banks vary; hence, stakeholder-specific survey questionnaires have been prepared to accurately map the responses and habits of the rational inhabitants. A conceptual framework has been designed to move forward with the empirical analysis. The classical principle of virtues says that "the virtue of a human depends on its character," but another concept holds that behavior or response is a derivative of situations, and that to bring about a behavioral change one needs to introduce a disruption in the situation/environment. Given present trends, blindly following the results of data analytics and using them to construct policy is not proving to be in favor of planned development and natural resource conservation. Thus, a behavioral assessment of the rational inhabitants of the planet is also required, as their activities and interests have a large impact on the earth's pre-set systems and its sustenance.

Keywords: behavioral assessment, flood plain encroachment, land use planning, river sustenance

Procedia PDF Downloads 117
87 Non-Invasive Characterization of the Mechanical Properties of Arterial Walls

Authors: Bruno Ramaël, Gwenaël Page, Catherine Knopf-Lenoir, Olivier Baledent, Anne-Virginie Salsac

Abstract:

No routine technique currently exists for clinicians to measure the mechanical properties of vascular walls non-invasively. Most of the data available in the literature come from traction or dilatation tests conducted ex vivo on native blood vessels. The objective of the study is to develop a non-invasive characterization technique based on Magnetic Resonance Imaging (MRI) measurements of the deformation of vascular walls under pulsating blood flow conditions. The goal is to determine the mechanical properties of the vessels by inverse analysis, coupling imaging measurements and numerical simulations of the fluid-structure interactions. The hyperelastic properties are identified using SolidWorks and ANSYS Workbench (ANSYS Inc.) through an optimization procedure. The vessel of interest targeted in the study is the common carotid artery. In vivo MRI measurements of the vessel anatomy and inlet velocity profiles were acquired along the facial vascular network on a cohort of 30 healthy volunteers: - The time-evolution of the blood vessel contours, and thus of the cross-section surface area, was measured by 3D imaging angiography sequences of phase-contrast MRI. - The blood flow velocity was measured using a 2D CINE phase-contrast MRI (PC-MRI) method. Reference arterial pressure waveforms were simultaneously measured in the brachial artery using a sphygmomanometer. The three-dimensional (3D) geometry of the arterial network was reconstructed by first creating an STL file from the raw MRI data using the open-source imaging software ITK-SNAP. The resulting geometry was then transformed with SolidWorks into volumes compatible with the ANSYS software. Tetrahedral meshes of the wall and fluid domains were built using the ANSYS Meshing software, with near-wall mesh refinement in the case of the fluid domain to improve the accuracy of the fluid flow calculations.
ANSYS Structural was used for the numerical simulation of the vessel deformation and ANSYS CFX for the simulation of the blood flow. The fluid-structure interaction simulations showed that the systolic and diastolic blood pressures of the common carotid artery could be taken as reference pressures to identify the mechanical properties of the different arteries of the network. The coefficients of the hyperelastic law were identified for the common carotid using ANSYS design optimization. Under large deformations, a stiffness of 800 kPa is measured, which is of the same order of magnitude as the Young's modulus of collagen fibers. Areas of maximum deformation were highlighted near bifurcations. This study is a first step towards patient-specific characterization of the mechanical properties of the facial vessels. The method is currently being applied to patients suffering from facial vascular malformations and to patients scheduled for facial reconstruction. Information on the blood flow velocity as well as on the vessel anatomy and deformability will be key to improving surgical planning in the case of such vascular pathologies.
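The study's inverse analysis couples full fluid-structure simulations with MRI data, but the same inputs (brachial pressures and MRI cross-section areas) admit a much simpler first-pass stiffness estimate, the linearized pressure-strain modulus. The sketch below uses that simplified stand-in, with illustrative numbers, and is not the paper's method or data:

```python
def pressure_strain_modulus(p_sys_kpa, p_dia_kpa, a_sys_mm2, a_dia_mm2):
    """First-pass estimate of effective arterial wall stiffness from
    systolic/diastolic pressures (kPa) and the corresponding MRI
    cross-section areas (mm^2): pulse pressure divided by area strain.
    This linearization is a simplified alternative to the full inverse
    fluid-structure analysis described in the abstract.
    """
    dp = p_sys_kpa - p_dia_kpa                     # pulse pressure (kPa)
    strain = (a_sys_mm2 - a_dia_mm2) / a_dia_mm2   # relative area change
    return dp / strain                             # effective modulus (kPa)

# Illustrative carotid values: 120/80 mmHg is roughly 16.0/10.7 kPa,
# with an assumed ~8% systolic-diastolic area change
E_p = pressure_strain_modulus(16.0, 10.7, 43.2, 40.0)
```

Such a scalar estimate cannot resolve hyperelastic coefficients or spatial stiffness variation; that is precisely what the optimization-driven simulations are for.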

Keywords: identification, mechanical properties, arterial walls, MRI measurements, numerical simulations

Procedia PDF Downloads 319
86 Studies on the Bioactivity of Different Solvents Extracts of Selected Marine Macroalgae against Fish Pathogens

Authors: Mary Ghobrial, Sahar Wefky

Abstract:

Marine macroalgae have proven to be a rich source of bioactive compounds with biomedical potential, not only for human but also for veterinary medicine. The emergence of microbial disease in aquaculture industries implies serious losses. The use of commercial antibiotics for fish disease treatment produces undesirable side effects. Marine organisms are a rich source of structurally novel biologically active metabolites. Competition for space and nutrients led to the evolution of antimicrobial defense strategies in the aquatic environment. Interest in marine organisms as a potential and promising source of pharmaceutical agents has increased in recent years. Many bioactive and pharmacologically active substances have been isolated from microalgae. Compounds with antibacterial, antifungal and antiviral activities have also been detected in green, brown and red algae. Selected species of marine benthic algae belonging to the Phaeophyta and Rhodophyta, collected from different coastal areas of Alexandria (Egypt), were investigated for their antibacterial and antifungal activities. Macroalgae samples were collected during low tide from the Alexandria Mediterranean coast. Samples were air dried under shade at room temperature. The dry algae were ground using an electric mixer grinder and soaked in 10 ml of each of the solvents acetone, ethanol, methanol and hexane. Antimicrobial activity was evaluated using the well-cut diffusion technique. In vitro screening of organic solvent extracts from the marine macroalgae Laurencia pinnatifida, Pterocladia capillaceae, Stepopodium zonale, Halopteris scoparia and Sargassum hystrix showed specific activity in inhibiting the growth of five virulent strains of bacteria pathogenic to fish (Pseudomonas fluorescens, Aeromonas hydrophila, Vibrio anguillarum, V. tandara, Escherichia coli) and two fungi (Aspergillus flavus and A. niger).
Results showed that acetone and ethanol extracts of all tested macroalgae exhibited antibacterial activity, while the acetone extract of the brown Sargassum hystrix displayed the highest antifungal activity. The seaweed extracts inhibited bacteria more strongly than fungi, and species of the Rhodophyta showed greater activity against the bacteria than against the fungi tested. Gas liquid chromatography coupled with mass spectrometry detection allows good qualitative and quantitative analysis of the fractionated extracts, with high sensitivity to the smaller amounts of components. Results indicated that the main common component in the acetone extracts of L. pinnatifida and P. capillaceae is 4-hydroxy-4-methyl-2-pentanone, representing 64.38% and 58.60%, respectively. Thus, the extracts derived from the red macroalgae were more efficient than those obtained from the brown macroalgae in combating bacterial pathogens rather than pathogenic fungi. The most preferred species overall was the red Laurencia pinnatifida. In conclusion, the present study demonstrates the potential of red and brown macroalgae extracts for the development of anti-pathogenic agents for use in fish aquaculture.

Keywords: bacteria, fungi, extracts, solvents

Procedia PDF Downloads 437
85 [Keynote Talk]: Bioactive Cyclic Dipeptides of Microbial Origin in Discovery of Cytokine Inhibitors

Authors: Sajeli A. Begum, Ameer Basha, Kirti Hira, Rukaiyya Khan

Abstract:

Cyclic dipeptides are simple diketopiperazine derivatives being investigated by several scientists for their biological effects, which include anticancer, antimicrobial, haematological, anticonvulsant and immunomodulatory effects. They are potently active microbial metabolites that have also been synthesized for development into drug candidates. Cultures of Pseudomonas species have earlier been reported to produce cyclic dipeptides, which assist in quorum-sensing signaling and bacterial-host colonization phenomena during infections, causing cell anti-proliferation and immunosuppression. Fluorescing Pseudomonas species have been identified to secrete lipid derivatives, peptides, pyrroles, phenazines, indoles, amino acids, pterines, pseudomonic acids and some antibiotics. In the present work, the results of an investigation on the cyclic dipeptide metabolites secreted in the culture broth of a Pseudomonas species, acting as potent pro-inflammatory cytokine inhibitors, are discussed. The bacterial strain was isolated from the rhizospheric soil of a groundnut crop and identified as Pseudomonas aeruginosa by its 16S rDNA sequence (GenBank Accession No. KT625586). A culture broth of this strain was prepared by inoculation into King's B broth and incubation at 30 ºC for 7 days. The ethyl acetate extract of the culture broth was prepared and lyophilized to obtain a dry residue (EEPA). A lipopolysaccharide (LPS)-induced ELISA assay proved the inhibition of tumor necrosis factor-alpha (TNF-α) secretion in the culture supernatant of RAW 264.7 cells by EEPA (IC50 38.8 μg/mL). The effect of oral administration of EEPA on the plasma TNF-α level in rats was tested using an ELISA kit. The LPS-mediated plasma TNF-α level was reduced to 45% with a 125 mg/kg dose of EEPA. Isolation of the chemical constituents of EEPA through column chromatography yielded ten cyclic dipeptides, which were characterized using nuclear magnetic resonance and mass spectroscopic techniques.
These cyclic dipeptides are biosynthesized in microorganisms by the multifunctional assembly of non-ribosomal peptide synthases and cyclic dipeptide synthases. Cyclo(Gly-L-Pro) was found to inhibit TNF-α production most potently (IC50 value 4.5 μg/mL), followed by cyclo(trans-4-hydroxy-L-Pro-L-Phe) (IC50 value 14.2 μg/mL), and the effect was equal to that of the standard immunosuppressant drug prednisolone. Further, the effect was analyzed by determining the mRNA expression of TNF-α in LPS-stimulated RAW 264.7 macrophages using quantitative real-time reverse transcription polymerase chain reaction. EEPA and the isolated cyclic dipeptides demonstrated a dose-dependent diminution of TNF-α mRNA expression levels under the tested conditions. They were also found to control the expression of other pro-inflammatory cytokines such as IL-1β and IL-6, as shown by their mRNA expression levels in LPS-stimulated RAW 264.7 macrophages. In addition, a significant inhibitory effect on nitric oxide production was found. Furthermore, all the compounds exhibited only weak toxicity to LPS-induced RAW 264.7 cells. Thus, the outcome of the study discloses the effectiveness of EEPA and the isolated cyclic dipeptides in down-regulating key cytokines involved in the pathophysiology of autoimmune diseases. In another study led by the investigators, microbial cyclic dipeptides were found to exhibit an excellent antimicrobial effect against Fusarium moniliforme, an important causative agent of sorghum grain mold disease. Thus, cyclic dipeptides are emerging small-molecule drug candidates for various autoimmune diseases.
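An IC50 value like the 4.5 μg/mL reported for cyclo(Gly-L-Pro) is the concentration giving 50% inhibition on the dose-response curve. One simple way to estimate it is log-linear interpolation between the two tested concentrations bracketing 50% inhibition, sketched below with illustrative data points, not the paper's raw measurements:

```python
import numpy as np

def ic50_from_inhibition(concentrations, inhibition_pct):
    """Estimate IC50 by interpolating on the log-concentration axis
    between the two measurements bracketing 50% inhibition.
    Assumes concentrations ascending and inhibition monotonically
    increasing, crossing 50% within the tested range.
    """
    logc = np.log10(concentrations)
    # index of the first measurement at or above 50% inhibition
    i = np.searchsorted(inhibition_pct, 50.0)
    # fractional position of 50% between the two bracketing points
    frac = (50.0 - inhibition_pct[i - 1]) / (inhibition_pct[i] - inhibition_pct[i - 1])
    return 10 ** (logc[i - 1] + frac * (logc[i] - logc[i - 1]))

conc = np.array([1.0, 3.0, 10.0, 30.0])       # ug/mL, ascending doses
inhib = np.array([20.0, 42.0, 68.0, 90.0])    # % TNF-alpha inhibition
ic50 = ic50_from_inhibition(conc, inhib)
```

In practice a four-parameter logistic fit over all points is more robust than two-point interpolation; the sketch only illustrates what the reported IC50 numbers mean.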

Keywords: cyclic dipeptides, cytokines, Fusarium moniliforme, Pseudomonas, TNF-alpha

Procedia PDF Downloads 211
84 Primary and Secondary Big Bangs Theory of Creation of Universe

Authors: Shyam Sunder Gupta

Abstract:

The current theory for the creation of the universe, the Big Bang theory, is widely accepted but leaves some unanswered questions. It does not explain the origin of the singularity or what causes the Big Bang. The Big Bang theory also does not explain why there is such a huge amount of dark energy and dark matter in our universe. In addition, the question of whether there is one universe or multiple universes needs to be answered. This research addresses these questions using the Bhagvat Puran and other Vedic scriptures as its basis. There is a Unique Pure Energy Field that is eternal, infinite, and finest of all, and that never transforms when in its original form. The carrier particles of the Unique Pure Energy are Param-anus, the Fundamental Energy Particles. Param-anus and combinations of these particles create bigger particles from which the Universe gets created. For creation to initiate, the Unique Pure Energy is represented in three phases: positive phase energy, neutral phase eternal time energy, and negative phase energy. Positive phase energy further expands into three forms of creative energies (CE1, CE2, and CE3). From CE1 energy, three energy modes, the mode of activation, the mode of action, and the mode of darkness, were created. From these three modes, 16 Principles, the subtlest forms of energies, namely Pradhan, Mahat-tattva, Time, Ego, Intellect, Mind, Sound, Space, Touch, Air, Form, Fire, Taste, Water, Smell, and Earth, get created. In the Mahat-tattva, dominant in the Mode of Darkness, CE1 energy creates innumerable primary singularities from seven principles: Pradhan, Mahat-tattva, Ego, Sky, Air, Fire, and Water. CE1 energy gets divided as CE2 and enters each singularity along with the three modes and time; the primary Big Bang takes place, and innumerable Invisible Universes get created. Each Universe has seven coverings of the 7 principles, and each layer is 10 times thicker than the previous layer.
By energy CE2, the space in the Invisible Universe under the coverings is divided into two halves. In the lower half, the process of evolution is initiated and the seeds of 24 elements get created, out of which the 5 fundamental elements, the building blocks of matter (Sky, Air, Fire, Water and Earth), create the seeds of stars, planets, galaxies and all other matter. Since the 5 fundamental elements are created out of the mode of darkness, this explains why there is so much dark energy and dark matter in our Universe. This process of creation in the lower half of the Invisible Universe continues for 2.16 billion years. Further, in the lower part of the energy field, exactly at the centre of the Invisible Universe, a Secondary Singularity is created, through which, by the force of the Mode of Action, the secondary Big Bang takes place and the Visible Universe gets created in the shape of a lotus flower, expanding into the upper part. Visible matter starts appearing after a gap of 360,000 years. Within the Visible Universe, a small part gets created, known as the Phenomenal Material World, which is our Solar System, with the sun at its centre. The diameter of the solar planetary system is 6.4 billion km.

Keywords: invisible universe, phenomenal material world, primary Big Bang, secondary Big Bang, singularities, visible universe

Procedia PDF Downloads 89
83 Company's Orientation and Human Resource Management Evolution in Technological Startup Companies

Authors: Yael Livneh, Shay Tzafrir, Ilan Meshoulam

Abstract:

Technological startup companies have been recognized as bearing tremendous potential for business and economic success. However, many entrepreneurs who produce promising innovative ideas fail to implement them as successful businesses. A key argument for such failure is the entrepreneurs' lack of competence in adapting the level of formality of human resource management (HRM). The purpose of the present research was to examine multiple antecedents and consequences of HRM formality in growing startup companies. A review of the research literature identified two central components of HRM formality: HR control and professionalism. The effect of three contextual predictors was examined. The first was an intra-organizational factor: the development level of the organization. We drew on the differentiation between knowledge exploration and knowledge exploitation. At a given time, the organization chooses to focus on a specific mix of these orientations, a choice which requires an appropriate level of HRM formality in order to efficiently overcome the challenges. It was hypothesized that the mix of knowledge exploration and knowledge exploitation orientations would predict HRM formality. The second predictor was the personal characteristics of the organization's leader. Following the idea of a blueprint effect of CEOs on HRM, it was hypothesized that the CEO's cognitive style would predict HRM formality. The third contextual predictor was an external organizational factor: the level of investor involvement. Using agency theory and building on Transaction Cost Economics, it was hypothesized that the level of investor involvement in general management and HRM would be positively related to HRM formality. The effect of formality on trust was examined both directly and indirectly through the mediating role of procedural justice. The research method included a time-lagged field study.
In the first study, data were obtained using three questionnaires, each directed to a different source: the CEO, the HR position-holder and employees. Forty-three companies participated in this study. The second study was conducted approximately a year later; data were recollected using three questionnaires administered to the same sample, with 41 companies participating. The samples consisted of technological startup companies, and the two studies together included 884 respondents. The results were consistent across the two studies. HRM formality was predicted by the intra-organizational factor as well as by the personal characteristics of the CEO, but not at all by the external organizational context. Specifically, the organizational orientation was the greatest contributor to both components of HRM formality. Cognitive style predicted formality to a lesser extent. Investor involvement was found not to have any predictive effect on HRM formality. The results indicated a positive contribution of formality to trust in HRM, mainly via the mediation of procedural justice. This study contributed a new conception of technological startup company development as a mixture of organizational orientations. The practical implication is that the level of HRM formality should be matched to the company's development level. This match should be reviewed and adjusted periodically by referring to the organizational orientation, the relevant HR practices, and the characteristics of the HR function. A relevant match could further enhance trust and business success.

Keywords: control, formality, human resource management, organizational development, professionalism, technological startup company

82 Multi-scale Geographic Object-Based Image Analysis (GEOBIA) Approach to Segmenting Very High Resolution Images for Extraction of New Degraded Zones: Application to the Region of Mécheria in the South-West of Algeria

Authors: Bensaid A., Mostephaoui T., Nedjai R.

Abstract:

A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of the increasingly irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mécheria department has generated a new situation characterized by the reduction of vegetation cover, the decrease of land productivity, and sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cords, based on digital processing of PlanetScope PSB.SB sensor images acquired on September 29, 2021. As a second step, we explore the use of a multi-scale geographic object-based image analysis (GEOBIA) approach to segment high spatial resolution images acquired over heterogeneous surfaces that vary according to human influence on the environment. We used the fractal net evolution approach (FNEA) algorithm to segment the images (Baatz & Schäpe, 2000). Multispectral data, a digital terrain model layer, ground truth data, a normalized difference vegetation index (NDVI) layer, and a first-order texture (entropy) layer were used to segment the multispectral images at three segmentation scales, with an emphasis on accurately delineating the boundaries and components of the sand accumulation areas (dunes, dune fields, nebkas, and barchans). Each auxiliary data layer contributed to improving the segmentation at different scales. The silted areas were then classified over the Naâma area using a nearest neighbor approach.
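Two of the auxiliary layers named above, the NDVI layer and the first-order (histogram) entropy texture layer, can be computed directly from a multispectral array. The snippet below is a minimal illustration with hypothetical reflectance values, not the study's PlanetScope data; the window size and the 8-bin histogram are assumptions.

```python
import numpy as np

# NDVI layer: NDVI = (NIR - Red) / (NIR + Red); hypothetical reflectances.
red = np.array([[0.10, 0.20], [0.30, 0.25]])
nir = np.array([[0.40, 0.22], [0.33, 0.50]])
ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))        # values: 0.6, 0.05, 0.05, 0.33 after rounding

# First-order (histogram) entropy of a pixel window, a simple texture measure
# that ignores spatial arrangement and uses only the value distribution.
def first_order_entropy(window, bins=8):
    hist, _ = np.histogram(window, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]            # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())

print(first_order_entropy(ndvi))   # 1.5 bits for this toy 2x2 window
```

In practice both layers would be computed per pixel over a sliding window of the full scene and stacked with the spectral bands before segmentation.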
The classification of silted areas was successfully achieved over all study areas with an accuracy greater than 85%, although the results suggest that, overall, a higher degree of landscape heterogeneity may have a negative effect on segmentation and classification. Some areas suffered from the greatest over-segmentation and the lowest mapping accuracy (Kappa: 0.79), which was partially attributed to the confounding of a greater proportion of mixed siltation classes drawn from both sandy areas and bare ground patches. This research demonstrates a GEOBIA technique based on very high resolution images for mapping silted and degraded areas, which can be applied to the study of other lands in the steppe areas of the northern countries of the African continent.
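The Kappa statistic cited above corrects the raw classification accuracy for chance agreement derived from the confusion-matrix marginals. A minimal sketch follows; the 2x2 counts (silted vs. bare ground) are hypothetical, chosen only so the result lands near the reported value of 0.79.

```python
import numpy as np

# Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed agreement
# and p_e is the agreement expected by chance from the marginals.
# Hypothetical counts: rows = reference classes, columns = mapped classes.
cm = np.array([[85.0,  8.0],
               [ 7.0, 50.0]])

n = cm.sum()
p_o = np.trace(cm) / n                        # observed agreement (accuracy)
p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 2))                        # 0.79 for these counts
```

Note how a 90% overall accuracy can shrink to a kappa of 0.79 once the imbalance between the two classes is accounted for, which is why kappa is the more conservative figure to report alongside accuracy.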

Keywords: land development, GIS, sand dunes, segmentation, remote sensing

81 The Effect of Ionic Liquid Anion Type on the Properties of TiO2 Particles

Authors: Marta Paszkiewicz, Justyna Łuczak, Martyna Marchelek, Adriana Zaleska-Medynska

Abstract:

In recent years, photocatalytic processes have been intensively investigated for the destruction of pollutants, hydrogen evolution, disinfection of water, air, and surfaces, and the construction of self-cleaning materials (tiles, glass, fibres, etc.). Titanium dioxide (TiO2) is the most popular material used in heterogeneous photocatalysis due to its excellent properties, such as high stability, chemical inertness, non-toxicity, and low cost. It is well known that the morphology and microstructure of TiO2 significantly influence its photocatalytic activity. These characteristics, as well as other physical and structural properties of photocatalysts, i.e., specific surface area or density of crystalline defects, can be controlled by the preparation route. In this regard, TiO2 particles can be obtained by sol-gel, hydrothermal, and sonochemical methods, chemical vapour deposition, and, alternatively, by ionothermal synthesis using ionic liquids (ILs). In TiO2 particle synthesis, ILs may play the role of a solvent, soft template, reagent, agent promoting reduction of the precursor, or particle stabilizer during the synthesis of inorganic materials. In this work, the effect of the IL anion type on the morphology and photoactivity of TiO2 is presented. The preparation of TiO2 microparticles with a spherical structure was successfully achieved by a solvothermal method using tetra-tert-butyl orthotitanate (TBOT) as the precursor. The reaction was assisted by the ionic liquids 1-butyl-3-methylimidazolium bromide [BMIM][Br], 1-butyl-3-methylimidazolium tetrafluoroborate [BMIM][BF4], and 1-butyl-3-methylimidazolium hexafluorophosphate [BMIM][PF6]. Various molar ratios of each IL to TBOT (IL:TBOT) were chosen. For comparison, a reference TiO2 was prepared using the same method without IL addition.
Scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), Brunauer-Emmett-Teller surface area analysis (BET), NCHS analysis, and FTIR spectroscopy were used to characterize the surface properties of the samples. The photocatalytic activity was investigated by means of the photodegradation of phenol as a model pollutant in the aqueous phase, as well as by the formation of hydroxyl radicals, based on detection of the fluorescent product of coumarin hydroxylation. The results showed that the TiO2 microspheres had a spherical structure with diameters ranging from 1 to 6 µm. TEM micrographs clearly showed that the particles were composed of inter-aggregated crystals. It could also be observed that the IL-assisted TiO2 microspheres are not hollow, which provides additional information about the possible formation mechanism. Application of the ILs resulted in an increase in both the photocatalytic activity and the BET surface area of TiO2 compared to pure TiO2. The results for the formation of 7-hydroxycoumarin indicated that the increased amount of ·OH produced at the surface of excited TiO2 for the TiO2_IL samples correlated well with the more efficient degradation of phenol. NCHS analysis showed that the ionic liquids remained on the TiO2 surface, confirming the structure-directing role of these compounds.
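Phenol photodegradation of the kind measured above is commonly modeled with pseudo-first-order kinetics, C(t) = C0 exp(-kt), so plotting ln(C0/C) against irradiation time yields the apparent rate constant k as the slope. The sketch below fits synthetic concentrations generated with an assumed k; none of the values come from this study, and the pseudo-first-order model itself is a standard assumption rather than one stated in the abstract.

```python
import numpy as np

# Pseudo-first-order fit: ln(C0 / C) = k * t, slope k in 1/min.
# Synthetic data with an assumed rate constant, not measured values.
k_true = 0.02                                   # 1/min, assumed
t = np.array([0.0, 15.0, 30.0, 45.0, 60.0])     # irradiation time, min
C0 = 20.0                                       # mg/L initial phenol
C = C0 * np.exp(-k_true * t)                    # simulated concentrations

# Least-squares line through ln(C0/C) vs t; the slope recovers k
k_fit = np.polyfit(t, np.log(C0 / C), 1)[0]
print(round(k_fit, 3))                          # 0.02
```

With real measurements the fitted apparent rate constants for the TiO2_IL samples versus the reference TiO2 would give a single-number comparison of the activity increase described above.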

Keywords: heterogeneous photocatalysis, IL-assisted synthesis, ionic liquids, TiO2
