Search results for: linear parameter varying systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 15410

9050 Securing Online Voting With Blockchain and Smart Contracts

Authors: Anant Mehrotra, Krish Phagwani

Abstract:

Democratic voting is vital for any country, but current methods like ballot papers or EVMs have drawbacks, including transparency issues, low voter turnout, and security concerns. Blockchain technology offers a potential solution by providing a secure, decentralized, and transparent platform for e-voting. With features like immutability, security, and anonymity, blockchain combined with smart contracts can enhance trust and prevent vote tampering. This paper explores an Ethereum-based e-voting application using Solidity, showcasing a web app that prevents duplicate voting through a token-based system, while also discussing the advantages and limitations of blockchain in digital voting. Voting is a crucial component of democratic decision-making, yet current methods, like paper ballots, remain outdated and inefficient. This paper reviews blockchain-based voting systems, highlighting strategies and guidelines to create a comprehensive electronic voting system that leverages cryptographic techniques, such as zero-knowledge proofs, to enhance privacy. It addresses limitations of existing e-voting solutions, including cost, identity management, and scalability, and provides key insights for organizations looking to design their own blockchain-based voting systems.
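As an illustration of the duplicate-vote prevention the abstract mentions, the token mechanism can be sketched in a few lines (in Python rather than Solidity; the class and method names are hypothetical, not the authors' implementation):

```python
# Minimal sketch of token-based duplicate-vote prevention, a Python analogue
# of the smart-contract idea described above. All names are illustrative.
class Ballot:
    def __init__(self, candidates):
        self.tally = {c: 0 for c in candidates}
        self.tokens = set()   # one-time voting tokens issued to voters
        self.spent = set()    # tokens already redeemed

    def issue_token(self, token):
        self.tokens.add(token)

    def vote(self, token, candidate):
        # Reject unknown or already-spent tokens: this check is what
        # prevents a voter from casting a second ballot.
        if token not in self.tokens or token in self.spent:
            return False
        self.spent.add(token)
        self.tally[candidate] += 1
        return True
```

On-chain, the token set would correspond to a mapping guarded by a modifier; the logic is the same.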

Keywords: electronic voting, smart contracts, blockchain based voting, security

Procedia PDF Downloads 19
9049 Simultaneous Determination of p-Phenylenediamine, N-Acetyl-p-phenylenediamine and N,N-Diacetyl-p-phenylenediamine in Human Urine by LC-MS/MS

Authors: Khaled M. Mohamed

Abstract:

Background: p-Phenylenediamine (PPD) is used in the manufacture of hair dyes and skin decoration. In some developing countries, suicidal, homicidal, and accidental poisoning cases involving PPD have been recorded. In this work, a sensitive LC-MS/MS method for the determination of PPD and its metabolites N-acetyl-p-phenylenediamine (MAPPD) and N,N-diacetyl-p-phenylenediamine (DAPPD) in human urine has been developed and validated. Methods: PPD, MAPPD and DAPPD were extracted from urine with methylene chloride at alkaline pH. Acetanilide was used as internal standard (IS). The analytes and IS were separated on an Eclipse XDB-C18 column (150 × 4.6 mm, 5 µm) using a mobile phase of acetonitrile-1% formic acid in gradient elution. Detection was performed by LC-MS/MS using electrospray positive ionization under multiple reaction monitoring mode. The transition ions m/z 109 → 92, m/z 151 → 92, m/z 193 → 92, and m/z 136 → 77 were selected for the quantification of PPD, MAPPD, DAPPD, and IS, respectively. Results: Calibration curves were linear in the range 10–2000 ng/mL for all analytes. The mean recoveries for PPD, MAPPD and DAPPD were 57.62, 74.19 and 50.99%, respectively. Intra-assay and inter-assay imprecisions were within 1.58–9.52% and 5.43–9.45%, respectively, for PPD, MAPPD and DAPPD. Inter-assay accuracies were within -7.43 and 7.36% for all compounds. PPD, MAPPD and DAPPD were stable in urine at -20 °C for 24 hours. Conclusions: The method was successfully applied to the analysis of PPD, MAPPD and DAPPD in urine samples collected from suicidal cases.
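The linear calibration underlying the reported 10–2000 ng/mL range amounts to a least-squares line relating the analyte/IS peak-area ratio to concentration, which is then inverted to quantify unknowns. A minimal sketch with illustrative (not the study's) data:

```python
# Sketch of the linear-calibration step: fit peak-area ratio (analyte/IS)
# against calibrator concentration, then back-calculate an unknown.
# The calibrator values below are made up for illustration.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx  # slope, intercept

conc = [10, 50, 100, 500, 1000, 2000]         # calibrators, ng/mL
ratio = [0.02, 0.10, 0.20, 1.00, 2.00, 4.00]  # analyte/IS area ratios
m, b = fit_line(conc, ratio)
unknown = (1.5 - b) / m  # concentration giving an observed ratio of 1.5
```

In practice a weighted fit and acceptance criteria on back-calculated calibrators would be applied, per standard bioanalytical validation guidance.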

Keywords: p-Phenylenediamine, metabolites, urine, LC-MS/MS, validation

Procedia PDF Downloads 358
9048 Drilling Quantification and Bioactivity of Machinable Hydroxyapatite-Yttrium Phosphate Bioceramic Composite

Authors: Rupita Ghosh, Ritwik Sarkar, Sumit K. Pal, Soumitra Paul

Abstract:

The use of Hydroxyapatite bioceramics as restorative implants is widely known. These materials can be manufactured by a pressing and sintering route to a particular shape. However, machining processes are still a basic requirement to give a near net shape to those implants for ensuring dimensional and geometrical accuracy. In this context, optimising the machining parameters is an important factor to understand the machinability of the materials and to reduce the production cost. In the present study, a method has been optimized to produce a true particulate drilled composite of Hydroxyapatite-Yttrium Phosphate. The phosphates are used in varying ratios for a comparative study of the effect on flexural strength, hardness, machining (drilling) parameters and bioactivity. The maximum flexural strength and hardness of the composite that could be attained are 46.07 MPa and 1.02 GPa, respectively. Drilling is done with a conventional radial drilling machine aided with a dynamometer, using high-speed steel (HSS) and solid carbide (SC) drills. The effect of variation in drilling parameters (cutting speed and feed), cutting tool, and batch composition on torque, thrust force and tool wear is studied. It is observed that the thrust force and torque vary greatly with the increase in the speed, feed and yttrium phosphate content in the composite. Significant differences in the thrust and torque are noticed due to the change of the drills as well. The bioactivity study is done in simulated body fluid (SBF) for up to 28 days. The growth of the bone-like apatite becomes denser with the increase in the number of days for all compositions of the composites, and it is comparable to that of pure hydroxyapatite.

Keywords: Bioactivity, Drilling, Hydroxyapatite, Yttrium Phosphate

Procedia PDF Downloads 303
9047 Surface Characterization of Zincblende and Wurtzite Semiconductors Using Nonlinear Optics

Authors: Hendradi Hardhienata, Tony Sumaryada, Sri Setyaningsih

Abstract:

Current progress in the field of nonlinear optics has enabled precise surface characterization of semiconductor materials. Nonlinear optical techniques are favorable due to their nondestructive measurement and ability to work in nonvacuum and ambient conditions. The advance of bond hyperpolarizability models opens a wide range of nanoscale surface investigations, including the possibility to detect molecular orientation at the surface of silicon and zincblende semiconductors, investigation of electric-field-induced second harmonic fields at the semiconductor interface, detection of surface impurities, and, very recently, the study of surface defects such as twin boundaries in wurtzite semiconductors. In this work, we show, using nonlinear optical techniques such as nonlinear bond models, how arbitrary polarization of the incoming electric field in Rotational Anisotropy Spectroscopy experiments can provide more information regarding the origin of the nonlinear sources in zincblende and wurtzite semiconductor structures. In addition, using hyperpolarizability considerations, we describe how the nonlinear susceptibility tensor describing SHG can be well modelled using only a few parameters because of the symmetry of the bonds. We also show how the third harmonic intensity feature shows considerable changes when the incoming field polarization angle is changed from s-polarized to p-polarized. We also propose a method to investigate surface reconstruction and defects in wurtzite and zincblende structures at the nanoscale level.
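The claim that bond symmetry leaves only a few independent parameters can be illustrated with a toy rotational-anisotropy pattern: for a fourfold-symmetric surface, the azimuthal SHG scan reduces to an isotropic term plus a cos(4φ) term. A sketch with illustrative coefficients, not fitted values:

```python
# Toy rotational-anisotropy SHG pattern of the kind analysed above.
# For a fourfold-symmetric zincblende surface only an isotropic and a
# cos(4*phi) coefficient survive the bond symmetry; both values here
# are illustrative placeholders, not measured hyperpolarizabilities.
import math

def ra_shg_intensity(phi, iso, aniso):
    field = iso + aniso * math.cos(4 * phi)  # effective second-harmonic field
    return field ** 2                        # detected intensity ~ |field|^2

# Fourfold symmetry: rotating the sample by 90 degrees leaves the
# azimuthal scan unchanged.
scan = [ra_shg_intensity(math.radians(a), 1.0, 0.4) for a in range(0, 360, 5)]
```

Fitting the two coefficients to a measured scan is what makes the few-parameter description testable.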

Keywords: surface characterization, bond model, rotational anisotropy spectroscopy, effective hyperpolarizability

Procedia PDF Downloads 160
9046 Optimizing the Design Parameters of Acoustic Power Transfer Model to Achieve High Power Intensity and Compact System

Authors: Ariba Siddiqui, Amber Khan

Abstract:

The need for bio-implantable devices in the field of medical sciences has been increasing day by day; however, the charging of these devices is a major issue. Batteries, a very common method of powering implants, have a limited lifetime and bulky nature. Therefore, as a replacement for batteries, acoustic power transfer (APT) technology is being accepted as the most suitable technique to wirelessly power medical implants in the present scenario. The basic model of APT consists of piezoelectric transducers that work on the principle of the converse piezoelectric effect at the transmitting end and the direct piezoelectric effect at the receiving end. This paper provides mechanistic insight into the parameters affecting the design and efficient working of acoustic power transfer systems. Optimum design considerations are presented that will help to compress the size of the device and augment the intensity of the pressure wave. A COMSOL model of the PZT (Lead Zirconate Titanate) transducer was developed. The model was simulated and analyzed over a frequency spectrum. The simulation results showed that the efficiency of these devices is strongly dependent on the frequency of operation, and a wrong choice of operating frequency leads to high absorption of the acoustic field inside the tissue (medium), poor power strength, and heavy transducers, which in effect influence the overall configuration of the acoustic system. Considering all the tradeoffs, the simulations were performed again at a determined optimum frequency (900 kHz), which reduced the transducer's thickness to 1.96 mm and augmented the power strength to an intensity of 432 W/m². Thus, the results obtained after the second simulation contribute to lesser attenuation, lightweight systems, and high power intensity, and also comply with safety limits provided by the U.S. Food and Drug Administration (FDA). It was also found that the chosen operating frequency enhances the directivity of the acoustic wave at the receiver side.
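The frequency trade-off the simulations expose follows from tissue attenuation growing roughly linearly with frequency. A back-of-envelope sketch, assuming a common textbook soft-tissue figure of about 0.5 dB/cm/MHz rather than the paper's COMSOL model:

```python
# Back-of-envelope attenuation sketch: a higher carrier frequency loses
# more acoustic power over the same tissue depth. The 0.5 dB/cm/MHz
# coefficient is a generic soft-tissue textbook value, and the depths and
# intensities are illustrative, not the paper's results.
def received_intensity(i0_w_m2, f_mhz, depth_cm, alpha_db=0.5):
    loss_db = alpha_db * f_mhz * depth_cm      # attenuation ~ linear in f
    return i0_w_m2 * 10 ** (-loss_db / 10)     # convert dB loss to a factor

deep_900k = received_intensity(432.0, 0.9, 5.0)  # 900 kHz carrier, 5 cm deep
deep_3m = received_intensity(432.0, 3.0, 5.0)    # a 3 MHz carrier loses far more
```

This is why the optimum sits well below typical imaging frequencies: thinner transducers favour high frequency, while absorption favours low.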

Keywords: acoustic power, bio-implantable, COMSOL, Lead Zirconate Titanate, piezoelectric, transducer

Procedia PDF Downloads 178
9045 Corrosion Characteristics and Electrochemical Treatment of Heritage Silver Alloys

Authors: Ahmad N. Abu-Baker

Abstract:

This study investigated the corrosion of a group of heritage silver-copper alloy coins and their conservation treatment by potentiostatic methods. The corrosion products of the coins were characterized by a combination of scanning electron microscopy/energy-dispersive X-ray spectroscopy (SEM/EDX) and X-ray diffraction (XRD) analyses. Cathodic polarization curves, measured by linear sweep voltammetry (LSV), also identified the corrosion products and the working conditions to treat the coins using a potentiostatic reduction method, which was monitored by chronoamperometry. The corrosion products showed that the decay mechanisms were dominated by selective attack on the copper-rich phases of the silver-copper alloys, consistent with an internal galvanic corrosion phenomenon that leads to the deposition of copper corrosion products on the surface of the coins. Silver chloride was also detected on the coins, which reflects selective corrosion of the silver-rich phases under different chemical environments. The potentiostatic treatment showed excellent effectiveness in determining treatment parameters and monitoring the reduction process of the corrosion products on the coins, which helped to preserve surface details in the cleaning process and to prevent over-treatment.

Keywords: silver alloys, corrosion, conservation, heritage

Procedia PDF Downloads 146
9044 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin’s Complex Thought

Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan

Abstract:

Can artificial intelligence (AI) be held accountable for its detrimental impacts? This question gains heightened relevance given AI's pervasive reach across various domains, magnifying its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility, and transparency. This encompasses discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, denoting the necessity to elucidate the effects it generates. Accountability comprises two integral aspects: adherence to legal and ethical standards and the imperative to elucidate the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this "accountability" in the face of the complexity of artificial intelligence systems and their effects. This article then proposes to mobilize Edgar Morin's complex thought to encompass and face the challenges of this complexity. The first contribution is to point out the challenges posed by the complexity of AI, with accountability fragmented among a myriad of human and non-human actors, such as software and equipment, which ultimately contribute to the decisions taken and are multiplied in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The challenge of the non-neutrality of algorithmic systems, as fully ethically non-neutral actors, is put forward by a revealing-ethics approach that calls for assigning responsibilities to these systems. The challenge of the dilution of responsibility is induced by the multiplicity of, and distance between, the actors: a split in decision-making between developers, who feel they fulfill their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. Finally, accountability is confronted with the challenge of the transparency of complex and scalable algorithmic systems, non-human actors self-learning via big data. A second contribution involves leveraging E. Morin's principles, providing a framework to grasp the multifaceted ethical dilemmas and subsequently paving the way for establishing accountability in AI. When addressing the ethical challenge of biases, the "hologrammatic" principle underscores the imperative of acknowledging the ethical non-neutrality of algorithmic systems, inherently imbued with the values and biases of their creators and society. The "dialogic" principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements in solutions from the very inception of the design phase. The principle of organizational recursiveness, akin to the "transparency" of the system, promotes a systemic analysis to account for induced effects and guides the incorporation of modifications into the system to rectify its deviations and drifts. In conclusion, this contribution serves as an inception for contemplating the accountability of "artificial intelligence" systems despite the evident ethical implications and potential deviations. Edgar Morin's principles, providing a lens to contemplate this complexity, offer valuable perspectives to address these challenges concerning accountability.

Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin

Procedia PDF Downloads 67
9043 A General Framework for Measuring the Internal Fraud Risk of an Enterprise Resource Planning System

Authors: Imran Dayan, Ashiqul Khan

Abstract:

Internal corporate fraud, which is fraud carried out by internal stakeholders of a company, affects the well-being of the organisation just like its external counterpart. Even if such an act is carried out for the short-term benefit of a corporation, the act is ultimately harmful to the entity in the long run. Internal fraud is often carried out by relying upon aberrations from usual business processes. Business processes are the lifeblood of a company in the modern managerial context. Such processes are developed and fine-tuned over time as a corporation grows through its life stages. Modern corporations have embraced technological innovations in their business processes, and Enterprise Resource Planning (ERP) systems being at the heart of such business processes is a testimony to that. Since ERP systems record a huge amount of data in their event logs, the logs are a treasure trove for anyone trying to detect fraudulent activities hidden within day-to-day business operations and processes. This research utilises the ERP systems in place within corporations to assess the likelihood of prospective internal fraud by developing a framework for measuring fraud risk through process mining techniques, and hence finds risky designs and loose ends within these business processes. This framework not only helps in identifying existing cases of fraud in the records of the event log but also signals the overall riskiness of certain business processes, and hence draws attention to redesigning such processes to reduce the chance of future internal fraud while improving internal control within the organisation. The research adds value by applying the concepts of process mining to the analysis of data from modern-day records of business processes, namely ERP event logs, and develops a framework that should be useful to internal stakeholders for strengthening internal control as well as providing external auditors with a tool to use in cases of suspicion. The research proves its usefulness through a few case studies conducted on large corporations with complex business processes and an ERP in place.
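A minimal conformance-checking sketch in the spirit of this framework: compare each event-log trace against the allowed transitions of the designed process and flag deviations as risky. The process steps and log entries below are hypothetical, not from the case studies:

```python
# Minimal conformance check over ERP event-log traces: a trace is flagged
# when any consecutive pair of events is not an allowed transition in the
# designed purchase-to-pay process. All step names are hypothetical.
ALLOWED = {("create_po", "approve_po"),
           ("approve_po", "receive_goods"),
           ("receive_goods", "pay_invoice")}

def risky(trace):
    # e.g. a payment recorded without a prior approval step deviates from
    # the designed process and is a candidate internal-fraud signal
    return any((a, b) not in ALLOWED for a, b in zip(trace, trace[1:]))

log = [["create_po", "approve_po", "receive_goods", "pay_invoice"],
       ["create_po", "receive_goods", "pay_invoice"]]  # approval skipped
flags = [risky(t) for t in log]
```

Real process-mining tooling (e.g. token-replay or alignment-based conformance checking) generalises this pairwise check to full process models.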

Keywords: enterprise resource planning, fraud risk framework, internal corporate fraud, process mining

Procedia PDF Downloads 340
9042 Effect of Fatty Acids in Feed on Levels of Antibody Titers and CD4 and CD8 T-Lymphocyte against Newcastle Disease Virus of Vaccinated Broiler Chicken

Authors: Alaa A. Shamaun Al-Abboodi, Yunis A. A. Bapeer

Abstract:

400 one-day-old male broiler chicks (Ross-308) were randomly divided into 2 main groups. The 1st main group (GA) was fed a basal diet with medium-chain fatty acid (MCFA) at a rate of 0.15% and divided into four subgroups: 3 subgroups vaccinated by different routes with Newcastle Disease Virus (NDV) and a non-vaccinated group. The 2nd main group (GB) was fed a basal diet without MCFA and divided in the same way as the 1st main group. The parameters used in this study included ND antibody titers at 1, 10, 21, 28, 35 and 42 days of age and values of CD4 and CD8 at 1, 20, 30 and 42 days of age. This experiment detected an increase in ND antibody titers in the groups fed the basal diet with MCFA (G1, G2, G3) compared to the groups fed without MCFA (G5, G6, G7) and the control groups (G4, G8). The results of the cellular immune response (CD4 and CD8 T-cells) in broiler chicks indicated an obviously significant relationship between the diet with fatty acid (FA) and the diet without FA on the level of the CD4 parameter for the entire experimental period. The effect of different ages was statistically significant in creating different values of the CD4 level, whereby the CD4 level decreases markedly with age. Analyzing the data of the different vaccination methods, the oculonasal method of vaccination led to the highest value of CD4 compared with the oral, S/C and control groups. There were statistical differences in CD8 values due to supplementation of FA versus the basal diet and due to the effect of different age periods. As for the age effect, the CD8 value at 20 days of age was significantly higher than at 42 and 30 days.

Keywords: broiler, CD4 and CD8, fatty acids, Newcastle Disease

Procedia PDF Downloads 148
9041 Use of Soil Microorganisms for the Production of Electricity through Microbial Fuel Cells

Authors: Abhipsa Mohanty, Harit Jha

Abstract:

The world's energy demands are continuing to rise, resulting in a worldwide energy crisis and environmental pollution. Because of its finite, declining supply and environmental damage, reliance on fossil fuels is unsustainable. As a result, experts are concentrating on alternative, renewable, and carbon-free energy sources. Energy sources that are both environmentally and economically sustainable are required. Microbial fuel cells (MFCs) have recently received a lot of attention due to their low operating temperatures and ability to use a variety of biodegradable substrates as fuel. There are single-chamber MFCs as well as traditional MFCs with anode and cathode compartments. Bioelectricity is produced when microorganisms actively catabolize substrate. MFCs can be used as a power source in small devices like biosensors. For MFCs to be cost-effective and to increase their electricity production, understanding of the components, microbiological processes, limiting variables, and construction designs of MFC systems must be consolidated, and large-scale systems must be developed. The purpose of this research was to review current knowledge of microbiology in the field of bioelectricity. The manufacturing process, the materials and procedures utilized to construct the technology, as well as the applications of MFC technology are all covered.

Keywords: bio-electricity, exoelectrogenic bacteria, microbial fuel cells, soil microorganisms

Procedia PDF Downloads 96
9040 Study on Two Way Reinforced Concrete Slab Using ANSYS with Different Boundary Conditions and Loading

Authors: A. Gherbi, L. Dahmani, A. Boudjemia

Abstract:

This paper presents the Finite Element Method (FEM) for analyzing the failure pattern of a rectangular slab with various edge conditions. Nonlinear static analysis is carried out using ANSYS 15 software. Using SOLID65 solid elements, the compressive crushing of concrete is modelled with a plasticity algorithm, while concrete cracking in the tension zone is accommodated by the nonlinear material model. Smeared reinforcement is used and introduced as a percentage of steel embedded in the concrete slab. The behavior of the analyzed concrete slab has been observed in terms of the crack pattern and displacement for various loading and boundary conditions. The finite element results are also compared with the experimental data. Another objective of the present study is to show how similar the crack paths found by the ANSYS program are to those observed in yield line analysis. The smeared reinforcement method is found to be more practical, especially for layered elements like concrete slabs. The value of this method is that it does not require explicit modeling of the rebar, and thus a much coarser mesh can be defined.

Keywords: ANSYS, cracking pattern, displacements, reinforced concrete slab, smeared reinforcements

Procedia PDF Downloads 207
9039 Numerical Simulation of Footing on Reinforced Loose Sand

Authors: M. L. Burnwal, P. Raychowdhury

Abstract:

Earthquakes lead to adverse effects on buildings resting on soft soils. Mitigating the response of shallow foundations on soft soil with different methods reduces settlement and provides foundation stability. A few methods, such as rocking foundations (used in performance-based design), deep foundations, prefabricated drains, grouting, and vibro-compaction, are used to control the pore pressure and enhance the strength of loose soils. One of the problems with these methods is that the settlement is uncontrollable, leading to differential settlement of the footings and, further, to the collapse of buildings. The present study investigates the utility of geosynthetics as a potential improvement of the subsoil to reduce the earthquake-induced settlement of structures. A steel moment-resisting frame building resting on loose, liquefiable, dry soil, subjected to the Uttarkashi 1991 and Chamba 1995 earthquakes, is used for the soil-structure interaction (SSI) analysis. The continuum model can simultaneously simulate the structure, soil, interfaces, and geogrids in the OpenSees framework. Soil is modeled with the PressureDependMultiYield (PDMY) material model with Quad elements that provide stress-strain at Gauss points, calibrated to predict the behavior of Ganga sand. The model, analyzed with tied degree-of-freedom contact, reveals that the system responses align with the shake table experimental results. An attempt is made to study the responses of the footing, structure, and geosynthetics with unreinforced and reinforced bases under varying parameters. The results show that geogrid reinforcement of the shallow foundation effectively reduces the settlement by 60%.

Keywords: settlement, shallow foundation, SSI, continuum FEM

Procedia PDF Downloads 198
9038 Policy Views of Sustainable Integrated Solution for Increased Synergy between Light Railways and Electrical Distribution Network

Authors: Mansoureh Zangiabadi, Shamil Velji, Rajendra Kelkar, Neal Wade, Volker Pickert

Abstract:

The EU has set itself a long-term goal of reducing greenhouse gas emissions by 80-95% compared to 1990 levels by 2050, as set out in the Energy Roadmap 2050. This paper reports on the European Union H2020-funded E-Lobster project, which demonstrates tools and technologies, software and hardware, for integrating the grid distribution and railway power systems with power electronics technologies (Smart Soft Open Point - sSOP) and local energy storage. In this context, this paper describes the existing policies and regulatory frameworks of the energy market at the European level, with a special focus at the national level on the countries where the members of the consortium are located and where the demonstration activities will be implemented. Taking into account the disciplinary approach of E-Lobster, the main policy areas investigated include electricity, the energy market, energy efficiency, transport, and smart cities. Energy storage will play a key role in enabling the EU to develop a low-carbon electricity system. In recent years, Energy Storage Systems (ESSs) have been gaining importance due to emerging applications, especially electrification of the transportation sector and grid integration of volatile renewables. The need for storage systems has led to performance improvements and a significant price decline in ESS technologies. This opens a new market where ESSs can be a reliable and economical solution. One such emerging market for ESS is R+G management, which will be investigated and demonstrated within the E-Lobster project. The surplus of energy in one type of power system (e.g., due to metro braking) might be directly transferred to the other power system (or vice versa). However, this would usually happen at unfavourable instances when the recipient does not need additional power. Thus, the role of the ESS is to enhance the advantages coming from the interconnection of railway power systems and distribution grids by offering an additional energy buffer. Consequently, the surplus/deficit of energy in, e.g., railway power systems need not be immediately transferred to/from the distribution grid; it can be stored and used when it is really needed. This will ensure better management of the energy exchange between the railway power systems and distribution grids and lead to more efficient loss reduction. In this framework, identifying the existing policies and regulatory frameworks is crucial for the project activities and for the future development of business models for the E-Lobster solutions. The projections carried out by the European Commission, the Member States, and stakeholders, and their analysis, indicated trends, challenges, opportunities, and structural changes needed to design policy measures that provide the appropriate framework for investors. This study will be used as a reference for the discussion in the envisaged workshops with stakeholders (DSOs and transport managers) in the E-Lobster project.
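The buffering idea, storing surplus braking energy until the counterpart system actually needs it, can be sketched as a simple state-of-charge simulation (arbitrary numbers, not E-Lobster data):

```python
# Toy state-of-charge simulation of the railway/grid buffering idea above.
# Positive flows are railway surplus (e.g. metro braking), negative flows
# are deficits; the ESS absorbs surplus and serves deficits later instead
# of exchanging energy at unfavourable instants. All figures are arbitrary.
def buffer_exchange(flows_kwh, capacity_kwh):
    soc = 0.0      # current state of charge of the ESS
    served = 0.0   # deficit energy covered from storage, not the grid
    spilled = 0.0  # surplus a full buffer could not absorb
    for f in flows_kwh:
        if f >= 0:
            stored = min(f, capacity_kwh - soc)
            soc += stored
            spilled += f - stored
        else:
            used = min(-f, soc)
            soc -= used
            served += used
    return soc, served, spilled

soc, served, spilled = buffer_exchange([5, 3, -4, -2, 6], capacity_kwh=6)
```

Sizing the buffer is then the trade-off the project studies: a larger capacity spills less surplus but costs more.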

Keywords: light railway, electrical distribution network, Electrical Energy Storage, policy

Procedia PDF Downloads 140
9037 Development of Vapor Absorption Refrigeration System for Mini-Bus Car’s Air Conditioning: A Two-Fluid Model

Authors: Yoftahe Nigussie

Abstract:

This research explores the implementation of a vapor absorption refrigeration system (VARS) in mini-bus cars to enhance air conditioning efficiency. The conventional vapor compression refrigeration system (VCRS) in vehicles relies on mechanical work from the engine, leading to increased fuel consumption. The proposed VARS aims to utilize waste heat and exhaust gas from the internal combustion engine to cool the mini-bus cabin, thereby reducing fuel consumption and atmospheric pollution. The project involves two models: Model 1, a two-fluid vapor absorption system (VAS), and Model 2, a three-fluid VAS. Model 1 uses ammonia (NH₃) as the refrigerant and water (H₂O) as the absorbent; water absorbs ammonia vapor rapidly, sustaining the cooling effect. The absorption cycle operates on the principle that absorbing ammonia in water decreases its vapor pressure. The ammonia-water solution undergoes cycles of desorption, condensation, expansion, and absorption, facilitated by a generator, condenser, expansion valve, and absorber. The objectives of this research include reducing atmospheric pollution, minimizing air conditioning maintenance costs, lowering capital costs, enhancing fuel economy, and eliminating the need for a compressor. The comparison between vapor absorption and compression systems reveals advantages such as smoother operation, fewer moving parts, and the ability to work at lower evaporator pressures without affecting the Coefficient of Performance (COP). The proposed VARS demonstrates potential benefits for mini-bus air conditioning systems, providing a sustainable and energy-efficient alternative. By utilizing waste heat and exhaust gas, this system contributes to environmental preservation while addressing economic considerations for vehicle owners. Further research and development in this area could lead to the widespread adoption of vapor absorption technology in automotive air conditioning systems.
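The efficiency argument can be made concrete with the standard definition of an absorption system's COP: evaporator cooling divided by generator heat input plus pump work, where the generator heat comes free from the exhaust. A sketch with illustrative figures, not the paper's design values:

```python
# Back-of-envelope COP of a waste-heat-driven absorption cycle: cooling
# delivered at the evaporator over the heat supplied at the generator
# plus a small solution-pump term. All kW figures are illustrative.
def absorption_cop(q_evaporator_kw, q_generator_kw, w_pump_kw=0.1):
    return q_evaporator_kw / (q_generator_kw + w_pump_kw)

# 3.5 kW of cabin cooling driven by 5 kW of otherwise-wasted exhaust heat:
# the COP is below a compression system's, but the input energy is free,
# so the net engine work (and fuel) spent on cooling drops.
cop = absorption_cop(3.5, 5.0)
```

This is why a sub-unity COP is acceptable here: the denominator is waste heat rather than shaft work.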

Keywords: room, zone, space, thermal resistance

Procedia PDF Downloads 77
9036 Artificial Intelligence Protecting Birds against Collisions with Wind Turbines

Authors: Aleksandra Szurlej-Kielanska, Lucyna Pilacka, Dariusz Górecki

Abstract:

The dynamic development of wind energy requires the simultaneous implementation of effective systems minimizing the risk of collisions between birds and wind turbines. Wind turbines are installed in more and more challenging locations, often close to the natural environment of birds. More and more countries and organizations are defining guidelines for the necessary functionality of such systems. The minimum bird detection distance, trajectory tracking, and shutdown time are key factors in eliminating collisions. Since 2020, we have continued the validation survey of successive versions of the BPS detection and reaction system. The bird protection system (BPS) is a fully automatic camera system that estimates the distance of a bird to the turbine, classifies its size, and autonomously undertakes various actions depending on the bird's distance and flight path. The BPS was installed and tested in a real environment at a wind turbine in northern Poland and in central Spain. The performed validation showed that at a distance of up to 300 m, the BPS performs at least as well as a skilled ornithologist, and large bird species are successfully detected from over 600 m. In addition, data collected by the BPS systems installed in Spain showed that 60% of the detections of all birds of prey were of individuals approaching the turbine, and these detections meet the turbine shutdown criteria. Less than 40% of the detections of birds of prey took place at wind speeds below 2 m/s, while the turbines were not working. As shown by the analysis of the data collected by the system over 12 months, the system correctly classified the size of birds with a wingspan of more than 1.1 m in 90% of cases and the size of birds with a wingspan of 0.7-1 m in 80% of cases. The collected data also allow the conclusion that some species keep a certain distance from the turbines at wind speeds over 8 m/s (Aquila sp., Buteo sp., Gyps sp.), but Gyps sp. and Milvus sp. remained active at this wind speed in the tested area. The data collected so far indicate that the BPS is effective in detecting and stopping wind turbines in response to the presence of birds of prey with a wingspan of more than 1 m.

Keywords: protecting birds, birds monitoring, wind farms, green energy, sustainable development

Procedia PDF Downloads 79
9035 The Effect of Tax Avoidance on Firm Value: Evidence from Amman Stock Exchange

Authors: Mohammad Abu Nassar, Mahmoud Al Khalilah, Hussein Abu Nassar

Abstract:

The purpose of this study is to examine whether corporate tax avoidance practices impact firm value in the Jordanian context. The study employs a quantitative approach using a sample of 124 industrial and services companies listed on the Amman Stock Exchange for the period from 2010 to 2019. Multiple linear regression analysis has been applied to test the study's hypothesis. The study employs the effective tax rate and the book-tax difference to measure tax avoidance, and Tobin's Q to measure firm value. The results of the study revealed that tax avoidance practices, when measured using effective tax rates, do not significantly impact firm value. When the book-tax difference is used to measure tax avoidance, the results showed a negative impact on firm value. The results do not support the traditional view of tax avoidance as a transfer of wealth from the government to shareholders for industrial and services companies listed on the Amman Stock Exchange, indicating that Jordanian firms should not use tax avoidance strategies to enhance their value.
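
The regression design described above can be sketched as follows. The firm-level data, coefficient values, and noise level below are synthetic stand-ins chosen only to reproduce the qualitative finding (a null effective-tax-rate effect and a negative book-tax-difference effect); the study's actual dataset is not reproduced here.

```python
import numpy as np

# Illustrative multiple linear regression of Tobin's Q on the two tax-avoidance
# proxies; the firm data, coefficients, and noise level are synthetic stand-ins.
rng = np.random.default_rng(4)
n = 124                               # matches the study's sample of 124 firms
etr = rng.uniform(0.05, 0.35, n)      # effective tax rate
btd = rng.normal(0.0, 0.05, n)        # book-tax difference
q = 1.2 + 0.0 * etr - 2.0 * btd + 0.1 * rng.normal(size=n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), etr, btd])
beta, *_ = np.linalg.lstsq(X, q, rcond=None)
# beta[1] (ETR) should be near zero; beta[2] (BTD) should be clearly negative,
# mirroring the finding that only the BTD proxy relates to firm value.
```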

Keywords: tax avoidance, effective tax rate, book-tax difference, firm value, Amman stock exchange

Procedia PDF Downloads 173
9034 Algorithmic Obligations: Proactive Liability for AI-Generated Content and Copyright Compliance

Authors: Aleksandra Czubek

Abstract:

As AI systems increasingly shape content creation, existing copyright frameworks face significant challenges in determining liability for AI-generated outputs. Current legal discussions largely focus on who bears responsibility for infringing works, be it developers, users, or entities benefiting from AI outputs. This paper introduces a novel concept of algorithmic obligations, proposing that AI developers be subject to proactive duties that ensure their models prevent copyright infringement before it occurs. Building on principles of obligations law traditionally applied to human actors, the paper suggests a shift from reactive enforcement to proactive legal requirements. AI developers would be legally mandated to incorporate copyright-aware mechanisms within their systems, turning optional safeguards into enforceable standards. These obligations could vary in implementation across international, EU, UK, and U.S. legal frameworks, creating a multi-jurisdictional approach to copyright compliance. This paper explores how the EU’s existing copyright framework, exemplified by the Copyright Directive (2019/790), could evolve to impose a duty of foresight on AI developers, compelling them to embed mechanisms that prevent infringing outputs. By drawing parallels to GDPR’s “data protection by design,” a similar principle could be applied to copyright law, where AI models are designed to minimize copyright risks. In the UK, post-Brexit text and data mining exemptions are seen as pro-innovation but pose risks to copyright protections. This paper proposes a balanced approach, introducing algorithmic obligations to complement these exemptions. AI systems benefiting from text and data mining provisions should integrate safeguards that flag potential copyright violations in real time, ensuring both innovation and protection. In the U.S., where copyright law focuses on human-centric works, this paper suggests an evolution toward algorithmic due diligence. 
AI developers would have a duty similar to product liability, ensuring that their systems do not produce infringing outputs, even if the outputs themselves cannot be copyrighted. This framework introduces a shift from post-infringement remedies to preventive legal structures, where developers actively mitigate risks. The paper also breaks new ground by addressing obligations surrounding the training data of large language models (LLMs). Currently, training data is often treated under exceptions such as the EU’s text and data mining provisions or U.S. fair use. However, this paper proposes a proactive framework where developers are obligated to verify and document the legal status of their training data, ensuring it is licensed or otherwise cleared for use. In conclusion, this paper advocates for an obligations-centered model that shifts AI-related copyright law from reactive litigation to proactive design. By holding AI developers to a heightened standard of care, this approach aims to prevent infringement at its source, addressing both the outputs of AI systems and the training processes that underlie them.

Keywords: ip, technology, copyright, data, infringement, comparative analysis

Procedia PDF Downloads 23
9033 Impedimetric Phage-Based Sensor for the Rapid Detection of Staphylococcus aureus from Nasal Swab

Authors: Z. Yousefniayejahr, S. Bolognini, A. Bonini, C. Campobasso, N. Poma, F. Vivaldi, M. Di Luca, A. Tavanti, F. Di Francesco

Abstract:

Pathogenic bacteria represent a threat to healthcare systems and the food industry because their rapid detection remains challenging. Electrochemical biosensors are gaining prominence as a novel technology for the detection of pathogens due to intrinsic features such as low cost, rapid response time, and portability, which make them a valuable alternative to traditional methodologies. These sensors use biorecognition elements that are crucial for the identification of specific bacteria. In this context, bacteriophages are promising tools for their inherent high selectivity towards bacterial hosts, which is of fundamental importance when detecting bacterial pathogens in complex biological samples. In this study, we present the development of a low-cost and portable sensor based on the Zeno phage for the rapid detection of Staphylococcus aureus. Screen-printed gold electrodes functionalized with the Zeno phage were used, and electrochemical impedance spectroscopy was applied to evaluate the change of the charge transfer resistance (Rct) as a result of the interaction with S. aureus MRSA ATCC 43300. The phage-based biosensor showed a linear range from 10¹ to 10⁴ CFU/mL with a 20-minute response time and a limit of detection (LOD) of 1.2 CFU/mL under physiological conditions. The biosensor’s ability to recognize various strains of staphylococci was also successfully demonstrated in the presence of clinical isolates collected from different geographic areas. Assays using S. epidermidis were also carried out to verify the species-specificity of the phage sensor. We only observed a remarkable change of the Rct in the presence of the target S. aureus bacteria, while no substantial binding to S. epidermidis occurred. This confirmed that the Zeno phage sensor only targets S. aureus species within the genus Staphylococcus.
In addition, the biosensor's specificity with respect to other bacterial species, including gram-positive bacteria like Enterococcus faecium and the gram-negative bacterium Pseudomonas aeruginosa, was evaluated, and a non-significant impedimetric signal was observed. Notably, the biosensor successfully identified S. aureus cells in a complex matrix such as a nasal swab, opening the possibility of its use in a real-case scenario. We diluted S. aureus at concentrations from 10⁸ to 10⁰ CFU/mL at a ratio of 1:10 in nasal swab matrices collected from healthy donors, and three different sensors were used to measure the various concentrations of bacteria. Our sensor showed high selectivity for S. aureus in biological matrices compared to time-consuming traditional methods, such as enzyme-linked immunosorbent assay (ELISA), polymerase chain reaction (PCR), and radioimmunoassay (RIA). With the aim of using this biosensor to address the challenges associated with pathogen detection, ongoing research is focused on assessing its analytical performance in different biological samples and on the discovery of new phage bioreceptors.

Keywords: electrochemical impedance spectroscopy, bacteriophage, biosensor, Staphylococcus aureus

Procedia PDF Downloads 69
9032 Study of Seismic Damage Reinforced Concrete Frames in Variable Height with Logistic Statistic Function Distribution

Authors: P. Zarfam, M. Mansouri Baghbaderani

Abstract:

In seismic design, the proper reaction to the earthquake and the correct and accurate prediction of its subsequent effects on the structure are critical. Choosing a proper probability distribution, one that gives a more realistic probability of the structure's damage rate, is essential in damage discussions. With the development of performance-based design, modal pushover analysis, as an inexpensive, efficacious, and quick method of estimating the seismic response of structures, is broadly used in engineering contexts. In this research, three concrete frames of 3, 6, and 13 stories are analyzed by nonlinear modal pushover under 30 different earthquake records in OpenSEES software; then the damage indexes of roof displacement and story relative displacement ratio are calculated against two parameters: peak ground acceleration and spectral acceleration. These indexes are used to establish damage relations with the log-normal distribution and the logistic distribution. Finally, the values of these relations are compared, and the effect of height on the damage relations is studied.
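
Fitting the two candidate damage relations can be sketched as follows. The PGA values below are invented for illustration and are not the study's incremental-analysis results; the fitting procedure (log-normal via moments of ln(PGA), logistic via maximum likelihood) is one standard way to build such fragility curves.

```python
import numpy as np
from scipy import stats

# Hypothetical PGA values (in g) at which a damage threshold was first exceeded
# in incremental analyses; these numbers are invented for illustration only.
pga = np.array([0.21, 0.28, 0.33, 0.35, 0.41, 0.46, 0.52, 0.58, 0.63, 0.74])

# Log-normal damage relation: fit the mean and std of ln(PGA).
mu, sigma = np.log(pga).mean(), np.log(pga).std(ddof=1)

def lognorm_frag(x):
    """P(damage state reached | PGA = x) under the log-normal model."""
    return stats.norm.cdf((np.log(x) - mu) / sigma)

# Logistic damage relation: maximum-likelihood location/scale fit.
loc, scale = stats.logistic.fit(pga)

def logistic_frag(x):
    """P(damage state reached | PGA = x) under the logistic model."""
    return stats.logistic.cdf(x, loc, scale)

# The two fitted curves can now be compared point by point, e.g. at 0.4 g.
p_ln, p_lg = lognorm_frag(0.4), logistic_frag(0.4)
```

Comparing the two fitted curves across the PGA range is exactly the kind of distribution comparison the abstract describes.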

Keywords: modal pushover analysis, concrete structure, seismic damage, log-normal distribution, logistic distribution

Procedia PDF Downloads 248
9031 Assessment Using Copulas of Simultaneous Damage to Multiple Buildings Due to Tsunamis

Authors: Yo Fukutani, Shuji Moriguchi, Takuma Kotani, Terada Kenjiro

Abstract:

If risk management of company-owned assets, risk assessment of real estate portfolios, and risk identification for entire regions are to be implemented, it is necessary to consider simultaneous damage to multiple buildings. This research focuses on the Sagami Trough earthquake tsunami, which could have a significant effect on the Japanese capital region, and proposes a method for simultaneous damage assessment using copulas that can take into consideration the correlation of tsunami depths and building damage between two sites. First, the tsunami inundation depths at two sites were simulated by using a nonlinear long-wave equation. The tsunamis were simulated by varying the slip amount (five cases) and the depth (five cases) for each of 10 sources of the Sagami Trough. For each source, the frequency distribution of the tsunami inundation depth was evaluated by using the response surface method. Then, Monte Carlo simulation was conducted, and frequency distributions of tsunami inundation depth were evaluated at the target sites for all sources of the Sagami Trough; these are the marginal distributions. Kendall’s tau for the tsunami inundation simulation at the two sites was 0.83. Based on this value, the Gaussian copula, t-copula, Clayton copula, and Gumbel copula (n = 10,000) were generated. Then, the simultaneous distributions of the damage rate were evaluated using the marginal distributions and the copulas. When the correlation of the tsunami inundation depth at the two sites was considered, the expected value hardly changed compared with the uncorrelated case, but the ninety-ninth percentile of the damage rate was approximately 2%, and the maximum value was approximately 6% when using the Gumbel copula.
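
The copula construction described above, generating joint samples that honour a target Kendall's tau and prescribed margins, can be sketched for the Gaussian copula case. The log-normal marginal parameters below are illustrative placeholders; the paper derives its margins from tsunami simulations instead.

```python
import numpy as np
from scipy import stats

tau = 0.83                            # Kendall's tau reported for the two sites
rho = np.sin(np.pi * tau / 2)         # Gaussian-copula parameter implied by tau

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=10_000)
u = stats.norm.cdf(z)                 # copula samples on the unit square

# Map each uniform margin to an illustrative log-normal inundation-depth
# marginal; the paper instead derives its margins from tsunami simulations.
depth_a = stats.lognorm.ppf(u[:, 0], s=0.5, scale=2.0)
depth_b = stats.lognorm.ppf(u[:, 1], s=0.6, scale=1.5)

# Rank correlation is preserved by the monotone marginal transforms,
# so the sampled depths recover a tau close to the target 0.83.
tau_hat, _ = stats.kendalltau(depth_a, depth_b)
```

Joint damage-rate statistics (expected value, 99th percentile, maximum) then follow by pushing each sampled depth pair through a damage function.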

Keywords: copulas, Monte-Carlo simulation, probabilistic risk assessment, tsunamis

Procedia PDF Downloads 146
9030 Disaster Management Supported by Unmanned Aerial Systems

Authors: Agoston Restas

Abstract:

Introduction: This paper describes recent initiatives and practical examples of using Unmanned Aerial Systems (UAS) to support disaster management. Since operating manned aircraft at disasters is usually expensive and often impossible, managers frequently have to forgo aerial activity. UAS can be an alternative and cost-effective solution for supporting disaster management. Methods: This article uses a thematic division of UAS applications based on two key elements: the time flow of managing disasters and the tactical requirements. Logically, UAS can be used in pre-disaster activity, activity immediately after the occurrence of a disaster, and activity after the primary disaster response. The paper addresses different disasters, such as dangerous material releases, floods, earthquakes, forest fires, and human-induced disasters. The research used function analysis, practical experiments, mathematical formulas, economic analysis, and expert estimation. The author gathered international examples and drew on his own experience in this field. Results and discussion: An earthquake is a rapidly escalating disaster in which, many times, there is no other way to assess damage rapidly than aerial reconnaissance. For special rescue teams, UAS can greatly help in rapidly locating places where enough space remained for victims to survive. Floods are typical slow-onset disasters; in contrast, managing floods is a very complex and difficult task that requires continuous monitoring of dykes and of flooded and threatened areas. UAS can help managers considerably in keeping an area under observation. Forest fires are disasters where the tactical application of UAS is already well developed: it can be used for fire detection, intervention monitoring, and post-fire monitoring.
In the case of a nuclear accident or hazardous material leakage, UAS is also a very effective, and sometimes the only, tool for supporting disaster management. The paper shows some efforts to use UAS to avoid human-induced disasters in low-income countries as part of health cooperation.

Keywords: disaster management, floods, forest fires, Unmanned Aerial Systems

Procedia PDF Downloads 242
9029 The Development of Space-Time and Space-Number Associations: The Role of Non-Symbolic vs. Symbolic Representations

Authors: Letizia Maria Drammis, Maria Antonella Brandimonte

Abstract:

The idea that people use spatial representations to think about time and number has received support from several lines of research. However, how these representations develop in children and then shape space-time and space-number mappings is still a debated issue. In the present study, 40 children (20 pre-schoolers and 20 elementary-school children) performed 4 main tasks, which required the use of more concrete (non-symbolic) or more abstract (symbolic) space-time and space-number associations. In the non-symbolic conditions, children were required to order pictures of everyday-life events occurring in a specific temporal order (Temporal sequences) and of quantities varying in numerosity (Numerical sequences). In the symbolic conditions, they were asked to perform the typical time-to-position and number-to-position tasks by mapping time-related words and numbers onto lines. Results showed that children performed reliably better in the non-symbolic Time conditions than in the symbolic Time conditions, independently of age, whereas only pre-schoolers performed worse in the Number-to-position task (symbolic) as compared to the Numerical sequence (non-symbolic) task. In addition, only older children mapped time-related words onto space following the typical left-right orientation, pre-schoolers’ performance being somewhat mixed. In contrast, mapping numbers onto space showed a clear left-right orientation, independently of age. Overall, these results indicate a cross-domain difference in the way younger and older children process time and number, with time-related tasks being more difficult than number-related tasks only when space-time tasks require symbolic representations.

Keywords: space-time associations, space-number associations, orientation, children

Procedia PDF Downloads 341
9028 Variability of the Snowline Altitude at Different Region in the Eastern Tibetan Plateau in Recent 20 Years

Authors: Zhen Li, Chang Liu, Ping Zhang

Abstract:

Glaciers are thought of as natural water reservoirs and are of vital importance to hydrological models and industrial production, and glacial changes act as significant indicators of climate change. The glacier snowline can be used as an indicator of the equilibrium line, which may be a key parameter for studying the effect of climate change on glaciers. Using Google Earth Engine, we select optical satellite imagery and implement the Otsu thresholding method on a near-infrared band to detect snowline altitudes (SLAs) of 26 glaciers in three regions of the eastern Tibetan Plateau. The three study regions have different climate regimes: Sepu Kangri (SK, maritime glaciers), Bu’Gyai Kangri (BK, continental glaciers) and west of Qiajajima (WQ, continental glaciers), along a latitudinal transect from south to north. We analyzed the effects of climatic factors on the SLA changes from 1995 to 2016. SLAs fluctuated upward, rising by 100 m, 60 m, and 34 m from south to north during the 22 years. We also observed that the climatic factor affecting SLA variability gradually changes from precipitation to temperature from south to north: the northern continental glaciers are mainly affected by temperature, and the southern maritime glaciers by precipitation. Owing to the influence of these primary climatic factors, continental glaciers are found to have higher SLAs on the south slope, while maritime glaciers have higher SLAs on the north slope.
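
The Otsu thresholding step on a near-infrared band can be sketched as below. The reflectance values are synthetic (two-mode data standing in for snow-free and snow-covered pixels); the study applies the same idea to real satellite imagery in Google Earth Engine.

```python
import numpy as np

def otsu_threshold(band, nbins=256):
    """Otsu's method: choose the threshold maximising between-class variance."""
    hist, edges = np.histogram(band, bins=nbins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                              # class-0 (dark) probability
    w1 = 1.0 - w0                                  # class-1 (bright) probability
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 == 0, 1, w0)      # class-0 mean
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 == 0, 1, w1)  # class-1 mean
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

# Synthetic NIR reflectance: dark snow-free terrain vs bright snow cover.
rng = np.random.default_rng(1)
nir = np.concatenate([rng.normal(0.15, 0.05, 5000),   # snow-free pixels
                      rng.normal(0.75, 0.08, 5000)])  # snow-covered pixels
t = otsu_threshold(nir)
snow_mask = nir > t   # the lowest-altitude snow pixels then trace the snowline
```

On a georeferenced image, intersecting the snow mask with a digital elevation model gives the snowline altitude.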

Keywords: climate change, glacier, snowline altitude, tibetan plateau

Procedia PDF Downloads 155
9027 Early Warning for Financial Stress Events: A Credit-Regime Switching Approach

Authors: Fuchun Li, Hong Xiao

Abstract:

We propose a new early warning model for predicting financial stress events at a given future time. In this model, we examine whether credit conditions play an important role as a nonlinear propagator of shocks when predicting the likelihood of occurrence of financial stress events. This propagation takes the form of a threshold regression in which a regime change occurs if credit conditions cross a critical threshold. Given the new early warning model, we evaluate its performance against currently available alternatives, such as a model based on the signal extraction approach and a linear regression model. In-sample forecasting results indicate that all three types of models are useful tools for predicting financial stress events, while none of them outperforms the others across all criteria considered. The out-of-sample forecasting results suggest that the credit-regime switching model performs better than the other two across all criteria and all forecasting horizons considered.
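
The threshold-regression mechanism, a regime change in the shock response when credit conditions cross a critical value, can be sketched with synthetic data. The variables, slopes, and the least-squares grid-search estimator below are illustrative assumptions, not the paper's actual specification or data.

```python
import numpy as np

# Synthetic data: the effect of a shock x on the stress indicator y switches
# regime when the credit-conditions index c crosses a threshold (here 0.5).
rng = np.random.default_rng(2)
n = 500
c = rng.normal(size=n)                 # credit-conditions index (assumed)
x = rng.normal(size=n)                 # shock / predictor
y = np.where(c <= 0.5, 0.2 * x, 1.5 * x) + 0.1 * rng.normal(size=n)

def ssr_at(gamma):
    """Fit separate slopes in each credit regime; return the total SSR."""
    ssr = 0.0
    for mask in (c <= gamma, c > gamma):
        beta = (x[mask] @ y[mask]) / (x[mask] @ x[mask])
        ssr += ((y[mask] - beta * x[mask]) ** 2).sum()
    return ssr

# Estimate the threshold by grid search over interior quantiles of c,
# the standard least-squares approach for threshold regression.
grid = np.quantile(c, np.linspace(0.15, 0.85, 71))
gamma_hat = grid[np.argmin([ssr_at(g) for g in grid])]
```

Restricting the grid to interior quantiles keeps a minimum share of observations in each regime, so both slopes stay identifiable.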

Keywords: cut-off probability, early warning model, financial crisis, financial stress, regime-switching model, forecasting horizons

Procedia PDF Downloads 442
9026 Effects of Monofin Training on Left Ventricular Performance in Elite Egyptian Children Athletes

Authors: Magdy Abouzeid

Abstract:

Objectives: The aim of this study was to examine the influence of Monofin training (36 weeks, 6 times per week, 90 min per unit) on left ventricular performance in elite Egyptian Monofin athletes. Background: An elite athlete is one who has superior athletic talent. Monofin swimming provides the most efficient way of swimming for human beings; it is an aquatic sport practiced on the surface or under water. Methods: To study these effects, 14 elite Monofin children (3 girls and 11 boys) aged 11.95 ± 1.09 yr, height 153.07 ± 4.2 cm, weight 52.4 ± 3.7 kg, body surface area (BSA) 1.48 ± 5.6 m², took part in long-term Monofin training (LTMT). All subjects underwent two-dimensional and M-mode echocardiography at rest before and after LTMT. Results: There were significant differences (P < 0.01) and percentage improvements for all echocardiographic parameters after LTMT. Interventricular septal thickness in diastole and in systole increased by 27.9% and 42.75%. Left ventricular end-systolic and end-diastolic dimensions increased by 16.81% and 42.7%, respectively. Posterior wall thickness in systole increased markedly, by 283.3%, and in diastole by 51.78%. Left ventricular mass in diastole and in systole increased by 44.8% and 40.1%, respectively. Stroke volume (SV) and resting heart rate (HR) changed significantly, SV by 25% and HR by 14.7%. Conclusion: Monofin training is an effective sport for developing the ‘athlete's heart’ in children, because the unique swim fin creates propulsion and overcomes resistance. Further research is needed to determine the effects of Monofin training on the right ventricle in child athletes.

Keywords: prepubertal, monofin training, heart athlete's, elite child athlete, echocardiography

Procedia PDF Downloads 318
9025 Effects of Free-Hanging Horizontal Sound Absorbers on the Cooling Performance of Thermally Activated Building Systems

Authors: L. Marcos Domínguez, Nils Rage, Ongun B. Kazanci, Bjarne W. Olesen

Abstract:

Thermally Activated Building Systems (TABS) have proven to be an energy-efficient solution to provide buildings with an optimal indoor thermal environment. This solution uses the structure of the building to store heat, reduce the peak loads, and decrease the primary energy demand. TABS require the heated or cooled surfaces to be as exposed as possible to the indoor space, but exposing the bare concrete surfaces has a diminishing effect on the acoustic qualities of the spaces in a building. Acoustic solutions capable of providing optimal acoustic comfort while allowing heat exchange between the TABS and the room are therefore desirable. In this study, the effects of free-hanging units on the cooling performance of TABS and on the occupants’ thermal comfort were measured in a full-scale TABS laboratory. The investigations demonstrate that the use of free-hanging sound absorbers is compatible with the performance of TABS and the occupants’ thermal comfort, but an appropriate acoustic design is needed to find the most suitable solution for each case. The results show a reduction of 11% in the cooling performance of the TABS when 43% of the ceiling area is covered with free-hanging horizontal sound absorbers, of 23% for a 60% ceiling coverage ratio, and of 36% for 80% coverage. Measurements in actual buildings showed an increase in room operative temperature of 0.3 K when 50% of the ceiling surface is covered with horizontal panels, and of 0.8 to 1 K for a 70% coverage ratio. According to numerical simulations using a new TRNSYS Type, the use of comfort ventilation has a considerable influence on the thermal conditions in the room; if the ventilation is removed, the operative temperature increases by 1.8 K for a 60%-covered ceiling.

Keywords: acoustic comfort, concrete core activation, full-scale measurements, thermally activated building systems, TRNSys

Procedia PDF Downloads 331
9024 Segregation Patterns of Trees and Grass Based on a Modified Age-Structured Continuous-Space Forest Model

Authors: Jian Yang, Atsushi Yagi

Abstract:

The tree-grass coexistence system is of great importance for forest ecology, and mathematical models are being proposed to study the dynamics of tree-grass coexistence and the stability of such systems. However, few of the models concentrate on the spatial dynamics of tree-grass coexistence. In this study, we modified an age-structured continuous-space population model for forests, obtaining an age-structured continuous-space model of tree-grass competition. In the model, for the thermal competitions, adult trees can out-compete grass, and grass can out-compete seedlings. We studied the model mathematically to verify that tree-grass coexistence solutions exist. Numerical experiments demonstrated that the fraction of area that trees or grass occupy can affect whether the coexistence is stable or not. We also varied the mortality of adult trees while the other parameters and the fractions of area that trees and grass occupy were fixed; the results show that the mortality of adult trees is also a factor affecting the stability of tree-grass coexistence in this model.

Keywords: population-structured models, stabilities of ecosystems, thermal competitions, tree-grass coexistence systems

Procedia PDF Downloads 166
9023 Ecosystem Services Assessment for Urban Nature-Based Solutions Implemented in the Public Space: Case Study of Alhambra Square in Bogotá, Colombia

Authors: Diego Sánchez, Sandra M. Aguilar, José F. Gómez, Gustavo Montaño, Laura P. Otero, Carlos V. Rey, José A. Martínez, Juliana Robles, Jorge E. Burgos, Juan S. López

Abstract:

Bogotá is making efforts towards urban resilience through the incorporation of Nature-based Solutions (NbS) in public projects as a climate change resilience strategy. The urban renovation project on the Alhambra square includes Green Infrastructure (GI), such as Sustainable Urban Drainage Systems (SUDS) and Urban Trees (UT), as ecosystem services (ES) boosters. This study analyzes three scenarios: (1) the initial situation without NbS, (2) the expected situation with NbS included in the design, and (3) the projection of the second scenario after 30 years, calculating the ecosystem services, the stormwater management benefits provided by SUDS, and the cultural services. The results contribute to the understanding of the benefits of urban NbS in public spaces, providing valuable information to foster investment in sustainable projects and encouraging policy makers to integrate NbS into urban planning.

Keywords: ecosystem services, nature-based solutions, stormwater management, sustainable urban drainage systems

Procedia PDF Downloads 164
9022 Electromyography Pattern Classification with Laplacian Eigenmaps in Human Running

Authors: Elnaz Lashgari, Emel Demircan

Abstract:

Electromyography (EMG) is one of the most important interfaces between humans and robots for rehabilitation. Decoding this signal helps to recognize muscle activation and convert it into smooth motion for the robots. Detecting each muscle’s pattern during walking and running is vital for improving the quality of a patient’s life. In this study, EMG data from 10 muscles in 10 subjects at 4 different speeds were analyzed. EMG signals are nonlinear and high-dimensional. To deal with this challenge, we extracted features in the time-frequency domain and used manifold learning with the Laplacian Eigenmaps algorithm to find the intrinsic features that represent the data in a low-dimensional space. We then used a Bayesian classifier to identify various patterns of EMG signals for different muscles across a range of running speeds. The best result, obtained for the vastus medialis muscle, was 97.87 ± 0.69 for sensitivity and 88.37 ± 0.79 for specificity, with 97.07 ± 0.29 accuracy, using the Bayesian classifier. The results of this study provide important insight into human movement and its application for robotics research.
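
The embed-then-classify pipeline can be sketched with a minimal Laplacian Eigenmaps implementation. The "EMG features" here are synthetic two-class data, and the Bayesian step is reduced to a nearest-class-mean rule (a special case of a Gaussian Bayes classifier with equal spherical covariances); the study's real features and classifier are not reproduced.

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=2, k=10, eps=1e-6):
    """Minimal Laplacian Eigenmaps: kNN graph -> graph Laplacian -> embedding."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d2[i])[1:k + 1]] = 1.0   # connect k nearest neighbours
    W = np.maximum(W, W.T) + eps                 # symmetrise; eps keeps graph connected
    L = np.diag(W.sum(1)) - W                    # unnormalised graph Laplacian
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1:n_components + 1]           # drop the constant eigenvector

# Toy stand-in for time-frequency EMG features from two movement classes.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.3, (60, 10)),    # class 0, e.g. slow running
               rng.normal(1.5, 0.3, (60, 10))])   # class 1, e.g. fast running
y = np.array([0] * 60 + [1] * 60)
emb = laplacian_eigenmaps(X)

# Nearest-class-mean classification in the embedded space.
mu0, mu1 = emb[y == 0].mean(0), emb[y == 1].mean(0)
pred = (((emb - mu1) ** 2).sum(1) < ((emb - mu0) ** 2).sum(1)).astype(int)
acc = (pred == y).mean()
```

Because the embedding's leading non-trivial eigenvector separates the two neighbourhood-graph clusters, even this simple classifier recovers the class structure.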

Keywords: electromyography, manifold learning, ISOMAP, Laplacian Eigenmaps, locally linear embedding

Procedia PDF Downloads 367
9021 A New Lateral Load Pattern for Pushover Analysis of RC Frame Structures

Authors: Mohammad Reza Ameri, Ali Massumi, Mohammad Haghbin

Abstract:

Non-linear static analysis, commonly referred to as pushover analysis, is a powerful tool for assessing the seismic response of structures. A suitable lateral load pattern for pushover analysis can bring the results of this simple, quick, and low-cost analysis close to the realistic results of nonlinear dynamic analyses. In this research, four samples of 10- and 15-story (two- and four-bay) reinforced concrete frames were studied. The lateral load distribution patterns recommended in the FEMA 273/356 guidelines were applied to the sample models in order to perform pushover analyses. The results were then compared to the results obtained from several nonlinear incremental dynamic analyses for a range of earthquakes. Finally, a lateral load distribution pattern was proposed for pushover analysis of medium-rise reinforced concrete buildings based on the results of the nonlinear static and dynamic analyses.
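
One of the FEMA 356 patterns referred to above is the vertical distribution C_vx = w_x h_x^k / Σ w_i h_i^k, with k = 1.0 for periods T ≤ 0.5 s, k = 2.0 for T ≥ 2.5 s, and linear interpolation in between. A sketch follows; the storey weights, heights, and period are illustrative values, not the frames studied in the paper.

```python
import numpy as np

def fema_load_pattern(weights, heights, T):
    """Storey distribution factors C_vx = w_x*h_x**k / sum(w_i*h_i**k)."""
    k = float(np.clip(1.0 + 0.5 * (T - 0.5), 1.0, 2.0))  # FEMA 356 interpolation
    whk = np.asarray(weights, float) * np.asarray(heights, float) ** k
    return whk / whk.sum()

# Illustrative 10-storey frame: equal storey weights, 3 m storeys, T = 1.0 s.
w = np.ones(10)
h = 3.0 * np.arange(1, 11)
cvx = fema_load_pattern(w, h, T=1.0)   # factors sum to 1, increase with height
```

For T ≤ 0.5 s (k = 1) the pattern reduces to the familiar inverted-triangle distribution.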

Keywords: lateral load pattern, nonlinear static analysis, incremental dynamic analysis, medium-rise reinforced concrete frames, performance based design

Procedia PDF Downloads 483