Search results for: artificial reasoning
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2401

841 Analysis of Artificial Hip Joint Using Finite Element Method

Authors: Syed Zameer, Mohamed Haneef

Abstract:

The hip joint plays a very important role in human beings, as it carries the whole-body forces generated by various activities. These loads are repetitive and fluctuating, depending on activities such as standing, sitting, jogging, stair climbing, etc., and may lead to failure of the hip joint. Hip joint modification and replacement are common in elderly as well as younger persons. In this study, static and fatigue analyses of a hip joint model were carried out using the finite element software ANSYS. The stress distribution obtained from the static analysis, the material properties, and the S-N curve data of fabricated ultra-high-molecular-weight polyethylene / 50 wt% short E-glass fibre + 40 wt% TiO2 polymer matrix composite specimens were used to estimate the fatigue life of the hip joint using a stiffness degradation model for polymer matrix composites. The stress distribution obtained from the static analysis was found to be within the acceptable range. The cumulative damage calculated from the Palmgren-Miner linear damage rule is less than one, which indicates that the component is safe under the design loads.
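
The fatigue-life criterion described above can be illustrated with the Palmgren-Miner damage sum; the cycle counts and S-N lives below are invented for illustration only, not the paper's data. A damage sum D below one indicates a safe design:

```python
# Palmgren-Miner linear damage rule: D = sum(n_i / N_i) over load blocks.
# D < 1.0 indicates the component survives the assumed load spectrum.
load_blocks = [
    # (applied cycles n_i, cycles-to-failure N_i from the S-N curve) -- illustrative
    (1e5, 1e7),   # standing/sitting, low stress amplitude
    (5e4, 5e6),   # walking
    (1e4, 2e6),   # stair climbing
    (2e3, 5e5),   # jogging, highest stress amplitude
]
damage = sum(n / N for n, N in load_blocks)
safe = damage < 1.0   # True here: D = 0.029, well below unity
```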

Keywords: hip joint, polymer matrix composite, static analysis, fatigue analysis, stress life approach

Procedia PDF Downloads 356
840 Studies on the Teaching Pedagogy and Effectiveness for the Multi-Channel Storytelling for Social Media, Cinema, Game, and Streaming Platform: Case Studies of Squid Game

Authors: Chan Ka Lok Sobel

Abstract:

The rapid evolution of digital media platforms has given rise to new forms of narrative engagement, particularly through multi-channel storytelling. This research focuses on exploring the teaching pedagogy and effectiveness of multi-channel storytelling for social media, cinema, games, and streaming platforms. The study employs case studies of the popular series "Squid Game" to investigate the diverse pedagogical approaches and strategies used in teaching multi-channel storytelling. Through qualitative research methods, including interviews, surveys, and content analysis, the research assesses the effectiveness of these approaches in terms of student engagement, knowledge acquisition, critical thinking skills, and the development of digital literacy. The findings contribute to understanding best practices for incorporating multi-channel storytelling into educational contexts and enhancing learning outcomes in the digital media landscape.

Keywords: digital literacy, game-based learning, artificial intelligence, animation production, educational technology

Procedia PDF Downloads 114
839 Glucose Monitoring System Using Machine Learning Algorithms

Authors: Sangeeta Palekar, Neeraj Rangwani, Akash Poddar, Jayu Kalambe

Abstract:

Biomedical analysis is an indispensable procedure for identifying health-related diseases like diabetes. Regularly monitoring the glucose level in our body helps identify hyperglycemia and hypoglycemia, which can cause severe medical problems like nerve damage or kidney disease. This paper presents a method for predicting the glucose concentration in blood samples using image processing and machine learning algorithms. The glucose solution is prepared by the glucose oxidase (GOD) and peroxidase (POD) method. An experimental database is generated based on the colorimetric technique. The image of the glucose solution is captured by a Raspberry Pi camera and analyzed using image processing by extracting the RGB, HSV, and LUX color space values. Regression algorithms such as multiple linear regression, decision tree, random forest, and XGBoost were used to predict the unknown glucose concentration. The multiple linear regression algorithm predicts the results with 97% accuracy. The image processing and machine learning-based approach reduces the hardware complexity of existing platforms.
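
A minimal sketch of the multiple-linear-regression step described above, using synthetic color-feature data rather than the authors' experimental database:

```python
import numpy as np

# Illustrative data: mean R, G, B intensities of the reaction solution and
# the corresponding known glucose concentrations (mg/dL). Values are invented.
X = np.array([
    [120.0,  80.0, 60.0],
    [110.0,  85.0, 65.0],
    [100.0,  90.0, 70.0],
    [ 90.0,  95.0, 75.0],
    [ 80.0, 100.0, 80.0],
])
y = np.array([50.0, 75.0, 100.0, 125.0, 150.0])

# Fit y = b0 + b1*R + b2*G + b3*B by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the concentration of an unseen sample.
sample = np.array([1.0, 105.0, 87.5, 67.5])
pred = float(sample @ coef)   # ~87.5 for this synthetic data
```

In the paper's pipeline, the same feature matrix would also feed tree-based regressors (random forest, XGBoost) for comparison.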

Keywords: artificial intelligence, glucose detection, glucose oxidase, peroxidase, image processing, machine learning

Procedia PDF Downloads 203
838 Control of Single Axis Magnetic Levitation System Using Fuzzy Logic Control

Authors: A. M. Benomair, M. O. Tokhi

Abstract:

This paper presents an investigation of a system model for the stabilization of a magnetic levitation (maglev) system. The magnetic levitation system is a challenging nonlinear mechatronic system in which an electromagnetic force is required to suspend an object (a metal sphere) in air. The electromagnetic force is very sensitive to noise, which can create acceleration forces on the metal sphere, causing the sphere to move into the unbalanced region. Maglev systems contribute to industry by reducing power consumption, increasing power efficiency, and reducing maintenance costs. Common applications include maglev power generation (e.g., wind turbines), maglev trains, and medical devices (e.g., magnetically suspended artificial heart pumps). This paper compares the dynamic response and robustness characteristics of a conventional PD controller and a fuzzy PD controller. The main contribution of this paper is the demonstration of fuzzy-PD-type stabilization and robustness, achieved by a method that tunes the scaling factors of the linear fuzzy PD controller from an equivalently tuned conventional PD controller.
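
The comparison above is against a conventional PD baseline. A minimal sketch of that baseline on the linearized levitation dynamics follows; the plant constants and gains are illustrative assumptions, not the authors' identified model, and the fuzzy scaling-factor tuning is not shown:

```python
# Linearized maglev model around equilibrium: x'' = a*x - b*u (open-loop
# unstable since a > 0), stabilized by a PD law u = Kp*x + Kd*x'.
a, b = 100.0, 1.0        # plant parameters (assumed, illustrative)
Kp, Kd = 300.0, 30.0     # PD gains chosen so that b*Kp > a

dt, T = 1e-3, 5.0
x, v = 0.01, 0.0         # initial sphere displacement (m) and velocity (m/s)

for _ in range(int(T / dt)):
    u = Kp * x + Kd * v          # PD control effort
    acc = a * x - b * u          # closed-loop acceleration: -(b*Kp - a)*x - b*Kd*v
    x += dt * v                  # forward-Euler integration
    v += dt * acc
# The displacement decays toward zero, i.e., the sphere is held at equilibrium.
```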

Keywords: magnetic levitation system, PD controller, Fuzzy Logic Control, Fuzzy PD

Procedia PDF Downloads 273
837 Detection of Concrete Reinforcement Damage Using Piezoelectric Materials: Analytical and Experimental Study

Authors: C. P. Providakis, G. M. Angeli, M. J. Favvata, N. A. Papadopoulos, C. E. Chalioris, C. G. Karayannis

Abstract:

An effort toward the detection of damage in the reinforcement bars of reinforced concrete members using PZTs is presented. The damage can be the result of excessive elongation of the steel bar due to steel yielding or due to local steel corrosion. In both cases, the damage is simulated by considering a reduced diameter of the rebar along the damaged part of its length. An integrated approach based on both the electromechanical admittance methodology and the guided wave propagation technique is used to evaluate the artificial damage on the examined longitudinal steel bar. Two actuator PZTs and a sensor PZT are considered to be bonded on the examined steel bar. The admittance of the sensor PZT is calculated using COMSOL 3.4a. A Fast Fourier Transform is employed for a better evaluation of the results. The damage detection is quantified using the root mean square deviation (RMSD) between the healthy condition and the damaged state of the sensor PZT. The numerical value of the RMSD yields a measure of the difference between the healthy and the damaged admittance computations, indicating in this way the presence of damage in the structure. Experimental measurements are also presented.
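
The RMSD damage metric described above can be sketched as follows; the admittance signatures here are synthetic, for illustration only:

```python
import numpy as np

def rmsd(healthy, damaged):
    """Root-mean-square deviation (%) between two admittance signatures."""
    healthy = np.asarray(healthy, dtype=float)
    damaged = np.asarray(damaged, dtype=float)
    return 100.0 * np.sqrt(np.sum((damaged - healthy) ** 2) / np.sum(healthy ** 2))

# Illustrative signatures over an assumed frequency sweep: the damaged state
# shifts the admittance uniformly by 5%, so the RMSD comes out as 5.0%.
freqs = np.linspace(10e3, 100e3, 200)                 # Hz (assumed sweep)
healthy = 1e-3 * np.sin(2 * np.pi * freqs / 50e3) + 2e-3
damaged = healthy * 1.05
score = rmsd(healthy, damaged)                        # 5.0 (%)
```

A nonzero RMSD threshold would then flag the presence of damage.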

Keywords: concrete reinforcement, damage detection, electromechanical admittance, experimental measurements, finite element method, guided waves, PZT

Procedia PDF Downloads 255
836 Forecasting the Sea Level Change in Strait of Hormuz

Authors: Hamid Goharnejad, Amir Hossein Eghbali

Abstract:

Recent investigations have demonstrated global sea level rise due to climate change impacts. This study investigates the effects of increasing water levels in the Strait of Hormuz. The probable changes of sea level rise should be investigated in order to employ adaptation strategies. The climatic output data of a General Circulation Model (GCM) named CGCM3, under the climate change scenarios A1B and A2, were used. Among the different variables simulated by this model, those with the maximum correlation with sea level changes in the study region and the least redundancy among themselves were selected for sea level rise prediction using stepwise regression. A Discrete Wavelet artificial Neural Network (DWNN) model was developed to explore the relationship between climatic variables and sea level changes. In this model, wavelets were used to disaggregate the time series of input and output data into different components, and an ANN was then used to relate the disaggregated components of the predictors and predictands to each other. The results showed that at the Shahid Rajae Station, sea level rise is between 64 and 75 cm for scenario A1B and between 90 and 105 cm for scenario A2. Furthermore, the results showed a significant increase of sea level at the study region under climate change impacts, which should be incorporated in coastal area management.
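
A minimal illustration of the wavelet disaggregation step a DWNN relies on, using a one-level Haar transform on a toy series (in the paper's scheme each component would then feed an ANN; the series and the choice of Haar are illustrative):

```python
import numpy as np

def haar_step(x):
    """One-level Haar transform: approximation and detail components."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # smooth, low-frequency part
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # fluctuating, high-frequency part
    return approx, detail

# Illustrative predictor series (e.g., a climatic variable); even length.
series = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0])
approx, detail = haar_step(series)

# Perfect-reconstruction check: invert the transform.
recon = np.empty_like(series)
recon[0::2] = (approx + detail) / np.sqrt(2)
recon[1::2] = (approx - detail) / np.sqrt(2)
```

Each component carries part of the signal's variability, so separate ANNs can model the smooth trend and the fluctuations independently before recombining predictions.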

Keywords: climate change scenarios, sea-level rise, strait of Hormuz, forecasting

Procedia PDF Downloads 271
835 Complementing Assessment Processes with Standardized Tests: A Work in Progress

Authors: Amparo Camacho

Abstract:

ABET-accredited programs must assess the development of student learning outcomes (SOs) in engineering programs. Different institutions implement different strategies for this assessment, usually designed “in house.” This paper presents a proposal for including standardized tests to complement the ABET assessment model in an engineering college made up of six distinct engineering programs. The engineering college formulated a model of quality assurance in education, to be implemented throughout the six engineering programs, to regularly assess and evaluate the achievement of SOs in each program offered. The model uses diverse techniques and sources of data to assess student performance and to implement actions of improvement based on the results of this assessment. The model is called the “Assessment Process Model,” and it includes SOs A through K, as defined by ABET. SOs can be divided into two categories: “hard skills” and “professional skills” (soft skills). The first includes abilities such as applying knowledge of mathematics, science, and engineering, and designing and conducting experiments, as well as analyzing and interpreting data. The second category, “professional skills,” includes communicating effectively and understanding professional and ethical responsibility. Within the Assessment Process Model, various tools were used to assess SOs related to both “hard” and “soft” skills. The assessment tools designed included rubrics, surveys, questionnaires, and portfolios. In addition to these instruments, the engineering college decided to use tools that systematically gather consistent quantitative data. For this reason, an in-house exam was designed and implemented, based on the curriculum of each program. Even though this exam was administered during various academic periods, it is not currently considered standardized.
In 2017, the engineering college added three standardized tests: one to assess mathematical and scientific reasoning and two more to assess reading and writing abilities. With these exams, the college hopes to obtain complementary information that can help better measure the development of both the hard and soft skills of students in the different engineering programs. In the first semester of 2017, the three exams were given to three sample groups of students from the six engineering programs, drawn from the first, fifth, and tenth semester cohorts. At the time of submission of this paper, the engineering college has descriptive statistical data and is working with statisticians on a more in-depth and detailed analysis of the sample groups' achievement on the three exams. The overall objective of including standardized exams in the assessment model is to identify more precisely the least developed SOs, in order to define and implement the educational strategies necessary for students to achieve them in each engineering program.

Keywords: assessment, hard skills, soft skills, standardized tests

Procedia PDF Downloads 284
834 Economic Decision Making under Cognitive Load: The Role of Numeracy and Financial Literacy

Authors: Vânia Costa, Nuno De Sá Teixeira, Ana C. Santos, Eduardo Santos

Abstract:

Financial literacy and numeracy have been regarded as paramount for rational household decision making amid the increasing complexity of financial markets. However, financial decisions are often made under sub-optimal circumstances, including cognitive overload. The present study aims to clarify how financial literacy and numeracy, taken as relevant expert knowledge for financial decision-making, modulate possible effects of cognitive load. Participants were required to choose between a sure loss and a gamble pertaining to a financial investment, either with or without a competing memory task. Two experiments were conducted, varying only the content of the competing task. In the first, the financial choice task was performed while maintaining in working memory a list of five random letters. In the second, cognitive load was based upon the retention of six random digits. In both experiments, one of the items in the list had to be recalled given its serial position. Outcomes of the first experiment revealed no significant main effect or interactions involving the cognitive load manipulation and numeracy and financial literacy skills, strongly suggesting that retaining a list of random letters did not interfere with the cognitive abilities required for financial decision making. Conversely, in the second experiment, a significant interaction between the competing memory task and level of financial literacy (but not numeracy) was found for the frequency of choice of the gambling option. Overall, in the control condition, both participants with high financial literacy and those with high numeracy were more prone to choose the gambling option. However, when under cognitive load, participants with high financial literacy were as likely as their less financially literate counterparts to choose the gambling option.
This outcome is interpreted as evidence that financial literacy prevents intuitive risk-aversion reasoning only under highly favourable conditions, as is the case when no other task is competing for cognitive resources. In contrast, participants with higher levels of numeracy were consistently more prone to choose the gambling option in both experimental conditions. These results are discussed in the light of the opposition between classical dual-process theories and fuzzy-trace theories for intuitive decision making, suggesting that while some instances of expertise (as numeracy) are prone to support easily accessible gist representations, other expert skills (as financial literacy) depend upon deliberative processes. It is furthermore suggested that this dissociation between types of expert knowledge might depend on the degree to which they are generalizable across disparate settings. Finally, applied implications of the present study are discussed with a focus on how it informs financial regulators and the importance and limits of promoting financial literacy and general numeracy.

Keywords: decision making, cognitive load, financial literacy, numeracy

Procedia PDF Downloads 182
833 Detection of Concrete Reinforcement Damage Using Piezoelectric Materials: Analytical and Experimental Study

Authors: C. P. Providakis, G. M. Angeli, M. J. Favvata, N. A. Papadopoulos, C. E. Chalioris, C. G. Karayannis

Abstract:

An effort toward the detection of damage in the reinforcement bars of reinforced concrete members using PZTs is presented. The damage can be the result of excessive elongation of the steel bar due to steel yielding or due to local steel corrosion. In both cases, the damage is simulated by considering a reduced diameter of the rebar along the damaged part of its length. An integrated approach based on both the electromechanical admittance methodology and the guided wave propagation technique is used to evaluate the artificial damage on the examined longitudinal steel bar. Two actuator PZTs and a sensor PZT are considered to be bonded on the examined steel bar. The admittance of the sensor PZT is calculated using COMSOL 3.4a. A Fast Fourier Transform is employed for a better evaluation of the results. The damage detection is quantified using the root mean square deviation (RMSD) between the healthy condition and the damaged state of the sensor PZT. The numerical value of the RMSD yields a measure of the difference between the healthy and the damaged admittance computations, indicating in this way the presence of damage in the structure. Experimental measurements are also presented.

Keywords: concrete reinforcement, damage detection, electromechanical admittance, experimental measurements, finite element method, guided waves, PZT

Procedia PDF Downloads 293
832 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators

Authors: K. O'Malley

Abstract:

Background: The emergence and rapid evolution of large language models (LLMs) (i.e., models of generative artificial intelligence, or AI) has been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated using predictive modeling of vast internet text and data swathes and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error. This poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT4.0) in standardized state medical licensing examinations. Methods: A considered search strategy was used to interrogate the PubMed electronic database. The keywords ‘ChatGPT’ AND ‘medical education’ OR ‘medical school’ OR ‘medical licensing exam’ were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years. The search was limited to publications in the English language only. Eligibility was ascertained based on the study title and abstract and confirmed by consulting the full-text document. Data was extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications that were screened. 225 original articles were identified, of which 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent. 
The mean score across all studies was 82.49 percent (SD= 5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT, as outlined above, may lend itself to unfair use (such as the plagiarism of deliverable coursework) and pose unforeseen ethical challenges (arising from algorithmic bias). Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. In the aforementioned studies, ChatGPT exhibits a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education who are less likely to possess the requisite insight or knowledge to recognize errors, inaccuracies or false information. Educators must inform themselves of these emerging challenges to effectively address them and mitigate potential disruption in academic fora.

Keywords: artificial intelligence, ChatGPT, generative AI, large language models, licensing exam, medical education, medicine, university

Procedia PDF Downloads 32
831 Emotion Recognition Using Artificial Intelligence

Authors: Rahul Mohite, Lahcen Ouarbya

Abstract:

This paper focuses on the interplay between humans and computer systems and the ability of these systems to understand and respond to human emotions, including non-verbal communication. Current emotion recognition systems are based solely on either facial or verbal expressions. A limitation of these systems is that they require large training data sets. The paper proposes a system for recognizing human emotions that combines both speech and facial expression recognition. The system utilizes advanced techniques such as deep learning and image recognition to identify facial expressions and comprehend emotions. The results show that the proposed system, based on the combination of facial expression and speech, outperforms existing ones based solely on either facial or verbal expressions. The proposed system detects human emotion with an accuracy of 86%, whereas the existing systems have an accuracy of 70% using verbal expression only and 76% using facial expression only. The increasing significance of and demand for facial recognition technology in emotion recognition are also discussed.
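
As a sketch of the combination idea (not the authors' architecture), a late-fusion step that averages class probabilities from separate facial and speech models might look like this; the class list, probabilities, and equal weights are all assumptions:

```python
import numpy as np

# Late fusion: combine independent facial and speech emotion probabilities
# by weighted averaging. All numbers below are illustrative.
classes = ["angry", "happy", "neutral", "sad"]
p_face   = np.array([0.10, 0.60, 0.20, 0.10])   # output of an image model (assumed)
p_speech = np.array([0.05, 0.70, 0.15, 0.10])   # output of an audio model (assumed)

p_fused = 0.5 * p_face + 0.5 * p_speech         # simple equal-weight fusion
prediction = classes[int(np.argmax(p_fused))]   # "happy" for these inputs
```

In practice the fusion weights would themselves be tuned on validation data, which is one way a combined system can outperform either modality alone.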

Keywords: facial recognition, expression recognition, deep learning, image recognition, facial technology, signal processing, image classification

Procedia PDF Downloads 121
830 Current Status and Prospects of Further Control of Brucellosis in Humans and Domestic Ruminants in Bangladesh

Authors: A. K. M. Anisur Rahman

Abstract:

Brucellosis is an ancient and one of the world's most widespread zoonotic diseases, affecting both public health and animal production. Its current status in humans and domestic ruminants, along with probable means of further control in Bangladesh, is described. The true exposure prevalence of brucellosis in cattle, goats, and sheep seems to be low: 0.3% in cattle, 1% in goats, and 1.2% in sheep. The true prevalence of brucellosis in humans was also reported to be around 2%. In such a low-prevalence scenario, both in humans and animals, the positive predictive values of the diagnostic tests were very low. A role for Brucella species in the abortion of domestic ruminants is less likely. To date, no Brucella spp. have been isolated from animal or human samples. However, Brucella abortus DNA was detected in seropositive humans, cattle, and buffalo; in the milk of cows, goats, and gayals; and in the semen of an infected bull. Consuming raw milk and unpasteurized milk products is not common among Bangladeshi people. Close contact with animals, artificial insemination using semen from infected bulls, grazing mixed species of animals together in the field, and transboundary animal movement are important factors that should be considered for the further control of this zoonosis in Bangladesh.

Keywords: brucellosis, control, human, zoonosis

Procedia PDF Downloads 363
829 Effectiveness of the Resistance to Irradiance Test on Sunglasses Standards

Authors: Mauro Masili, Liliane Ventura

Abstract:

The effects of ultraviolet (UV) radiation on the ocular media are still controversial in the literature, but the World Health Organization has established safe limits on the exposure of eyes to UV radiation based on published reports. Sunglasses play an important role in providing safety, and their lenses should provide adequate UV filters. Regarding UV protection for the ocular media, the resistance-to-irradiance test for sunglasses under many national standards requires irradiating lenses for 50 uninterrupted hours with a 450 W solar simulator. This artificial aging test is intended to provide an evaluation corresponding to exposure to the sun. Calculating the direct and diffuse solar irradiance on a vertical surface and the corresponding radiant exposure for an entire year, we compare the latter with the 50-hour radiant exposure of the 450 W xenon arc lamp of a solar simulator required by national standards. Our calculations indicate that this stress test is ineffective in its present form. We provide evidence of the need to re-evaluate the parameters of the test to establish appropriate safe limits against UV radiation. This work is potentially significant for scientists and legislators in the field of sunglasses standards seeking to improve requirements on sunglasses quality and safety.
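
The comparison underlying the study is a radiant-exposure calculation, H = E * t. A sketch with assumed irradiance values follows; these numbers are illustrative placeholders, not the paper's measured data:

```python
# Radiant exposure H = E * t, converted to MJ/m^2.
sim_irradiance = 1000.0      # W/m^2 at the lens plane (assumed)
sim_hours = 50.0             # test duration required by the standards
H_sim = sim_irradiance * sim_hours * 3600.0 / 1e6   # 180 MJ/m^2

# Annual exposure of a vertical surface, assuming a modest mean irradiance
# over roughly 4380 daylight hours per year (both values assumed).
mean_irradiance = 300.0      # W/m^2, annual mean on a vertical surface
daylight_hours = 4380.0
H_year = mean_irradiance * daylight_hours * 3600.0 / 1e6

# With these assumptions the 50-hour test delivers far less radiant
# exposure than a year of outdoor wear, illustrating why the test
# parameters deserve re-evaluation.
```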

Keywords: ISO 12312-1, solar simulator, sunglasses standards, UV protection

Procedia PDF Downloads 197
828 The Impact of Artificial Intelligence on Human Rights Principles and Obligations

Authors: Adel Atta Youssef Rezkalla

Abstract:

Russia's invasion of Ukraine tested the international community and prompted not only states but also non-state actors to take deterrent measures in response. In fact, international sports federations, notably FIFA and UEFA, have managed to shift the power dynamic quite effectively by imposing a blanket ban on Russian national teams and clubs. The purpose of this article is to examine the human rights consequences of such actions by international sports organizations. First, the article moves beyond assessing the legal status of FIFA and UEFA under international law and examines how a legal connection can be established with their human rights obligations. Secondly, the human rights aspects of the controversial FIFA and UEFA measures against Russian athletes are examined and analyzed in more detail through the proportionality test and the principle of non-discrimination under international human rights law. Finally, the main avenues of redress for possible human rights violations related to the actions taken by these organizations are identified, and the challenges of arbitration and litigation in Switzerland are highlighted.

Keywords: sustainable development, human rights, the right to development, the human rights-based approach to development, environmental rights, economic development, social sustainability, human rights protection, human rights violations, workers' rights, justice, security

Procedia PDF Downloads 76
827 On the Question of Ideology: Criticism of the Enlightenment Approach and Theory of Ideology as Objective Force in Gramsci and Althusser

Authors: Edoardo Schinco

Abstract:

Studying the Marxist intellectual tradition, it is possible to verify that there have been numerous cases of philosophical regression, in which the important achievements of detailed studies were replaced by naïve ideas and earlier misunderstandings: one of the most important examples of this tendency concerns the question of ideology. According to a common Enlightenment approach, ideology is essentially not a reality, i.e., not a factor capable of having an effect on reality itself; in other words, ideology is a mere error without specific historical meaning, due only to ignorance or the inability of subjects to understand the truth. From this point of view, the consequent and immediate practice against every form of ideology is rational dialogue, reasoning based on common sense, in order to dispel the obscurity of ignorance through the light of pure reason. The limits of this philosophical orientation are, however, both theoretical and practical: on the one hand, the Enlightenment criticism of ideology is not a historicist thought, since it cannot grasp the inner connection that ties a historical context and its peculiar ideology together; on the other hand, when the Enlightenment approach fails to release people from their illusions (e.g., when the ideology persists despite the explanation of its illusoriness), it usually becomes a racist or elitist thought. Unlike this first conception of ideology, Gramsci attempts to recover Marx's original thought and to valorize its dialectical methodology with respect to the reality of ideology. As Marx suggests, ideology, in the negative sense, is surely an error, a misleading knowledge, which aims to defend the current state of things and to conceal social, political, or moral contradictions; but that is precisely why the ideological error is not casual: every ideology is mediately rooted in a particular material context, from which it takes its reason for being.
Gramsci avoids, however, any mechanistic interpretation of Marx and, for this reason, underlines the dialectical relation that exists between the material base and the ideological superstructure; in this way, a specific ideology is not only a passive product of the base but also an active factor that reacts on the base itself and modifies it. There is, therefore, a considerable revaluation of ideology's role in the maintenance of the status quo, and the consequent thematization of both ideology as an objective force, active in history, and ideology as the cultural hegemony of the ruling class over subordinate groups. Among the Marxists, the French philosopher Louis Althusser also contributed to this crucial question; as a follower of Gramsci's thought, he developed the idea of ideology as an objective force through the notions of the Repressive State Apparatus (RSA) and the Ideological State Apparatuses (ISAs). In addition, his philosophy is characterized by the presence of structuralist elements, which must be studied, since they deeply change the theoretical foundation of his Marxist thought.

Keywords: Althusser, enlightenment, Gramsci, ideology

Procedia PDF Downloads 199
826 Deconstructing and Reconstructing the Definition of Inhuman Treatment in International Law

Authors: Sonia Boulos

Abstract:

The prohibition on ‘inhuman treatment’ constitutes one of the central tenets of modern international human rights law. It is incorporated in principal international human rights instruments including Article 5 of the Universal Declaration of Human Rights, and Article 7 of the International Covenant on Civil and Political Rights. However, in the absence of any legislative definition of the term ‘inhuman’, its interpretation becomes challenging. The aim of this article is to critically analyze the interpretation of the term ‘inhuman’ in international human rights law and to suggest a new approach to construct its meaning. The article is composed of two central parts. The first part is a critical appraisal of the interpretation of the term ‘inhuman’ by supra-national human rights law institutions. It highlights the failure of supra-national institutions to provide an independent definition for the term ‘inhuman’. In fact, those institutions consistently fail to distinguish the term ‘inhuman’ from its other kin terms, i.e. ‘cruel’ and ‘degrading.’ Very often, they refer to these three prohibitions as ‘CIDT’, as if they were one collective. They were primarily preoccupied with distinguishing ‘CIDT’ from ‘torture.’ By blurring the conceptual differences between these three terms, supra-national institutions supplemented them with a long list of specific and purely descriptive subsidiary rules. In most cases, those subsidiary rules were announced in the absence of sufficient legal reasoning explaining how they were derived from abstract and evaluative standards embodied in the prohibitions collectively referred to as ‘CIDT.’ By opting for this option, supra-national institutions have created the risk for the development of an incoherent body of jurisprudence on those terms at the international level. They also have failed to provide guidance for domestic courts on how to enforce these prohibitions. 
While blurring the differences between the terms ‘cruel,’ ‘inhuman,’ and ‘degrading’ has consequences for the three, the term ‘inhuman’ remains the most impoverished one. It is easy to link the term ‘cruel’ to the clause on ‘cruel and unusual punishment’ originating from the English Bill of Rights of 1689. It is also easy to see that the term ‘degrading’ reflects a dignatarian ideal. However, when we turn to the term ‘inhuman’, we are left without any interpretative clue. The second part of the article suggests that the ordinary meaning of the word ‘inhuman’ should be our first clue. However, regaining the conceptual independence of the term ‘inhuman’ requires more than a mere reflection on the word-meaning of the term. Thus, the second part introduces philosophical concepts related to the understanding of what it means to be human. It focuses on ‘the capabilities approach’ and the notion of ‘human functioning’, introduced by Amartya Sen and further explored by Martha Nussbaum. Nussbaum’s work on the basic human capabilities is particularly helpful or even vital for understanding the moral and legal substance of the prohibition on ‘inhuman’ treatment.

Keywords: inhuman treatment, capabilities approach, human functioning, supra-national institutions

Procedia PDF Downloads 278
825 Automating 2D CAD to 3D Model Generation Process: Wall Pop-Ups

Authors: Mohit Gupta, Chialing Wei, Thomas Czerniawski

Abstract:

In this paper, we have built a neural network that detects walls on 2D sheets and subsequently creates a 3D model in Revit using Dynamo. The training set includes 3,500 labeled images, and the detection algorithm used is YOLO. Typically, engineers and designers make concentrated efforts to convert 2D CAD drawings into 3D models, which costs a considerable amount of time and human effort. This paper contributes to automating the task of 3D wall modeling by: 1. detecting walls in 2D CAD drawings and generating 3D pop-ups in Revit; 2. saving designers modeling time when drafting elements like walls from a 2D CAD drawing into a 3D representation. An object detection algorithm, YOLO, is used for wall detection and localization. The neural network is trained on 3,500 labeled images of size 256x256x3. Then, Dynamo is interfaced with the output of the neural network to pop up 3D walls in Revit. The research uses modern technological tools like deep learning and artificial intelligence to automate the process of generating 3D walls without needing humans to model them manually, thus saving time, human effort, and money.
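The geometric hand-off from detection to modeling can be sketched in a few lines. The snippet below is a hypothetical post-processing step, not the authors' actual pipeline: it converts a YOLO-style normalized bounding box on a 256x256 sheet into the endpoints and thickness a Dynamo script could use to place a Revit wall; the function name, coordinate layout, and sheet scale are illustrative assumptions.

```python
# Hypothetical post-processing step: map a YOLO-style wall detection
# (normalized cx, cy, w, h on a 256x256 sheet) to the endpoints and
# thickness a Dynamo script could use to place a Revit wall.
# The sheet scale (mm per pixel) is an illustrative assumption.

def box_to_wall(box, img_size=256, mm_per_px=50):
    """Convert a normalized bounding box to wall endpoints in model space (mm)."""
    cx, cy, w, h = (v * img_size for v in box)  # denormalize to pixels
    if w >= h:
        # horizontal wall: the long axis of the box is the centerline
        p1 = ((cx - w / 2) * mm_per_px, cy * mm_per_px)
        p2 = ((cx + w / 2) * mm_per_px, cy * mm_per_px)
        thickness = h * mm_per_px
    else:
        # vertical wall
        p1 = (cx * mm_per_px, (cy - h / 2) * mm_per_px)
        p2 = (cx * mm_per_px, (cy + h / 2) * mm_per_px)
        thickness = w * mm_per_px
    return p1, p2, thickness
```

In a real deployment, the returned endpoints would be passed to Dynamo's wall-creation nodes rather than returned as plain tuples.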

Keywords: neural networks, Yolo, 2D to 3D transformation, CAD object detection

Procedia PDF Downloads 144
824 Influence Analysis of Macroeconomic Parameters on Real Estate Price Variation in Taipei, Taiwan

Authors: Li Li, Kai-Hsuan Chu

Abstract:

It is well known that real estate prices depend on many factors. The current value of each house depends on its location, number of rooms, transportation, living convenience, age, and surrounding environment. Although housing agents use various experience-based models to estimate prices, these are case-by-case studies without any investigation of overall dynamic variation. However, many economic parameters may more or less influence real estate price variation. Here, the influences of the major macroeconomic parameters on real estate prices are investigated individually based on a least-square scheme and a grey correlation strategy. Those parameters are then classified into leading indices, simultaneous indices, and laggard indices. In addition, the leading time period is evaluated based on the least-square method. The important leading and simultaneous indices can be used to establish an artificial neural network model for predicting real estate price variation. The real estate price variation of Taipei, Taiwan during 2005-2017 is chosen for data analysis and validation. The results show that the proposed method provides a reasonable prediction function for real estate business reference.
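The grey correlation step mentioned above can be illustrated with a minimal sketch of Deng's grey relational grade, which scores how closely a candidate macroeconomic series tracks the reference price series (values near 1 indicate a strong relation). The first-value normalization and the distinguishing coefficient rho = 0.5 follow common convention and are assumptions here, not necessarily the paper's exact settings.

```python
# Minimal sketch of grey relational analysis (Deng's grade), an assumed
# form of the "grey correlation strategy": each candidate index series
# is scored against the reference real estate price series.

def grey_relational_grade(reference, series, rho=0.5):
    """Grey relational grade in (0, 1]; higher means a stronger relation.
    Both sequences are normalized by their first value."""
    x0 = [v / reference[0] for v in reference]
    xi = [v / series[0] for v in series]
    deltas = [abs(a - b) for a, b in zip(x0, xi)]
    dmax = max(deltas)
    if dmax == 0:               # identical shapes: perfect relation
        return 1.0
    dmin = min(deltas)
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in deltas]
    return sum(coeffs) / len(coeffs)
```

Ranking candidate indices by their grade against the price series is one way to select the important leading and simultaneous indices that would feed the neural network model.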

Keywords: real estate price, least-square, grey correlation, macroeconomics

Procedia PDF Downloads 198
823 Equivalent Circuit Representation of Lossless and Lossy Power Transmission Systems Including Discrete Sampler

Authors: Yuichi Kida, Takuro Kida

Abstract:

In the new smart society supported by the recent development of 5G and 6G communication systems, the importance of wireless power transmission is increasing. These systems contain discrete sampling systems in the middle of the transmission path, and the equivalent circuit representation of lossless or lossy power transmission through these systems is an important issue in circuit theory. In this paper, for a given weight function, we show that a lossless power transmission system with that weight is expressed by an equivalent circuit representation of Kida's optimal signal prediction system followed by a reactance multi-port circuit behind it. Further, it is shown that, when the system is lossy, the system has an equivalent circuit in the form of a multi-port positive-real circuit connected behind Kida's optimal signal prediction system. Also, for the reader's convenience, this paper presents the equivalent circuit expressions of the reactance multi-port circuit and the positive-real multi-port circuit due to Cauer and Ohno, information that is currently being lost even on the Internet.
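The pseudo-inverse machinery underlying optimal signal prediction can be pictured with a generic sketch (an illustration of the building block, not Kida's actual system): a two-tap linear predictor fitted by least squares, where solving the normal equations is equivalent to applying the Moore-Penrose pseudo-inverse of the data matrix to the target vector.

```python
# Generic least-squares signal predictor (illustrative only): fit
# coefficients (a, b) so that x[n] ~ a*x[n-1] + b*x[n-2].

def fit_two_tap_predictor(signal):
    """Solve the 2x2 normal equations; for this overdetermined system
    this equals applying the Moore-Penrose pseudo-inverse of the data
    matrix to the vector of targets."""
    rows = [(signal[n - 1], signal[n - 2], signal[n])
            for n in range(2, len(signal))]
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    t1 = sum(r[0] * r[2] for r in rows)
    t2 = sum(r[1] * r[2] for r in rows)
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det,
            (s11 * t2 - s12 * t1) / det)
```

On a Fibonacci-like signal, where x[n] = x[n-1] + x[n-2] holds exactly, the recovered taps are exactly (1, 1).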

Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, power transmission

Procedia PDF Downloads 122
822 Design and Fabrication of AI-Driven Kinetic Facades with Soft Robotics for Optimized Building Energy Performance

Authors: Mohammadreza Kashizadeh, Mohammadamin Hashemi

Abstract:

This paper explores a kinetic building facade designed for optimal energy capture and architectural expression. The system integrates photovoltaic panels with soft robotic actuators for precise solar tracking, resulting in enhanced electricity generation compared to static facades. Driven by the growing interest in dynamic building envelopes, the exploration of such facade systems is necessary. Increased energy generation and regulation of energy flow within buildings are potential benefits offered by integrating photovoltaic (PV) panels as kinetic elements. However, incorporating these technologies into mainstream architecture presents challenges due to the complexity of coordinating multiple systems. To address this, the design leverages soft robotic actuators, known for their compliance, resilience, and ease of integration. Additionally, the project investigates the potential for employing Large Language Models (LLMs) to streamline the design process. The research methodology involved design development, material selection, component fabrication, and system assembly. Grasshopper (GH) was employed within the digital design environment for parametric modeling and scripting logic, and an LLM was used experimentally to generate Python code for the creation of a random surface with user-defined parameters. Various techniques, including casting, 3D printing, and laser cutting, were utilized to fabricate physical components. A modular assembly approach was adopted to facilitate installation and maintenance. A case study focusing on the application of this facade system to an existing library building at the Polytechnic University of Milan is presented. The system is divided into sub-frames to optimize solar exposure while maintaining a visually appealing aesthetic. Preliminary structural analyses were conducted using Karamba3D to assess deflection behavior and axial loads within the cable net structure.
Additionally, Finite Element (FE) simulations were performed in Abaqus to evaluate the mechanical response of the soft robotic actuators under pneumatic pressure. To validate the design, a physical prototype was created using a mold adapted to the limitations of a 3D printer. Sil 15 casting silicone rubber was used for its flexibility and durability. The 3D-printed mold components were assembled, filled with the silicone mixture, and cured. After demolding, nodes and cables were 3D-printed and connected to form the structure, demonstrating the feasibility of the design. This work demonstrates the potential of soft robotics and Artificial Intelligence (AI) for advancements in sustainable building design and construction. The project successfully integrates these technologies to create a dynamic facade system that optimizes energy generation and architectural expression. While limitations exist, this approach paves the way for future advancements in energy-efficient facade design. Continued research efforts will focus on cost reduction, improved system performance, and broader applicability.
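The LLM-assisted step described above, generating Python code for a random surface with user-defined parameters, can be pictured with a minimal stand-alone sketch. The parameter names are assumptions, and a real Grasshopper or Dynamo script would emit native geometry rather than plain coordinate tuples.

```python
import random

# Stand-alone sketch of the kind of script the LLM was asked to produce:
# a grid of control points with seeded random heights. Parameter names
# (u_count, v_count, amplitude) are assumptions for illustration.

def random_surface_points(u_count=10, v_count=10, amplitude=1.0, seed=0):
    """Return a u_count x v_count grid of (x, y, z) tuples with
    reproducible random z in [-amplitude, amplitude]."""
    rng = random.Random(seed)  # seeding keeps the surface reproducible
    return [[(u, v, rng.uniform(-amplitude, amplitude))
             for v in range(v_count)]
            for u in range(u_count)]
```

Seeding the generator makes the surface reproducible across runs, which matters when the same geometry must be re-fabricated or re-analyzed.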

Keywords: artificial intelligence, energy efficiency, kinetic photovoltaics, pneumatic control, soft robotics, sustainable building

Procedia PDF Downloads 31
821 An Intelligent Traffic Management System Based on the WiFi and Bluetooth Sensing

Authors: Hamed Hossein Afshari, Shahrzad Jalali, Amir Hossein Ghods, Bijan Raahemi

Abstract:

This paper introduces an automated clustering solution that applies to WiFi/Bluetooth sensing data and is later used for traffic management applications. The paper initially summarizes a number of clustering approaches and thereafter shows their performance for noise removal. In this context, clustering is used to recognize WiFi and Bluetooth MAC addresses that belong to passengers traveling on a public urban transit bus. The main objective is to build an intelligent system that automatically filters out MAC addresses belonging to persons located outside the bus, for different routes in the city of Ottawa. The proposed intelligent system alleviates the need to define restrictive thresholds, which would otherwise reduce the accuracy as well as the range of applicability of the solution across different routes. Moreover, this paper discusses the performance benefits of the presented clustering approaches in terms of accuracy, time and space complexity, and ease of use. The clustering results can further be used for origin-destination estimation of individual passengers, predicting the traffic load, and intelligent management of urban bus schedules.
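One threshold-free illustration of the filtering idea (a simplification, not the authors' actual clustering pipeline) is to split devices into two groups at the largest gap in their observed dwell times: MAC addresses of passengers inside the bus tend to be observed for minutes, while devices outside the bus appear only for seconds.

```python
# Simplified illustration (not the authors' clustering pipeline):
# partition observed MAC addresses by dwell time at the largest gap,
# on the premise that on-bus devices are seen for minutes and
# passers-by for seconds. No fixed threshold is required.

def split_by_largest_gap(dwell_times):
    """Return (short_dwell, long_dwell) lists split at the biggest
    gap in the sorted dwell times (seconds per MAC address)."""
    xs = sorted(dwell_times)
    gaps = [xs[i + 1] - xs[i] for i in range(len(xs) - 1)]
    cut = gaps.index(max(gaps)) + 1  # split just after the widest gap
    return xs[:cut], xs[cut:]
```

Because the split point adapts to each route's data, the same code works on routes with very different traffic profiles, which is the motivation stated above for avoiding restrictive thresholds.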

Keywords: WiFi-Bluetooth sensing, cluster analysis, artificial intelligence, traffic management

Procedia PDF Downloads 241
820 Functionalized Ultra-Soft Rubber for Soft Robotics Application

Authors: Shib Shankar Banerjee, Andreas Fery, Gert Heinrich, Amit Das

Abstract:

Recently, the growing need to develop soft robots consisting of highly deformable and compliant materials has emerged from the serious limitations of conventional service robots. However, one of the main challenges of soft robotics is to develop such compliant materials, which facilitate the design of soft robotic structures and, simultaneously, the control of soft-body systems like soft artificial muscles. Generally, silicone- or acrylic-based elastomer composites are used for soft robotics. However, the mechanical performance and long-term reliability of the functional parts (sensors, actuators, main body) of robots made from these composite materials are inferior. This work presents the development and characterization of robust, super-soft, programmable elastomeric materials from crosslinked natural rubber that can serve as touch and strain sensors for soft robotic arms, with very high elasticity and strain while the modulus is tuned in the kilopascal range. Our results suggest that such soft, programmable natural elastomers are promising materials and can replace conventional silicone-based elastomers in soft robotics applications.

Keywords: elastomers, soft materials, natural rubber, sensors

Procedia PDF Downloads 154
819 Empowering and Educating Young People Against Cybercrime by Playing: The Rayuela Method

Authors: Jose L. Diego, Antonio Berlanga, Gregorio López, Diana López

Abstract:

The Rayuela method is a success story, as it is part of a project selected by the European Commission to face the challenge, launched by the Commission itself, of achieving a better understanding of the human factors, as well as the social and organisational aspects, involved in fighting crime. Rayuela's method specifically focuses on the drivers of cyber criminality, including approaches to prevent, investigate, and mitigate cybercriminal behavior. As the internet has become an integral part of young people’s lives, they are the key target of the Rayuela method because they (as victims or as perpetrators) are the most vulnerable link of the chain. Considering the increased time young people spend online, the limited control of their internet usage, and their low level of awareness of cyber threats and their potential impact, the proliferation of incidents due to human mistakes is understandable. 51% of Europeans feel they are not well informed about cyber threats, and 86% believe that the risk of becoming a victim of cybercrime is rapidly increasing. On the other hand, law enforcement has noted that more and more young people are committing cybercrimes. This is an international problem with considerable cost implications: it is estimated that crime in cyberspace will cost the global economy $445B annually. Understanding all these phenomena leads to the necessity of a shift in focus from sanctions to deterrence and prevention. As a research project, Rayuela aims to bring together law enforcement agencies (LEAs), sociologists, psychologists, anthropologists, legal experts, computer scientists, and engineers to develop novel methodologies that allow a better understanding of the factors affecting online behavior related to new forms of cyber criminality, as well as promoting the potential of these young talents for cybersecurity and technologies.
Rayuela’s main goal is to better understand the drivers and human factors affecting certain relevant forms of cyber criminality, as well as to empower and educate young people about the benefits, risks, and threats intrinsically linked to the use of the Internet by playing, thus preventing and mitigating cybercriminal behavior. To reach that goal, an interdisciplinary consortium (formed by 17 international partners) carries out research and actions such as profiling and case studies of cybercriminals and victims, risk assessments, studies on the Internet of Things and its vulnerabilities, development of a serious gaming environment, training activities, data analysis and interpretation using artificial intelligence, testing and piloting, etc. To facilitate the real implementation of the Rayuela method as a community policing strategy, it is crucial to count on a police force with a solid background in trust-building and community policing to carry out the piloting, specifically with young people. In this sense, the Valencia Local Police is a pioneering police force working with young people in conflict solving, providing police mediation and peer mediation services and advice. For example, it is an official mediation institution, so agreements signed by its police mediators have, once signed by the parties, the value of a judicial decision.

Keywords: fight against crime and insecurity, avert and prepare young people against aggression, ICT, serious gaming and artificial intelligence against cybercrime, conflict solving and mediation with young people

Procedia PDF Downloads 128
818 Drinking Water Quality Assessment Using Fuzzy Inference System Method: A Case Study of Rome, Italy

Authors: Yas Barzegar, Atrin Barzegar

Abstract:

Drinking water quality assessment is a major issue today; technology and practices are continuously improving, and Artificial Intelligence (AI) methods are proving their efficiency in this domain. The current research develops a hierarchical fuzzy model for predicting drinking water quality in Rome (Italy). The Mamdani fuzzy inference system (FIS) is applied with different defuzzification methods. The proposed model includes three intermediate fuzzy models and one final fuzzy model. Each fuzzy model consists of three input parameters and 27 fuzzy rules. The model is developed for water quality assessment with a dataset considering nine parameters (alkalinity, hardness, pH, Ca, Mg, fluoride, sulphate, nitrates, and iron). Fuzzy-logic-based methods have been demonstrated to be appropriate for addressing the uncertainty and subjectivity in drinking water quality assessment; fuzzy inference is an effective method for managing complicated, uncertain water systems and predicting drinking water quality. The FIS method can provide an effective solution to complex systems and can easily be modified to improve performance.
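A Mamdani-style inference step can be sketched compactly. The toy model below uses triangular membership functions, min for rule firing strength, and a weighted centroid over singleton consequents (a common simplification of full centroid defuzzification); the membership ranges and rules are invented for illustration and do not reproduce the paper's 27-rule models.

```python
# Toy Mamdani-style inference: triangular memberships, min for rule
# strength, weighted centroid over singleton consequents. Ranges and
# rules are invented for illustration, not the paper's actual models.

def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def water_quality_score(ph, nitrates):
    """Two-input sketch returning a quality score on a 0-100 scale."""
    ph_good = tri(ph, 6.5, 7.5, 8.5)
    nitrates_low = tri(nitrates, -1.0, 0.0, 25.0)  # mg/L, lower is better
    w_good = min(ph_good, nitrates_low)   # rule: good pH AND low nitrates
    w_poor = 1.0 - w_good                 # complementary fallback rule
    # weighted centroid of singleton consequents (good -> 80, poor -> 30)
    return (w_good * 80.0 + w_poor * 30.0) / (w_good + w_poor)
```

Chaining several such models, with the outputs of intermediate models feeding a final one, mirrors the hierarchical structure described above.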

Keywords: water quality, fuzzy logic, smart cities, water attribute, fuzzy inference system, membership function

Procedia PDF Downloads 75
817 The Effect of Artificial Intelligence on Accounting and Finance

Authors: Evrime Fawzy Ishak Gadelsayed

Abstract:

This paper presents resource consumption accounting as an innovative approach to management accounting, which concentrates on managers as the crucial users of the information and offers more adequate information than conventional management accounting. This system underscores that the organization's resources drive costs; consequently, in costing frameworks, the emphasis should be on resources and their usage. Resource consumption accounting consolidates two costing methodologies: the activity-based method and the German cost accounting approach called GPK. This methodology, however, poses a challenge to managers when making the management accounting task operational. The purpose of this article is to clarify the concept of resource consumption accounting, its elements and features, and the use of this approach in organizations. In the first section, we present resource consumption accounting, its basis, the reasons for its development, and the issues faced by past costing frameworks. We then give the requirements and assumptions of this approach; finally, we describe the execution of this approach in organizations and its advantages over other costing techniques.

Keywords: financial statement fraud, forensic accounting, fraud prevention and detection, auditing, audit expectation gap, corporate governance, resource consumption accounting, management accounting, activity-based method, German cost accounting method

Procedia PDF Downloads 4
816 Challenges in Teaching Code of Ethics and Professional Conduct

Authors: Rasika Dayarathna

Abstract:

Computing has reached every corner of our lives in many forms; the Internet, and particularly social media and artificial intelligence, are prominent among them. As a result, computing has changed our lives, and severe changes are expected in the coming years. It has introduced a new set of ethical challenges and amplified existing ones. It is the duty of everyone involved, from conceptualizing, designing, implementing, and deploying to using, to follow generally accepted practices in order to avoid or minimize harm and improve the quality of life. Since computing in its various forms has a significant impact on our lives, various codes of conduct and standards have been introduced. Among many, the ACM (Association for Computing Machinery) Code of Ethics and Professional Conduct is a leading one. It was drafted for everyone, including aspiring computing professionals. However, teaching a code of conduct to aspiring computing professionals is very challenging, since this universal code needs to be taught to young computing professionals in local settings where there are value mismatches and differing levels of exposure to information systems. This paper discusses the importance of teaching the code, how to overcome the challenges, and suggestions for improving the code to make it more appealing and easier to buy into. It is expected that the improved approach will contribute to improving the quality of life.

Keywords: code of conduct, professionalism, ethics, code of ethics, ethics education, moral development

Procedia PDF Downloads 181
815 The Resource-Based View of Organization and Innovation: Recognition of Significant Relationship in an Organization

Authors: Francis Deinmodei W. Poazi, Jasmine O. Tamunosiki-Amadi, Maurice Fems

Abstract:

In recent times, the resource-based view (RBV) of strategic management has received sizeable attention, yet there has not been considerable scholarly and managerial discourse and debate around its relationship to innovation. As a result, this paper provides critical reasoning on, and an analysis of, the relationship between the RBV and organizational innovation. The study examines those salient aspects of the RBV that bear on an organization's capacity for innovative capability. In developing this standpoint, the paper draws on relevant academic discourse and empirical evidence, thereby providing groundwork for future empirical research. The study is guided by and built on the following strengths of the RBV: Firstly, the RBV sees resources as heterogeneous, which forms a strong point of strength and allows organizations to gain competitive advantage. In other words, competitive advantage can be achieved or delivered to the organization when resources are distinctively utilized in a more valuable manner than by the organization's competitors. Secondly, the RBV is significantly influential in determining the real resources available in the organization, with a view to locating the capabilities within them, so as to attract more profitability to the organization when applied. Thus, there will be more sustainable growth and success in the ever-competitive and emerging market. To describe the methodology succinctly, the study adopts both qualitative and quantitative approaches, with a view to collecting a broad sample of opinion on establishing and identifying key strategic organizational resources, to enable managers of resources to gain a competitive advantage as well as to generate sustainable growth in profit.
Furthermore, a comparative approach and analysis were used to examine the performance of the RBV within organizations. The following are some of the findings of the study: it is clear that there is a nexus between the RBV and the growth of competitively viable organizations. Moreover, most organizations have heterogeneous resources domiciled within them, but not all organizations specifically and intelligently adopt the tenets of the RBV to strengthen the heterogeneity of resources that allows organizations to gain competitive advantage. Other findings of this study concern managerial perceptions of the RBV with respect to the application and transformation of resources to achieve a profitable end. Against this backdrop, the importance of the RBV cannot be overemphasized; the study is strongly convinced that the RBV is a focal and distinct approach, an inside-out strategy, which engenders sourcing or generating resources internally as well as applying such internally sourced resources diligently to increase or gain competitive advantage.

Keywords: resource-based view, innovation, organization, recognition of significant relationship, theoretical perspective

Procedia PDF Downloads 307
814 The Impact of Artificial Intelligence on Sustainable Architecture and Urban Design

Authors: Alfons Aziz Asaad Hozain

Abstract:

The goal of sustainable architecture is to design buildings that have the least negative impact on the environment and provide better conditions for people. What forms of development enhance an area? This question was asked at the Center for the Study of Spatial Development and Building Forms in Cambridge in the late 1960s. This has resulted in many influential articles that have had a profound impact on the practice of urban planning. This article focuses on the sustainability outcomes produced by the climate-responsive features of traditional Iranian architecture in hot and dry regions. Since people spend a lot of time at home, it is very important that these homes meet their physical and spiritual needs as well as the cultural and religious aspects of their lifestyle. In a country as large as Iran, with its different climates, traditional builders have put forward a number of logical solutions to ensure human comfort. With these solutions, the environmental problems of these regions have long been solved. Taking into account the experience of traditional architecture in Iran's hot and dry climate, sustainable architecture can be achieved.

Keywords: sustainable development, human rights, the right to development, the human rights-based approach to development, environmental rights, economic development, social sustainability human rights protection, human rights violations, workers’ rights, justice, security

Procedia PDF Downloads 77
813 Optimization of Strategies and Models Review for Optimal Technologies-Based on Fuzzy Schemes for Green Architecture

Authors: Ghada Elshafei, A. Elazim Negm

Abstract:

Recently, green architecture has become a significant path to a sustainable future. Green building design involves finding the balance between comfortable homebuilding and a sustainable environment. Moreover, new technologies such as artificial intelligence techniques are used to complement current practices in creating greener structures that keep the built environment more sustainable. The most common objective is that green buildings should be designed to minimize the overall impact of the built environment on ecosystems in general, and particularly on human health and the natural environment. This leads to protecting occupant health, improving employee productivity, reducing pollution, and sustaining the environment. In green building design, multiple parameters, which may be interrelated, contradictory, vague, and of a qualitative or quantitative nature, are in broad use. This paper presents a comprehensive, critical, state-of-the-art review of current practices based on fuzzy techniques and their combinations. It also presents how green architecture and buildings can be improved using the technologies analyzed, in order to seek optimal green solution strategies and models that assist in making the best possible decision among different alternatives.

Keywords: green architecture/building, technologies, optimization, strategies, fuzzy techniques, models

Procedia PDF Downloads 475
812 Regulatory and Economic Challenges of AI Integration in Cyber Insurance

Authors: Shreyas Kumar, Mili Shangari

Abstract:

Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. 
AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.

Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware

Procedia PDF Downloads 33