Search results for: algorithmic anti-Blackness
45 Knowledge Diffusion via Automated Organizational Cartography (Autocart)
Authors: Mounir Kehal
Abstract:
The post-globalization epoch has placed businesses everywhere in new and different competitive situations, where knowledgeable, effective and efficient behavior provides the competitive and comparative edge. Enterprises have turned to explicit - and even to conceptualizing tacit - knowledge management to elaborate a systematic approach to developing and sustaining the intellectual capital needed to succeed. To do that, an organization must be able to visualize itself as consisting of nothing but knowledge and knowledge flows, presented in a graphical and visual framework referred to as automated organizational cartography. This creates the ability to actively classify existing organizational content evolving from and within data feeds, in an algorithmic manner, potentially yielding insightful schemes and dynamics by which organizational know-how is visualized. The paper discusses and elaborates on the most recent and applicable definitions and classifications of knowledge management, representing a wide range of views from the mechanistic (systematic, data driven) to the more socially (psychologically, cognitive/metadata driven) orientated. More elaborate continuum models, for knowledge acquisition and reasoning purposes, are used to effectively represent the domain of information that an end user may draw on in their decision-making process when utilizing available organizational intellectual resources (i.e. Autocart). In this paper, we present an empirical research study conducted previously to explore knowledge diffusion in a specialist knowledge domain.
Keywords: knowledge management, knowledge maps, knowledge diffusion, organizational cartography
Procedia PDF Downloads 308
44 Implementation of an Inference Fuzzy System as a Valuation Subsidiary Based on Particle Swarm Optimization to Solve the Issue of Decision Making in the Middle Size Soccer Robot League
Authors: Zahra Abdolkarimi, Naser Zouri
Abstract:
The rapid growth of robotics has created a collection of complex and motivating subjects in robotics and artificial intelligence, and has made mechatronics the theoretical and technical basis of work in RoboCup. In addition, RoboCup provides a standardized testing platform that is widely discussed in computer science. The long-term purpose of RoboCup is to create, by 2050, a team of autonomous robots able to win against the world champion human team under FIFA rules. The decision making of the robots depends on the reaction of the environment, their teammates and the rival players, using an inference fuzzy system as a valuation subsidiary to solve the decision-making issue of robots in the field game. The proposed method is evaluated, in comparison with other methods, by the percentage of victories achieved against the same team playing randomly. The results show that the combination of Particle Swarm Optimization and the fuzzy inference system outperforms other algorithmic decision-making approaches for the robots.
Keywords: PSO algorithm, inference fuzzy system, chaos theory, soccer robot league
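For illustration only, the sketch below shows how a PSO loop can tune the membership parameters of a tiny fuzzy action scorer for a simulated shooting decision. The game model, the triangular membership shapes, and the agreement-based fitness proxy are invented assumptions; the paper's evaluation against randomly playing teams is not reproduced here.

```python
# Purely illustrative: PSO tuning a tiny fuzzy-style "shoot" scorer for simulated states.
import numpy as np

rng = np.random.default_rng(7)
states = rng.uniform(0, 1, size=(300, 2))              # [distance_to_goal, opponent_pressure]
expert = (states[:, 0] < 0.35) & (states[:, 1] < 0.5)  # proxy "ground truth": shoot when close and unmarked

def tri(x, centre, width):
    """Triangular membership degree of x around `centre`."""
    return np.clip(1.0 - np.abs(x - centre) / width, 0.0, 1.0)

def shoot_score(params, s):
    c_goal, w_goal, c_press, w_press = params
    near_goal = tri(s[:, 0], c_goal, w_goal)            # rule antecedents
    low_press = tri(s[:, 1], c_press, w_press)
    return near_goal * low_press                        # "AND" of the two antecedents as a product

def fitness(params):
    decide_shoot = shoot_score(params, states) > 0.5
    return np.mean(decide_shoot == expert)              # agreement with the expert proxy

N, DIM, ITERS = 20, 4, 60
x = rng.uniform(0.05, 1.0, size=(N, DIM))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(ITERS):
    r1, r2 = rng.uniform(size=(2, N, DIM))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)   # standard PSO velocity update
    x = np.clip(x + v, 0.05, 1.0)
    f = np.array([fitness(p) for p in x])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("tuned fuzzy parameters:", np.round(gbest, 2), "agreement:", round(fitness(gbest), 3))
```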
Procedia PDF Downloads 403
43 Knowledge Diffusion via Automated Organizational Cartography: Autocart
Authors: Mounir Kehal, Adel Al Araifi
Abstract:
The post-globalisation epoch has placed businesses everywhere in new and different competitive situations, where knowledgeable, effective and efficient behaviour provides the competitive and comparative edge. Enterprises have turned to explicit - and even to conceptualising tacit - Knowledge Management to elaborate a systematic approach to developing and sustaining the Intellectual Capital needed to succeed. To do that, an organisation must be able to visualise itself as consisting of nothing but knowledge and knowledge flows, presented in a graphical and visual framework referred to as automated organizational cartography. This creates the ability to actively classify existing organizational content evolving from and within data feeds, in an algorithmic manner, potentially yielding insightful schemes and dynamics by which organizational know-how is visualised. The paper discusses and elaborates on the most recent and applicable definitions and classifications of knowledge management, representing a wide range of views from the mechanistic (systematic, data driven) to the more socially (psychologically, cognitive/metadata driven) orientated. More elaborate continuum models, for knowledge acquisition and reasoning purposes, are used to effectively represent the domain of information that an end user may draw on in their decision-making process when utilising available organizational intellectual resources (i.e. Autocart). In this paper we likewise present an empirical research study conducted previously to explore knowledge diffusion in a specialist knowledge domain.
Keywords: knowledge management, knowledge maps, knowledge diffusion, organizational cartography
Procedia PDF Downloads 417
42 The Impact of AI on Higher Education
Authors: Georges Bou Ghantous
Abstract:
This literature review examines the transformative impact of Artificial Intelligence (AI) on higher education, highlighting both the potential benefits and challenges associated with its adoption. The review reveals that AI significantly enhances personalized learning by tailoring educational experiences to individual student needs, thereby boosting engagement and learning outcomes. Automated grading systems streamline assessment processes, allowing educators to focus on improving instructional quality and student interaction. AI's data-driven insights provide valuable analytics, helping educators identify trends in at-risk students and refine teaching strategies. Moreover, AI promotes enhanced instructional innovation through the adoption of advanced teaching methods and technologies, enriching the educational environment. Administrative efficiency is also improved as AI automates routine tasks, freeing up time for educators to engage in research and curriculum development. However, the review also addresses the challenges that accompany AI integration, such as data privacy concerns, algorithmic bias, dependency on technology, reduced human interaction, and ethical dilemmas. This balanced exploration underscores the need for careful consideration of both the advantages and potential hurdles in the implementation of AI in higher education.
Keywords: administrative efficiency, data-driven insights, data privacy, ethical dilemmas, higher education, personalized learning
Procedia PDF Downloads 26
41 Geometric Intuition and Formalism in Passing from Indivisibles to Infinitesimals: Pascal and Leibniz
Authors: Remus Titiriga
Abstract:
The paper focuses on Pascal's indivisibles evolving into Leibniz's infinitesimals. It starts with parallel developments by the two savants in combinatorics (triangular numbers for Pascal and harmonic triangles for Leibniz) and their implication in determining the sum of mathematical series. It follows with a focus on the geometrical contributions of Pascal, who considered the cycloid and other mechanical curves the epitome of geometric comprehensibility in a series of challenging problems he posed to the mathematical world. Pascal provided the solutions in 1658, in a volume published under the pseudonym of Dettonville, using indivisibles and ratios between curved and straight lines. In the third part, the research follows the impact of this volume on Leibniz as the initial impetus for the elaboration of modern calculus as an algorithmic method disjoint from geometrical intuition. The paper then analyses the further steps and shows that Leibniz's developments relate to his philosophical frame (the search for a characteristica universalis, the principle of continuity, the principle of sufficient reason), different from Pascal's, and impacting the mathematical problems and their solutions. At this stage in Leibniz's evolution, the infinitesimals replaced the indivisibles proper. The last part of the paper starts with a speculation: could Pascal, had he lived longer, have accomplished the same feat? The paper uses Pascal's reconstructed philosophical frame to formulate a positive answer. It also proposes to teach calculus with indivisibles and infinitesimals, mimicking Pascal's and Leibniz's achievements.
Keywords: indivisibles, infinitesimals, characteristic triangle, the principle of continuity
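As a one-line illustration (not taken from the paper) of the kind of series result both triangles yield, the entries 1/(n(n+1)) along the second diagonal of Leibniz's harmonic triangle telescope:

\sum_{n=1}^{\infty} \frac{1}{n(n+1)} = \sum_{n=1}^{\infty} \left( \frac{1}{n} - \frac{1}{n+1} \right) = 1.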
Procedia PDF Downloads 129
40 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)
Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton
Abstract:
Cold-start is a notoriously difficult problem which can occur in recommendation systems, and arises when there is insufficient information to draw inferences for users or items. To address this challenge, a contextual bandit algorithm – the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST) – is proposed, which is designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment projection variational methods: Expectation Propagation (EP), which performs well at the cold start, but becomes slow as the amount of data increases; and Assumed Density Filtering (ADF), which has slower growth of computational cost with data size but requires more data to obtain an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, it is able to exploit their complementary strengths. The empirical justification for FAB-COST is presented, and it is systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.
Keywords: cold-start learning, expectation propagation, multi-armed bandits, Thompson Sampling, variational inference
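For readers who want a concrete picture of the switching idea, below is a minimal, purely illustrative sketch: a logistic contextual bandit with Thompson sampling that refits an expensive batch (Laplace-style) approximation while data is scarce and hands over to a cheap single-pass streaming update once a size threshold is crossed. The threshold, the diagonal-Gaussian updates, and the simulated click model are assumptions for illustration; this is not the paper's EP or ADF derivation.

```python
# Schematic only: accurate-but-slow batch refits early, cheap streaming updates later.
import numpy as np

rng = np.random.default_rng(0)
D, N_ARMS, SWITCH_AT = 5, 3, 200   # feature dimension, arms, hand-over point (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SwitchingBayesLogit:
    """Diagonal-Gaussian posterior over the weights of one arm's click model."""
    def __init__(self, d, prior_var=1.0):
        self.prior_prec = 1.0 / prior_var
        self.m = np.zeros(d)                 # posterior mean
        self.v = np.full(d, prior_var)       # posterior variance (diagonal)
        self.X, self.y = [], []              # replay buffer, used only by the batch refit

    def sample_weights(self):
        return rng.normal(self.m, np.sqrt(self.v))

    def batch_refit(self):
        # "Accurate but slow" phase: MAP fit on all data so far, Laplace-style variance.
        X, y = np.array(self.X), np.array(self.y)
        w = self.m.copy()
        for _ in range(50):                  # a few gradient steps are enough for a sketch
            p = sigmoid(X @ w)
            grad = X.T @ (p - y) + self.prior_prec * w
            w -= 0.1 * grad / len(y)
        self.m = w
        p = sigmoid(X @ w)
        self.v = 1.0 / ((X**2 * (p * (1 - p))[:, None]).sum(axis=0) + self.prior_prec)

    def streaming_update(self, x, y):
        # "Cheap" phase: one assumed-density-style pass per observation, constant cost.
        p = sigmoid(x @ self.m)
        self.v = 1.0 / (1.0 / self.v + p * (1 - p) * x**2)
        self.m -= self.v * (p - y) * x

    def update(self, x, y):
        self.X.append(x); self.y.append(y)
        if len(self.y) <= SWITCH_AT:
            self.batch_refit()
        else:
            self.streaming_update(x, y)

true_w = rng.normal(size=(N_ARMS, D))
arms = [SwitchingBayesLogit(D) for _ in range(N_ARMS)]
clicks = 0
for t in range(1000):
    x = rng.normal(size=D)
    chosen = int(np.argmax([sigmoid(x @ a.sample_weights()) for a in arms]))  # Thompson sampling
    y = float(rng.random() < sigmoid(x @ true_w[chosen]))                     # simulated click
    arms[chosen].update(x, y)
    clicks += y
print(f"simulated clicks: {clicks:.0f} / 1000")
```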
Procedia PDF Downloads 108
39 A Recommender System for Job Seekers to Show up Companies Based on Their Psychometric Preferences and Company Sentiment Scores
Authors: A. Ashraff
Abstract:
The increasing importance of the web as a medium for electronic and business transactions has served as a catalyst, or rather a driving force, for the introduction and implementation of recommender systems. Recommender systems play a major role in processing and analyzing thousands of data rows or reviews and help humans make purchase decisions about a product or service. They also have the ability to predict whether a particular user would rate a product or service, based on the user's behavioral profile. At present, recommender systems are used extensively in almost every domain and are said to be ubiquitous; however, in the field of recruitment they are not being utilized extensively. Recent statistics show an increase in staff turnover, which has negatively impacted both the organization and the employee; the reasons include company culture, working flexibility (work-from-home opportunities), lack of learning advancement, and pay scale. Further investigation revealed a lack of guidance or support to help a job seeker find the company that will suit them best, and although information about companies is available, job seekers cannot read all the reviews by themselves and reach an analytical decision. In this paper, we propose an approach that studies the available review data on IT companies (scoring the reviews based on user review sentiment) and gathers information on job seekers, including their psychometric evaluations, and then presents the job seeker with useful outputs on which company is most suitable for them. The theoretical approach, the algorithmic approach and the importance of such a system are discussed in this paper.
Keywords: psychometric tests, recommender systems, sentiment analysis, hybrid recommender systems
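As a rough illustration of the kind of hybrid scoring described, the sketch below blends a cosine match between a seeker's psychometric profile and a company's culture traits with an aggregate review sentiment score. The companies, trait vectors, sentiment values, and blending weight are invented placeholders, not data from the proposed system.

```python
# Minimal hybrid scoring sketch: psychometric fit blended with review sentiment.
import numpy as np

companies = {
    #            [collaboration, flexibility, learning, pay]   review sentiment in [0, 1]
    "AcmeSoft":  (np.array([0.9, 0.3, 0.8, 0.5]), 0.72),
    "DataWorks": (np.array([0.4, 0.9, 0.6, 0.7]), 0.55),
    "CloudNine": (np.array([0.6, 0.7, 0.9, 0.4]), 0.81),
}
seeker_profile = np.array([0.8, 0.6, 0.9, 0.3])   # from the psychometric evaluation
ALPHA = 0.6                                       # weight on psychometric fit vs sentiment

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranking = sorted(
    ((ALPHA * cosine(seeker_profile, traits) + (1 - ALPHA) * sentiment, name)
     for name, (traits, sentiment) in companies.items()),
    reverse=True,
)
for score, name in ranking:
    print(f"{name}: suitability {score:.3f}")
```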
Procedia PDF Downloads 106
38 Exploring Time-Series Phosphoproteomic Datasets in the Context of Network Models
Authors: Sandeep Kaur, Jenny Vuong, Marcel Julliard, Sean O'Donoghue
Abstract:
Time-series data are useful for modelling as they enable model evaluation. However, when reconstructing models from phosphoproteomic data, non-exact methods are often utilised, as the knowledge regarding the network structure - such as which kinases and phosphatases lead to the observed phosphorylation state - is incomplete. Thus, such reactions are often hypothesised, which gives rise to uncertainty. Here, we propose a framework, implemented via a web-based tool (as an extension to Minardo), which, given time-series phosphoproteomic datasets, can generate κ models. The incompleteness and uncertainty in the generated model and reactions are clearly presented to the user visually. Furthermore, we demonstrate, via a toy EGF signalling model, the use of algorithmic verification to verify κ models. Manually formulated requirements were evaluated against the model, leading to the highlighting of the nodes causing unsatisfiability (i.e. error-causing nodes). We aim to integrate such methods into our web-based tool and demonstrate how the identified erroneous nodes can be presented to the user visually. Thus, in this research we present a framework to enable a user to explore phosphoproteomic time-series data in the context of models. The observer can visualise which reactions in the model are highly uncertain and which nodes cause incorrect simulation outputs. A tool such as this enables an end user to determine the empirical analysis to perform to reduce uncertainty in the presented model - thus enabling a better understanding of the underlying system.
Keywords: κ-models, model verification, time-series phosphoproteomic datasets, uncertainty and error visualisation
Procedia PDF Downloads 255
37 Algorithmic Approach to Management of Complications of Permanent Facial Filler: A Saudi Experience
Authors: Luay Alsalmi
Abstract:
Background: Facial filler is the most common type of cosmetic surgery next to botox. Permanent filler is preferred nowadays due to the low cost brought about by non-recurring injection appointments. However, such fillers pose a higher risk of complications, with even greater adverse effects when the procedure is done using unknown dermal filler injections. Aim: This study aimed to establish an algorithm to categorize and manage patients who receive permanent fillers. Materials and Methods: Twelve participants presented to the service, through the emergency department or as outpatients, from November 2015 to May 2021. Demographics such as age, sex, date of injection, time of onset, and type of complication were collected. After examination, all cases were managed based on the established algorithm. FACE-Q was used to measure overall satisfaction and psychological well-being. Results: An algorithm to diagnose and manage these patients effectively, with a high satisfaction rate, was established in this study. All participants were non-smoker females with no known medical comorbidities. The algorithm determined the treatment plan when complications were encountered. Results revealed high appearance-related psychosocial distress prior to surgery, which dropped significantly after surgery. FACE-Q established evidence of satisfactory ratings among patients before and after surgery. Conclusion: This treatment algorithm can guide the surgeon in formulating a suitable plan with fewer complications and a high satisfaction rate.
Keywords: facial filler, FACE-Q, psycho-social stress, botox, treatment algorithm
Procedia PDF Downloads 84
36 Study of Religious Women's Acceptance of Religious Women Bloggers on Instagram
Authors: Ali Momeni
Abstract:
Visual media has had a significant impact on the mental structure and behaviors of humanity. One interactive platform that has played a major role in this is Instagram. In Islamic countries, particularly Iran, many Muslims have embraced this interactive media platform for various reasons. Instagram has also provided an opportunity for individuals to become famous and gain micro-celebrity status through its semi-algorithmic features. A notable group of Iranian women who have gained fame through Instagram are religious Muslim women who have transitioned into bloggers. These Iranian religious women bloggers (IRWB) have garnered a large following by showcasing different models of hijab and their private lives. This research qualitatively studies the representation of femininity and religiosity by these women. The main question addressed is how religious women receive the Instagram activity of IRWB. Drawing on concepts such as 'The Society of the Spectacle' and 'Celebrity Online', this study utilized the netnography method to analyze 14 pages of IRWB. Data was collected in two phases: the first phase involved the analysis of religious women's comments on posts related to these themes, and the second phase included interviews with religious women students who view or follow these pages. A total of 120 comments and 14 interviews were thematically analyzed. The results revealed that the reception of these pages by religious women fell into four main themes: the spectacle of femininity, the commercialization of religiosity, the distortion of Islam, and the construction of religiosity and femininity. Ultimately, religious women did not find these pages to be reflective of their own experiences of female and religious life.
Keywords: women, bloggers, Instagram, IRWB, reception
Procedia PDF Downloads 75
35 Image Ranking to Assist Object Labeling for Training Detection Models
Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman
Abstract:
Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
Keywords: computer vision, deep learning, object detection, semiconductor
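The iterative select-then-label loop can be pictured with the minimal sketch below. The paper scores novelty with a U-shaped deep network over wafer images; here a plain nearest-labeled-neighbour distance in a synthetic feature space stands in for that novelty score, and the manual labeling step is simulated.

```python
# Schematic active-selection loop: pick the most "novel" unlabeled images each round.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 16))                          # stand-in for per-image features
labeled_idx = list(rng.choice(500, size=20, replace=False))    # initial manual labels
BATCH, ROUNDS = 25, 5

def novelty_scores(feats, labeled):
    """Distance of each image to its nearest labeled image (higher = more novel)."""
    d = np.linalg.norm(feats[:, None, :] - feats[labeled][None, :, :], axis=2)
    return d.min(axis=1)

for r in range(ROUNDS):
    scores = novelty_scores(features, labeled_idx)
    scores[labeled_idx] = -np.inf                 # never re-select already-labeled images
    picked = np.argsort(scores)[-BATCH:]          # most novel images go to the labeler
    labeled_idx.extend(int(i) for i in picked)    # simulate the manual labeling step
    print(f"round {r + 1}: labeled {len(labeled_idx)} images, "
          f"max selected novelty {scores[picked].max():.2f}")
```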
Procedia PDF Downloads 136
34 The Cultural Persona of Artificial Intelligence: An Analysis of Anthropological Challenges to Public Communication
Authors: Abhivardhan, Ritu Agarwal
Abstract:
The role of entrepreneurial ethics is connected with materializing the core components of human life, and flexible and gullible attributions dominate the materialization of human lifestyle and outreach in the age of the internet and globalization. One of the key by-products of the age of information - Artificial Intelligence - has become a relevant mechanism for materializing and understanding human empathy and originality via various algorithmic policing methodologies with specific intricacies. Since it has a special connection with ethnocentrism, it has the potential to influence the approach of international law and politics, owing to the rise of populism as an approach to perception and communication in progressive and third-world countries. The paper argues that the cultural persona of artificial intelligence, and its ontological resemblance in human life, is connected with the ethnocentric treatment of cyberspace, with an analysis of the influence of the ethics of entrepreneurship in international politics. The paper further provides an analysis of fake news and misinformation as the sub-strata of communication strategies involving populism, determined as a communication strategy, and of the legal case of constitutional redemption in recent legislative developments in Europe, the U.S. and Asia, with reference to certain important strategies, policy documents, declarations, and legal instruments. The paper concludes that the capillaries of the anthropomorphic development of cultural perception towards artificial intelligence have a hidden and unstable connection with the common approach of entrepreneurial ethics, which influences populism to disrupt the peaceful order of international politics via some minor backlashes in the technological, legal and social realms of human life. Suggestions are provided with the conclusion.
Keywords: ethnocentrism, perception politics, populism, international law, slacktivism, artificial intelligence ethics, enculturation
Procedia PDF Downloads 129
33 A Review of Deep Learning Methods in Computer-Aided Detection and Diagnosis Systems based on Whole Mammogram and Ultrasound Scan Classification
Authors: Ian Omung'a
Abstract:
Breast cancer remains one of the deadliest cancers for women worldwide, with the risk of developing tumors being as high as 50 percent in Sub-Saharan African countries like Kenya. With as many as 42 percent of these cases diagnosed late, when the cancer has metastasized and/or the prognosis has become terminal, Full Field Digital (FFD) Mammography remains an effective screening technique that leads to early detection where, in most cases, successful interventions can be made to control or eliminate the tumors altogether. FFD mammograms have been proven to be far more effective when used together with Computer-Aided Detection and Diagnosis (CADe) systems, which rely on algorithmic implementations of deep learning techniques in computer vision to carry out pattern recognition comparable to the level of a human radiologist, deciphering whether specific areas of interest in the mammogram scan image portray abnormalities, if any, and whether these abnormalities are indicative of a benign or malignant tumor. Within this paper, we review emergent deep learning techniques that will prove relevant to the development of state-of-the-art FFD mammogram CADe systems. These techniques span self-supervised learning for context-encoded occlusion, self-supervised learning for pre-processing and labeling automation, as well as the creation of a standardized large-scale mammography dataset as a benchmark for the evaluation of CADe systems. Finally, comparisons are drawn between existing practices that pre-date these techniques and how the development of CADe systems that incorporate them will be different.
Keywords: breast cancer diagnosis, computer aided detection and diagnosis, deep learning, whole mammogram classification, ultrasound classification, computer vision
Procedia PDF Downloads 93
32 Solving a Micromouse Maze Using an Ant-Inspired Algorithm
Authors: Rolando Barradas, Salviano Soares, António Valente, José Alberto Lencastre, Paulo Oliveira
Abstract:
This article reviews Ant Colony Optimization, a nature-inspired algorithm, and its implementation in the Scratch/m-Block programming environment. Ant Colony Optimization belongs to the family of Swarm Intelligence-based algorithms and is a subset of biologically inspired algorithms. The starting point is a problem in which one has a maze and needs to find the path to its center and return to the starting position; this is similar to an ant looking for a path to a food source and returning to its nest. Starting with the implementation of a simple wall-follower simulator, the proposed solution uses a dynamic graphical interface that allows young students to observe the ants' movement while the algorithm optimizes the routes to the maze's center. Details such as interface usability, data structures, and the conversion of algorithmic language to Scratch syntax were addressed during this implementation. This gives young students an easier way to understand the computational concepts of sequences, loops, parallelism, data, events, and conditionals, as they are used throughout the implemented algorithms. Future work includes simulation results with real contest mazes and two different pheromone update methods, and a comparison with the optimized results of the winners of each edition of the contest. It will also include the creation of a Digital Twin relating the virtual simulator to a real micromouse in a full-size maze. The first test results show that the algorithm found the same optimized solutions that were found by the winners of each edition of the Micromouse contest, making this a good solution for maze pathfinding.
Keywords: nature inspired algorithms, scratch, micromouse, problem-solving, computational thinking
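A compact Python analogue of the ant-inspired idea is sketched below (the paper's implementation is in Scratch/m-Block, and its mazes, pheromone rules, and parameters differ): ants walk a small grid maze biased by pheromone, pheromone evaporates each iteration, and shorter successful paths receive stronger deposits, so the colony gradually prefers the shorter route to the goal.

```python
# Toy ant colony optimization on a grid maze with two routes of different length.
import random

random.seed(1)
# 0 = open cell, 1 = wall; start at the top-left corner, goal inside the maze
MAZE = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 0, 0],
    [0, 1, 0, 1, 1],
    [0, 0, 0, 0, 0],
]
START, GOAL = (0, 0), (2, 3)
ROWS, COLS = len(MAZE), len(MAZE[0])
pheromone = {(r, c): 1.0 for r in range(ROWS) for c in range(COLS) if MAZE[r][c] == 0}

def neighbours(cell):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < ROWS and 0 <= nc < COLS and MAZE[nr][nc] == 0:
            yield (nr, nc)

def ant_walk():
    """One ant's pheromone-biased walk from START to GOAL (backtracks out of dead ends)."""
    path, seen = [START], {START}
    while path[-1] != GOAL:
        options = [n for n in neighbours(path[-1]) if n not in seen]
        if not options:
            path.pop()                        # dead end: step back
            continue
        weights = [pheromone[n] ** 2 for n in options]
        path.append(random.choices(options, weights)[0])
        seen.add(path[-1])
    return path

best = None
for _ in range(200):                          # iterations of the colony
    paths = [ant_walk() for _ in range(20)]
    for cell in pheromone:
        pheromone[cell] *= 0.9                # evaporation
    for p in paths:
        for cell in p:
            pheromone[cell] += 1.0 / len(p)   # shorter paths deposit more per cell
    shortest = min(paths, key=len)
    if best is None or len(shortest) < len(best):
        best = shortest
print("best path length:", len(best), best)
```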
Procedia PDF Downloads 125
31 A Distributed Mobile Agent-Based Intrusion Detection System for MANET
Authors: Maad Kamal Al-Anni
Abstract:
This study concerns an artificial neural network approach, based on the Multilayer Perceptron (MLP), to the classification and clustering of Mobile Ad hoc Network (MANET) vulnerabilities. A MANET is a ubiquitous, intelligent internetworking arrangement of autonomous mobile nodes, connected via wireless links, that are able to sense their environment. Security is the most important concern in a MANET due to the ease with which such an auto-configuring network can be penetrated. One of the powerful techniques used for inspecting network packets is the Intrusion Detection System (IDS); in this article, we show the effectiveness of artificial neural networks, used as machine learning together with a stochastic approach (information gain), in classifying malicious behaviors in a simulated network, compared with different IDS techniques. The monitoring agent is responsible for the detection inference engine; audit data is collected from the collecting agent by simulating node attacks, and the outputs are contrasted with the normal behaviors of the framework. Whenever there is any deviation from the ordinary behaviors, the monitoring agent considers the event an attack. We demonstrate the signature-based IDS approach in a MANET by implementing the back-propagation algorithm over an ensemble-based Traffic Table (TT); thus the signatures of malicious behaviors or undesirable activities are predicted and efficiently identified. By tuning the parametric set-up of the back-propagation algorithm, the experimental results empirically show its effectiveness, with a detection ratio of up to 98.6 percent. Performance metrics are also included, with Xgraph plots of different throughputs such as Packet Delivery Ratio (PDR), Throughput (TP), and Average Delay (AD).
Keywords: Intrusion Detection System (IDS), Mobile Adhoc Networks (MANET), Back Propagation Algorithm (BPA), Neural Networks (NN)
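The classification pipeline described, information-gain-style feature ranking followed by a back-propagation (MLP) classifier, can be sketched as below. The synthetic "traffic table" features, labels, and parameter choices are assumptions for illustration; the MANET simulation and the agent framework are not reproduced.

```python
# Illustrative pipeline: mutual-information feature selection + MLP (back-propagation) classifier.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 10))                   # e.g. packet rates, drop counts, hop changes (invented)
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=n) > 1.0).astype(int)  # 1 = malicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectKBest(mutual_info_classif, k=5).fit(X_tr, y_tr)   # information-gain-like ranking
mlp = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
mlp.fit(selector.transform(X_tr), y_tr)

print("detection accuracy:", round(mlp.score(selector.transform(X_te), y_te), 3))
```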
Procedia PDF Downloads 194
30 Testing and Validation of Stochastic Models in Epidemiology
Authors: Snigdha Sahai, Devaki Chikkavenkatappa Yellappa
Abstract:
This study outlines approaches for testing and validating stochastic models used in epidemiology, focusing on the integration and functional testing of simulation code. It details methods for combining simple functions into comprehensive simulations, distinguishing between deterministic and stochastic components, and applying tests to ensure robustness. Techniques include isolating stochastic elements, utilizing large sample sizes for validation, and handling special cases. Practical examples are provided using R code to demonstrate integration testing, handling of incorrect inputs, and special cases. The study emphasizes the importance of both functional and defensive programming to enhance code reliability and user-friendliness.
Keywords: computational epidemiology, epidemiology, public health, infectious disease modeling, statistical analysis, health data analysis, disease transmission dynamics, predictive modeling in health, population health modeling, quantitative public health, random sampling simulations, randomized numerical analysis, simulation-based analysis, variance-based simulations, algorithmic disease simulation, computational public health strategies, epidemiological surveillance, disease pattern analysis, epidemic risk assessment, population-based health strategies, preventive healthcare models, infection dynamics in populations, contagion spread prediction models, survival analysis techniques, epidemiological data mining, host-pathogen interaction models, risk assessment algorithms for disease spread, decision-support systems in epidemiology, macro-level health impact simulations, socioeconomic determinants in disease spread, data-driven decision making in public health, quantitative impact assessment of health policies, biostatistical methods in population health, probability-driven health outcome predictions
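A minimal Python analogue of the three kinds of checks described (the paper's own worked examples are in R) might look like the following: defensive rejection of invalid inputs, an exact special case, and large-sample validation of the stochastic component against its analytical mean. The toy infection-step model and the tolerance are assumptions chosen for the illustration.

```python
# Sketch of functional + defensive testing for a stochastic epidemic step.
import numpy as np

def sir_infections(S, I, N, beta, rng):
    """One stochastic step: new infections ~ Binomial(S, 1 - exp(-beta * I / N))."""
    if min(S, I, N) < 0 or N == 0 or beta < 0:
        raise ValueError("S, I, N must be non-negative, N > 0, beta >= 0")
    p = 1.0 - np.exp(-beta * I / N)
    return rng.binomial(S, p)

rng = np.random.default_rng(42)

# Defensive test: invalid inputs are rejected rather than silently propagated.
try:
    sir_infections(-1, 5, 100, 0.3, rng)
    raise AssertionError("negative S should have raised")
except ValueError:
    pass

# Special case: beta = 0 must be exactly deterministic (no new infections).
assert sir_infections(500, 10, 1000, 0.0, rng) == 0

# Stochastic component: validate against the analytical mean using a large sample.
S, I, N, beta = 900, 100, 1000, 0.3
expected = S * (1.0 - np.exp(-beta * I / N))
draws = [sir_infections(S, I, N, beta, rng) for _ in range(20000)]
assert abs(np.mean(draws) - expected) < 0.5   # tolerance chosen for this sample size
print("all checks passed; empirical mean", round(float(np.mean(draws)), 2),
      "vs expected", round(expected, 2))
```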
Procedia PDF Downloads 6
29 Slowness in Architecture: The Pace of Human Engagement with the Built Environment
Authors: Jaidev Tripathy
Abstract:
A human generation's lifestyle, behaviors, habits, and actions are governed heavily by homogenous mindsets. The current scenario, however, is witnessing a rapidly growing gap in this homogeneity as a result of an intervention, or rather the dominance, of the digital revolution in the human lifestyle. The current mindset of mass production, employment, multi-tasking, rapid involvement, and stiff competition to stay above the rest has led to a major shift in human consciousness. Architecture, as an entity, is being perceived differently: the screens are replacing the skies. The pace at which operation and evolution take place has increased. It is paradoxical that time seems to be moving faster despite the intention to save time. In parallel, there is an evident shift in architectural typologies spanning different generations. The architecture of today seems heavily influenced from here and there. Mass production of buildings and over-exploitation of resources, giving shape to uninspiring algorithmic designs that ambiguously cater to multiple user groups, has become a prevalent theme. Borrow-and-steal replaces influence, and the diminishing depth in today's designs reflects a lack of understanding and connection. The digitally dominated world, perceived as an aid to connect and network, is making humans less capable of real-life interaction and understanding. It is not wrong, but it doesn't seem right either. The engagement level between human beings and the built environment is a concern which surfaces. This leads to a question: does human engagement drive architecture, or does architecture drive human engagement? This paper attempts to relook at architecture's capacity, and its relation to pace, to influence the conscious decisions of a human being. Secondary research, supported with case examples, helps in understanding the translation of human engagement with the built environment through the physicality of architecture. The underlying theme is pace and the role of slowness in the context of human behaviors, thus bridging the widening gap between the human race and the architecture it gives shape to, and avoiding a possible future dystopian world.
Keywords: junkspace, pace, perception, slowness
Procedia PDF Downloads 109
28 System Dietadhoc® - A Fusion of Human-Centred Design and Agile Development for the Explainability of AI Techniques Based on Nutritional and Clinical Data
Authors: Michelangelo Sofo, Giuseppe Labianca
Abstract:
In recent years, the scientific community's interest in the exploratory analysis of biomedical data has increased exponentially. In the research field of nutritional biologists, the curative process based on the analysis of clinical data is a very delicate operation, due to the fact that there are multiple solutions for the management of pathologies in the food sector (for example, intolerances and allergies, management of cholesterol metabolism, diabetic pathologies, arterial hypertension, up to obesity and breathing and sleep problems). In this regard, in this research work a system was created that is capable of evaluating various dietary regimes for specific patient pathologies. The system is founded on a mathematical-numerical model and has been tailored to the real working needs of an expert in human nutrition using human-centred design (ISO 9241-210); it is therefore in step with continuous scientific progress in the field and evolves through the experience of managed clinical cases (a machine learning process). DietAdhoc® is a decision support system for nutrition specialists treating patients of both sexes (from 18 years of age), developed with an agile methodology. Its task consists in drawing up the biomedical and clinical profile of the specific patient by applying two algorithmic optimization approaches to the nutritional data and a symbolic solution, obtained by transforming the relational database underlying the system into a deductive database. For all three solution approaches, particular emphasis has been given to the explainability of the suggested clinical decisions through flexible and customizable user interfaces. Furthermore, the system has multiple software modules based on time series and visual analytics techniques that allow the complete picture of the situation, and the evolution of the diet assigned for specific pathologies, to be evaluated.
Keywords: medical decision support, physiological data extraction, data driven diagnosis, human centered AI, symbiotic AI paradigm
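One of the optimization approaches could be pictured, very schematically, as a small linear program that assigns food quantities subject to a patient's nutrient targets, as in the sketch below. The foods, nutrient values, targets, and objective are invented for the example and do not reflect DietAdhoc's actual model.

```python
# Toy diet linear program: minimise energy while meeting protein and carbohydrate targets.
import numpy as np
from scipy.optimize import linprog

foods =     ["oats", "chicken", "lentils", "olive_oil", "yogurt"]
kcal =      np.array([389, 165, 116, 884, 59])      # per 100 g
protein_g = np.array([16.9, 31.0, 9.0, 0.0, 10.0])
carbs_g =   np.array([66.3, 0.0, 20.0, 0.0, 3.6])

c = kcal                                            # objective: minimise total kilocalories
A_ub = np.vstack([-protein_g, carbs_g])             # >= 90 g protein, <= 180 g carbohydrates
b_ub = np.array([-90.0, 180.0])
bounds = [(0, 5)] * len(foods)                      # 0 to 500 g of each food (x in units of 100 g)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
for name, x in zip(foods, res.x):
    if x > 1e-6:
        print(f"{name}: {100 * x:.0f} g")
print(f"total energy: {res.fun:.0f} kcal")
```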
Procedia PDF Downloads 23
27 The Role of Twitter Bots in Political Discussion on 2019 European Elections
Authors: Thomai Voulgari, Vasilis Vasilopoulos, Antonis Skamnakis
Abstract:
The aim of this study is to investigate the effect of artificial intelligence tools, such as troll factories and automated inauthentic accounts, on the European election campaigns (May 23-26, 2019) on Twitter. Our research focuses on the last European Parliamentary elections, which took place between 23 and 26 May 2019, specifically in Italy, Greece, Germany and France. It is difficult to estimate how many Twitter users are actually bots (Echeverría, 2017), and detection of fake accounts is becoming even more complicated as AI bots are made more advanced. A political bot can be programmed to post comments on a Twitter account for a political candidate, target journalists with manipulated content, or engage with politicians and artificially increase their impact and popularity. We analyze variables related to 1) the scope of activity of automated bot accounts, 2) the degree of coherence, and 3) the degree of interaction, taking into account different factors such as the type of content of Twitter messages and their intentions, as well as their spread to the general public. For this purpose, we collected large volumes of tweets from the accounts of party leaders and MEP candidates between the 10th of May and the 26th of July, based on hashtag content analysis, using an innovative network analysis tool known as MediaWatch.io (https://mediawatch.io/). According to our findings, one of the highest percentages (64.6%) of automated "bot" accounts during the 2019 European election campaigns was in Greece. In general terms, political bots aim at the proliferation of misinformation on social media, and targeting voters is one way this is achieved, contributing to social media manipulation. We found that political parties and individual politicians create and promote purposeful content on Twitter using algorithmic tools. Based on this analysis, online political advertising plays an important role in the process of spreading misinformation during election campaigns. Overall, inauthentic accounts and social media algorithms are being used to manipulate political behavior and public opinion.
Keywords: artificial intelligence tools, human-bot interactions, political manipulation, social networking, troll factories
Procedia PDF Downloads 138
26 A Statistical-Algorithmic Approach for the Design and Evaluation of a Fresnel Solar Concentrator-Receiver System
Authors: Hassan Qandil
Abstract:
Using a statistical algorithm implemented in MATLAB, four types of non-imaging Fresnel lenses are designed: spot-flat, linear-flat, dome-shaped and semi-cylindrical-shaped. The optimization employs a statistical ray-tracing methodology for the incident light, mainly considering the effects of chromatic aberration, varying focal lengths, solar inclination and azimuth angles, lens and receiver apertures, and the optimum number of prism grooves. While adopting an equal-groove-width assumption for the poly-methyl-methacrylate (PMMA) prisms, the main target is to maximize the ray intensity on the receiver's aperture and therefore achieve higher values of heat flux. The algorithm outputs prism angles and 2D sketches. 3D drawings are then generated via AutoCAD and linked to the COMSOL Multiphysics software to simulate the lenses under solar ray conditions, which provides optical and thermal analysis at both the lens' and the receiver's apertures, with conditions set as per the Dallas, TX weather data. Once the lenses' characterization is finalized, receivers are designed based on their optimized aperture sizes. Several cavity shapes, including triangular, arc-shaped and trapezoidal, are tested while coupled with a variety of receiver materials, working fluids, heat transfer mechanisms, and enclosure designs. A vacuum-reflective enclosure is also simulated for enhanced thermal absorption efficiency. Each receiver type is simulated via COMSOL while coupled with the optimized lens. A lab-scale prototype of the optimum lens-receiver configuration is then fabricated for experimental evaluation. Application-based testing is also performed for the selected configuration, including a photovoltaic-thermal cogeneration system and a solar furnace system. Finally, some future research work is pointed out, including the coupling of the collector-receiver system with an end-user power generator and the use of a multi-layered genetic algorithm for comparative studies.
Keywords: COMSOL, concentrator, energy, fresnel, optics, renewable, solar
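To make the groove-design step concrete, the sketch below computes facet (prism) angles for an idealised flat, equal-groove-width spot Fresnel lens using Snell's law, with light entering the flat face at normal incidence. The refractive index, focal length, aperture, and groove count are assumed values, and the paper's statistical treatment of chromatic aberration and solar angles is not reproduced.

```python
# Idealised facet angles for a flat, equal-groove-width spot Fresnel lens.
# A ray entering the flat face at normal incidence must be deviated by t = atan(r / f)
# toward the focus; Snell's law at the sloped facet gives tan a = sin t / (n - cos t).
import math

n_pmma = 1.49            # PMMA refractive index at a design wavelength (varies with colour)
focal_len = 300.0        # mm (assumed)
aperture_radius = 150.0  # mm (assumed)
n_grooves = 30           # equal-width grooves (assumed)

groove_width = aperture_radius / n_grooves
for i in range(n_grooves):
    r_mid = (i + 0.5) * groove_width             # mid-radius of groove i
    t = math.atan(r_mid / focal_len)             # required deviation toward the focus
    a = math.atan(math.sin(t) / (n_pmma - math.cos(t)))   # facet (prism) angle
    print(f"groove {i + 1:2d}: r = {r_mid:6.1f} mm, facet angle = {math.degrees(a):5.2f} deg")
```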
Procedia PDF Downloads 155
25 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed, medical records are a bit of a mess, and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. This article also indicates solutions to optimization problems and the benefits of Big Data for computational biology, and illustrates the current state of the art and future generations of HPC computing with Big Data.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
Procedia PDF Downloads 363
24 Data Clustering Algorithm Based on Multi-Objective Periodic Bacterial Foraging Optimization with Two Learning Archives
Authors: Chen Guo, Heng Tang, Ben Niu
Abstract:
Clustering splits objects into different groups based on similarity, making the objects have higher similarity in the same group and lower similarity in different groups. Thus, clustering can be treated as an optimization problem to maximize the intra-cluster similarity or inter-cluster dissimilarity. In real-world applications, the datasets often have some complex characteristics: sparsity, overlap, high dimensionality, etc. When facing these datasets, simultaneously optimizing two or more objectives can obtain better clustering results than optimizing one objective. However, except for objective-weighting methods, traditional clustering approaches have difficulty in solving multi-objective data clustering problems. Due to this, evolutionary multi-objective optimization algorithms are investigated by researchers to optimize multiple clustering objectives. In this paper, the Data Clustering algorithm based on Multi-objective Periodic Bacterial Foraging Optimization with two Learning Archives (DC-MPBFOLA) is proposed. Specifically, first, to reduce the high computing complexity of the original BFO, periodic BFO is employed as the basic algorithmic framework, and it is then extended into a multi-objective form. Second, two learning strategies are proposed based on the two learning archives to guide the bacterial swarm to move in a better direction. On the one hand, the global best is selected from the global learning archive according to the convergence index and diversity index. On the other hand, the personal best is selected from the personal learning archive according to the sum of weighted objectives. According to the aforementioned learning strategies, a chemotaxis operation is designed. Third, an elite learning strategy is designed to provide fresh power to the objects in the two learning archives. When the objects in these two archives do not change for two consecutive times, randomly initializing one dimension of the objects can prevent the proposed algorithm from falling into local optima. Fourth, to validate the performance of the proposed algorithm, DC-MPBFOLA is compared with four state-of-the-art evolutionary multi-objective optimization algorithms and one classical clustering algorithm on evaluation indexes of datasets. To further verify the effectiveness and feasibility of the designed strategies in DC-MPBFOLA, variants of DC-MPBFOLA are also proposed. Experimental results demonstrate that DC-MPBFOLA outperforms its competitors regarding all evaluation indexes and clustering partitions. These results also indicate that the designed strategies positively influence the performance improvement of the original BFO.
Keywords: data clustering, multi-objective optimization, bacterial foraging optimization, learning archives
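A heavily simplified sketch of the chemotaxis idea alone is given below: each "bacterium" encodes k candidate centroids and tumbles/swims in directions that improve a weighted pair of clustering objectives (compactness down, separation up). The periodic framework, the two learning archives, the elite learning strategy, and the multi-objective selection rules of DC-MPBFOLA are all omitted; the data and parameters are invented.

```python
# Toy bacterial-foraging chemotaxis for centroid-based clustering with two objectives.
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.4, size=(60, 2)) for m in ((0, 0), (3, 3), (0, 4))])
K, SWARM, CHEMO_STEPS, SWIM_LEN, STEP = 3, 12, 40, 4, 0.15

def objectives(centroids):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    compactness = d.min(axis=1).mean()                   # intra-cluster distance: minimise
    pair = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=2)
    separation = pair[np.triu_indices(K, 1)].min()       # inter-cluster distance: maximise
    return compactness, separation

def fitness(centroids, w=0.7):
    comp, sep = objectives(centroids)
    return w * comp - (1 - w) * sep                      # weighted scalarisation, to minimise

swarm = [X[rng.choice(len(X), K, replace=False)].copy() for _ in range(SWARM)]
for _ in range(CHEMO_STEPS):
    for i, bact in enumerate(swarm):
        direction = rng.normal(size=bact.shape)
        direction /= np.linalg.norm(direction)           # random tumble direction
        current = fitness(bact)
        for _ in range(SWIM_LEN):                        # bounded swim while improving
            trial = bact + STEP * direction
            if fitness(trial) < current:
                bact, current = trial, fitness(trial)
            else:
                break
        swarm[i] = bact

best = min(swarm, key=fitness)
print("best centroids:\n", np.round(best, 2))
print("(compactness, separation):", tuple(np.round(objectives(best), 3)))
```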
Procedia PDF Downloads 139
23 Synthetic Classicism: A Machine Learning Approach to the Recognition and Design of Circular Pavilions
Authors: Federico Garrido, Mostafa El Hayani, Ahmed Shams
Abstract:
The exploration of the potential of artificial intelligence (AI) in architecture is still embryonic; however, its latent capacity to change the design disciplines is significant. 'Synthetic Classicism' is a research project that questions the underlying aspects of classically organized architecture, not just in aesthetic terms but also from a geometrical and morphological point of view, intending to generate new architectural information using historical examples as source material. The main aim of this paper is to explore the uses of artificial intelligence and machine learning algorithms in architectural design while creating a coherent narrative to be contained within a design process. The purpose is twofold: on the one hand, to develop and train machine learning algorithms to produce architectural information on small pavilions and, on the other, to synthesize new information from previous architectural drawings. These algorithms are intended to 'interpret' graphical information from each pavilion and then generate new information from it. The procedure, once these algorithms are trained, is the following: starting from a line profile, a synthetic 'front view' of a pavilion is generated; then, using it as source material, an isometric view is created from it, and finally a top view is produced. Thanks to GAN algorithms, it is also possible to generate front and isometric views without any graphical input. The final intention of the research is to produce isometric views out of historical information, such as the pavilions of Sebastiano Serlio, James Gibbs, or John Soane. The idea is to create and interpret new information, not just in terms of historical reconstruction, but also to explore AI as a novel tool in the narrative of a creative design process. This research also challenges the idea of the role of algorithmic design being associated with efficiency or fitness, while embracing the possibility of a creative collaboration between artificial intelligence and a human designer. Hence the double feature of this research, both analytical and creative: first by synthesizing images based on a given dataset and then by generating new architectural information from historical references. We find that the possibility of creatively understanding and manipulating historic (and synthetic) information will be a key feature in future innovative design processes. Finally, the main question that we propose is whether an AI could be used not just to create an original and innovative group of simple buildings but also to explore the possibility of fostering a novel architectural sensibility grounded on the specificities of the architectural dataset, be it historic, human-made or synthetic.
Keywords: architecture, central pavilions, classicism, machine learning
Procedia PDF Downloads 140
22 Pareto Optimal Material Allocation Mechanism
Authors: Peter Egri, Tamas Kis
Abstract:
Scheduling problems have been studied by algorithmic mechanism design research from the beginning. This paper focuses on a practically important but theoretically rather neglected field: the project scheduling problem, where jobs connected by precedence constraints compete for various nonrenewable resources, such as materials. Although the centralized problem can be solved in polynomial time by applying the algorithm of Carlier and Rinnooy Kan from the Eighties, obtaining materials in a decentralized environment is usually far from optimal. It can be observed in practical production scheduling situations that project managers tend to cache the required materials as soon as possible in order to avoid later delays due to material shortages. This greedy practice usually leads both to excess stocks for some projects and materials and, simultaneously, to shortages for others. The aim of this study is to develop a model for the material allocation problem of a production plant, where a central decision maker - the inventory - should assign the resources arriving at different points in time to the jobs. Since the actual due dates are not known by the inventory, the mechanism design approach is applied, with the projects as the self-interested agents. The goal of the mechanism is to elicit the required information and allocate the available materials such that the maximal tardiness among the projects is minimized. It is assumed that, except for the due dates, the inventory is familiar with every other parameter of the problem. A further requirement is that, due to practical considerations, monetary transfer is not allowed; therefore a mechanism without money is sought, which excludes some widely applied solutions such as the Vickrey–Clarke–Groves scheme. In this work, a type of Serial Dictatorship Mechanism (SDM) is presented for the studied problem, including a polynomial-time algorithm for computing the material allocation. The resulting mechanism is both truthful and Pareto optimal; thus randomization over the possible priority orderings of the projects results in a universally truthful and Pareto optimal randomized mechanism. However, it is shown that, in contrast to problems like the many-to-many matching market, not every Pareto optimal solution can be generated with an SDM. In addition, no performance guarantee can be given compared to the optimal solution, therefore this approximation characteristic is investigated with an experimental study. All in all, the current work studies a practically relevant scheduling problem and presents a novel truthful material allocation mechanism which eliminates the potential benefit of the greedy behavior that negatively influences the outcome. The resulting allocation is also shown to be Pareto optimal, which is the most widely used criterion describing a necessary condition for a reasonable solution.
Keywords: material allocation, mechanism without money, polynomial-time mechanism, project scheduling
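A generic serial-dictatorship sketch is shown below for intuition only; it is not the paper's polynomial-time, max-tardiness-minimising allocation. Projects are ordered by a random priority, and each in turn claims the earliest-arriving material units still available (its selfishly optimal choice), after which per-project tardiness is reported. Demands, arrival times, and due dates are invented.

```python
# Toy serial dictatorship over material units arriving at different times.
import random

random.seed(3)
arrivals = sorted(random.randint(0, 20) for _ in range(12))   # arrival time of each material unit
projects = {"P1": {"demand": 4, "due": 8},
            "P2": {"demand": 5, "due": 14},
            "P3": {"demand": 3, "due": 10}}

priority = list(projects)
random.shuffle(priority)          # randomising the priority order is what gives universal truthfulness

remaining = list(arrivals)
tardiness = {}
for name in priority:
    take = remaining[:projects[name]["demand"]]   # earliest remaining units (selfishly optimal)
    remaining = remaining[len(take):]
    finish = max(take)                            # the project is served once its last unit arrives
    tardiness[name] = max(0, finish - projects[name]["due"])

print("priority order:", priority)
print("tardiness per project:", tardiness)
print("max tardiness:", max(tardiness.values()))
```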
Procedia PDF Downloads 332
21 Governance in the Age of Artificial Intelligence and E-Government
Authors: Mernoosh Abouzari, Shahrokh Sahraei
Abstract:
Electronic government is a way for governments to use new technology to provide people with the facilities they need for proper access to government information and services, improving the quality of services and providing broad opportunities to participate in democratic processes and institutions. It makes it possible to use information technology easily in order to deliver government services to the citizen at any time, without holiday interruptions, which increases people's satisfaction and their participation in political and economic activities. The expansion of e-government services, and their movement towards intelligentization, has the ability to re-establish the relationship between the government and citizens, and among the elements and components of the government itself. Electronic government is the result of the use of information and communication technology (ICT); implementing it at the government level creates tremendous changes in the efficiency and effectiveness of government systems and in the way services are provided, and widespread public satisfaction follows. The main level of electronic government services has become objectified today with the presence of artificial intelligence systems, and recent advances in artificial intelligence represent a revolution in the use of machines to support predictive decision-making and the classification of data. With the use of deep learning tools, artificial intelligence can mean a significant improvement in the delivery of services to citizens and can uplift the work of public service professionals, while also inspiring a new generation of technocrats to enter government. This smart revolution may put aside some functions of the government and change its components; concepts such as governance, policymaking or democracy will change in the face of artificial intelligence technology, and the top-down position in governance may face serious changes. If governments delay in using artificial intelligence, the balance of power will change and private companies will monopolize everything through their pioneering role in this field; the world order will then depend on rich multinational companies, and algorithmic systems will in fact become the ruling systems of the world. It can be said that, currently, the revolution in information technology and biotechnology has been started by engineers, large economic companies, and scientists who are rarely aware of the political complexities of their decisions and certainly do not represent anyone. Therefore, it seems that if liberalism, nationalism, or any other religion wants to organize the world of 2050, it should not only rationalize the concept of artificial intelligence and complex data algorithms but also mix them into a new and meaningful narrative. The changes caused by artificial intelligence in the political and economic order will therefore lead to a major change in the way all countries deal with the phenomenon of digital globalization. In this paper, while debating the role and performance of e-government, we discuss the efficiency and application of artificial intelligence in e-government, and we consider the resulting developments in the new world and in the concepts of governance.
Keywords: electronic government, artificial intelligence, information and communication technology, system
Procedia PDF Downloads 94
20 Artificial Intelligence: Reimagining Education
Authors: Silvia Zanazzi
Abstract:
Artificial intelligence (AI) has become an integral part of our world, transitioning from scientific exploration to practical applications that impact daily life. The emergence of generative AI is reshaping education, prompting new questions about the role of teachers, the nature of learning, and the overall purpose of schooling. While AI offers the potential for optimizing teaching and learning processes, concerns about discrimination and bias arising from training data and algorithmic decisions persist. There is a risk of a disconnect between the rapid development of AI and the goals of building inclusive educational environments. The prevailing discourse on AI in education often prioritizes efficiency and individual skill acquisition. This narrow focus can undermine the importance of collaborative learning and shared experiences. A growing body of research challenges this perspective, advocating for AI that enhances, rather than replaces, human interaction in education. This study aims to examine the relationship between AI and education critically. Reviewing existing research will identify both AI implementation’s potential benefits and risks. The goal is to develop a framework that supports the ethical and effective integration of AI into education, ensuring it serves the needs of all learners. The theoretical reflection will be developed based on a review of national and international scientific literature on artificial intelligence in education. The primary objective is to curate a selection of critical contributions from diverse disciplinary perspectives and/or an inter- and transdisciplinary viewpoint, providing a state-of-the-art overview and a critical analysis of potential future developments. Subsequently, the thematic analysis of these contributions will enable the creation of a framework for understanding and critically analyzing the role of artificial intelligence in schools and education, highlighting promising directions and potential pitfalls. The expected results are (1) a classification of the cognitive biases present in representations of AI in education and the associated risks and (2) a categorization of potentially beneficial interactions between AI applications and teaching and learning processes, including those already in use or under development. While not exhaustive, the proposed framework will serve as a guide for critically exploring the complexity of AI in education. It will help to reframe dystopian visions often associated with technology and facilitate discussions on fostering synergies that balance the ‘dream’ of quality education for all with the realities of AI implementation. The discourse on artificial intelligence in education, highlighting reductionist models rooted in fragmented and utilitarian views of knowledge, has the merit of stimulating the construction of alternative perspectives that can ‘return’ teaching and learning to education, human growth, and the well-being of individuals and communities.
Keywords: education, artificial intelligence, teaching, learning
Procedia PDF Downloads 2019 Artificial Law: Legal AI Systems and the Need to Satisfy Principles of Justice, Equality and the Protection of Human Rights
Authors: Begum Koru, Isik Aybay, Demet Celik Ulusoy
Abstract:
The discipline of law is quite complex and has its own terminology. Apart from written legal rules, there is also living law, which refers to legal practice. Basic legal rules aim at the happiness of individuals in social life and have different characteristics in different branches of law, such as public and private law. On the other hand, law is a national phenomenon. The law of one nation and the legal system applied on the territory of another nation may be completely different. People who are experts in a particular field of law in one country may have insufficient expertise in the law of another country. Today, in addition to the local nature of law, international and even supranational legal rules are applied in order to protect basic human values and ensure the protection of human rights around the world. Systems that offer algorithmic solutions to legal problems using artificial intelligence (AI) tools may well serve to produce very meaningful results in terms of human rights. However, such algorithms should not be developed by computer experts alone; they also need the contribution of people who are familiar with the law, its values, judicial decisions, and even the social and political culture of the society for which they will provide solutions. Otherwise, even if the algorithm works perfectly, it may not be compatible with the values of the society in which it is applied. The latest developments involving the use of AI techniques in legal systems indicate that artificial law will emerge as a new field in the discipline of law. More AI systems are already being applied in the field of law, with examples such as predicting judicial decisions, text summarization, decision support systems, and classification of documents. Algorithms for legal systems employing AI tools, especially for the prediction of judicial decisions and for decision support, have the capacity to produce automatic decisions in place of judges. When the judge is removed from the equation, what emerges is artificial-intelligence-made law, created by an intelligent algorithm on its own, whether the domain is national or international law. The aim of this work is to make a general analysis of this new topic; such an analysis needs both a literature survey and a perspective combining computer experts' and lawyers' points of view. In some societies, prediction or decision support systems may be useful for integrating international human rights safeguards. In this case, artificial law can serve to produce more comprehensive and human rights-protective results than written or living law. In non-democratic countries, it may even be argued that direct algorithmic decisions and artificial-intelligence-made law would be more protective than a mere decision "support" system. Since the values of law are directed towards "human happiness or well-being", AI algorithms must always be capable of serving this purpose and be grounded in the rule of law, the principles of justice and equality, and the protection of human rights.Keywords: AI and law, artificial law, protection of human rights, AI tools for legal systems
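To make the idea of prediction-based decision support concrete, below is a minimal sketch of a judicial-outcome classifier of the kind the abstract refers to. The corpus, labels, and model choice (TF-IDF features with logistic regression from scikit-learn) are illustrative assumptions, not the systems surveyed in the paper.

```python
# Minimal sketch of a judicial-decision prediction pipeline (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: case texts paired with binary outcomes
# (e.g., "violation" vs. "no_violation" of a human-rights provision).
case_texts = [
    "The applicant alleged unlawful detention without judicial review.",
    "The contract dispute was settled and no rights issue was raised.",
]
outcomes = ["violation", "no_violation"]

# TF-IDF features feed a linear classifier; a real system would require
# jurisdiction-specific features and a far larger, curated corpus.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(case_texts, outcomes)

# The predicted label and its probability support, rather than replace, a judge.
new_case = ["The detainee was held for months with no access to a court."]
print(model.predict(new_case), model.predict_proba(new_case))
```

The value of such a sketch is mainly to show where legal and societal expertise must enter: in the choice of labels, the composition of the corpus, and the interpretation of the probabilities, none of which a purely technical pipeline decides on its own.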
Procedia PDF Downloads 7418 Ethical Artificial Intelligence: An Exploratory Study of Guidelines
Authors: Ahmad Haidar
Abstract:
The rapid adoption of Artificial Intelligence (AI) technology carries unforeseen risks such as privacy violation, unemployment, and algorithmic bias, prompting research institutions, governments, and companies to develop principles of AI ethics. The extensive and diverse literature on AI lacks an analysis of the evolution of the principles developed in recent years. This paper has two fundamental purposes. The first is to provide insights into how the principles of AI ethics have changed recently, including concepts like risk management and public participation; in doing so, a NOISE (Needs, Opportunities, Improvements, Strengths, & Exceptions) analysis is presented. The second is to offer a framework for building ethical AI linked to sustainability. This research adopts an explorative approach, more specifically an inductive approach, to address the theoretical gap. Consequently, this paper tracks the different efforts towards “trustworthy AI” and “ethical AI,” compiling a list of 12 documents released from 2017 to 2022. The analysis of this list unifies the different approaches toward trustworthy AI in two steps: first, splitting the principles into two categories, technical and net benefit, and second, testing the frequency of each principle, yielding the technical principles that may be useful for stakeholders across the lifecycle of AI, or what is known as sustainable AI. Sustainable AI is the third wave of AI ethics and a movement to drive change throughout the entire lifecycle of AI products (i.e., idea generation, training, re-tuning, implementation, and governance) in the direction of greater ecological integrity and social fairness. In this vein, the results suggest transparency, privacy, fairness, safety, autonomy, and accountability as recommended technical principles to include in the lifecycle of AI. Another contribution is to capture the different bases that aid the process of AI for sustainability (e.g., towards the sustainable development goals). The results indicate data governance, do no harm, human well-being, and risk management as crucial AI-for-sustainability principles. The study's last contribution clarifies how the principles evolved. To illustrate, in 2018 the Montreal Declaration mentioned principles such as well-being, autonomy, privacy, solidarity, democratic participation, equity, and diversity. In 2021, further notions emerged from the European Commission proposal, including public trust, public participation, scientific integrity, risk assessment, flexibility, benefit and cost, and interagency coordination. The study design strengthens the validity of previous studies, and we further advance knowledge in trustworthy AI by considering recent documents, linking principles with sustainable AI and AI for sustainability, and shedding light on the evolution of guidelines over time.Keywords: artificial intelligence, AI for sustainability, declarations, framework, regulations, risks, sustainable AI
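As an illustration of the frequency test described above, the short sketch below counts how many guideline documents endorse each candidate principle. The document texts and the principle list are placeholders, not the paper's 12-document corpus.

```python
# Illustrative sketch: document-level frequency of candidate principles.
from collections import Counter

principles = ["transparency", "privacy", "fairness", "safety",
              "autonomy", "accountability", "risk management"]

# Hypothetical corpus: one string per guideline document (2017-2022).
documents = [
    "The system shall ensure transparency, privacy and accountability.",
    "Risk management and fairness are required across the AI lifecycle.",
]

counts = Counter()
for doc in documents:
    text = doc.lower()
    for p in principles:
        if p in text:
            counts[p] += 1  # count document-level presence, not raw mentions

# Principles ranked by the number of documents that endorse them.
for principle, n_docs in counts.most_common():
    print(f"{principle}: {n_docs}/{len(documents)} documents")
```

In practice the matching would need synonym handling and manual coding, but the ranking step itself is as simple as shown here.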
Procedia PDF Downloads 9317 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design
Authors: Mohammad Bagher Anvari, Arman Shojaei
Abstract:
Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to their underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical usage. In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.Keywords: incremental launching, bridge construction, finite element model, optimization
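A hedged sketch of the kind of nose-length optimization the abstract describes is given below. It uses a textbook cantilever statics calculation for the hogging moment at the leading pier during launching, not the paper's "semi-infinite beam" FE model, and all section properties, loads, and the cost penalty are assumptions introduced for illustration.

```python
# Grid search over launching-nose length against a simple cantilever statics
# model (illustrative stand-in, not the paper's FE model).
import numpy as np

SPAN = 40.0         # m, distance between piers (assumed)
Q_DECK = 200.0      # kN/m, deck self-weight (assumed)
Q_NOSE = 30.0       # kN/m, nose self-weight (assumed)
NOSE_COST = 2000.0  # kN*m per metre of nose: assumed penalty standing in for nose cost

def peak_support_moment(nose_len, n_steps=400):
    """Largest hogging moment at the rear support as the cantilever grows to one span."""
    x = np.linspace(0.0, SPAN, n_steps)           # current cantilever length
    deck_over = np.clip(x - nose_len, 0.0, None)  # deck part of the overhang
    nose_over = np.minimum(x, nose_len)           # nose part of the overhang
    m = Q_DECK * deck_over**2 / 2.0 + Q_NOSE * nose_over * (x - nose_over / 2.0)
    return m.max()

def objective(nose_len):
    # A longer, lighter nose always cuts the peak moment, so a cost term is
    # needed to make the trade-off non-trivial.
    return peak_support_moment(nose_len) + NOSE_COST * nose_len

candidates = np.linspace(5.0, 35.0, 121)
best = min(candidates, key=objective)
print(f"nose length {best:.2f} m, peak moment {peak_support_moment(best):.0f} kN*m")
```

The same grid-search pattern carries over when the moment function is replaced by a proper staged FE analysis; only the evaluation inside `objective` changes.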
Procedia PDF Downloads 10216 An Adaptive Oversampling Technique for Imbalanced Datasets
Authors: Shaukat Ali Shahee, Usha Ananthakumar
Abstract:
A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters and these sub-clusters contain different numbers of examples, also deteriorates the performance of the classifier. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithm-based methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. Data preprocessing handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class is absolutely rare, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for the binary classification problem. Removing both simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled within each sub-cluster is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used because it is a classifier that minimizes the total error, and removing between-class and within-class imbalance simultaneously helps it give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus, the proposed method can serve as a good alternative in problem domains such as credit scoring, customer churn prediction, and financial distress prediction, which typically involve imbalanced data sets.Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling
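The sketch below is in the spirit of the proposed method rather than the authors' exact algorithm: model-based clustering (Gaussian mixtures selected by BIC) finds sub-clusters in the minority class, each sub-cluster is oversampled by drawing synthetic points from its fitted Gaussian, and a neural network is trained on the rebalanced data. The Lowner-John ellipsoid step and the complexity-based allocation described in the abstract are omitted here; the dataset and all parameters are illustrative.

```python
# Sub-cluster-aware oversampling sketch using scikit-learn (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1],
                           n_informative=4, n_clusters_per_class=2,
                           random_state=0)

def fit_mixture(X_cls, max_k=4):
    """Pick the number of sub-clusters by BIC, as in model-based clustering."""
    models = [GaussianMixture(k, random_state=0).fit(X_cls) for k in range(1, max_k + 1)]
    return min(models, key=lambda m: m.bic(X_cls))

X_min, X_maj = X[y == 1], X[y == 0]
gmm = fit_mixture(X_min)

# Oversample each minority sub-cluster up to an equal share of the majority size.
labels = gmm.predict(X_min)
target_per_cluster = len(X_maj) // gmm.n_components
synthetic = []
for k in range(gmm.n_components):
    need = max(target_per_cluster - np.sum(labels == k), 0)
    if need > 0:
        # Draw synthetic minority points from the sub-cluster's fitted Gaussian.
        synthetic.append(rng.multivariate_normal(gmm.means_[k], gmm.covariances_[k], size=need))

X_bal = np.vstack([X, *synthetic]) if synthetic else X
y_bal = (np.concatenate([y, np.ones(sum(len(s) for s in synthetic), dtype=int)])
         if synthetic else y)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_bal, y_bal)
print("balanced training size:", X_bal.shape, " class counts:", np.bincount(y_bal))
```

Because every minority sub-cluster is brought up to a comparable size, the network no longer lets the error of large sub-clusters dominate training, which is the intuition behind handling between-class and within-class imbalance together.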
Procedia PDF Downloads 418