Search results for: artificial kidney
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2535

105 Seismotectonics and Seismology of the North of Algeria

Authors: Djeddi Mabrouk

Abstract:

The slow convergence between the African and Eurasian plates seems to be the main cause of the active deformation in the whole of North Africa, which is expressed in Algeria by a large deformation zone within a fairly broad band bounded to the south by the Saharan Atlas and to the north by the Tell Atlas. The Maghrebin and Atlasic chains along North Africa are the consequence of this convergence. In the junction zone, we notice a NW-SE compressive regime with a fold-and-fault structure and stacked overthrusts. From a geological point of view, the northern part of Algeria is younger than the Saharan platform; it is unstable and constantly in movement, characterized by open to overturned folds, overthrusts and reverse faults, and it undergoes perpetual, complex vertical and horizontal movements. On the structural level, the north of Algeria is part of the peri-Mediterranean Alpine orogen, essentially of Tertiary age. It spreads from the east to the west of Algeria over 1200 km, and the orogen extends from east to west in a band about 100 km wide. The Alpine chain is shaped by three domains: the Tell Atlas in the north, the High Plateaus in the middle and the Saharan Atlas in the south. In the extreme south we find the Saharan platform, which is made of Precambrian basement covered by practically undeformed Paleozoic rocks. Northern Algeria and the Saharan platform are separated by an important accident running about 2000 km from Agadir (Morocco) to Gabes (Tunisia). The seismic activity is localized essentially in a coastal band in the north of Algeria shaped by the Tell Atlas, the High Plateaus and the Saharan Atlas. Earthquakes are limited to the first 20 km of the earth's crust; they are caused by movements along NE-SW oriented reverse faults or by strike-slip movements. The central region is characterized by strong earthquake activity, located mainly in the Mitidja basin (of Neogene age). Its southern periphery (the Blidean Atlas) constitutes one of the most important seismogenic sources for the city of Algiers and for the east (Boumerdes region). The north-east region is also part of the Tellian domain, but it is characterized by a strain different from that in other parts of northern Algeria: the deformation is slow, the seismic activity is low to moderate, and it is related to strike-slip tectonics. The most pronounced earthquake there is that of 27 October 1985 (Constantine), of seismic moment magnitude Mw = 5.9. The north-west region is quite active, with seismic hypocenters that do not exceed 20 km in depth. The seismicity is concentrated mainly in a narrow strip along the edge of the Quaternary and Neogene intra-mountain basins along the coast. The most violent earthquakes in this region are the earthquake of Oran in 1790 and the Orléansville (El Asnam) earthquakes of 1954 and 1980.

Keywords: alpine chain, seismicity north Algeria, earthquakes in Algeria, geophysics, Earth

Procedia PDF Downloads 408
104 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry

Authors: C. A. Barros, Ana P. Barroso

Abstract:

Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of the mandatory tools. Frequently, the measurement system in companies is not connected to the equipment and does not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to ensure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all statistical tests proposed in the MSA-4 reference manual from AIAG as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations from the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented. First, a benchmarking analysis of the current competitors and commercial solutions linked to MSA was performed with respect to the Industry 4.0 paradigm. Next, an analysis of the size of the target market for the MSA tool was done. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably via wireless. The MSA web solution was designed under UI/UX principles, and an API in Python was developed to run the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the 'big data'. The main results of this R&D project are: the web and cloud-based MSA tool; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry has triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical and food industries are already validating it.
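
As an illustration of the kind of statistics such a tool computes, the Python sketch below estimates bias and repeatability from repeated measurements of a single reference part, in the spirit of the AIAG MSA manual; the data and function names are illustrative assumptions, not part of the project's API.

```python
# Minimal sketch (not the project's API): bias and repeatability from a
# gage study on a single reference part, in the spirit of the AIAG MSA manual.
import statistics

def bias_and_repeatability(measurements, reference_value):
    """Return (bias, repeatability_std) for repeated measurements of one part."""
    bias = statistics.mean(measurements) - reference_value
    repeatability = statistics.stdev(measurements)  # equipment variation (EV) estimate
    return bias, repeatability

readings = [10.02, 9.98, 10.01, 10.03, 9.99, 10.00]  # illustrative data
print(bias_and_repeatability(readings, reference_value=10.00))
```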

Keywords: automotive industry, Industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis

Procedia PDF Downloads 214
103 Machine Learning and Internet of Things for Smart-Hydrology of the Mantaro River Basin

Authors: Julio Jesus Salazar, Julio Jesus De Lama

Abstract:

The fundamental objective of hydrological studies applied to the engineering field is to determine the statistically consistent volumes or water flows that, in each case, allow us to size or design a series of elements or structures to effectively manage and develop a river basin. To determine these values, there are several ways of working within the framework of traditional hydrology: (1) study each of the factors that influence the hydrological cycle, (2) study the historical behavior of the hydrology of the area, (3) study the historical behavior of hydrologically similar zones, and (4) other studies (rain simulators or experimental basins). Of course, this range of studies in a given basin is varied and complex and presents the difficulty of collecting the data in real time. In this complex setting, the study of the variables can only be mastered by collecting and transmitting data to decision centers through the Internet of Things and artificial intelligence. Thus, this research work implemented the learning project of the sub-basin of the Shullcas river in the Andean basin of the Mantaro river in Peru. The sensor firmware to collect and communicate hydrological parameter data was programmed and tested in similar basins of the European Union. The machine learning application was programmed to choose the algorithms that lead to the best solution for the determination of the rainfall-runoff relationship captured in the different polygons of the sub-basin. Tests were carried out in the mountains of Europe and in the sub-basins of the Shullcas river (Huancayo) and the Yauli river (Jauja), at altitudes close to 5000 m.a.s.l., leading to the following conclusions: to guarantee correct communication, the distance between devices should not exceed 15 km. To minimize the energy consumption of the devices and avoid collisions between packets, distances should stay between 5 and 10 km; in this way the transmission power can be reduced and a higher bitrate can be used. If the communication elements of the devices of the network (Internet of Things) installed in the basin do not have good visibility between them, the distance should be reduced to the range of 1-3 km. The energy efficiency of the Atmel microcontrollers present in Arduino boards is not adequate to meet the requirements of system autonomy. To increase the autonomy of the system, it is recommended to use low-consumption systems, such as ARM Cortex ultra-low-power microcontrollers or even the Cortex-M, and high-performance direct current (DC) to direct current (DC) converters. The machine learning system has started learning the Shullcas system to generate the best hydrology of the sub-basin. This will improve as the machine learning models and the data entered into the big data store converge over time. This will provide services to each of the applications of the complex system so as to return the best estimates of the determined flows.
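
As a minimal illustration of the rainfall-runoff learning step described above, the following Python sketch trains a generic regressor on synthetic sensor-like data; the features, values and model choice are assumptions for demonstration only, not the project's actual pipeline.

```python
# Illustrative sketch (not the project's code): learning a rainfall-runoff
# relationship from sensor records with a generic regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
rainfall = rng.gamma(2.0, 5.0, size=500)      # mm, synthetic stand-in for station data
antecedent = rng.gamma(2.0, 5.0, size=500)    # mm over previous days (assumed feature)
runoff = 0.4 * rainfall + 0.15 * antecedent + rng.normal(0, 1, 500)  # toy target

X = np.column_stack([rainfall, antecedent])
X_train, X_test, y_train, y_test = train_test_split(X, runoff, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```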

Keywords: hydrology, internet of things, machine learning, river basin

Procedia PDF Downloads 160
102 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining

Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj

Abstract:

Small and medium enterprises (SMEs) play an important role in the economy of many countries. When the overall world economy is considered, SMEs represent 95% of all businesses in the world, accounting for 66% of the total employment. Existing studies show that the current business environment is characterized as highly turbulent and strongly influenced by modern information and communication technologies, thus forcing SMEs to experience more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators are primarily obtained via questionnaires, which is very laborious and time-consuming, or are provided by financial institutions and are thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach towards obtaining valuable insights in the business world. WM enables automatic and large-scale collection and analysis of potentially valuable data from various online platforms, including companies' websites. While WM methods have been frequently studied to anticipate the growth of sales volume for e-commerce platforms, their application for the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears a great potential in revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities. In an initial step, we examine how structured and semi-structured Web data in governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. The data on SMEs provided by a large Swiss insurance company are used as ground truth data (i.e. growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms, such as the Support Vector Machine, Random Forest and Artificial Neural Network, are applied and compared, with the goal of optimizing the prediction performance. The results are compared to those from previous studies in order to assess the contribution of growth indicators retrieved from the Web to increasing the predictive power of the model.
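
A minimal sketch of the classifier comparison described above follows, using synthetic stand-in data in place of the growth-labeled SME dataset; the feature counts and model settings are illustrative assumptions.

```python
# Hedged sketch: comparing the classifiers named in the abstract on a
# synthetic stand-in for the growth-labeled SME dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Random Forest": RandomForestClassifier(random_state=0),
    "ANN (MLP)": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```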

Keywords: data mining, SME growth, success factors, web mining

Procedia PDF Downloads 269
101 The Quantum Theory of Music and Languages

Authors: Mballa Abanda Serge, Henda Gnakate Biba, Romaric Guemno Kuate, Akono Rufine Nicole, Petfiang Sidonie, Bella Sidonie

Abstract:

The main hypotheses proposed around the definition of the syllable and of music, and of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies their interest: the debate raises questions that are at the heart of theories of language. It is an inventive, original and innovative research thesis, and a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When this theory is applied to any text of a folk song of a tonal language, one does not only piece together the exact melody, rhythm and harmonies of that song, as if it were known in advance, but also the exact speech of this language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. To confirm the theory experimentally, a semi-digital, semi-analog application was designed which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, a music reading and writing software package is used to collect the data extracted from the author's mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech and from writing to music. Mode of operation: the user types a text on the computer, a structured song (chorus-verse), and commands the machine with a melody of blues, jazz, world music or variety, etc. The software runs, giving the option to choose harmonies, and then the user selects the melody.

Keywords: music, entanglement, language, science

Procedia PDF Downloads 82
100 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence

Authors: Nasser Salah Eldin Mohammed Salih Shebka

Abstract:

Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which in turn are reflected on the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding the human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering and knowledge generation. And although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, thus causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation conceptions. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are affected greatly by the very materialistic nature of the cognitive sciences. This nature caused what we define as methodological deficiencies in the nature of theoretical aspects of knowledge representation concepts in AI. These methodological deficiencies are not confined to applications of knowledge representation theories throughout AI fields, but also extend to cover the scientific nature of the cognitive sciences. The methodological deficiencies we investigated in our work are: the segregation between cognitive abilities in knowledge-driven models; the insufficiency of the two-valued logic used to represent knowledge, particularly at the machine-language level, in relation to the problematic issues of semantics and meaning theories; and the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires that we present a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge. This does not imply that it is easy to apply in structures of knowledge representation systems, but outlining a deficiency caused by the absence of such essential parameters can be considered as an attempt to redefine knowledge representation conceptual approaches, or, if that proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning alters the role of the existence and time factors to the framework environment of the knowledge structure, and therefore of knowledge representation conceptual theories. The findings of our work indicate the necessity to differentiate between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as an evaluation criterion to determine AI's capability to achieve its ultimate objectives. Ultimately, we argue some of the implications of our findings, which suggest that, even if scientific progress has not yet reached its peak, or human scientific evolution has not yet reached a point where it is possible to discover evolutionary facts about the human brain and detailed descriptions of how it represents knowledge, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.

Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic

Procedia PDF Downloads 113
99 Development of an Interface between a BIM Model and an AI-Based Control System for Building Facades with Integrated PV Technology

Authors: Moser Stephan, Lukasser Gerald, Weitlaner Robert

Abstract:

Urban structures will be used more intensively in the future through redensification or newly planned districts with high building densities. Especially to achieve positive energy balances, as requested for Positive Energy Districts (PED), the use of roofs alone is not sufficient in dense urban areas. At the same time, the increasing share of windows significantly reduces the facade area available for PV generation. Through the use of PV technology on other building components, such as external venetian blinds, on-site generation can be maximized and the standard functionalities of this product can be usefully extended. While offering advantages in terms of infrastructure, sustainability in the use of resources and efficiency, these systems require increased optimization in the planning and control strategies of buildings. External venetian blinds with PV technology require an intelligent control concept to meet the required demands, such as maximum power generation, glare prevention, high daylight autonomy and avoidance of summer overheating, but also the use of passive solar gains in wintertime. Today, three-dimensional geometric information about outdoor spaces and at the building level is available for planning with Building Information Modeling (BIM). In a research project, a web application called HELLA DECART was developed to extract the data required for the simulation from the BIM models and to make it usable for the calculations and coupled simulations. The investigated object is uploaded as an IFC file to this web application and includes the object as well as the neighboring buildings and possible remote shading. The tool uses a ray-tracing method to determine possible glare from solar reflections of a neighboring building as well as near and far shadows per window on the object. Subsequently, an annual estimate of the sunlight per window is calculated by taking weather data into account. This optimized daylight assessment per window makes it possible to estimate the potential power generation of the PV integrated in the venetian blind as well as the daylight and solar entry. As a next step, these calculation results, together with all parameters necessary for the thermal simulation, can be provided. The overall aim of this workflow is to advance the coordination between the BIM model and the coupled building simulation, linking the resulting shading and daylighting system with the artificial lighting system and maximum power generation in a control system. In the research project Powershade, an AI-based control concept for PV-integrated facade elements with coupled simulation results is investigated. The automated workflow concept developed in this paper is tested using an office living lab at the HELLA company.
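
The following Python sketch illustrates, in a highly simplified form, the kind of geometric test behind the ray-tracing step described above: a ray-box occlusion check that decides whether a window point is shaded by a neighbouring box-shaped building for a given sun direction. The geometry and values are assumptions; HELLA DECART's actual implementation is not shown here.

```python
# Simplified illustration (not HELLA DECART itself): testing whether a window
# midpoint is shaded by a neighbouring box-shaped building for a given sun vector.
import numpy as np

def is_shaded(window_point, sun_direction, box_min, box_max):
    """Ray-box (slab) test: does the ray from the window toward the sun hit the box?"""
    d = sun_direction / np.linalg.norm(sun_direction)
    t_min, t_max = 0.0, np.inf
    for axis in range(3):
        if abs(d[axis]) < 1e-12:
            # Ray parallel to this slab: blocked only if the origin lies inside it.
            if not (box_min[axis] <= window_point[axis] <= box_max[axis]):
                return False
            continue
        t1 = (box_min[axis] - window_point[axis]) / d[axis]
        t2 = (box_max[axis] - window_point[axis]) / d[axis]
        t_min, t_max = max(t_min, min(t1, t2)), min(t_max, max(t1, t2))
    return t_min <= t_max  # the sun ray is blocked somewhere along its path

window = np.array([0.0, 0.0, 5.0])           # illustrative window midpoint
sun = np.array([1.0, 0.2, 0.8])              # illustrative sun direction
print(is_shaded(window, sun, np.array([10, -5, 0]), np.array([20, 5, 30])))
```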

Keywords: BIPV, building simulation, optimized control strategy, planning tool

Procedia PDF Downloads 110
98 Prevention of Preterm Birth and Management of Uterine Contractions with Traditional Korean Medicine: Integrative Approach

Authors: Eun-Seop Kim, Eun-Ha Jang, Rana R. Kim, Sae-Byul Jang

Abstract:

Objective: Preterm labor is the most common antecedent of preterm birth (PTB), and is characterized by regular uterine contractions before 37 weeks of pregnancy and cervical change. In acute preterm labor, tocolytics are administered as the first-line medication to suppress uterine contractions but rarely delay pregnancy to 37 weeks of gestation. On the other hand, according to Traditional Korean Medicine, PTB is caused by a deficiency of Qi and unnecessary energy in the body of the mother. The aim of this study was to demonstrate the benefit of Traditional Korean Medicine as an adjuvant therapy in the management of early uterine contractions and the prevention of PTB. Methods: This is a case report of a 38-year-old woman (0-0-6-0) hospitalized for irregular uterine contractions and cervical change at 33+3/7 weeks of gestation. Her past history includes chemical pregnancies achieved by Artificial Reproductive Technology (ART), one stillbirth (at 7 weeks) and a laparoscopic surgery for endometriosis. After seven trials of IVF and artificial insemination, she had succeeded in conceiving via in-vitro fertilization (IVF) with the help of Traditional Korean Medicine (TKM) treatments. Due to irregular uterine contractions and cervical changes, two TKM preparations were prescribed: Gami-Dangguisan and Antae-eum, known to nourish blood and clear away heat. 120 ml of Gami-Dangguisan was given twice a day, morning and evening, along with the same amount of Antae-eum once a day from 31 August 2013 to 28 November 2013. A tocolytic (ritodrine) was administered as first aid for the maintenance of pregnancy. Information regarding progress until the delivery was collected during the patient's visits. Results: On admission, a cervix of 15 mm in length and a cervical os dilated by 0.5 cm were observed via ultrasonography. 50% cervical effacement was also detected on physical examination. Tocolysis was temporarily maintained. As a supportive therapy, the TKM herbal preparations (Gami-Dangguisan and Antae-eum) were given concomitantly. At 34+2/7 weeks of gestation, however, intermittent uterine contractions (every 5-12 min) appeared on cardiotocography, and vaginal bleeding was also smeared at 34+3/7 weeks. However, enhanced tocolytics and continuous administration of the herbal medicine sustained the pregnancy to term. At 37+2/7 weeks, no sign of labor and a restored cervical length were confirmed. The woman gave birth at term to a healthy infant via vaginal delivery at 39+3/7 gestational weeks. Conclusions: This is the first successful case report of a preterm labor patient administered conventional tocolytic agents as well as TKM herbal decoctions, delaying delivery to term. This case deserves attention considering that it is rare to maintain gestation to term with tocolytic intervention alone. Our report implies the potential of herbal medicine as an adjuvant therapy for preterm labor treatment. Further studies are needed to assess the safety and efficacy of TKM herbal medicine as a therapeutic alternative for preventing preterm birth.

Keywords: preterm labor, traditional Korean medicine, herbal medicine, integrative treatment, complementary and alternative medicine

Procedia PDF Downloads 372
97 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of the epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings at least 24 hours long and acquired with a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhausting task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists' task of identifying epileptiform discharges, and a large number of methodologies used neural networks for the pattern classification. One of the differences between all of these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced to the network. Five types of input stimuli have been commonly found in the literature: the raw EEG signal, morphological descriptors (i.e. parameters related to the signal's morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks that were implemented using each of these inputs. The performance using the raw signal varied between 43 and 84% efficiency. The results of the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73% and 77%, respectively. The efficiency of Wavelet Transform features varied between 57 and 81%, while the descriptors presented efficiency values between 62 and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
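
To make the comparison of input stimuli concrete, the sketch below contrasts two of the five input types (FFT-spectrum features and simple morphological descriptors) on synthetic epochs fed to a small MLP; the signals, feature definitions and network size are illustrative assumptions, not the study's data or models.

```python
# Illustrative sketch of two of the five input types compared in the study:
# FFT-spectrum features vs. simple morphological descriptors, fed to an MLP.
# Synthetic signals stand in for real EEG epochs; names are not from the study.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
fs, n_epochs, n_samples = 200, 300, 400          # 2-second epochs at 200 Hz (assumed)
labels = rng.integers(0, 2, n_epochs)            # 1 = contains a spike-like transient

epochs = rng.normal(0, 1, (n_epochs, n_samples))
t = np.arange(60)
spike = np.exp(-((t - 30) ** 2) / 40.0)          # crude stand-in for an epileptiform transient
for i in np.flatnonzero(labels):
    start = rng.integers(0, n_samples - 60)
    epochs[i, start:start + 60] += 4 * spike

def fft_features(x):
    return np.abs(np.fft.rfft(x))[:40]           # low-frequency magnitude spectrum

def morphological_features(x):
    return np.array([x.max() - x.min(), np.abs(np.diff(x)).max(), x.std()])

for name, extractor in [("FFT spectrum", fft_features), ("descriptors", morphological_features)]:
    X = np.array([extractor(e) for e in epochs])
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    print(name, cross_val_score(clf, X, labels, cv=5).mean())
```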

Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 530
96 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures

Authors: A. T. Al-Isawi, P. E. F. Collins

Abstract:

The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but there remain a number of areas where further work is required. Such areas relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI) and the method of analysis, in addition to identifying the excitation load properties. The selection of earthquake data input for use in nonlinear analysis and the method of analysis are still challenging issues. Thus, realistic artificial ground motion input data must be developed to certify that the site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system and that the characteristics of these parameters are coherent with the characteristics of the target parameters. Conversely, ignoring the significance of some attributes, such as frequency content, soil site properties and earthquake parameters, may lead to misleading results, due to the misinterpretation of the required input data and the incorrect synthesis of the analysis hypotheses. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the effects of the nonlinear inelastic behaviour of the structure and soil, and the soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage: the geometrical damage, which indicates the change in the initial structure's geometry due to the residual deformation as a result of plastic behaviour, and the mechanical damage, which identifies the degradation of the mechanical properties of the structural elements involved in the plastic range of deformation. Consequently, the structure presumably experiences partial structural damage but is then exposed to fire under its new residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario would be more complicated if SSI were also considered. Indeed, most earthquake design codes ignore the probability of PEF as well as the effect that SSI has on the behaviour of structures, in order to simplify the analysis procedure. Therefore, the design of structures based on existing codes which neglect the importance of PEF and SSI can create a significant risk of structural failure. In order to examine the criteria for the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software; the effects of SSI are included. Both geometrical and mechanical damage have been taken into account after the earthquake analysis step. For comparison, an identical model is also created which does not include the effects of soil-structure interaction. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and the maximum percentage reduction in fire resistance is detected in the case when SSI is included in the scenario. The results are validated against the literature.
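
As a conceptual illustration of the 'mechanical damage followed by fire' scenario described above, the sketch below combines an assumed seismic damage factor with a temperature-dependent yield-strength reduction; the tabulated reduction factors follow the commonly used Eurocode 3 values, and the whole sketch is a stand-in for intuition only, not the ABAQUS model used in the study.

```python
# Illustrative only: a steel member enters the fire analysis with a
# damage-reduced yield strength, which is then further reduced with temperature.
# The damage factor is an assumed placeholder, not an output of the paper's model.
import numpy as np

EC3_TEMPS = np.array([20, 400, 500, 600, 700, 800])       # degrees C
EC3_KY = np.array([1.00, 1.00, 0.78, 0.47, 0.23, 0.11])   # yield-strength reduction factors

def residual_yield(fy_initial, seismic_damage_factor, temperature_c):
    """Yield strength after earthquake damage and subsequent heating (MPa)."""
    k_theta = np.interp(temperature_c, EC3_TEMPS, EC3_KY)  # linear interpolation in the table
    return fy_initial * seismic_damage_factor * k_theta

print(residual_yield(fy_initial=355.0, seismic_damage_factor=0.9, temperature_c=550))
```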

Keywords: ABAQUS software, finite element analysis, post-earthquake fire, seismic analysis, soil-structure interaction

Procedia PDF Downloads 123
95 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach

Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista

Abstract:

Military commanders are increasingly dependent on spatial awareness: knowing where the enemy is, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a universal scale. Thus, geovisualisation has become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management for the battlefield, situational awareness, effective planning, monitoring, and other tasks. For example, a 3D visualization of battlefield data contributes to intelligence analysis, evaluation of post-mission outcomes, and creation of predictive models to enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, monocular depth values based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information for valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for 3D geovisualisation of satellite images. It introduces scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEM) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, together with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks. One network is trained with radar and optical bands, while the other is trained with DEM features to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on non-annotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method provides fast and accurate decision-making support with GIS for the localization of troops, the position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune their strategies and distribute resources proficiently.
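
A minimal PyTorch sketch of the two-encoder fusion idea described above follows; the layer sizes, channel counts and input shapes are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of the fusion idea: two encoders (optical/radar bands and
# DEM-derived features) whose feature maps are concatenated before a shared
# decoder predicts a dense depth map. Layer sizes are illustrative.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # two 3x3 convolutions, the second with stride 2 to halve the resolution
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, stride=2, padding=1), nn.ReLU())

class FusionDepthNet(nn.Module):
    def __init__(self, img_channels=4, dem_channels=1):
        super().__init__()
        self.img_encoder = conv_block(img_channels, 32)
        self.dem_encoder = conv_block(dem_channels, 32)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))      # one output channel: dense depth

    def forward(self, image, dem):
        fused = torch.cat([self.img_encoder(image), self.dem_encoder(dem)], dim=1)
        return self.decoder(fused)

net = FusionDepthNet()
depth = net(torch.randn(2, 4, 64, 64), torch.randn(2, 1, 64, 64))
print(depth.shape)  # torch.Size([2, 1, 64, 64])
```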

Keywords: depth, deep learning, geovisualisation, satellite images

Procedia PDF Downloads 13
94 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Also, due to the relatively simple recording of the electrocardiogram (ECG) signal, this signal is a good tool to show the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for use by researchers to provide the best method for detecting normal signals from abnormal ones. The data are from both genders, and the recording time varies from several seconds to several minutes. All data are also labeled normal or abnormal. Due to the limited positional accuracy and recording time of the ECG signal and the similarity of the signal in some diseases with the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating different types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancelation with an adaptive Kalman filter and extraction of the R wave with the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage of this paper, a new idea is presented: in addition to using the statistical characteristics of the signal, a return map is created and nonlinear characteristics of the HRV signal are extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features were used to classify the normal signals from the abnormal ones. To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and the SVM were 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in the classification of normal and patient signals yields better performance. Today, research is aimed at quantitatively analyzing the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the amount of these properties can be used to indicate the health status of the individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that its accuracy is limited in time and some of its information is hidden from the viewpoint of physicians, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy, and it can be used as a complementary system in treatment centers.
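
The sketch below illustrates the kind of HRV features involved: time-domain statistics plus Poincaré (return-map) descriptors computed from an R-R interval series; the values and feature set are illustrative assumptions, not the paper's MATLAB implementation.

```python
# Illustrative sketch (not the paper's pipeline): time-domain and Poincaré
# (return-map) features from an R-R interval series, of the kind that could
# feed the MLP/SVM classifiers compared in the study.
import numpy as np

def hrv_features(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    sdnn = rr.std(ddof=1)                         # overall variability
    rmssd = np.sqrt(np.mean(diff ** 2))           # short-term variability
    # Poincaré / return-map descriptors (a simple nonlinear characterization)
    sd1 = np.sqrt(0.5) * diff.std(ddof=1)
    sd2 = np.sqrt(max(2 * sdnn ** 2 - sd1 ** 2, 0.0))
    return {"SDNN": sdnn, "RMSSD": rmssd, "SD1": sd1, "SD2": sd2}

rr_example = [812, 790, 805, 821, 798, 810, 795, 830]  # ms, illustrative data
print(hrv_features(rr_example))
```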

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 264
93 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models

Authors: Haya Salah, Srinivas Sharan

Abstract:

Healthcare facilities use appointment systems to schedule their appointments and to manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate the appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimation of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. Therefore, this study aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) based on potential features that influence the outcome. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study has been obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA. Also, publicly available information on doctors' characteristics, such as gender and experience, has been extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance. The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the current approach of experience-based appointment duration estimation adopted by the clinics resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by gradient boosting machine (13.26%) and random forests (14.71%). Besides, this research also identified the critical variables affecting consultation duration to be patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights are obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
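
As a hedged illustration of the modelling step described above, the sketch below fits a gradient boosting regressor to synthetic visit data and scores it with MAPE; the features and their effects are invented stand-ins for the EMR variables named in the abstract.

```python
# Illustrative sketch: a gradient boosting regressor for visit duration,
# scored by MAPE. Features are synthetic stand-ins for patient type,
# provider experience, specialty, etc.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
new_patient = rng.integers(0, 2, n)                 # 1 = new, 0 = established
experience = rng.integers(1, 30, n)                 # provider years of experience
specialty = rng.integers(0, 5, n)                   # encoded specialty
duration = (15 + 10 * new_patient - 0.2 * experience + 2 * specialty
            + rng.normal(0, 3, n))                  # minutes (toy relationship)

X = np.column_stack([new_patient, experience, specialty])
X_tr, X_te, y_tr, y_te = train_test_split(X, duration, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("MAPE:", mean_absolute_percentage_error(y_te, model.predict(X_te)))
```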

Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time

Procedia PDF Downloads 122
92 Hiveopolis - Honey Harvester System

Authors: Erol Bayraktarov, Asya Ilgun, Thomas Schickl, Alexandre Campo, Nicolis Stamatios

Abstract:

Traditional means of harvesting honey are often stressful for honeybees. Each time honey is collected, a portion of the colony can die. In consequence, the colonies' resilience to environmental stressors decreases, and this ultimately contributes to the global problem of honeybee colony losses. As part of the project HIVEOPOLIS, we design and build a different kind of beehive, incorporating technology to reduce the negative impacts of beekeeping procedures, including honey harvesting. A first step in maintaining more sustainable honey harvesting practices is to design honey storage frames that can automate the honey collection procedures. This way, beekeepers save time, money, and labor by not having to open the hive and remove frames, and the honeybees' nest stays undisturbed. This system shows promising features, e.g., high reliability, which could be a key advantage compared to current honey harvesting technologies. Our original concept of fractional honey harvesting has been to encourage the removal of honey only from 'safe' locations and at levels that would leave the bees enough high-nutritional-value honey. In this abstract, we describe the current state of our honey harvester, its technology and areas to improve. The honey harvester works by separating the honeycomb cells away from the comb foundation; the movement and the elastic nature of honey support this functionality. The honey sticks to the foundation because of surface tension forces amplified by the geometry. In the future, by monitoring the weight and therefore the capped honey cells on our honey harvester frames, we will be able to remove honey as soon as the weight measuring system reports that the comb is ready for harvesting. Higher-viscosity honey or crystallized honey causes challenges in temperate locations when a smooth flow of honey is required. We use resistive heaters to soften the propolis and wax and to unglue the moving parts during extraction. These heaters can also melt the honey slightly to reach the needed flow state. Precise control of these heaters allows us to operate the device for several purposes. We use 'Nitinol' springs that are activated by heat as an actuation method. Unlike conventional stepper or servo motors, which we also evaluated throughout development, the springs and heaters take up less space and reduce the overall system complexity. Honeybee acceptance was unknown until we actually inserted a device inside a hive. We not only observed bees walking on the artificial comb but also building wax, filling gaps with propolis and storing honey. This also shows that bees do not mind living in spaces and hives built from 3D-printed materials. We do not yet have data to prove that the plastic materials do not affect the chemical composition of the honey. We succeeded in automatically extracting stored honey from the device, demonstrating a useful extraction flow and overall effective operation.
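
The following conceptual sketch shows one way the weight-triggered, heater-assisted extraction described above could be structured as a control step; all thresholds, sensor readings and actuator interfaces are placeholders, not the HIVEOPOLIS firmware.

```python
# Conceptual sketch only: a weight-based harvest trigger with a heater
# pre-warming step before actuation. Values and interfaces are placeholders.
import time

HARVEST_THRESHOLD_G = 1500.0   # assumed comb weight indicating enough capped honey
SOFTENING_TEMP_C = 35.0        # assumed temperature to soften wax/propolis

def read_frame_weight():       # placeholder for the load-cell driver
    return 1520.0

def read_temperature():        # placeholder for the heater thermistor
    return 36.0

def control_step(heater_on, actuate):
    weight = read_frame_weight()
    if weight < HARVEST_THRESHOLD_G:
        return                                     # comb not ready, keep waiting
    heater_on(True)                                # soften propolis/wax and honey
    while read_temperature() < SOFTENING_TEMP_C:
        time.sleep(1.0)
    actuate()                                      # trigger the spring-driven extraction
    heater_on(False)

control_step(lambda on: print("heater", on), lambda: print("extracting"))
```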

Keywords: honey harvesting, honeybee, hiveopolis, nitinol

Procedia PDF Downloads 109
91 Navigating AI in Higher Education: Exploring Graduate Students’ Perspectives on Teacher-Provided AI Guidelines

Authors: Mamunur Rashid, Jialin Yan

Abstract:

Recent years have witnessed a rapid evolution and integration of artificial intelligence (AI) in various fields, prominently influencing the education industry. Within this transformative wave, AI tools like ChatGPT and Grammarly have undeniably introduced new perspectives and skills, enriching the educational experiences of higher education students. The prevalence of AI use in higher education has also attracted increasing research attention in various dimensions. Departments, offices, and professors in universities have also designed and released sets of policies and guidelines on using AI effectively. In this regard, the study targets exploring and analyzing graduate students' perspectives regarding AI guidelines set by teachers. A mixed-methods design is used, employing in-depth interviews and focus groups to investigate and collect students' perspectives. Relevant materials, such as syllabi and course instructions, are also analyzed through documentary analysis to facilitate understanding of the study. Surveys are also used for data collection and students' background statistics. The integration of both interviews and surveys provides a comprehensive array of student perspectives across various academic disciplines. The study is anchored in the theoretical framework of self-determination theory (SDT), which emphasizes and explains students' perspectives under the AI guidelines through three core needs: autonomy, competence, and relatedness. This framework is instrumental in understanding how AI guidelines influence students' intrinsic motivation and sense of empowerment in their learning environments. Through qualitative analysis, the study reveals a sense of confusion and uncertainty among students regarding the appropriate application and ethical considerations of AI tools, indicating potential challenges in meeting their needs for competence and autonomy. The quantitative data further elucidate these findings, highlighting a significant communication gap between students and educators in the formulation and implementation of AI guidelines. The critical findings of this study come from two aspects. First, the majority of graduate students are uncertain and confused about the relevant AI guidelines given by teachers. Second, the study also demonstrates that the design and effectiveness of course materials, such as syllabi and instructions, need to adapt to AI policies. It indicates that some of the existing guidelines provided by teachers lack consideration of students' perspectives, leading to a misalignment with students' needs for autonomy, competence, and relatedness. More emphasis and effort need to be dedicated to both teacher and student training on AI policies and ethical considerations. To conclude, this study explores and reflects upon graduate students' perspectives on teacher-provided AI guidelines, calling for additional training and strategies to improve how these guidelines can be better disseminated for their effective integration and adoption. Although AI guidelines provided by teachers may be helpful and provide new insights for students, educational institutions should take a more anchoring role in fostering a motivating, empowering, and student-centered learning environment. The study also provides relevant recommendations, including guidance for students on the ethical use of AI and AI policy training for teachers in higher education.

Keywords: higher education policy, graduate students’ perspectives, higher education teacher, AI guidelines, AI in education

Procedia PDF Downloads 76
90 Digital Skepticism in a Legal Philosophical Approach

Authors: dr. Bendes Ákos

Abstract:

Digital skepticism, a critical stance towards digital technology and its pervasive influence on society, presents significant challenges when analyzed from a legal philosophical perspective. This abstract aims to explore the intersection of digital skepticism and legal philosophy, emphasizing the implications for justice, rights, and the rule of law in the digital age. Digital skepticism arises from concerns about privacy, security, and the ethical implications of digital technology. It questions the extent to which digital advancements enhance or undermine fundamental human values. Legal philosophy, which interrogates the foundations and purposes of law, provides a framework for examining these concerns critically. One key area where digital skepticism and legal philosophy intersect is in the realm of privacy. Digital technologies, particularly data collection and surveillance mechanisms, pose substantial threats to individual privacy. Legal philosophers must grapple with questions about the limits of state power and the protection of personal autonomy. They must consider how traditional legal principles, such as the right to privacy, can be adapted or reinterpreted in light of new technological realities. Security is another critical concern. Digital skepticism highlights vulnerabilities in cybersecurity and the potential for malicious activities, such as hacking and cybercrime, to disrupt legal systems and societal order. Legal philosophy must address how laws can evolve to protect against these new forms of threats while balancing security with civil liberties. Ethics plays a central role in this discourse. Digital technologies raise ethical dilemmas, such as the development and use of artificial intelligence and machine learning algorithms that may perpetuate biases or make decisions without human oversight. Legal philosophers must evaluate the moral responsibilities of those who design and implement these technologies and consider the implications for justice and fairness. Furthermore, digital skepticism prompts a reevaluation of the concept of the rule of law. In an increasingly digital world, maintaining transparency, accountability, and fairness becomes more complex. Legal philosophers must explore how legal frameworks can ensure that digital technologies serve the public good and do not entrench power imbalances or erode democratic principles. Finally, the intersection of digital skepticism and legal philosophy has practical implications for policy-making. Legal scholars and practitioners must work collaboratively to develop regulations and guidelines that address the challenges posed by digital technology. This includes crafting laws that protect individual rights, ensure security, and promote ethical standards in technology development and deployment. In conclusion, digital skepticism provides a crucial lens for examining the impact of digital technology on law and society. A legal philosophical approach offers valuable insights into how legal systems can adapt to protect fundamental values in the digital age. By addressing privacy, security, ethics, and the rule of law, legal philosophers can help shape a future where digital advancements enhance, rather than undermine, justice and human dignity.

Keywords: legal philosophy, privacy, security, ethics, digital skepticism

Procedia PDF Downloads 45
89 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures

Authors: Francesca Marsili

Abstract:

The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on an exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, representing an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is represented by Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and qualitative judgments based on the engineer's past experience; then, the prior model is updated with the results of investigations carried out on the considered structure, such as material testing and the determination of actions and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. the results of the updating depend on the engineer's previous experience; 2. the updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation, and furthermore, if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for the automation of the modeling of variables and for the updating of material parameters without performing destructive tests. Among these, one that raises particular attention in relation to the object of this study is Case-Based Reasoning (CBR). In this application, cases will be represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case will then be composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system is a good candidate for automating the modeling of variables because: 1. engineers already draw an estimation of the material properties based on the experience collected during the assessment of similar structures, or based on similar cases collected in the literature or in databases; 2. material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. the system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help in spreading the probabilistic reliability assessment of existing buildings in common engineering practice, and in targeting the best intervention and further tests on the structure; CBR represents a technique which may help to achieve this.
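
As a small illustration of the Bayesian updating step that a retrieved case would support, the sketch below performs a conjugate normal-normal update of a prior for a material strength with a few test results; the numbers and the known-measurement-variance assumption are illustrative, not taken from an actual case base.

```python
# Minimal sketch: updating a normal prior for a material strength with test
# results, assuming a known measurement standard deviation (conjugate update).
import numpy as np

def update_normal_prior(prior_mean, prior_std, observations, obs_std):
    """Conjugate normal-normal update; returns posterior mean and std."""
    n = len(observations)
    prior_prec = 1.0 / prior_std ** 2
    data_prec = n / obs_std ** 2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * np.mean(observations))
    return post_mean, np.sqrt(post_var)

# Prior from design documents / similar cases; tests from the structure at hand (illustrative).
print(update_normal_prior(prior_mean=30.0, prior_std=5.0,
                          observations=[27.5, 29.0, 26.8], obs_std=3.0))
```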

Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures

Procedia PDF Downloads 339
88 3D Design of Orthotic Braces and Casts in Medical Applications Using Microsoft Kinect Sensor

Authors: Sanjana S. Mallya, Roshan Arvind Sivakumar

Abstract:

Orthotics is the branch of medicine that deals with the provision and use of artificial casts or braces to alter the biomechanical structure of the limb and provide support for the limb. Custom-made orthoses provide more comfort and can correct issues better than those available over-the-counter. However, they are expensive and require intricate modelling of the limb. Traditional methods of modelling involve creating a plaster of Paris mould of the limb. Lately, CAD/CAM and 3D printing processes have improved the accuracy and reduced the production time. Ordinarily, digital cameras are used to capture the features of the limb from different views to create a 3D model. We propose a system to model the limb using the Microsoft Kinect v2 sensor. The Kinect can capture RGB and depth frames simultaneously at up to 30 fps with sufficient accuracy. The region of interest is captured from three views, each shifted by 90 degrees. The RGB and depth data are fused into a single RGB-D frame. The resolution of the RGB frame is 1920 px x 1080 px, while the resolution of the depth frame is 512 px x 424 px. As the resolutions of the frames are not equal, RGB pixels are mapped onto the depth pixels so that data is not lost even though the depth resolution is lower. The resulting RGB-D frames are collected and, using the depth coordinates, a three-dimensional point cloud is generated for each view of the Kinect sensor. A common reference system was developed to merge the individual point clouds from the Kinect sensors. The reference system consisted of 8 coloured cubes, connected by rods to form a skeleton cube with the coloured cubes at the corners. For each Kinect, the region of interest is the square formed by the centres of the four cubes facing the Kinect. The point clouds are merged by considering one of the cubes as the origin of a reference system. Depending on the relative distance from each cube, the three-dimensional coordinate points from each point cloud are aligned to the reference frame to give a complete point cloud. The RGB data is used to correct for any errors in the depth data of the point cloud. A triangular mesh is generated from the point cloud by applying Delaunay triangulation, which generates the rough surface of the limb. This technique forms an approximation of the surface of the limb. The mesh is then smoothened to obtain a smooth outer layer and an accurate model of the limb. The model of the limb is used as a base for designing the custom orthotic brace or cast. It is transferred to a CAD/CAM design file to design the brace over the surface of the limb. The proposed system would be more cost-effective than current systems that use MRI or CT scans for generating 3D models and would be quicker than traditional plaster of Paris cast modelling; the overall setup time is also low. Preliminary results indicate that the accuracy of the Kinect v2 is satisfactory for performing the modelling.
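
The two geometric steps in this pipeline, back-projecting the depth frame into a point cloud and triangulating that cloud into a rough surface, can be sketched briefly. The intrinsic parameter values, the synthetic "bump" used in place of a real limb scan, and the projection of the cloud onto the x-y plane before Delaunay triangulation are all simplifying assumptions of this sketch; the multi-view merging against the cube reference frame and the smoothing step are omitted.

```python
import numpy as np
from scipy.spatial import Delaunay

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth frame (metres) into 3D camera coordinates.
    fx, fy, cx, cy are the depth-camera intrinsics (assumed calibrated)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop invalid (zero-depth) pixels

def rough_mesh(points):
    """Triangulate the cloud via Delaunay triangulation of its x-y projection,
    giving the kind of rough surface approximation described in the abstract."""
    tri = Delaunay(points[:, :2])
    return points, tri.simplices        # vertices and triangle index list

# Illustrative usage on a synthetic 'bump' standing in for a limb segment
g = np.linspace(-0.1, 0.1, 50)
xx, yy = np.meshgrid(g, g)
depth = 0.8 - 0.05 * np.exp(-(xx**2 + yy**2) / 0.002)   # metres
verts, faces = rough_mesh(depth_to_points(depth, fx=365.0, fy=365.0, cx=25.0, cy=25.0))
print(verts.shape, faces.shape)
```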

Keywords: 3d scanning, mesh generation, Microsoft kinect, orthotics, registration

Procedia PDF Downloads 191
87 An Agent-Based Approach to Examine Interactions of Firms for Investment Revival

Authors: Ichiro Takahashi

Abstract:

One conundrum that macroeconomic theory faces is to explain how an economy can revive from a depression in which aggregate demand has fallen substantially below productive capacity. This paper examines an autonomous stabilizing mechanism using an agent-based Wicksell-Keynes macroeconomic model. It focuses on the effects of the number of firms and the length of the gestation period for investment, both of which are often assumed to be one in mainstream macroeconomic models. The simulations found the virtual economy to be highly unstable, or more precisely collapsing, when these parameters are fixed at one. This finding may even lead us to question the legitimacy of these common assumptions. A perpetual decline in capital stock will eventually encourage investment if the capital stock is short-lived, because inactive investment results in insufficient productive capacity. However, in an economy characterized by a roundabout production method, a gradually declining productive capacity may never fall below an aggregate demand that is also shrinking. Naturally, one would then ask: if our economy cannot rely on external stimuli such as population growth and technological progress to revive investment, what factors would provide the buoyancy needed to stimulate investment? The current paper attempts to answer this question by employing the artificial macroeconomic model mentioned above. The baseline model has the following three features: (1) multi-period gestation for investment, (2) a large number of heterogeneous firms, and (3) demand-constrained firms. The instability is a consequence of the following dynamic interactions. (a) A multi-period gestation means that once a firm starts a new investment, it continues to invest over several subsequent periods. During these gestation periods, the excess demand created by the investing firm spills over and ignites new investment by other firms that supply investment goods: the presence of multi-period gestation for investment provides a field for investment interactions. Conversely, the excess demand for investment goods tends to fade away before it develops into a full-fledged boom if the gestation period is short. (b) Strong demand in the goods market tends to raise the price level, thereby lowering real wages. This reduction of real wages creates two opposing effects on aggregate demand through the following two channels: (1) a reduction in real labor income, and (2) an increase in labor demand due to the principle of equality between the marginal productivity of labor and the real wage (referred to as the Walrasian labor demand). If there is only a single firm, a lower real wage will increase its Walrasian labor demand, but its actual labor demand tends to be determined by the derived labor demand; thus, the second, positive effect would not work effectively. In contrast, in an economy with a large number of firms, Walrasian firms will increase employment. This interaction among heterogeneous firms is a key to stability: a single firm cannot expect the benefit of such an increased aggregate demand from other firms.
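
To make the gestation-period mechanism concrete, the following is a deliberately stylised sketch: a population of firms whose capital depreciates, which launch multi-period investment projects when their capacity falls short of their demand share, and whose project spending feeds back into next period's aggregate demand. Every behavioural rule and parameter here is a placeholder invented for illustration; it is not the Wicksell-Keynes model developed in the paper.

```python
import random

N_FIRMS = 50
GESTATION = 4           # periods over which one investment project keeps spending
DEPRECIATION = 0.05
CAPITAL_OUTPUT = 2.0    # desired capital per unit of per-firm demand share
AUTONOMOUS = 20.0       # small autonomous demand component

class Firm:
    def __init__(self):
        self.capital = 10.0
        self.projects = []            # remaining periods of each ongoing project

    def step(self, aggregate_demand):
        self.capital *= 1.0 - DEPRECIATION                 # capital wears out
        desired = CAPITAL_OUTPUT * aggregate_demand / N_FIRMS
        if self.capital < desired and random.random() < 0.5:
            self.projects.append(GESTATION)                # launch a multi-period project
        spending = 1.0 * len(self.projects)                # each project spends every period
        self.capital += spending                           # ...and builds capacity
        self.projects = [p - 1 for p in self.projects if p > 1]
        return spending

random.seed(0)
firms = [Firm() for _ in range(N_FIRMS)]
demand = 100.0
for t in range(120):
    # investment spending becomes part of next period's aggregate demand,
    # which is how one firm's project spills over to the other firms
    demand = sum(f.step(demand) for f in firms) + AUTONOMOUS
    if t % 20 == 0:
        print(t, round(demand, 1), round(sum(f.capital for f in firms), 1))
```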

Keywords: agent-based macroeconomic model, business cycle, demand constraint, gestation period, representative agent model, stability

Procedia PDF Downloads 163
86 Foslip Loaded and CEA-Affimer Functionalised Silica Nanoparticles for Fluorescent Imaging of Colorectal Cancer Cells

Authors: Yazan S. Khaled, Shazana Shamsuddin, Jim Tiernan, Mike McPherson, Thomas Hughes, Paul Millner, David G. Jayne

Abstract:

Introduction: There is a need for real-time imaging of colorectal cancer (CRC) to allow surgery tailored to the disease stage. Fluorescence-guided laparoscopic imaging of primary colorectal cancer and the draining lymphatics would potentially bring stratified surgery into clinical practice and realign future CRC management to the needs of patients. Fluorescent nanoparticles can offer many advantages in terms of intra-operative imaging and therapy (theranostics) in comparison with traditional soluble reagents. Nanoparticles can be functionalised with diverse reagents and then targeted to the correct tissue using an antibody or Affimer (artificial binding protein). We aimed to develop and test fluorescent silica nanoparticles targeted against CRC using an anti-carcinoembryonic antigen (CEA) Affimer (Aff). Methods: Anti-CEA and control Myoglobin Affimer binders were subcloned into the expression vector pET11 followed by transformation into BL21 Star™ (DE3) E. coli. The expression of Affimer binders was induced using 0.1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG). Cells were harvested, lysed and purified using nickel-chelating affinity chromatography. The photosensitiser Foslip (a soluble analogue of 5,10,15,20-tetra(m-hydroxyphenyl)chlorin) was incorporated into the core of silica nanoparticles using a water-in-oil microemulsion technique. Anti-CEA or control Affimers were conjugated to the silica nanoparticle surface using the sulfosuccinimidyl-4-(N-maleimidomethyl)cyclohexane-1-carboxylate (sulfo-SMCC) chemical linker. Binding of CEA-Aff or control nanoparticles to colorectal cancer cells (LoVo, LS174T and HCT116) was quantified in vitro using confocal microscopy. Results: The molecular weight of the obtained Affimer bands was ~12.5 kDa, while the diameter of the functionalised silica nanoparticles was ~80 nm. CEA-Affimer-targeted nanoparticles demonstrated 9.4, 5.8 and 2.5 fold greater fluorescence than control in LoVo, LS174T and HCT116 cells respectively (p < 0.002) for the single-slice analysis. A similar pattern of successful CEA-targeted fluorescence was observed in the maximum image projection analysis, with CEA-targeted nanoparticles demonstrating 4.1, 2.9 and 2.4 fold greater fluorescence than control particles in LoVo, LS174T, and HCT116 cells respectively (p < 0.0002). There was no significant difference in fluorescence for CEA-Affimer vs. CEA-antibody targeted nanoparticles. Conclusion: We are the first to demonstrate that Foslip-doped silica nanoparticles conjugated to anti-CEA Affimers via SMCC allow tumour cell-specific fluorescent targeting in vitro and show sufficient promise to justify testing in an animal model of colorectal cancer. The CEA-Affimer appears to be a suitable targeting molecule to replace the CEA antibody. Targeted silica nanoparticles loaded with the Foslip photosensitiser are now being optimised to drive photodynamic killing via reactive oxygen generation.

Keywords: colorectal cancer, silica nanoparticles, Affimers, antibodies, imaging

Procedia PDF Downloads 240
85 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR

Authors: Ionut Vintu, Stefan Laible, Ruth Schulz

Abstract:

Agricultural robotics has been developing steadily over recent years, with the goals of reducing and even eliminating pesticide use in crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots capable of driving in fields and performing crop-handling tasks is for the robots to robustly detect the rows of plants. Recent work on autonomous driving between plant rows offers big robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants. This approach lacks flexibility and scalability when it comes to the height of plants or the distance between rows. This paper instead proposes an algorithm that makes use of cheaper sensors and is more adaptable. The main application is in tree nurseries. Here, plant height can range from a few centimeters to a few meters. Moreover, trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated poses of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and to deal with exception handling, such as row gaps, which are otherwise falsely detected as the end of a row. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as input. Comparing field coverage and the number of damaged plants, the method that uses a local map around the robot proved to perform best, with 68% of rows covered and 25% of plants damaged. This method is further used and combined with a graph-based localization algorithm, which uses the local map features to estimate the robot’s position inside the greater field. Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% with 27% of plants damaged. Future work will focus on achieving a perfect score of 100% covered rows and 0% damaged plants. The main challenges the algorithm needs to overcome are fields where the plants are too small to be detected and fields where it is hard to distinguish between individual plants when they overlap. The method was also tested on a real robot in a small field with artificial plants. The tests were performed using a small robot platform equipped with wheel encoders, an IMU and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
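
A common building block for the row detection step is fitting a line to the ground-plane projection of the LiDAR points with RANSAC, so that gaps and outliers (weeds, neighbouring rows) do not pull the estimate off the row. The sketch below shows only this single-row building block with invented parameters; it is not one of the four detection methods or the graph-based back end described in the abstract.

```python
import numpy as np

def ransac_row_line(points_xy, n_iter=200, inlier_dist=0.10, rng=None):
    """Fit one crop-row line to 2D points (LiDAR cloud projected to the ground
    plane) with a simple RANSAC loop; returns (point_on_line, unit_direction, inliers)."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points_xy), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        i, j = rng.choice(len(points_xy), size=2, replace=False)
        p, q = points_xy[i], points_xy[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-6:
            continue
        d = d / norm
        # perpendicular distance of all points to the candidate line
        normal = np.array([-d[1], d[0]])
        dist = np.abs((points_xy - p) @ normal)
        inliers = dist < inlier_dist
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p, d)
    return best_model[0], best_model[1], best_inliers

# Illustrative usage: a noisy row of trees with a gap in the middle
xs = np.concatenate([np.linspace(0, 4, 20), np.linspace(6, 10, 20)])
pts = np.stack([xs, 0.02 * xs + np.random.default_rng(1).normal(0, 0.05, xs.size)], axis=1)
p0, direction, inliers = ransac_row_line(pts)
print(direction, inliers.sum(), "of", len(pts))
```

Detecting several rows would typically remove the inliers of the first fit and repeat, and the fitted line parameters could then enter the pose graph as landmark constraints.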

Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection

Procedia PDF Downloads 140
84 Development and Adaptation of a LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During Covid-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)

Authors: Eric Pla Erra, Mariana Jimenez Martinez

Abstract:

While aggregated loads at the community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes that our behavior patterns have undergone due to the COVID-19 pandemic have modified our daily electrical consumption curves and, therefore, further complicated the forecasting methods used to predict short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of our electric grids, but it also plays a crucial role for individual domestic consumers who rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting leads to higher energy savings and overall energy efficiency of the household when paired with a proper HEMS. In order to study how COVID-19 has affected the accuracy of forecasting methods, an evaluation of the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) will be conducted during the transition between the pre-pandemic and lockdown periods, considering day-ahead electric load forecasting. LGBM improves the capabilities of standard decision tree models in both speed and memory consumption while still offering high accuracy. Even though LGBM has complex non-linear modelling capabilities, it has proven to be a competitive method under challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model, called “resilient LGBM”, will also be tested, incorporating a concept drift detection technique for time series analysis, with the purpose of evaluating its capability to improve the model’s accuracy during extreme events such as the COVID-19 lockdowns. The results for the LGBM and the resilient LGBM will be compared using the standard RMSE (Root Mean Squared Error) as the main performance metric. The models’ performance will be evaluated over a set of real households’ hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d’Ajuts Universitaris), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption, increasing the resilience of electrical systems through the use of tools such as HEMS and artificial intelligence.
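
A day-ahead LGBM baseline of the kind evaluated here can be sketched with calendar and lagged-load features. The synthetic consumption series, the feature set and the hyperparameters below are assumptions made for illustration, and the concept drift detection of the "resilient LGBM" variant is not shown.

```python
import numpy as np
import pandas as pd
import lightgbm as lgb

def make_features(load: pd.Series) -> pd.DataFrame:
    """Build day-ahead features from an hourly household load series:
    calendar terms plus lagged consumption, with lags of at least 24 h so the
    forecast never peeks inside the target day."""
    df = pd.DataFrame({"y": load})
    df["hour"] = load.index.hour
    df["dayofweek"] = load.index.dayofweek
    for lag in (24, 48, 168):
        df[f"lag_{lag}"] = load.shift(lag)
    return df.dropna()

# Illustrative usage with a synthetic daily pattern standing in for one household
idx = pd.date_range("2020-01-01", periods=24 * 120, freq="h")
rng = np.random.default_rng(0)
y = 0.3 + 0.2 * np.sin(2 * np.pi * idx.hour / 24) + 0.05 * rng.normal(size=len(idx))
df = make_features(pd.Series(y, index=idx))

split = idx[-24 * 14]                                   # last two weeks held out
train, test = df[df.index < split], df[df.index >= split]
model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05)
model.fit(train.drop(columns="y"), train["y"])
pred = model.predict(test.drop(columns="y"))
rmse = float(np.sqrt(np.mean((test["y"].values - pred) ** 2)))
print(f"day-ahead RMSE: {rmse:.3f} kWh")
```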

Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)

Procedia PDF Downloads 106
83 Review of the Nutritional Value of Spirulina as a Potential Replacement of Fishmeal in Aquafeed

Authors: Onada Olawale Ahmed

Abstract:

As the intensification of aquaculture production increases on a global scale, a growing concern of fish farmers around the world is the cost of fish production, of which feeding takes a substantial percentage. Fishmeal (FM) is one of the most expensive ingredients, and the high dependence on it in aqua-feed production translates into a high cost of feeding stocked fish. However, to reach sustainable aquaculture, new alternative protein sources, including cheaper proteins of plant or animal origin, need to be introduced for stable aqua-feed production. Spirulina is a cyanobacterium with a good nutrient profile that could be useful in aquaculture. This review therefore emphasizes the nutritional value of Spirulina as a potential replacement of FM in aqua-feed. Spirulina is a planktonic photosynthetic filamentous cyanobacterium that forms massive populations in tropical and subtropical bodies of water with high levels of carbonate and bicarbonate. Spirulina grows naturally in nutrient-rich alkaline lakes with high water salinity (> 30 g/l) and high pH (8.5–11.0). Its artificial production requires luminosity (photoperiod 12/12, 4 luxes), a temperature of 30 °C, an inoculum, a water-stirring device, dissolved solids (10–60 g/litre), pH 8.5–10.5, good water quality, and the presence of macro- and micronutrients (C, N, P, K, S, Mg, Na, Cl, Ca and Fe, Zn, Cu, Ni, Co, Se). Spirulina has also been reported to grow on agro-industrial waste such as sugar mill effluent, poultry industry waste, fertilizer factory waste, urban waste and organic matter. The chemical composition of Spirulina indicates that it has high nutritional value: it contains 55–70% protein, 14–19% soluble carbohydrate, and a high amount of polyunsaturated fatty acids (PUFAs), amounting to 1.5–2.0 percent out of a total lipid content of 5–6 percent; all the essential minerals are available in Spirulina, contributing about 7 percent (average range 2.76–3.00 percent of total weight) under laboratory conditions; β-carotene, B-group vitamins, vitamin E, iron, potassium and chlorophyll are also present. Spirulina protein has a balanced composition of amino acids, with concentrations of methionine, tryptophan and other amino acids almost similar to those of casein, although this depends on the culture medium used. Positive effects of Spirulina on growth, feed utilization, and stress and disease resistance of cultured fish have been reported in earlier studies. Spirulina was reported to replace up to 40% of the fishmeal protein in tilapia (Oreochromis mossambicus) diets, and even higher replacement of fishmeal was possible in common carp (Cyprinus carpio); partial replacement of fishmeal with Spirulina in diets for parrot fish (Oplegnathus fasciatus) and tilapia (Oreochromis niloticus) has also been conducted. Spirulina has considerable potential for development, especially as a small-scale crop for the nutritional enhancement and health improvement of fish. It is important, therefore, that more research be conducted on its production, its inclusion level in aqua-feed, and its potential use in aquaculture.

Keywords: aquaculture, spirulina, fish nutrition, fish feed

Procedia PDF Downloads 523
82 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of the traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. When making decisions related to the safety of industrial infrastructure, the values of accidental risk become relevant points of discussion. The challenge, however, is the reliability of the models employed to obtain the risk data: such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, it can be argued that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to these social consequences, and considering the industrial sector as critical infrastructure because of its large impact on the economy in case of failure, the relevance of industrial safety has become a critical issue for today's society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in an attempt to accurately evaluate the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating the human expertise in scoring risks and setting tolerance levels. In summary, the method of deep learning for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists in determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
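
GMDH builds a self-organising polynomial network: every pair of inputs is fitted with a small Ivakhnenko polynomial, the best partial models on a validation split are kept as inputs to the next layer, and layers are added while the external (validation) error keeps falling. The sketch below is a simplified illustration of that selection loop on synthetic data; the neuro-fuzzy membership functions, the tolerance-level scoring and the actual near-miss pipeline data of the paper are not reproduced.

```python
import numpy as np
from itertools import combinations

def _fit_pair(xi, xj, y):
    """Least-squares fit of the Ivakhnenko polynomial
    y ~ a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2."""
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def _predict_pair(xi, xj, coef):
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    return A @ coef

def gmdh_layer(X_train, y_train, X_val, y_val, keep=4):
    """One GMDH selection layer: fit every pair of inputs, score each partial
    model on the validation split, keep the best 'keep' outputs as new features."""
    scored = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        coef = _fit_pair(X_train[:, i], X_train[:, j], y_train)
        err = np.mean((y_val - _predict_pair(X_val[:, i], X_val[:, j], coef)) ** 2)
        scored.append((err, i, j, coef))
    scored.sort(key=lambda t: t[0])
    best = scored[:keep]
    new_train = np.column_stack([_predict_pair(X_train[:, i], X_train[:, j], c)
                                 for _, i, j, c in best])
    new_val = np.column_stack([_predict_pair(X_val[:, i], X_val[:, j], c)
                               for _, i, j, c in best])
    return new_train, new_val, best[0][0]          # also report best validation MSE

# Illustrative usage on synthetic "risk factor" data (e.g. corrosion, pressure, age)
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 5))
y = 0.4 * X[:, 0] * X[:, 1] + 0.3 * X[:, 2] ** 2 + 0.05 * rng.normal(size=300)
Xtr, Xva, ytr, yva = X[:200], X[200:], y[:200], y[200:]
err = np.inf
for layer in range(3):          # grow layers while the validation error improves
    Xtr, Xva, new_err = gmdh_layer(Xtr, ytr, Xva, yva)
    if new_err >= err:
        break
    err = new_err
print("best validation MSE:", round(err, 5))
```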

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 292
81 How Whatsappization of the Chatbot Affects User Satisfaction, Trust, and Acceptance in a Drive-Sharing Task

Authors: Nirit Gavish, Rotem Halutz, Liad Neta

Abstract:

Nowadays, chatbots are gaining more and more attention due to the advent of large language models. One of the important considerations in chatbot design is how to create an interface that achieves high user satisfaction, trust, and acceptance. Since WhatsApp conversations sometimes substitute for face-to-face communication, we studied whether WhatsAppization of the chatbot, that is, making the conversation resemble a WhatsApp conversation more closely, improves user satisfaction, trust, and acceptance, or whether the opposite occurs due to the Uncanny Valley (UV) effect. The task was a drive-sharing task, in which participants communicated with a textual chatbot via WhatsApp and could decide whether to take part in a ride to college with a driver suggested by the chatbot. WhatsAppization of the chatbot was done in two ways: by a dialog-style conversation (Dialog versus No Dialog), and by adding WhatsApp indicators, namely “Last Seen”, “Connected”, “Read Receipts”, and “Typing…” (Indicators versus No Indicators). Our 120 participants were randomly assigned to one of the four groups of the 2-by-2 design, with 30 participants in each. They interacted with the WhatsApp chatbot and then filled out a questionnaire. The results demonstrated that, as expected from the manipulation, the interaction with the chatbot was longer for the dialog condition than for the no-dialog condition. This extra interaction, however, did not lead to higher acceptance; quite the opposite, since participants in the dialog condition were less willing to implement the decision made at the end of the conversation with the chatbot and to continue the interaction with the driver they chose. The results are even more striking for the Indicators condition. For both the satisfaction measures and the trust measures, participants’ ratings were lower in the Indicators condition than in the No Indicators condition. Participants in the Indicators condition felt that the ride search process was harder to operate and slower (even though the actual interaction time was similar). They were less convinced that the chatbot suggested real trips, and they placed less trust in the person offering the ride who was referred to them by the chatbot. These effects were more evident for participants who preferred to share their rides using WhatsApp compared to participants who preferred chatbots for that purpose. Considering our findings, we can say that the WhatsAppization of the chatbot was detrimental. This is true for both WhatsAppization methods: making the conversation more of a dialog and adding WhatsApp indicators. For the chosen drive-sharing task, the results were, in addition to lower satisfaction, less trust in the chatbot’s suggestion and even in the driver suggested by the chatbot, and a lower willingness to actually undertake the suggested ride. In addition, it seems that the most problematic WhatsAppization method was using WhatsApp’s indicators during the interaction with the chatbot. The current study suggests that a conversation with an artificial agent should not imitate a WhatsApp conversation too closely. With the proliferation of WhatsApp use, the emotional and social aspects of face-to-face communication are moving to WhatsApp communication. Based on the current study’s findings, it is possible that the UV effect also occurs in WhatsAppization, and not only in humanization, of the chatbot, with a similar feeling of eeriness, and that it is more pronounced for people who prefer to use WhatsApp over chatbots.
The current research can serve as a starting point for studying the very interesting and important topic of chatbot WhatsAppization. More methods of WhatsAppization and other tasks could be the focus of further studies.

Keywords: chatbot, WhatsApp, humanization, Uncanny Valley, drive sharing

Procedia PDF Downloads 49
80 The Effect of Artificial Intelligence on Mobile Phones and Communication Systems

Authors: Ibram Khalafalla Roshdy Shokry

Abstract:

This paper presents a carrier sense multiple access (CSMA) communication model based on an SoC design methodology. Such a model can be used to support the modelling of complex wireless communication systems; hence, the use of such a communication model is an important method in the construction of high-performance communication. SystemC has been selected because it offers a homogeneous design flow for complex designs (i.e., SoC and IP-based designs). We use a swarm system to validate the designed CSMA model and to show the advantages of incorporating communication early in the design process. The wireless communication is created through the modelling of the CSMA protocol, which is used to achieve communication among all the agents and to coordinate access to the shared medium (channel). Equipping vehicles with wireless communication capabilities is expected to be the key to the evolution towards next-generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of a wireless vehicular communication protocol for the enhancement of Wireless Access in Vehicular Environments (WAVE). Vehicular communication systems, known as V2X, support vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. As a result, this study focuses on evaluating the actual performance of vehicular communication, with particular attention to the effects of the real environment and of mobility on V2X communication. It begins by determining the actual maximum range that such communication can support and then evaluates V2I and V2V performance. The Arada LocoMate OBU transmission device was used to test and evaluate the effect of the transmission range on V2X communication. The evaluation of V2I and V2V communication takes the real effects of low and high mobility on transmission into consideration. Multi-agent systems have received considerable attention in numerous fields, including robotics, autonomous vehicles, and distributed computing, where multiple agents cooperate and communicate to achieve complex tasks. Efficient communication among agents is a critical aspect of these systems, as it directly influences their overall performance and scalability. This work explores the essential communication factors and conducts a comparative assessment of the diverse protocols used in multi-agent systems. The emphasis lies in scrutinizing the strengths, weaknesses, and applicability of these protocols across diverse scenarios. The study also sheds light on emerging trends in communication protocols for multi-agent systems, including the incorporation of machine learning techniques and the adoption of blockchain-based solutions to ensure secure communication. These developments offer valuable insights into the evolving landscape of multi-agent systems and their communication protocols.
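
The medium-access behaviour that the CSMA model captures (sense the shared channel, transmit when it is idle, back off after a collision) can be illustrated with a small behavioural simulation. The abstract describes a SystemC SoC model; the toy below is written in Python for consistency with the other sketches in this listing, uses a slotted 1-persistent variant, and all parameters are invented for illustration.

```python
import random

def simulate_csma(n_agents=10, n_slots=10_000, p_new_frame=0.05, max_backoff=8, seed=1):
    """Slotted, 1-persistent CSMA toy model: at the start of a slot each agent
    senses the shared channel, transmits if it holds a frame and the channel was
    idle in the previous slot; simultaneous transmissions collide and trigger a
    random backoff. Returns (throughput, collision_rate) per slot."""
    random.seed(seed)
    backoff = [0] * n_agents
    queued = [False] * n_agents
    channel_busy = False
    delivered = collisions = 0
    for _ in range(n_slots):
        for a in range(n_agents):                 # new frames arrive at each agent
            if not queued[a] and random.random() < p_new_frame:
                queued[a] = True
        ready = [a for a in range(n_agents)
                 if queued[a] and backoff[a] == 0 and not channel_busy]
        backoff = [max(0, b - 1) for b in backoff]
        if len(ready) == 1:                       # successful transmission
            delivered += 1
            queued[ready[0]] = False
            channel_busy = True
        elif len(ready) > 1:                      # collision: everyone backs off
            collisions += 1
            for a in ready:
                backoff[a] = random.randint(1, max_backoff)
            channel_busy = True
        else:
            channel_busy = False
    return delivered / n_slots, collisions / n_slots

print(simulate_csma())
```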

Keywords: communication, multi-agent systems, protocols, consensus, SystemC, modelling, simulation, CSMA

Procedia PDF Downloads 28
79 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

The reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are being used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile sensing systems except in the force reconstruction process, a stage in which they have been applied less. This work presents a hardware implementation of a model-driven approach reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. From the analysis of a software implementation of this model, the proposed implementation parallelizes the tasks that carry out the matrix operations and a two-dimensional optimization function used to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate techniques for algorithm parallelization, using as a guide the rules of generalization, efficiency, and scalability in the tactile decoding process, and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform, which allows the reconstruction of force vectors following a scalable approach, from the information captured by tactile sensor arrays composed of up to 48×48 taxels that use various transduction technologies. The proposed implementation reduces the estimation time to about 1/180 of that of the software implementation. Despite the relatively high values of the estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed, and these are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced further, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
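
The per-taxel structure of the reconstruction is what makes it attractive for parallel hardware: each taxel's force vector is recovered from a small, fixed-size block of normal-stress readings, so the same small computation is repeated across the whole array. The sketch below illustrates that batched structure with a generic linear forward model; the actual model-driven formulation, the two-dimensional optimization and the FPGA mapping of the paper are not reproduced, and the matrix A is an invented stand-in.

```python
import numpy as np

def reconstruct_forces(stress_patches, A):
    """Batched least-squares inverse for per-taxel force vectors.

    stress_patches : (n_taxels, k) normal-stress readings from the k-taxel
                     neighbourhood around every taxel
    A              : (k, 3) assumed linear forward model mapping a 3D contact
                     force at a taxel to the normal stresses it induces nearby
    Returns an (n_taxels, 3) array of [fx, fy, fz] estimates.
    """
    # Solve min ||A f - s||^2 for every taxel at once via the pseudo-inverse;
    # this batched matrix product is the kind of operation that maps naturally
    # onto parallel hardware (one small multiply-accumulate pipeline per taxel).
    A_pinv = np.linalg.pinv(A)                    # (3, k), computed once
    return stress_patches @ A_pinv.T              # (n_taxels, 3)

# Illustrative usage with a made-up forward model and a 10x10 array
rng = np.random.default_rng(0)
A = rng.normal(size=(9, 3))                       # 3x3 neighbourhood -> 9 readings
true_forces = rng.normal(size=(100, 3))
stress = true_forces @ A.T + 0.01 * rng.normal(size=(100, 9))
est = reconstruct_forces(stress, A)
print("mean abs error:", float(np.mean(np.abs(est - true_forces))))
```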

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 196
78 Seismic Perimeter Surveillance System (Virtual Fence) for Threat Detection and Characterization Using Multiple ML Based Trained Models in Weighted Ensemble Voting

Authors: Vivek Mahadev, Manoj Kumar, Neelu Mathur, Brahm Dutt Pandey

Abstract:

Perimeter guarding and protection of critical installations require prompt intrusion detection and assessment to take effective countermeasures. Currently, visual and electronic surveillance are the primary methods used for perimeter guarding. These methods can be costly and complicated, requiring careful planning according to the location and terrain. Moreover, these methods often struggle to detect stealthy and camouflaged insurgents. The object of the present work is to devise a surveillance technique using seismic sensors that overcomes the limitations of existing systems. The aim is to improve intrusion detection, assessment, and characterization by utilizing seismic sensors. Most of the similar systems have only two types of intrusion detection capability viz., human or vehicle. In our work we could even categorize further to identify types of intrusion activity such as walking, running, group walking, fence jumping, tunnel digging and vehicular movements. A virtual fence of 60 meters at GCNEP, Bahadurgarh, Haryana, India, was created by installing four underground geophones at a distance of 15 meters each. The signals received from these geophones are then processed to find unique seismic signatures called features. Various feature optimization and selection methodologies, such as LightGBM, Boruta, Random Forest, Logistics, Recursive Feature Elimination, Chi-2 and Pearson Ratio were used to identify the best features for training the machine learning models. The trained models were developed using algorithms such as supervised support vector machine (SVM) classifier, kNN, Decision Tree, Logistic Regression, Naïve Bayes, and Artificial Neural Networks. These models were then used to predict the category of events, employing weighted ensemble voting to analyze and combine their results. The models were trained with 1940 training events and results were evaluated with 831 test events. It was observed that using the weighted ensemble voting increased the efficiency of predictions. In this study we successfully developed and deployed the virtual fence using geophones. Since these sensors are passive, do not radiate any energy and are installed underground, it is impossible for intruders to locate and nullify them. Their flexibility, quick and easy installation, low costs, hidden deployment and unattended surveillance make such systems especially suitable for critical installations and remote facilities with difficult terrain. This work demonstrates the potential of utilizing seismic sensors for creating better perimeter guarding and protection systems using multiple machine learning models in weighted ensemble voting. In this study the virtual fence achieved an intruder detection efficiency of over 97%.
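
The weighted ensemble step itself is straightforward: each trained classifier contributes its class-probability vector scaled by a per-model weight, and the class with the largest weighted sum wins. The sketch below demonstrates this on synthetic stand-in features with the same train/test split sizes quoted in the abstract; weighting by training accuracy, the choice of classifiers and all hyperparameters are assumptions, and the decision tree and neural network members are omitted for brevity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the geophone feature vectors
# (classes could represent walking, running, digging, vehicle, ...)
X, y = make_classification(n_samples=2771, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=1940,
                                                    random_state=0)

models = {
    "svm": SVC(probability=True),
    "knn": KNeighborsClassifier(),
    "rf": RandomForestClassifier(),
    "lr": LogisticRegression(max_iter=1000),
    "nb": GaussianNB(),
}
weights = {}
for name, m in models.items():
    m.fit(X_train, y_train)
    weights[name] = m.score(X_train, y_train)     # weight each model by its accuracy

# Weighted soft voting: sum class probabilities scaled by each model's weight
proba = sum(w * models[n].predict_proba(X_test) for n, w in weights.items())
pred = np.argmax(proba, axis=1)
print("ensemble accuracy:", round(float(np.mean(pred == y_test)), 3))
```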

Keywords: geophone, seismic perimeter surveillance, machine learning, weighted ensemble method

Procedia PDF Downloads 81
77 From Shelf to Shell - The Corporate Form in the Era of Over-Regulation

Authors: Chrysthia Papacleovoulou

Abstract:

The era of deregulation, offshore and tax-haven jurisdictions, and shelf companies has come to an end. The use of complex corporate structures involving trust instruments, special purpose vehicles, holding-subsidiary chains in offshore haven jurisdictions, and the exploitation of tax treaties is soaring. States which raced to introduce corporate-friendly legislation, tax incentives, and creative international trust law in order to attract greater FDI are now faced with regulatory challenges and are forced to revisit the corporate form and its tax treatment. The fiduciary services industry, which dominated over the last three decades, is now striving to keep up with the new regulatory framework resulting from a number of European and international legislative measures. This article considers the challenges to the company and the corporate form posed by the legislative measures on tax planning and tax avoidance, CRS reporting, FATCA, CFC rules, the OECD's BEPS, the EU Commission's new transparency rules for intermediaries, which extend to the tax advisors, accountants, banks and lawyers who design and promote tax planning schemes for their clients, new EU rules to block artificial tax arrangements and new transparency requirements for financial accounts, tax rulings and multinationals' activities (DAC 6), the G20's decision on a global 15% minimum corporate tax, and banking regulation. As a result, states find themselves in a race of over-regulation and compliance. These legislative measures amount to a global upside-down tax harmonisation. Through the adoption of the OECD's BEPS, states agreed to international collaboration to end tax avoidance and reform international taxation rules. While the idea was to ensure that multinationals would pay their fair share of tax everywhere they operate, an indirect result of the aforementioned regulatory measures was to attack private clients, individuals who, over the past three decades, used the international tax system and jurisdictions such as the Marshall Islands, the Cayman Islands, the British Virgin Islands, Bermuda, the Seychelles, St. Vincent, Jersey, Guernsey, Liechtenstein, Monaco, Cyprus, and Malta, to name but a few, to engage in legitimate tax planning and tax avoidance. Companies can no longer maintain bank accounts without satisfying a real-substance test. States override the incorporation doctrine and apply a real-seat or real-substance test in taxing companies and their activities, targeting even the beneficial owners personally with tax liability. Tax authorities in civil law jurisdictions lift the corporate veil through public UBO registries and trust registries. As a result, the corporate form and the doctrine of limited liability are challenged at their core. Lastly, this article identifies the development of new instruments, such as funds and private placement insurance policies, and the trend of digital nomad workers. The baffling question is whether industry and states can meet somewhere in the middle and exit this over-regulation frenzy.

Keywords: company, regulation, TAX, corporate structure, trust vehicles, real seat

Procedia PDF Downloads 140
76 Knowledge Based Software Model for the Management and Treatment of Malaria Patients: A Case of Kalisizo General Hospital

Authors: Mbonigaba Swale

Abstract:

Malaria is an infection caused by parasites (Plasmodium falciparum, which causes severe malaria, Plasmodium vivax, Plasmodium ovale, and Plasmodium malariae) transmitted to humans by the bites of infected female Anopheles mosquitoes. In Africa, and particularly in Uganda, these vectors comprise two main types, Anopheles funestus and the Anopheles gambiae complex (for example, Anopheles arabiensis); they feed on humans inside the house, mainly at dusk, midnight and dawn, and rest indoors, which makes them effective transmitters (vectors) of the disease. People in both urban and rural areas have consistently become prone to repetitive attacks of malaria, causing many deaths and significantly increasing the poverty levels of the rural poor. Malaria is a national problem; it causes many maternal prenatal and antenatal disorders, anemia in pregnant mothers, low birth weights in the newly born, and convulsions and epilepsy among infants. Cumulatively, it kills about one million children every year in sub-Saharan Africa. It has been estimated to account for 25-35% of all outpatient visits, 20-45% of acute hospital admissions and 15-35% of hospital deaths. Uganda is the leading victim country, in which the Rakai and Masaka districts are the most affected. It is therefore not clear whether these distressing situations, with episodes of recurrence and failure to cure the disease, are a result of poor diagnosis, prescription and dosing, of the treatment habits and compliance of the patients with the drugs, or of the ethical conduct of the stakeholders in relation to the mainstream methodology of malaria management. The research is aimed at offering an alternative approach to manage and deal with the problem by using a knowledge-based software model of Artificial Intelligence (AI) that is capable of performing common-sense and cognitive reasoning, so as to take decisions as the human brain would and to provide instantaneous expert solutions, avoiding speculative simulation of the problem during differential diagnosis in the most accurate and literal inferential sense. This system will assist physicians in many kinds of medical diagnosis, in prescribing treatments and doses, and in monitoring patient responses; based on the body weight and age group of the patient, it will be able to provide instantaneous and timely information options, and alternative ways and approaches to influence decision making during case analysis. The computerized system approach, a new model in Uganda termed "Software Aided Treatment" (SAT), will try to change the moral and ethical approach and influence conduct so as to improve the skills, experience and values (social and ethical) in the administration and management of the disease and drugs (combination therapy and generics) by both the patient and the health worker.
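
The decision-support core described here is essentially a rule-based (knowledge-based) inference engine: patient findings are matched against a knowledge base of rules, and the resulting recommendation is resolved against weight- and age-band protocol tables. The skeleton below only illustrates that structure; every rule, threshold and band in it is an invented placeholder and must not be read as clinical guidance.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    age_years: float
    weight_kg: float
    symptoms: set

# Illustrative knowledge base: each rule maps findings to a conclusion and a
# recommendation key. The rules below are placeholders showing the structure
# of such a system only, not clinical guidance.
RULES = [
    (lambda p: {"fever", "positive_rdt"} <= p.symptoms and "convulsions" in p.symptoms,
     "suspected severe malaria", "refer_parenteral_protocol"),
    (lambda p: {"fever", "positive_rdt"} <= p.symptoms,
     "uncomplicated malaria", "act_weight_band_protocol"),
    (lambda p: "fever" in p.symptoms,
     "fever of unknown origin", "order_microscopy_or_rdt"),
]

# Dose banding would be looked up by weight/age group from the protocol tables
# the hospital already uses; only the lookup structure is sketched here.
WEIGHT_BANDS = [(0, 15, "band_A"), (15, 25, "band_B"), (25, 35, "band_C"), (35, 999, "band_D")]

def infer(patient: Patient):
    for condition, diagnosis, recommendation in RULES:   # first matching rule wins
        if condition(patient):
            band = next(b for lo, hi, b in WEIGHT_BANDS if lo <= patient.weight_kg < hi)
            return diagnosis, recommendation, band
    return "no rule fired", "clinician review", None

print(infer(Patient(age_years=6, weight_kg=18, symptoms={"fever", "positive_rdt"})))
```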

Keywords: knowledge based software, management, treatment, diagnosis

Procedia PDF Downloads 57