Search results for: unified theory of acceptance and use of technology (UTAUT) model

20323 Message Framework for Disaster Management: An Application Model for Mines

Authors: A. Baloglu, A. Çınar

Abstract:

Various tools and technologies have been implemented for Crisis Response and Management (CRM), and they generally rely on the available network infrastructure for information exchange. Depending on the type of disaster or crisis, the network infrastructure may be affected and unable to provide reliable connectivity. Any tool or technology that depends on that connectivity therefore cannot fulfill its functions. As a solution, a new message exchange framework has been developed. The framework provides an offline/online information exchange platform for CRM Information Systems (CRMIS); it uses XML compression and packet prioritization algorithms and is based on open-source web technologies. By introducing offline capabilities to web technologies, the framework is able to perform message exchange on unreliable networks. Experiments in a simulation environment give promising results on low-bandwidth networks (56 kbps and 28.8 kbps) with up to 50% packet loss: the solution successfully transfers all the information over these low-quality networks where traditional 2- and 3-tier applications failed.
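The abstract names XML compression and packet prioritization as the framework's core mechanisms but gives no implementation details. The following minimal Python sketch illustrates those two ideas only; the priority levels, class name, and message types are invented for illustration and are not the authors' design:

```python
import heapq
import zlib

# Hypothetical priority levels for CRM messages (invented for illustration).
PRIORITY = {"alert": 0, "status": 1, "log": 2}  # lower value = sent first

class MessageQueue:
    """Compresses XML payloads and releases them highest-priority-first."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within a priority level

    def enqueue(self, xml_text: str, kind: str) -> None:
        packet = zlib.compress(xml_text.encode("utf-8"), level=9)
        heapq.heappush(self._heap, (PRIORITY[kind], self._seq, packet))
        self._seq += 1

    def dequeue(self) -> str:
        _, _, packet = heapq.heappop(self._heap)
        return zlib.decompress(packet).decode("utf-8")

q = MessageQueue()
q.enqueue("<msg><type>roll-call</type></msg>", "status")
q.enqueue("<msg><type>gas-alarm</type><level>high</level></msg>", "alert")
print(q.dequeue())  # the alert leaves the queue first despite arriving later
```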

Keywords: crisis response and management, XML messaging, web services, XML compression, mining

Procedia PDF Downloads 325
20322 The Shona and isiXhosa Linguistic Matrimony Through Code-Switching in Cape Town

Authors: John Mambambo

Abstract:

Debates on the links between Bantu languages are often epitomized by animated theoretical critiques, including language zoning and grouping. This evaluative, qualitative inquiry moves beyond such critiques to examine the sparsely studied ChiShona and isiXhosa code-switching nexus, a yawning gap in scholarship. Using interviews, questionnaires and observations, data germane to the study were collected from a purposively selected group of Shona speakers who had resided in Xhosa-speaking communities for not less than a year. Deploying Myers-Scotton's Markedness theory, the paper examines the pragmatic linguistic affinity that is affirmed through Shona-Xhosa code-switching in Cape Town. The assorted social variables motivating bilingual speakers to code-switch in Cape Town are also explored. The study reveals that Shona speakers are motivated to code-switch by the linguistic affinity between ChiShona and isiXhosa; other socio-political considerations also give impetus to this phenomenon. The Matrix Language Frame Model confirms that ChiShona is the matrix language while isiXhosa is the embedded language during code-switching. This paper is a momentous advancement of the extant literature on code-switching. It is a unique contribution to the nexus between the ChiShona and isiXhosa languages, providing fresh insights into the discourse on African language comparison studies.

Keywords: code-switching, ChiShona, isiXhosa, bilingualism

Procedia PDF Downloads 91
20321 An Inverse Heat Transfer Algorithm for Predicting the Thermal Properties of Tumors during Cryosurgery

Authors: Mohamed Hafid, Marcel Lacroix

Abstract:

This study aimed at developing an inverse heat transfer approach for predicting the time-varying freezing front and the temperature distribution of tumors during cryosurgery. Using a temperature probe pressed against the tumor layer, the inverse approach is able to predict simultaneously the metabolic heat generation and the blood perfusion rate of the tumor. Once these parameters are predicted, the temperature field and time-varying freezing fronts are determined with the direct model. The direct model rests on the one-dimensional Pennes bioheat equation, and the phase change problem is handled with the enthalpy method. The Levenberg-Marquardt Method (LMM) combined with the Broyden Method (BM) is used to solve the inverse model. The effects (a) of the thermal properties of the diseased tissues; (b) of the initial guesses for the unknown thermal properties; (c) of the data capture frequency; and (d) of the noise on the recorded temperatures are examined. It is shown that the proposed inverse approach remains accurate for all the cases investigated.
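The paper's inverse model couples the Levenberg-Marquardt and Broyden methods with the full 1-D Pennes/enthalpy direct model. As a much-reduced sketch of the same estimation pattern, the snippet below recovers two parameters of a toy stand-in forward model from noisy probe readings using SciPy's Levenberg-Marquardt least squares; the forward model and all values are illustrative assumptions, not the authors' equations:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the direct model: probe temperature vs. time for a given
# metabolic heat generation q_m and blood perfusion w_b. (Illustrative closed
# form only; the paper solves the 1-D Pennes equation with phase change.)
def forward(params, t):
    q_m, w_b = params
    return 37.0 + q_m * (1 - np.exp(-w_b * t))

rng = np.random.default_rng(0)
t = np.linspace(0, 600, 61)                   # probe sampling times (s)
true = (4.2, 0.004)                           # "unknown" tissue parameters
noisy = forward(true, t) + rng.normal(0, 0.05, t.size)  # probe noise

# Levenberg-Marquardt minimization of the temperature residuals.
fit = least_squares(lambda p: forward(p, t) - noisy,
                    x0=(1.0, 0.001), method="lm")
print("estimated (q_m, w_b):", fit.x)
```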

Keywords: cryosurgery, inverse heat transfer, Levenberg-Marquardt method, thermal properties, Pennes model, enthalpy method

Procedia PDF Downloads 187
20320 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases

Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar

Abstract:

The farming community in India, as in other parts of the world, is highly stressed due to factors such as rising input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue, in some cases leading to farmer suicides. The lack of an integrated farm advisory system in India adds to farmers' problems. Farmers need the right information during the early stages of a crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for detecting crop diseases from images taken by farmers with their smartphones. The research work leads to a smart assistant, built on analytics and big data, that can help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet classification using convolutional neural networks (CNNs) has been successfully applied to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers and dropout (to avoid overfitting). Models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to take the weights learnt on the ImageNet dataset and apply them to crop diseases, which reduces the number of epochs needed. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift and blurring improve accuracy on images taken from farms. Models built using a combination of these techniques are more robust for real-world deployment. Our model is validated on the tomato crop, which in India is affected by 10 different diseases; the model achieves an accuracy of more than 95% in correctly classifying them. The main contribution of our research is a personal assistant for farmers for managing plant disease; although the model was validated on tomato, it can easily be extended to other crops. Advances in computing and the availability of large datasets have made possible the success of deep learning in computer vision, natural language processing, image recognition, and related fields. With these robust models and high smartphone penetration, implementation is highly feasible, giving farmers timely advice, increasing their income and reducing input costs.
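The abstract describes transfer learning from ImageNet plus augmentation (rotation, zoom, shift) and dropout. A minimal Keras sketch of that recipe follows; the MobileNetV2 backbone, the head sizes, and the 'leaf_images/' directory are our assumptions for illustration, not the authors' architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # the paper reports 10 tomato diseases

# Transfer learning: reuse ImageNet weights, train only the new head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

# Augmentation layers stand in for the rotation/zoom/shift described above.
model = models.Sequential([
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
    layers.RandomTranslation(0.1, 0.1),
    layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                        # reduces overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 'leaf_images/' is a placeholder path: one sub-folder per disease class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "leaf_images/", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)
```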

Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one-shot learning, transfer learning

Procedia PDF Downloads 106
20319 Modeling of Long Wave Generation and Propagation via Seabed Deformation

Authors: Chih-Hua Chang

Abstract:

This study uses a three-dimensional (3D) fully nonlinear model to simulate wave generation caused by movement of the seabed. The numerical model is first simplified to two dimensions and compared with existing two-dimensional (2D) experimental data and the 2D numerical results of other shallow-water wave models. Results show that this model differs from the earlier shallow-water wave models, with the phase of the propagating wave closer to the experimental results. The results are also compared with the 3D experimental results of other researchers; satisfactory agreement is obtained in both the waveform and the flow field. The study then applies the model to simulate the waves caused by circular terrain (radius r0) rising or falling (moving distance bm), and the influence of the wave-making parameters r0 and bm is discussed. The study finds that small-extent rising or sinking terrain (e.g., r0 = 2, normalized by the still-water depth) produces significant wave groups in the far field. For large-scale moving terrain (e.g., r0 = 10), uplift and deformation can generate leading solitary-like waves in the far field.
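The paper's model is 3-D and fully nonlinear; as a far simpler illustration of how bed motion generates waves, the sketch below integrates the linearised 1-D shallow-water equations with the bed uplift rate entering the continuity equation as a source term. The Gaussian uplift shape, rise time, and all parameter values are our illustrative assumptions:

```python
import numpy as np

# Linearised 1-D shallow-water system with a rising bed b(x, t):
#   du/dt = -g d(eta)/dx,   d(eta)/dt = -h du/dx + db/dt
g, h = 9.81, 1.0                       # gravity (m/s2), still-water depth (m)
L, nx = 200.0, 800
dx = L / nx
x = np.linspace(0.0, L, nx)
dt = 0.4 * dx / np.sqrt(g * h)         # CFL-limited time step

eta = np.zeros(nx)                     # free-surface elevation
u = np.zeros(nx)                       # depth-averaged velocity
r0, bm, T = 2.0, 0.05, 2.0             # uplift radius, amplitude (m), rise time (s)
bump = bm * np.exp(-(((x - L / 2) / r0) ** 2))

t, far_max = 0.0, 0.0
while t < 40.0:
    u[1:-1] -= dt * g * (eta[2:] - eta[:-2]) / (2 * dx)
    eta[1:-1] -= dt * h * (u[2:] - u[:-2]) / (2 * dx)
    if t < T:                          # bed rises linearly during 0 <= t < T
        eta += dt * bump / T           # uplift feeds directly into the surface
    t += dt
    far_max = max(far_max, eta[: nx // 4].max())

print(f"largest far-field surface elevation: {far_max:.4f} m")
```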

Keywords: seismic wave, wave generation, far-field waves, seabed deformation

Procedia PDF Downloads 71
20318 Estimation of the State of Charge of the Battery Using EKF and Sliding Mode Observer in MATLAB-Arduino/LabVIEW

Authors: Mouna Abarkan, Abdelillah Byou, Nacer M'Sirdi, El Hossain Abarkan

Abstract:

This paper presents the estimation of the state of charge (SOC) of a battery using two types of observers. The battery model combines a voltage source representing the open-circuit battery voltage, a resistance corresponding to the connectors and electrolyte, and a series of parallel RC circuits representing charge-transfer and diffusion phenomena. An adaptive observer applied to this model is proposed; it estimates the battery state of charge using an extended Kalman filter (EKF) and a sliding mode observer, approaches known for their robustness and simplicity of implementation. The results are validated by simulation under MATLAB/Simulink and implemented in Arduino-LabVIEW.
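As a sketch of the EKF half of the observer pair, the Python snippet below runs one predict/update step on a one-RC equivalent-circuit model. The circuit parameters, the linear open-circuit-voltage curve, and the noise covariances are invented for illustration and are not the paper's values:

```python
import numpy as np

# 1-RC equivalent-circuit model, states x = [SOC, v_rc] (illustrative values)
dt, Q = 1.0, 3600.0 * 2.0        # time step (s), capacity (2 Ah in coulombs)
R0, R1, C1 = 0.05, 0.02, 1000.0  # ohmic and RC-branch parameters

def ocv(soc):                    # invented linear open-circuit-voltage curve
    return 3.2 + 0.9 * soc

A = np.array([[1.0, 0.0], [0.0, np.exp(-dt / (R1 * C1))]])
B = np.array([-dt / Q, R1 * (1 - np.exp(-dt / (R1 * C1)))])

x = np.array([0.5, 0.0])         # initial SOC guess, RC voltage
P = np.diag([0.1, 0.01])         # state covariance
Qn, Rn = np.diag([1e-7, 1e-6]), 1e-3  # process / measurement noise

def ekf_step(x, P, i_meas, v_meas):
    # Predict (linear state dynamics, discharge current i as input)
    x = A @ x + B * i_meas
    P = A @ P @ A.T + Qn
    # Update: measurement v = ocv(SOC) - v_rc - R0*i, linearised in x
    H = np.array([0.9, -1.0])    # dv/dx (0.9 = slope of the invented OCV curve)
    v_pred = ocv(x[0]) - x[1] - R0 * i_meas
    K = P @ H / (H @ P @ H + Rn)
    x = x + K * (v_meas - v_pred)
    P = P - np.outer(K, H) @ P
    return x, P

x, P = ekf_step(x, P, i_meas=1.0, v_meas=3.55)  # one discharge sample
print("SOC estimate:", x[0])
```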

Keywords: model of the battery, adaptive sliding mode observer, EKF observer, estimation of state of charge, SOC, implementation in Arduino/LabVIEW

Procedia PDF Downloads 289
20317 The Impact of Political Leadership on Cameroon’s Economic Development From 2000 to 2023

Authors: Okpu Enoh Ndip Nkongho

Abstract:

The type of political leadership in place impacts a state's economic development or underdevelopment directly and indirectly. One of the main challenges to Cameroon's economic development may be ineffective or misguided political leadership. The economy of the Cameroonian state has declined significantly due to a number of factors, including a lack of effective and feasible economic policies, an excessive reliance on crude oil, tribal politics, the threat of insurgency, bribery and corruption, violations of human rights, neglect of sectors such as science, technology, education and transportation, and a careless attitude on the part of administrators toward the general public. As a result, the standard of living has fallen, foreign exchange has decreased, and the Cameroonian currency has depreciated. This paper therefore focuses on the relationship between political leadership and economic development in Cameroon from 2000 to 2023 and offers suggestions for improving political leadership that would, in turn, put the country's economy back on track. The study employed a qualitative technique, with the framework for the investigation derived from the trait theory of leadership. On this basis, the paper concludes that there is a lack of cooperation between the three branches of government in Cameroon, shown in situations where one branch operates independently of the others and refuses to function as a backup when needed. The study recommends that the Executive collaborate closely with the National Assembly to speed action on key legislation required to stimulate economic development. There is also a need for more clarity and consistency in the government's policy orientation. There is no doubt that the current economic troubles are at least partially the result of a lack of economic policy leadership and confidence.

Keywords: politics, leadership, economic, development, Cameroon

Procedia PDF Downloads 34
20316 Modeling of the Heat and Mass Transfer in Fluids through Thermal Pollution in Pipelines

Authors: V. Radulescu, S. Dumitru

Abstract:

Introduction: Determining the temperature field inside a fluid in motion raises many practical issues, especially in the case of turbulent flow. The phenomenon is more pronounced when the solid walls have a temperature different from that of the fluid. Turbulent heat and mass transfer play an essential role in thermal pollution, as was recorded during the damage at the Oradea thermoelectric power plant (still closed today). Basic methods: Treating turbulent thermal pollution theoretically is a particularly difficult problem. By using semi-empirical theories, or by simplifying the assumptions made on the basis of experimental measurements, a mathematical model can be elaborated for further numerical simulations. The three zones of the flow are analyzed separately: the vicinity of the solid wall, the turbulent transition zone, and the turbulent core. For each zone, the temperature distribution law is determined. The dependence between the Stanton and Prandtl numbers, with correction factors, is established based on experimental measurements. Major findings/results: The extent of the laminar thermal sublayer was determined based on the theory of Landau and Levich, using the assumption that the longitudinal component of the velocity pulsation and the pulsation frequency vary proportionally with the distance to the wall. The average temperature is calculated with a formula similar to the solution for velocity, by an analogous averaging. On these assumptions, numerical modeling was performed with a temperature gradient for turbulent flow in pipes (intact or damaged, with cracks) of four different diameters between 200 and 500 mm, as found at the Oradea power plant. Conclusions: A superposition was made between the molecular and turbulent viscosities, followed by addition of the molecular and turbulent transfer coefficients, as needed to elaborate the theoretical and numerical models. The laminar boundary layer has a different thickness when flow with heat transfer is compared with flow without a temperature gradient. The results agree within a 5% margin of error between the classical semi-empirical theories and the developed model, based on the experimental data. Finally, a general correlation is obtained between the Stanton number and the Prandtl number for a specific flow (with associated Reynolds number).
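The paper's final correlation is site-specific, but the classical baseline it corrects is the standard turbulent pipe-flow relation between the Nusselt, Stanton and Prandtl numbers. A small sketch, assuming the Dittus-Boelter form Nu = 0.023 Re^0.8 Pr^0.4 with St = Nu/(Re Pr) and a placeholder correction factor:

```python
# Classical baseline that corrected correlations build on: Dittus-Boelter for
# fully developed turbulent pipe flow (heating), and St = Nu / (Re * Pr).
def stanton(re, pr, correction=1.0):
    """Stanton number from Dittus-Boelter; 'correction' stands in for the
    experimentally fitted factors discussed in the abstract."""
    nu = 0.023 * re**0.8 * pr**0.4
    return correction * nu / (re * pr)

# Water-like flow in a pipe (illustrative Reynolds/Prandtl values)
for re in (5e4, 1e5, 5e5):
    print(f"Re = {re:8.0f}  St = {stanton(re, pr=5.0):.2e}")
```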

Keywords: experimental measurements, numerical correlations, thermal pollution through pipelines, turbulent thermal flow

Procedia PDF Downloads 149
20315 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training

Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto

Abstract:

In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer and network technology, to civil engineering construction sites is accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and use them for design and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts is a type of work that has been carried out by skilled workers based on years of experience, and it is one of the tasks that is difficult for inexperienced workers to carry out on site. Wave-dissipating blocks are structures designed to protect coasts and beaches from erosion by reducing the energy of ocean waves. They usually weigh more than 1 t and are installed suspended from a crane, so on-site training for inexperienced workers would be time-consuming and costly. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity, defined here as the ratio of the total volume of the wave-dissipating blocks inside the structure to the volume of the ideal final structure shape. Using this porosity evaluation, the simulator can determine how well the user has installed the blocks. A voxelization technique is used to calculate the porosity of the structure, simplifying the calculations; other techniques, such as raycasting and box overlapping, are employed for accurate simulation. In the near future, the simulator will incorporate an automatic block installation algorithm based on combinatorial optimization and compare the user's block installation with the appropriate installation found by the algorithm.
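A minimal numpy sketch of the voxel-based volume ratio described above: block placements are rasterised into a boolean grid and compared against a mask of the ideal structure envelope. The grid size, the axis-aligned block shapes, and the mound-shaped envelope are all simplifying assumptions:

```python
import numpy as np

# Voxel grid covering the design envelope of the structure
nx, ny, nz = 100, 60, 40
filled = np.zeros((nx, ny, nz), dtype=bool)

def place_block(grid, corner, size):
    """Mark voxels occupied by an axis-aligned block (real wave-dissipating
    blocks would be rasterised from their 3D mesh instead)."""
    x, y, z = corner
    dx, dy, dz = size
    grid[x:x + dx, y:y + dy, z:z + dz] = True

# Mask of the ideal final structure: a simple mound shape for illustration
X, Y, Z = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")
ideal = Z <= 20 - 0.2 * np.abs(X - nx // 2)

# A few placements (the simulator would collect these from the user)
place_block(filled, (40, 10, 0), (8, 8, 8))
place_block(filled, (48, 10, 0), (8, 8, 8))
place_block(filled, (44, 10, 8), (8, 8, 8))

ratio = filled[ideal].sum() / ideal.sum()
print(f"block volume inside the ideal envelope: {ratio:.3f} of envelope volume")
```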

Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks

Procedia PDF Downloads 82
20314 Analysis of Some Solutions to Protect the Tombolo of Giens

Authors: Yves Lacroix, Van Van Than, Didier Léandri, Pierre Liardet

Abstract:

The western tombolo of the Giens peninsula in southern France, known as Almanarre beach, is subject to coastal erosion. We are using computer simulation in order to propose solutions to stop this erosion. Our aim was first to determine the main factors of this erosion and then to apply a coupled hydro-sedimentological numerical model based on observations and measurements that have been performed on the site for decades. We gathered all available information and data about waves, winds, currents, tides, bathymetry, coastline, and sediments concerning the site. These were divided into two sets: one devoted to calibrating a numerical model using the Mike 21 software, the other serving as a reference for numerically comparing the present situation with what it could be if different types of underwater constructions were implemented. This paper presents the first part of the study: selecting and merging different sources into a coherent database, identifying the main erosion factors, and calibrating the coupled model against the selected reference period. Our results show calibration of the numerical model with good fitting coefficients. They also show that winter south-western storm events combined with depressive weather conditions constitute a major factor of erosion, mainly due to wave impact in the northern part of Almanarre beach. The combined impact of current and wind is shown to be negligible.
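The abstract reports "good fitting coefficients" without naming them. As an assumption of what such a calibration check can look like, here is a short sketch computing two scores commonly used for coastal models, root-mean-square error and the Willmott index of agreement, on illustrative observed vs. modelled values:

```python
import numpy as np

def rmse(obs, mod):
    return np.sqrt(np.mean((mod - obs) ** 2))

def willmott_skill(obs, mod):
    """Willmott index of agreement: 1 = perfect, 0 = no skill."""
    num = np.sum((mod - obs) ** 2)
    den = np.sum((np.abs(mod - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

# Illustrative observed vs. modelled shoreline positions (m), not site data
obs = np.array([12.1, 11.4, 10.9, 10.2, 9.8, 9.1])
mod = np.array([12.0, 11.6, 10.6, 10.3, 9.5, 9.2])
print(f"RMSE = {rmse(obs, mod):.2f} m, skill = {willmott_skill(obs, mod):.3f}")
```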

Keywords: Almanarre beach, coastal erosion, hydro-sedimentological, numerical model

Procedia PDF Downloads 304
20313 PolyScan: Comprehending Human Polymicrobial Infections for Vector-Borne Disease Diagnostic Purposes

Authors: Kunal Garg, Louise Theusen Hermansan, Kanoktip Puttaraska, Oliver Hendricks, Heidi Pirttinen, Leona Gilbert

Abstract:

The Germ Theory (one infectious determinant equals one disease) has unarguably advanced our capability to diagnose and treat infectious diseases over the years. Nevertheless, the advent of technology, climate change, and volatile human behavior have brought drastic changes to our environment, leading us to question the relevance of the Germ Theory in our day: will vector-borne disease (VBD) sufferers produce multiple immune responses when tested for multiple microbes? Vector diseased patients producing multiple immune responses to different microbes would evidently suggest human polymicrobial infections (HPI). Current diagnostic tools are largely unequipped with the research findings that would aid in diagnosing patients for polymicrobial infections. This shortcoming has caused misdiagnosis at very high rates, consequently diminishing patients' quality of life through inadequate treatment. Equipped with state-of-the-art scientific knowledge, PolyScan intends to address the pitfalls in current VBD diagnostics. PolyScan is a multiplex and multifunctional enzyme-linked immunosorbent assay (ELISA) platform that can test for numerous VBD microbes and allows simultaneous screening for multiple types of antibodies. To validate PolyScan, Lyme borreliosis (LB) and spondyloarthritis (SpA) patient groups (n = 54 each) were tested for Borrelia burgdorferi, Borrelia burgdorferi round body (RB), Borrelia afzelii, Borrelia garinii, and Ehrlichia chaffeensis against IgM and IgG antibodies. LB serum samples were obtained from Germany and SpA serum samples from Denmark under the relevant ethical approvals. The SpA group represented the chronic LB stage because reactive arthritis (an SpA subtype) in the form of Lyme arthritis links to LB. It was hypothesized that patients from both groups would produce multiple immune responses, which as a consequence would evidently suggest HPI. It was also hypothesized that the proportion of multiple immune responses in the SpA patient group would be significantly larger than in the LB patient group across both antibodies. It was observed that 26% of LB patients and 57% of SpA patients produced multiple immune responses, in contrast to 33% of LB patients and 30% of SpA patients producing solitary immune responses, when tested against IgM. Similarly, 52% of LB patients and an astounding 73% of SpA patients produced multiple immune responses, in contrast to 30% of LB patients and 8% of SpA patients producing solitary immune responses, when tested against IgG. Interestingly, IgM immune dysfunction was also recorded in both patient groups. Atypically, 6% out of the 18% of LB patients unresponsive with the IgG antibody produced multiple immune responses with the IgM antibody. Similarly, 12% out of the 19% of SpA patients unresponsive with the IgG antibody produced multiple immune responses with the IgM antibody. Thus, the results not only supported the hypotheses but also suggested that IgM may atypically prevail longer than IgG. The PolyScan concept will aid clinicians in detecting early, persistent, late, polymicrobial, and immune dysfunction conditions linked to different VBDs. PolyScan provides a paradigm shift for the VBD diagnostic industry that will drastically shorten patients' time to receive adequate treatment.
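Using the IgG figures reported above (52% of 54 LB patients vs. 73% of 54 SpA patients with multiple responses), a sketch of the kind of two-proportion comparison that would support the second hypothesis. The test choice and the rounding of percentages back to counts are our assumptions; the abstract does not state which test was used:

```python
from statsmodels.stats.proportion import proportions_ztest

n = 54                              # patients per group
lb_multi = round(0.52 * n)          # LB patients with multiple IgG responses
spa_multi = round(0.73 * n)         # SpA patients with multiple IgG responses

# One-sided test: is the SpA proportion larger than the LB proportion?
stat, pval = proportions_ztest([spa_multi, lb_multi], [n, n],
                               alternative="larger")
print(f"z = {stat:.2f}, one-sided p = {pval:.4f}")
```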

Keywords: diagnostics, immune dysfunction, polymicrobial, TICK-TAG

Procedia PDF Downloads 314
20312 A Quantitative Survey Research on the Development and Assessment of an Attitude toward Mathematics Instrument

Authors: Soofia Malik

Abstract:

The purpose of this study is to develop an instrument to measure undergraduate students' attitudes toward mathematics (MAT) and to assess the data collected with the instrument for validity and reliability. The instrument is developed using five subscales: anxiety, enjoyment, self-confidence, value, and technology. The technology dimension is added as the fifth subscale because of the recent trend of incorporating online homework in mathematics courses, as well as higher education's heavy reliance on online learning management systems such as Blackboard and Moodle. The sample consists of 163 undergraduates (M = 82, F = 81) enrolled in a College Algebra course in the summer 2017 semester at a university in the USA. The data are analyzed to answer the research question: whether and how do undergraduate students' attitudes toward mathematics load onto components using Principal Components Analysis (PCA)? As a result of the PCA, three subscales emerged, namely anxiety/self-confidence, enjoyment, and value. After deleting the last five items (the last two subscales) from the initial MAT scale, Cronbach's alpha was recalculated using the scores from the 20 remaining items and was found to be α = .95; the reliability of the initial MAT form was α = .93. This means that the final MAT survey form would yield consistent results in repeated use and is therefore more reliable than the initial MAT form.
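A sketch of the two analyses named above, Cronbach's alpha and PCA, run on simulated Likert-style item responses (the data and the single-factor structure are invented, not the study's):

```python
import numpy as np
from sklearn.decomposition import PCA

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(163, 1))                 # one common attitude factor
items = latent + 0.6 * rng.normal(size=(163, 20))  # 20 retained MAT items

print(f"alpha = {cronbach_alpha(items):.2f}")
pca = PCA(n_components=3).fit(items)
print("variance explained by 3 components:",
      pca.explained_variance_ratio_.round(2))
```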

Keywords: college algebra, Cronbach's alpha reliability coefficient, Principal Components Analysis, PCA, technology in mathematics

Procedia PDF Downloads 111
20311 Principle of Progressive Implementation and Education Policy for Former Combatants in Colombia

Authors: Ximena Rincon Castellanos

Abstract:

The research analyzed the public education policy of Colombia against the content of the right to education. One problematic element of that content is the principle of progressive implementation of economic, social and cultural rights. The research included a complete study of public documents and other papers, as well as one focus group with former combatants in a city hosting one of the 'hogares de paz' (peace homes) that receive these people after they leave the illegal group. This paper presents a critical approach to the public policy strategies for guaranteeing education to former combatants and their tension with the right to progressive implementation. Firstly, education is understood as technological-level training, without considering higher education: former combatants attend SENA and private institutions, which offer technological education that the Colombian Government counts as higher education. Statistics therefore report a high level of attendance of ex-combatants at that education level, when in fact they do not expect to study for a university career. Secondly, the approved budget has been invested in private institutions, although public institutions are able to include this population and need more money to strengthen the public offer, which the Special Rapporteur on the right to education has considered the better strategy for ensuring education as a human right rather than a good. As a consequence, progressive implementation should guide changes and improvements to current strategies, investing the available budget in the public system of education in order to give former combatants the chance of access to universities.

Keywords: higher education, progressive implementation, public service, private offering and technology education

Procedia PDF Downloads 155
20310 An Optimal Control Model for the Dynamics of Visceral Leishmaniasis

Authors: Ibrahim M. Elmojtaba, Rayan M. Altayeb

Abstract:

Visceral leishmaniasis (VL) is a vector-borne disease caused by protozoan parasites of the genus Leishmania. Transmission of the parasite to humans and animals occurs via the bite of adult female sandflies previously infected by biting and sucking the blood of infectious humans or animals. In this paper, we take a previously proposed model and apply two optimal controls, namely treatment and vaccination, to investigate optimal strategies for controlling the spread of the disease, with treatment and vaccination as the system control variables. The possible impact of using the two controls, either one at a time or in combination, on the spread of the disease is also examined. Our results provide a framework for vaccination and treatment strategies to reduce the numbers of susceptible and infected individuals of VL within five years.
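The authors' compartment model is not reproduced in the abstract. As a generic illustration of how such controls enter the dynamics, the sketch below integrates a toy host-vector model in which a treatment rate u1 augments recovery and a vaccination rate u2 moves susceptibles out of risk; all compartments, rates and initial values are invented:

```python
from scipy.integrate import solve_ivp

# Toy host-vector model; all rates are illustrative, not the paper's values.
beta_h, beta_v = 0.0002, 0.0001   # transmission: vector->host, host->vector
gamma, mu_v = 0.05, 0.1           # natural recovery, sandfly mortality
u1, u2 = 0.1, 0.05                # controls: treatment rate, vaccination rate

def rhs(t, y):
    S, I, R, Sv, Iv = y
    new_h = beta_h * S * Iv       # host infections from infectious sandflies
    new_v = beta_v * Sv * I       # sandfly infections from infectious hosts
    return [-new_h - u2 * S,                    # vaccination removes susceptibles
            new_h - (gamma + u1) * I,           # treatment speeds up recovery
            (gamma + u1) * I + u2 * S,
            mu_v * (Sv + Iv) - new_v - mu_v * Sv,  # constant vector population
            new_v - mu_v * Iv]

y0 = [10000, 50, 0, 5000, 100]
sol = solve_ivp(rhs, (0, 5 * 365), y0)          # five years, time in days
print("infectious hosts after 5 years:", round(sol.y[1, -1], 1))
```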

Keywords: visceral leishmaniasis, treatment, vaccination, optimal control, numerical simulation

Procedia PDF Downloads 392
20309 Developing and Enacting a Model for Institutional Implementation of the Humanizing Pedagogy: Case Study of Nelson Mandela University

Authors: Mukhtar Raban

Abstract:

As part of Nelson Mandela University's journey of repositioning its learning and teaching agenda, the university adopted and foregrounded a humanizing pedagogy, aligning with institutional goals of critically transforming the academic project. The university established the Humanizing Pedagogy Praxis and Research Niche (HPPRN) as a centralized hub for coordinating institutional work exploring and advancing humanizing pedagogies, and tasked the unit with developing and enacting a model for humanizing pedagogy exploration. This investigation reports on the development and enactment of a model that sought to institutionalize a humanizing pedagogy at a South African university. Following a qualitative approach, the investigation presents the case study of Nelson Mandela University's HPPRN and the model it established and enacted to advance a more common institutional understanding, interpretation and application of the humanizing pedagogy. The study adopted an interpretive lens for analysis, complementing its qualitative approach. The primary challenge that confronted the HPPRN was the development of a 'living model' that had to complement existing institutional initiatives while accommodating a renewed spirit of critical reflection, innovation and research of continued and new humanizing pedagogical explorations and applications. The study found that the explicit consideration of tenets of humanizing and critical pedagogies in underpinning and framing the HPPRN Model contributed to the sense of 'lived' humanizing pedagogy experiences during enactment. The multi-leveled inclusion of critical reflection in the development and enactment stages was found to further the processes of praxis employed at the university, which is integral to the advancement of humanizing and critical pedagogies. The development and implementation of a model that seeks to institutionalize the humanizing pedagogy at a university rely not only on sound theoretical conceptualization but also on the 'richness of becoming more human' explicitly expressed and encountered in praxis and application.

Keywords: humanizing pedagogy, critical pedagogy, institutional implementation, praxis

Procedia PDF Downloads 156
20308 Flow: A Fourth Musical Element

Authors: James R. Wilson

Abstract:

Music is typically defined as having the attributes of melody, harmony, and rhythm. In this paper, a fourth element is proposed: 'flow'. Flow is a dimension of music that has always been present but has only recently been identified and measured. The Adagio 'Flow Machine' enables us to envision this component and even suggests a new approach to music theory and analysis. The Adagio was created specifically to measure the underlying flow in music. It offers an entirely new way to experience and visualize music, to assist in performing it (as a conductor and/or performer), and to provide a new methodology for music analysis and theory. The Adagio uses musical 'hit points', such as the transition from one musical section to another (for example, in a composition in sonata form, the transition from the exposition to the development section), to help define the composition's flow rate. Once the flow rate is established, the Adagio can be used to determine whether the composer, performer, or conductor has maintained the proper rate of flow throughout the performance. An example is provided using Mozart's Piano Concerto No. 21. Working with the Adagio yielded an unexpected windfall: an empirical study conducted at Nova University's biofeedback lab determined that watching the Adagio helped volunteers in a controlled experiment recover from stressors significantly faster than the control group. The Adagio can be thought of as a new arrow in the musicologist's quiver, providing a unique way of viewing the psychological impact and aesthetic effectiveness of music composition. Additionally, with worldwide access to multimedia via the internet, flow analysis can be performed and shared with others with little time and/or expense.

Keywords: musicology, music analysis, music flow, music therapy

Procedia PDF Downloads 159
20307 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome

Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler

Abstract:

Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to their multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting the 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, based on the impact of the presence/value of clinical conditions on the predicted outcome. Like the (log) odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman's rho = 0.74), which helped validate our explanation.
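The abstract does not fully specify how impact scores are computed. The sketch below implements one plausible reading on synthetic data: toggle each binary condition on and off for every patient, average the change in the network's predicted risk, and compare the resulting scores against logistic-regression coefficients with Spearman's rho. The model sizes and data are invented:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(5000, 8)).astype(float)   # binary conditions
logit = X @ np.array([1.2, 0.8, 0.5, 0.3, 0.0, -0.2, -0.6, -1.0]) - 1.5
y = rng.random(5000) < 1 / (1 + np.exp(-logit))        # synthetic outcomes

nn = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500).fit(X, y)
lr = LogisticRegression().fit(X, y)

# Impact score (one plausible reading): mean change in predicted risk when a
# condition is switched on vs. off for every patient.
impact = []
for j in range(X.shape[1]):
    X_on, X_off = X.copy(), X.copy()
    X_on[:, j], X_off[:, j] = 1.0, 0.0
    impact.append(np.mean(nn.predict_proba(X_on)[:, 1]
                          - nn.predict_proba(X_off)[:, 1]))

rho, _ = spearmanr(impact, lr.coef_[0])
print(f"Spearman rho between impact scores and LR coefficients: {rho:.2f}")
```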

Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model

Procedia PDF Downloads 139
20306 Optimizing Foaming Agents by Air Compression to Unload a Liquid Loaded Gas Well

Authors: Mhenga Agneta, Li Zhaomin, Zhang Chao

Abstract:

When the gas velocity is high enough, gas can entrain liquid and carry it to the surface, but as time passes, the velocity drops to a critical point where fluids start to hold up in the tubing and cause liquid loading, which prevents gas production and may lead to the death of the well. Foam injection is widely used as one of the methods to unload liquid. Since wells have different characteristics, it is not guaranteed that foam can be applied to all of them with successful results. This research presents a technology for optimizing the efficiency of foam in unloading liquid by air compression. Two methods are used: (i) mathematical formulas are used to show how density and critical velocity can be reduced when air is compressed into the foaming agents, and then how the relationship between flow rate and pressure increase boosts the bottomhole pressure and raises the velocity available to lift liquid to the surface; (ii) experiments test foam carryover capacity and stability as a function of time and surfactant concentration, whereby three surfactants were probed: anionic sodium dodecyl sulfate (SDS), nonionic Triton X-100, and cationic hexadecyltrimethylammonium bromide (HDTAB). The best foaming agents were injected to lift the liquid loaded in a vertical well model consisting of steel tubing 2.5 cm in diameter and 390 cm high, covered by a transparent glass casing 5 cm in diameter and 450 cm high. The results show that, after injecting foaming agents, liquid unloading was 75% successful; moreover, the efficiency of the foaming agents in unloading liquid increased by 10% with the addition of compressed air at a ratio of 1:1. Measured and calculated values were compared and differed by about ±3%, which is a good agreement. The successful application of the technology indicates that engineers and stakeholders could bring water-flooded gas wells back to production with optimized results by first paying attention to the type of surfactants (foaming agents) used, the surfactant concentration, and the flow rates of the injected surfactants, and then compressing air into the foaming agents at a proper ratio.
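The abstract's formulas are not given; the classical starting point for the critical unloading velocity is Turner's droplet model, in which the maximum stable drop size is set by a critical Weber number. A sketch assuming that model and illustrative fluid properties, showing how foaming (lower surface tension and lower effective liquid density) reduces the critical gas velocity:

```python
# Turner-style critical velocity: drag-gravity balance on the largest stable
# droplet, whose size is set by a critical Weber number (~30).
def v_critical(sigma, rho_l, rho_g, g=9.81, cd=0.44, we=30.0):
    """Minimum upward gas velocity (m/s) that keeps liquid droplets rising."""
    return (4 * g * we * sigma * (rho_l - rho_g) / (3 * cd * rho_g**2)) ** 0.25

rho_g = 50.0   # gas density at bottomhole conditions (kg/m3, illustrative)
water = v_critical(sigma=0.06, rho_l=1000.0, rho_g=rho_g)
# Foaming with entrained compressed air lowers both the surface tension and
# the effective density of the lifted liquid (values are illustrative):
foam = v_critical(sigma=0.025, rho_l=300.0, rho_g=rho_g)
print(f"critical velocity: water {water:.2f} m/s -> foamed liquid {foam:.2f} m/s")
```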

Keywords: air compression, foaming agents, gas well, liquid loading

Procedia PDF Downloads 123
20305 Research on a Fuzzing Framework Based on Concolic Execution

Authors: Xiong Xie, Yuhang Chen

Abstract:

Vulnerability discovery is a significant current research field. In this paper, a fuzzing framework based on concolic execution is proposed. Fuzz testing and symbolic execution are both widely used in vulnerability discovery, but each has its own advantages and disadvantages. During the path generation stage, a generation-based path traversal algorithm is used to obtain more accurate paths. During the constraint solving stage, dynamic concolic execution is used to avoid path explosion. If there is an external call, concolic execution based on function summaries is used. Experiments show that the framework can effectively improve the ability to trigger vulnerabilities and increase code coverage.
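A toy illustration of the concolic core loop, written against the Z3 solver: run the target concretely, replay the branches taken as symbolic constraints, negate the last constraint, and solve for an input that drives a new path. The target function and the constraint bookkeeping are deliberately minimal; the paper's generation-based traversal and function summaries are beyond this sketch:

```python
from z3 import Int, Not, Solver, sat  # pip install z3-solver

def program(x):
    """Toy target: which branch does the concrete input exercise?"""
    if x * 2 > 100:
        if x % 3 == 0:
            return "bug"          # the deep path fuzzing should reach
        return "deep"
    return "shallow"

def path_constraints(x_sym, x_val):
    """Symbolically replay the branch decisions of the concrete run."""
    cs = [x_sym * 2 > 100 if x_val * 2 > 100 else x_sym * 2 <= 100]
    if x_val * 2 > 100:
        cs.append(x_sym % 3 == 0 if x_val % 3 == 0 else x_sym % 3 != 0)
    return cs

x_sym, seed = Int("x"), 1
for _ in range(4):                 # a few concolic iterations
    print(f"input {seed:4d} -> path: {program(seed)}")
    trace = path_constraints(x_sym, seed)
    s = Solver()
    s.add(*trace[:-1])
    s.add(Not(trace[-1]))          # flip the most recent branch decision
    if s.check() != sat:
        break                      # no input reaches an unexplored path
    seed = s.model()[x_sym].as_long()
```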

Keywords: concolic execution, constraint solving, fuzz testing, vulnerability discovery

Procedia PDF Downloads 213
20304 Implementation of State-Space and Super-Element Techniques for the Modeling and Control of Smart Structures with Damping Characteristics

Authors: Nader Ghareeb, Rüdiger Schmidt

Abstract:

Minimizing the weight of flexible structures reduces material use and costs as well. However, these structures can become prone to vibrations, and attenuating those vibrations has become a pivotal engineering problem that has shifted the focus of many research endeavors. One technique is to design and implement an active control system, mainly composed of a vibrating structure, a sensor to perceive the vibrations, an actuator to counteract the influence of disturbances, and finally a controller to generate the appropriate control signals. In this work, two different techniques are explored to create two mathematical models of an active control system. The first is a finite element model with a reduced number of nodes, called a super-element. The second is a state-space representation, i.e., a set of first-order differential equations. The damping coefficients are calculated and incorporated into both models. The effectiveness of these models is demonstrated when the system is excited at its first natural frequency and an active control strategy is developed and implemented to attenuate the resulting vibrations. Results from both modeling techniques are presented and compared.
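A sketch of the state-space side for a stand-in two-degree-of-freedom structure: assemble the first-order system matrices from mass, damping and stiffness matrices (Rayleigh damping assumed) and simulate the response to excitation at the first natural frequency. All matrices and coefficients are illustrative:

```python
import numpy as np
from scipy.signal import StateSpace, lsim

# Two-DOF stand-in structure: M xdd + C xd + K x = f(t)
M = np.diag([1.0, 1.0])
K = np.array([[40.0, -20.0], [-20.0, 20.0]])
C = 0.02 * M + 0.005 * K           # Rayleigh damping (assumed coefficients)

Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K, -Minv @ C]])
B = np.vstack([np.zeros((2, 1)), Minv @ np.array([[0.0], [1.0]])])  # force on DOF 2
Cout = np.hstack([np.eye(2), np.zeros((2, 2))])  # observe the displacements
sys = StateSpace(A, B, Cout, np.zeros((2, 1)))

# Excite at the first natural frequency from the undamped eigenproblem
w1 = np.sqrt(np.min(np.linalg.eigvals(Minv @ K).real))
t = np.linspace(0, 60, 3000)
u = np.sin(w1 * t)
_, y, _ = lsim(sys, u, t)
print("peak displacement of DOF 1 at resonance:", np.abs(y[:, 0]).max().round(3))
```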

Keywords: damping coefficients, finite element analysis, super-element, state-space model

Procedia PDF Downloads 308
20303 A 3D Numerical Environmental Modeling Approach for Assessing Transport of Spilled Oil in Porous Beach Conditions under a Meso-Scale Tank Design

Authors: J. X. Dong, C. J. An, Z. Chen, E. H. Owens, M. C. Boufadel, E. Taylor, K. Lee

Abstract:

Shorelines are vulnerable to significant environmental impacts from oil spills. Stranded oil can cause short- to long-term detrimental effects along beaches, including injuries to ecosystems and to socio-economic and cultural resources. In this study, a three-dimensional (3D) numerical modeling approach is developed to evaluate the fate and transport of spilled oil for hypothetical oiled shoreline cases under various combinations of beach geomorphology and environmental conditions. The model estimates the spatial and temporal distribution of spilled oil for the various test conditions using the finite volume method, considering physical transport (dispersion and advection), sinks, and sorption processes. The model includes a user-friendly interface for data input on variables such as beach properties, environmental conditions, and the physical-chemical properties of the spilled oil. An experimental mesoscale tank design was used to test the developed model for dissolved petroleum hydrocarbons within shorelines. The simulated results for the effects of different sediment substrates, oil types, and shoreline features on the transport of spilled oil are comparable to those obtained with a commercially available model. Results show that the substrate properties and oil removal by shoreline effects have significant impacts on oil transport in the beach area. Sensitivity analysis using the one-at-a-time (OAT) method identified hydraulic conductivity as the most sensitive parameter of the 3D model. The 3D numerical model allows users to examine the behavior of oil on and within beaches, assess potential environmental impacts, and provide technical support for decisions related to shoreline clean-up operations.
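As a much-reduced sketch of the transport core, the snippet below advances a 1-D finite-volume advection-dispersion equation with linear equilibrium sorption (a retardation factor) and a first-order sink. The grid, rates and initial slug are illustrative assumptions, not the study's configuration:

```python
import numpy as np

# 1-D finite-volume advection-dispersion with linear sorption (retardation R)
# and a first-order sink; a much-reduced sketch of the 3-D beach model.
nx, L = 200, 10.0                 # cells, domain length (m)
dx = L / nx
v, D = 1e-4, 1e-6                 # pore velocity (m/s), dispersion (m2/s)
R, lam = 2.0, 1e-6                # retardation factor, sink rate (1/s)
dt = 0.4 * min(dx / v, dx**2 / (2 * D))   # stability-limited explicit step

c = np.zeros(nx)
c[:5] = 1.0                       # initial dissolved-oil slug at the beach face

def step(c):
    flux = np.zeros(nx + 1)       # closed boundaries: zero flux at both ends
    flux[1:-1] = v * c[:-1] - D * (c[1:] - c[:-1]) / dx  # upwind + Fickian
    dc = -(flux[1:] - flux[:-1]) / dx
    return c + dt * (dc / R - lam * c)

for _ in range(500):
    c = step(c)
print(f"peak concentration {c.max():.3f} at x = {c.argmax() * dx:.2f} m")
```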

Keywords: dissolved petroleum hydrocarbons, environmental multimedia model, finite volume method, sensitivity analysis, total petroleum hydrocarbons

Procedia PDF Downloads 198
20302 Millimeter Wave Antenna for 5G Mobile Communications Systems

Authors: Hind Mestouri

Abstract:

The study and simulation of a millimeter wave antenna for 5G mobile communication systems is the topic of this paper. We begin with the general aspects of 5G technology, recalling the objectives of the 5G standard, its architecture, and the parameters that characterize it. The proposed antenna model is designed using the CST Microwave Studio simulation software. Numerous methods are used at all steps of the design procedure, such as theoretical calculation of parameters, declaration of parameter values, and evaluation of the antenna through the obtained results. We first designed an antenna array at 10 GHz; afterward, we also simulated and present an antenna array at 2.5 GHz. For each antenna designed, a parametric study was conducted to understand and highlight the role and effects of the various parameters, in order to optimize them and achieve a final, efficient structure. The results obtained with CST Microwave Studio show that the characteristics of the designed antennas (bandwidth, gain, radiation pattern) satisfy the specifications of 5G mobile communications.
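Alongside full-wave simulation, the broadside pattern of a uniform linear array can be estimated from its array factor. A sketch at the paper's first design frequency of 10 GHz; the 8-element count and half-wavelength spacing are our assumptions:

```python
import numpy as np

# Array factor of an N-element uniform linear array (broadside, no steering)
c = 3e8
f = 10e9                       # first design frequency mentioned in the paper
lam = c / f
N, d = 8, lam / 2              # assumed element count and spacing
k = 2 * np.pi / lam

theta = np.radians(np.linspace(-90, 90, 721))
psi = k * d * np.sin(theta)    # progressive phase between adjacent elements
af = np.abs(np.sum(np.exp(1j * np.outer(np.arange(N), psi)), axis=0)) / N

af_db = 20 * np.log10(np.maximum(af, 1e-6))
width = np.degrees(np.ptp(theta[af_db >= -3]))
print(f"approximate -3 dB beamwidth: {width:.1f} degrees")
```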

Keywords: 5G, antenna array, millimeter wave, 10 GHz, CST Microwave Studio

Procedia PDF Downloads 69
20301 Multi-Generational Analysis of Perception and Acceptance of Mental Illnesses: Current Indian Context

Authors: Anvi Kumar

Abstract:

This paper explores the attitudes and awareness of multiple generations, ranging from Boomers I to Gen Z (i.e., born 1954 to 2012), towards mental health issues. A convenience sample of 191 people aged 11-77 was gathered in India, with 20 people considered from each of 5 generational cohorts, namely Boomers I, Boomers II, Gen X, Millennials, and Gen Z. The study tool comprised a survey that included demographic questions and the Community Attitudes towards Mental Illness (CAMI) scale by Taylor and Dear (1981). Descriptive statistics, ANOVA, and Bonferroni's post-hoc analysis were used to perform the analysis. The findings reveal that the level of kindness towards those who struggle with mental health varies across certain age groups, and an overall sense of exclusion of those struggling with mental health is prevalent among all age groups. Gen Z's awareness of mental health issues comes primarily via social media, whereas the other generations seek it from close relatives and friends. The study's findings suggest a need to investigate further the quality of mental health knowledge content and its consumption patterns. Understanding the dynamics of information sharing and the potential for biases requires further study.
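A sketch of the stated pipeline, one-way ANOVA across the five cohorts followed by Bonferroni-corrected pairwise t-tests, run on simulated CAMI-style scores (the cohort means and spread below are invented, not the study's data):

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(7)
cohorts = {"Boomers I": 3.1, "Boomers II": 3.2, "Gen X": 3.3,
           "Millennials": 3.6, "Gen Z": 3.7}
# Simulated mean CAMI subscale scores per cohort (values illustrative)
scores = {name: rng.normal(mu, 0.5, size=20) for name, mu in cohorts.items()}

f, p = stats.f_oneway(*scores.values())
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")

pairs = list(combinations(scores, 2))
alpha_corrected = 0.05 / len(pairs)          # Bonferroni correction
for a, b in pairs:
    t, p = stats.ttest_ind(scores[a], scores[b])
    if p < alpha_corrected:
        print(f"{a} vs {b}: p = {p:.4f} (significant after correction)")
```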

Keywords: attitude, behaviour, mental illness, Gen Z, millennials, Gen Y, multi-generations, generational differences

Procedia PDF Downloads 63
20300 Using the Weakest Precondition to Achieve Self-Stabilization in Critical Networks

Authors: Antonio Pizzarello, Oris Friesen

Abstract:

Networks, such as the electric power grid, must demonstrate exemplary performance and integrity. Integrity depends on the quality of both the system design model and the deployed software, and the integrity of the deployed software is key, both for the original version and for the many versions produced by maintenance activity. Current software engineering technology and practice do not produce adequate integrity. Distributed systems utilize networks where each node is an independent computer system. The connections between nodes are realized via a network that is normally redundantly connected, to guarantee the presence of a path between two nodes in case some branch fails. Furthermore, the software at each node may fail. Self-stabilizing protocols are usually present that recognize failures in the network and perform a repair action that brings the node back to a correct state. These protocols, first introduced by E. W. Dijkstra, are currently present in almost all Ethernets. Superstabilizing protocols, capable of reacting to a change in the network topology due to the removal or addition of a branch, are less common but are theoretically defined and available. This paper describes how to use the Software Integrity Assessment (SIA) methodology to analyze self-stabilizing software. SIA is based on the UNITY formalism for parallel and distributed programming, which allows the analysis of code to verify the progress property p leads-to q, describing the progress of all computations from a state satisfying p to a state satisfying q via the execution of one or more system modules. As opposed to demonstrably inadequate test and evaluation methods, SIA allows the analysis and verification of any network self-stabilizing software, as well as any other software designed to recover from failure without external intervention by maintenance personnel. The model to be analyzed is obtained by automatic translation of the system code into a transition system, based on the use of the weakest precondition.
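A minimal sketch of the weakest-precondition calculus that underlies such a translation, using the Z3 solver: wp of an assignment is substitution into the postcondition, wp of a conditional combines the branch preconditions, and validity is checked by asking whether the negation is unsatisfiable. The toy program and postcondition are ours, not from the paper:

```python
from z3 import And, Implies, Int, Not, Solver, substitute, unsat

x, y = Int("x"), Int("y")

def wp_assign(var, expr, post):
    """wp(var := expr, Q) = Q[var := expr]."""
    return substitute(post, (var, expr))

def wp_if(cond, wp_then, wp_else):
    """wp(if c then S1 else S2, Q) = (c => wp(S1, Q)) and (not c => wp(S2, Q))."""
    return And(Implies(cond, wp_then), Implies(Not(cond), wp_else))

post = x > 0                       # postcondition Q
# Toy program: if y >= 0 then x := y + 1 else x := -y
w = wp_if(y >= 0, wp_assign(x, y + 1, post), wp_assign(x, -y, post))

s = Solver()
s.add(Not(w))                      # wp is valid iff its negation is unsatisfiable
print("wp holds for every input:", s.check() == unsat)
```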

Keywords: network, power grid, self-stabilization, software integrity assessment, UNITY, weakest precondition

Procedia PDF Downloads 209
20299 Assessment of On-Site Solar and Wind Energy at a Manufacturing Facility in Ireland

Authors: A. Sgobba, C. Meskell

Abstract:

The feasibility of on-site electricity production from solar and wind, and the resulting load management for a specific manufacturing plant in Ireland, are assessed. The industrial sector accounts directly and indirectly for a high percentage of electricity consumption and global greenhouse gas emissions; therefore, it will play a key role in emission reduction and control. Manufacturing plants, in particular, are often located in non-residential areas, since they require open spaces for production machinery, parking facilities for employees, appropriate routes for supply and delivery, and special connections to the national grid, and they have other environmental impacts. Since they have larger spaces compared to commercial sites in urban areas, they represent an appropriate case study for evaluating the technical and economic viability of integrating low-power-density technologies, such as solar and wind, for on-site electricity generation. The open space surrounding the analysed manufacturing plant can be efficiently used to produce a discrete quantity of energy, instantaneously and locally consumed, so transmission and distribution losses can be reduced. Storage is not required due to the high and almost constant electricity consumption profile. The energy load of the plant is identified through analysis of gas and electricity consumption, both internally monitored and reported on the bills. These data are often not recorded or available to third parties, since manufacturing companies usually keep track only of overall energy expenditure. The solar potential is modelled over a period of 21 years based on global horizontal irradiation data; the hourly direct and diffuse radiation and the energy produced by the system at the optimum pitch angle are calculated. The model is validated using the PVWatts and SAM tools. Wind speed data are available for the same period at one-hour steps at a height of 10 m. Since the hub of a typical wind turbine reaches a higher altitude, complementary data at 50 m for a different location have been compared, and a model for estimating wind speed at the required height at the site is defined. The Weibull statistical distribution is used to evaluate the wind energy potential of the site. The results show that solar and wind energy are, as expected, generally decoupled. Based on the real case study, the percentage of load covered each hour by on-site generation (level of autonomy, LA) and the resulting electricity bought from the grid (expected energy not supplied, EENS) are calculated. The economic viability of the project is assessed through the net present value (NPV), and the influence of the main technical and economic parameters on NPV is presented. Since the results show that the analysed renewable sources cannot provide enough electricity, integration with a cogeneration technology is studied. Finally, the benefit of integrating wind, solar, and cogeneration into the energy system is evaluated and discussed.
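A sketch of two of the steps described, extrapolating a Weibull-distributed wind resource to hub height and running a simple NPV check; the Weibull parameters, log-law roughness length, power curve, tariff, and capex are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
v10 = rng.weibull(2.0, 8760) * 7.0       # synthetic hourly wind at 10 m (k=2, A=7)

# Log-law extrapolation from 10 m to an 80 m hub (roughness length assumed)
z0 = 0.05
v_hub = v10 * np.log(80 / z0) / np.log(10 / z0)

# Simple turbine power curve: cubic between cut-in and rated, flat to cut-out
def power_kw(v, rated=800.0, v_in=3.0, v_r=12.0, v_out=25.0):
    p = rated * np.clip((v - v_in) / (v_r - v_in), 0, 1) ** 3
    p[(v < v_in) | (v > v_out)] = 0.0
    return p

annual_mwh = power_kw(v_hub).sum() / 1000
savings = annual_mwh * 120.0             # EUR/MWh of avoided grid purchase (assumed)

# NPV over 20 years at a 6% discount rate, with an assumed capex
capex, rate = 1_000_000.0, 0.06
npv = -capex + sum(savings / (1 + rate) ** t for t in range(1, 21))
print(f"annual yield {annual_mwh:.0f} MWh, NPV = EUR {npv:,.0f}")
```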

Keywords: demand, energy system integration, load, manufacturing, national grid, renewable energy sources

Procedia PDF Downloads 119
20298 Driving Performance Improvement in Mini Markets: The Impact of Talent Management, Business Skills, and Technology Adoption in Johannesburg and Cape Town, South Africa

Authors: Fedil Jemal Ahmed

Abstract:

This paper presents a study exploring the impact of talent management and business skills on performance improvement in mini markets located in Johannesburg and Cape Town, South Africa. Mini markets are small retail stores that play a crucial role in providing essential goods and services to communities; however, due to their small size, they often face significant challenges in terms of resources and management. The study interviewed mini market owners and managers in Johannesburg and Cape Town to understand their approach to talent management and business skills, and the impact of these on business performance. The results showed that effective talent management practices, including recruitment, training, and retention, together with strong business skills, had a significant positive impact on business performance in mini markets. The study also found that the use of technology, such as point-of-sale systems and inventory management software, can contribute to business performance improvement. The results suggest that mini market owners and managers should prioritize talent management and business skills and invest in technology to improve their performance. Comparing the improvements made by mini markets in Johannesburg and Cape Town with those made by others, the study found that the adoption of effective talent management practices and strong business skills were key factors in driving performance improvement: owners and managers who invested in these areas were better equipped to manage their resources, enhance their customer service, and increase their profitability. The author's own experience of growing a business from a small market into a large one points the same way. Effective talent management made it possible to attract and retain top talent, ensuring that the business was managed effectively, while investment in business skills such as financial management, marketing, and customer service helped increase revenue and profitability. In terms of technology adoption, point-of-sale systems and inventory management software proved essential for managing inventory and improving customer service; investing in technology streamlined operations and enhanced overall business performance. In conclusion, this study provides insights into the importance of talent management, business skills, and technology adoption in improving business performance in mini markets, and it highlights the need for mini market owners and managers to prioritize and invest in these areas. The findings have practical implications for owners and managers looking to improve performance and compete in a highly competitive market: by adopting effective talent management practices, developing strong business skills, and investing in technology, they can improve their operations and increase their profitability.

Keywords: talent management, business skills, technology adoption, mini markets

Procedia PDF Downloads 90
20297 Numerical Study on Pretensioned Bridge Girder Using Thermal Strain Technique

Authors: Prashant Motwani, Arghadeep Laskar

Abstract:

The transfer of prestress force from prestressing strands to the surrounding concrete depends on the bond between the two materials. It is essential to understand the actual bond stress distribution along the transfer length to determine the transfer zone in pre-tensioned concrete. A 3-D nonlinear finite element model has been developed in the commercially available package ABAQUS to simulate the transfer of prestress force from steel to concrete in pre-tensioned bridge girders through the thermal strain technique. A full-scale bridge girder has been analyzed with the thermal strain approach, with the concrete damage plasticity constitutive model used for the concrete. Parameters such as concrete strain, effective prestress, upward camber, and longitudinal stress have been compared with analytical results; the discrepancy between numerical and analytical values was within 20%. The paper also presents a convergence study on the mesh density and the aspect ratio of the elements used in the finite element study.
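The essence of the thermal strain technique is to cool the strand elements by an equivalent temperature drop so that their restrained thermal contraction reproduces the prestressing strain. A back-of-envelope sketch of that step, with typical (assumed) seven-wire strand values rather than the paper's:

```python
# Thermal strain technique: choose a temperature drop dT for the strand
# elements such that the restrained thermal strain alpha*dT equals the
# prestressing strain sigma_p / E_s (all values below are illustrative).
E_s = 196_000.0      # strand elastic modulus, MPa
alpha = 1.2e-5       # coefficient of thermal expansion, 1/degC
sigma_p = 1395.0     # initial prestress (e.g. 0.75 * 1860 MPa strand), MPa

dT = sigma_p / (E_s * alpha)     # equivalent cooling applied in the FE model
print(f"equivalent cooling of the strands: dT = {dT:.0f} degC")
# In the FE model this is applied as a predefined temperature field on the
# strand elements; bond then transfers the force to the concrete gradually.
```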

Keywords: aspect ratio, bridge girder, centre of gravity of strand, mesh density, finite element model, pretensioned bridge girder

Procedia PDF Downloads 216
20296 A Density Functional Theory Based Comparative Study of Trans- and Cis-Resveratrol

Authors: Subhojyoti Chatterjee, Peter J. Mahon, Feng Wang

Abstract:

Resveratrol (RvL), a phenolic compound, is a key ingredient in wine and tomatoes that has been studied over the years because of its important bioactivities, such as its anti-oxidant, anti-aging and antimicrobial properties. Of the two isomeric forms of resveratrol, trans and cis, the health benefit is primarily associated with the trans form. Thus, studying the structural properties of the isomers not only provides insight into the RvL isomers themselves but also helps in designing parameters for differentiation, in order to achieve 99.9% purity of trans-RvL. In the present study, a density functional theory (DFT) study is conducted using the B3LYP/6-311++G** model to explore through-bond and through-space intramolecular interactions. Properties such as vibrational spectroscopy (IR and Raman), nuclear magnetic resonance (NMR) spectra, the excess orbital energy spectrum (EOES), energy-based decomposition analyses (EDA) and the Fukui function are calculated. It is found that although trans-RvL is C1 non-planar, its backbone non-H atoms lie nearly in one plane, whereas cis-RvL consists of two major planes, R1 and R2, that are not coplanar. The absence of planarity gives rise to an H-bond of 2.67 Å in cis-RvL. Rotation of the C(5)-C(8) single bond in trans-RvL produces higher energy barriers, since it may break the entire (planar) conjugated structure, while such rotation in cis-RvL produces multiple minima and maxima depending on the positions of the rings. The calculated FT-IR spectra show very different features for trans- and cis-RvL in the region 900-1500 cm-1, where the spectral peaks at 1138-1158 cm-1 are split in cis-RvL compared to a single peak at 1165 cm-1 in trans-RvL. In the Raman spectra, there is significant enhancement for cis-RvL in the region above 3000 cm-1. Further, the carbon chemical environment (13C NMR) exhibits a larger chemical shift for cis-RvL than for trans-RvL (Δδ = 8.18 ppm) at the carbon atom C(11), indicating that the chemical environment of this carbon in cis-RvL is more diverse than in the other isomer. The energy gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) is 3.95 eV for trans- and 4.35 eV for cis-RvL. A more detailed inspection using the recently developed EOES revealed that most of the large energy differences in their orbitals, i.e., Δε(cis-trans) > ±0.30 eV, come from the outer valence shell: MO60 (HOMO), MO52-55 and MO46. The active sites captured by the Fukui function (f+ > 0.08) are associated with the stilbene C=C bond of RvL, and cis-RvL is more active at these sites than trans-RvL, as the cis orientation breaks the large conjugation of trans-RvL so that the hydroxyl oxygens are more active in cis-RvL. Finally, EDA highlights the interaction energy (ΔEint) of the phenolic compound, where the trans isomer is preferred over cis-RvL (ΔΔEi = -4.35 kcal/mol). These quantum mechanical results could thus help in unraveling the diverse beneficial activities associated with resveratrol.

Keywords: resveratrol, FT-IR, Raman, NMR, excess orbital energy spectrum, energy decomposition analysis, Fukui function

Procedia PDF Downloads 183
20295 Bivariate Time-to-Event Analysis with Copula-Based Cox Regression

Authors: Duhania O. Mahara, Santi W. Purnami, Aulia N. Fitria, Merissa N. Z. Wirontono, Revina Musfiroh, Shofi Andari, Sagiran Sagiran, Estiana Khoirunnisa, Wahyudi Widada

Abstract:

For assessing interventions in numerous disease areas, the use of multiple time-to-event outcomes is common. An individual might experience two different events, giving bivariate time-to-event data; the events may be correlated because they come from the same subject and are also influenced by individual characteristics. The bivariate time-to-event case can be handled by a copula-based bivariate Cox survival model, using the Clayton and Frank copulas to analyze the dependence structure between the events as well as the covariate effects. Applying this method to model recurrent infection events after hemodialysis insertion in chronic kidney disease (CKD) patients, we find from the AIC and BIC values that the Clayton copula model is the best model, with Kendall's tau τ = 0.02.
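A sketch of the Clayton copula and its Kendall's tau link, τ = θ/(θ+2), which lets one read the reported τ = 0.02 as a copula parameter; the simulation uses the standard gamma-frailty construction, a choice of ours rather than the paper's fitting procedure:

```python
import numpy as np
from scipy.stats import kendalltau

tau = 0.02
theta = 2 * tau / (1 - tau)        # Clayton: tau = theta / (theta + 2)
print(f"theta implied by tau = {tau}: {theta:.4f}")

# Sample from a Clayton copula via the gamma-frailty construction
rng = np.random.default_rng(5)
n = 20000
g = rng.gamma(1 / theta, size=n)                   # shared frailty per subject
e1, e2 = rng.exponential(size=n), rng.exponential(size=n)
u1 = (1 + e1 / g) ** (-1 / theta)                  # uniform margins with
u2 = (1 + e2 / g) ** (-1 / theta)                  # Clayton dependence

tau_hat, _ = kendalltau(u1, u2)
print(f"empirical Kendall's tau of the simulated pair: {tau_hat:.3f}")
```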

Keywords: bivariate Cox, bivariate event, copula function, survival copula

Procedia PDF Downloads 62
20293 Testing a Causal Model of Depression Based on Lifestyle Subscale Components with Social Health as Mediator

Authors: Abdolamir Gatezadeh, Jamal Daghaleh

Abstract:

Individuals' lifestyles are an important determinant of their psychological and social health. Recently, especially in developed countries, the relationship between lifestyle and mental illnesses, including depression, has attracted much attention. This study tested a causal model of depression based on lifestyle, with social health as mediator. Methods: The study is basic research in terms of objective and descriptive-field in terms of data collection, within the framework of correlational designs. The population includes all adults in Ahwaz city, from which a randomized, multistage sample of 384 subjects was selected. The data were collected and analyzed using structural equation modeling. Results: Path analysis confirmed the fit of the hypothesized model: lifestyle subscales have a direct effect on depression and, through the mediation of social health, also an indirect effect on depression. Discussion and conclusion: According to the results, the components of lifestyle and social health can be used to explain depression.
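A sketch of a minimal single-mediator analysis of the kind implied above (lifestyle to social health to depression), estimated with two OLS regressions and the product-of-coefficients indirect effect, on simulated data; the study itself used full structural equation modeling:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 384                                       # matches the reported sample size
lifestyle = rng.normal(size=n)
social = 0.5 * lifestyle + rng.normal(scale=0.8, size=n)   # mediator
depression = -0.3 * lifestyle - 0.4 * social + rng.normal(size=n)

# Path a: lifestyle -> social health
a_fit = sm.OLS(social, sm.add_constant(lifestyle)).fit()
# Paths c' and b: lifestyle and social health -> depression
X = sm.add_constant(np.column_stack([lifestyle, social]))
b_fit = sm.OLS(depression, X).fit()

a, c_prime, b = a_fit.params[1], b_fit.params[1], b_fit.params[2]
print(f"direct effect c' = {c_prime:.2f}, indirect effect a*b = {a * b:.2f}")
```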

Keywords: depression, subscales lifestyle, social health, causal model

Procedia PDF Downloads 150