Search results for: IT/OT convergence
153 Acceptability of the Carers-ID Intervention for Family Carers of People with Intellectual Disabilities
Authors: Mark Linden, Michael Brown, Lynne Marsh, Maria Truesdale, Stuart Todd, Nathan Hughes, Trisha Forbes, Rachel Leonard
Abstract:
Background: Family carers of people with intellectual disabilities (ID) face ongoing challenges in accessing services and often experience poor mental health. Online support programmes may prove effective in addressing the mental health and well-being needs of family carers. This study sought to test the acceptability of a newly developed online support programme for carers of people with intellectual disabilities called Carers-ID. Methods: A sequential mixed-methods explanatory design was utilised. An adapted version of the Acceptability of Health Apps among Adolescents (AHAA) Scale was distributed to family carers who had viewed the Carers-ID.com intervention. Following this, participants were invited to take part in an online interview. Interview questions focused on participants’ experiences of using the programme and its acceptability. Qualitative and quantitative data were analysed separately and then brought together through the triangulation protocol developed by Farmer et al. (2006). Findings: Seventy family carers responded to the acceptability survey, whilst 10 took part in interviews. Six themes were generated from the interviews with family carers. Based on our triangulation, four areas of convergence were identified: programme usability and ease, attitudes towards the programme, perceptions of effectiveness, and programme relatability. Conclusions: To be acceptable, online interventions for carers of people with ID need to be accessible, understandable and easy to use, as carers’ time is precious. Further research is needed to investigate the effectiveness of online interventions for family carers, specifically considering which carers the intervention works for, and for whom it may not.
Keywords: intellectual disability, family carer, acceptability study, online intervention
Procedia PDF Downloads 92
152 Numerical Simulation of Large-Scale Landslide-Generated Impulse Waves With a Soil‒Water Coupling Smooth Particle Hydrodynamics Model
Authors: Can Huang, Xiaoliang Wang, Qingquan Liu
Abstract:
Soil‒water coupling is an important process in landslide-generated impulse wave (LGIW) problems, accompanied by large deformation of soil, strong interface coupling and three-dimensional effects. A meshless particle method, smoothed particle hydrodynamics (SPH), has great advantages in dealing with complex interface and multiphase coupling problems. This study presents an improved soil‒water coupled model to simulate LGIW problems based on the open-source code DualSPHysics (v4.0). To address the low efficiency of modeling real large-scale LGIW problems, graphics processing unit (GPU) acceleration technology is implemented in this code. An experimental example, subaerial landslide-generated water waves, is simulated to demonstrate the accuracy of this model. Then, the Huangtian LGIW, a real large-scale event, is modeled to reproduce the entire disaster chain, including landslide dynamics, fluid‒solid interaction, and surge wave generation. The convergence analysis shows that a particle distance of 5.0 m can provide a converged landslide deposit and surge wave for this example. Numerical simulation results are in good agreement with the limited field survey data. The application example of the Huangtian LGIW provides a typical reference for large-scale LGIW assessments, as it can supply reliable information on landslide dynamics, interface coupling behavior, and surge wave characteristics.
Keywords: soil‒water coupling, landslide-generated impulse wave, large-scale, SPH
Procedia PDF Downloads 64
151 Morpho-Syntactic Pattern in Maithili Urdu
Authors: Mohammad Jahangeer Warsi
Abstract:
This is, perhaps, the first linguistic study of Maithili Urdu, a dialect of the Urdu language of the Indo-Aryan family, spoken by around four million speakers in the Darbhanga, Samastipur, Begusarai, Madhubani, and Muzaffarpur districts of Bihar. It has subject–object–verb (SOV) word order and lacks a script and literature. Needless to say, this work is an attempt to document this dialect so that it may contribute to the field of descriptive linguistics. Besides, it is also spoken by the majority of the Maithili diaspora community. Maithili Urdu does not have its own script or literature, yet it has maintained an oral history over many centuries. It has contributed very profoundly to enriching the Maithili, Hindi and Urdu languages and literatures. Dialects are the contact languages of particular regions, and they have a deep impact on their cultural heritage. Slowly, with time, these dialects begin to take the shape of languages. The convergence of a dialect into a language is a symbol and pride of the people who speak it. Although confined to five districts of northern Bihar, it is highly popular among the natives and is the primary mode of communication of the local Muslims. The paper will focus on the structure of expressions of Maithili Urdu, including the structure of words, phrases, clauses, and sentences. There are clear differences in the linguistic features of Maithili Urdu vis-à-vis Urdu, Maithili and Hindi. Though a dialect of Urdu, it interestingly has only one second-person pronoun, tu, and lacks the agentive marker -ne. Although spoken in the vicinity of Hindi, Urdu and Maithili, it undoubtedly has its own linguistic features; among them, verb conjugation is remarkably distinctive. Because of the oral tradition of this link language, intonation has become significantly prominent.
This paper will discuss the morpho-syntactic pattern of Maithili Urdu and will go through a sample text to authenticate the findings.
Keywords: cultural heritage, morpho-syntactic pattern, Maithili Urdu, verb conjugation
Procedia PDF Downloads 214
150 Optimized Weight Selection of Control Data Based on Quotient Space of Multi-Geometric Features
Authors: Bo Wang
Abstract:
The geometric processing of multi-source remote sensing data using control data of different scales and accuracies is an important research direction for multi-platform earth observation systems. In existing block bundle adjustment methods, control information of a single observation scale and precision cannot be screened out, nor given reasonable and effective corresponding weights, which reduces the convergence and reliability of the adjustment results. Referring to the relevant theory and technology of quotient space, several subjects are researched in this project. A multi-layer quotient space of multi-geometric features is constructed to describe and filter control data. A normalized granularity merging mechanism for multi-layer control information is studied, and, based on the normalized scale factor, a strategy is realized to optimize the weight selection of control data that is less relevant to the adjustment system. At the same time, geometric positioning experiments are conducted using multi-source remote sensing data, aerial images, and multiple classes of control data to verify the theoretical results. This research is expected to move beyond the convention of single-scale, single-accuracy control data in the adjustment process and to expand the theory and technology of photogrammetry, so that the problem of processing multi-source remote sensing data can be addressed both theoretically and practically.
Keywords: multi-source image geometric process, high precision geometric positioning, quotient space of multi-geometric features, optimized weight selection
Procedia PDF Downloads 284
149 Screening Diversity: Artificial Intelligence and Virtual Reality Strategies for Elevating Endangered African Languages in the Film and Television Industry
Authors: Samuel Ntsanwisi
Abstract:
This study investigates the transformative role of Artificial Intelligence (AI) and Virtual Reality (VR) in the preservation of endangered African languages. The study is contextualized within the film and television industry, highlighting disparities in screen representation for certain languages in South Africa and underscoring the need for increased visibility and preservation efforts. With globalization and cultural shifts posing significant threats to linguistic diversity, this research explores approaches to language preservation. By leveraging AI technologies, such as speech recognition, translation, and adaptive learning applications, and integrating VR for immersive and interactive experiences, the study aims to create a framework for teaching and passing on endangered African languages. Through digital documentation, interactive language learning applications, storytelling, and community engagement, the research demonstrates how these technologies can empower communities to revitalize their linguistic heritage. This study employs a dual-method approach, combining a rigorous literature review, which analyses existing research on the convergence of AI, VR, and language preservation, with primary data collection through interviews and surveys with ten filmmakers. The literature review establishes a solid foundation for understanding the current landscape, while the interviews with filmmakers provide crucial real-world insights, enriching the study's depth. This balanced methodology ensures a comprehensive exploration of the intersection between AI, VR, and language preservation, offering both theoretical insights and practical perspectives from industry professionals.
Keywords: language preservation, endangered languages, artificial intelligence, virtual reality, interactive learning
Procedia PDF Downloads 61
148 Real Fictions: Converging Landscapes and Imagination in an English Village
Authors: Edoardo Lomi
Abstract:
A problem of central interest in anthropology concerns the ethnographic displacement of modernity’s conceptual sovereignty over that of native collectives worldwide. Part of this critical project has been the association of Western modernity with a dualist, naturalist ontology. Despite its demonstrated value for comparative work, this association often comes at the cost of reproducing ideas that lack an empirical ethnographic basis. This paper proposes a way forward by bringing to bear some of the results of an ethnographic study of a village in Wiltshire, South England. Due to its picturesque qualities, this village has served for decades as a ready-made set for fantasy movies and a backdrop to fictional stories. These forms of mediation have in turn generated some apparent paradoxes, such as fictitious characters that effect actual material changes, films that become more real than history, and animated stories that, while requiring material grounds to unfold, inhabit a time and space in other respects distinct from that of material processes. Drawing on ongoing fieldwork and interviews with locals and tourists, this paper considers the ways villagers engage with fiction as part of their everyday lives. The resulting image is one of convergence, in the same landscape, of people and things having different ontological status. This study invites reflection on the implications of this image for diversifying our imagery of Western lifeworlds. To this end, the notion of ‘real fictions’ is put forth, connecting the ethnographic blurring of modernist distinctions (such as sign and signified, mind and matter, materiality and immateriality) with discussions of anthropology’s own reliance on fictions for critical comparative work.
Keywords: England, ethnography, landscape, modernity, mediation, ontology, post-structural theory
Procedia PDF Downloads 121
147 IoT-Based Interactive Patient Identification and Safety Management System
Authors: Jonghoon Chun, Insung Kim, Jonghyun Lim, Gun Ro
Abstract:
We believe that it is possible to reduce patient safety incidents by displaying correct medical records and prescription information through interactive patient identification. Our system is based on smart bands worn by patients; these bands communicate with hybrid gateways that understand both the BLE and Wi-Fi communication protocols. Through the convergence of Bluetooth Low Energy (BLE), one of the short-range wireless communication technologies, and hybrid gateway technology, we implement an ‘Intelligent Patient Identification and Location Tracking System’ to prevent the medical errors that frequently occur in medical institutions. Based on big data and IoT technology using MongoDB, smart bands (with BLE and NFC functions) and hybrid gateways, we develop a system that enables two-way communication between medical staff and hospitalized patients and stores the patients' location information at minute-level granularity. Based on the precise information provided by the big data system, such as the location and movement of in-hospital patients wearing smart bands, our findings include the fact that a patient-specific location tracking algorithm can operate the HIS (Hospital Information System) and other related systems more efficiently. Through the system, we can always correctly identify patients using identification tags. In addition, the system automatically determines whether the patient is scheduled for a medical service in the system in use at the medical institution, and presents the appropriateness of the medical treatment together with the medical information (medical record and prescription information) on screen and by voice. This work was supported in part by the Korea Technology and Information Promotion Agency for SMEs (TIPA) grant funded by the Korean Small and Medium Business Administration (No. S2410390).
Keywords: BLE, hybrid gateway, patient identification, IoT, safety management, smart band
Procedia PDF Downloads 311
146 Applications of Evolutionary Optimization Methods in Reinforcement Learning
Authors: Rahul Paul, Kedar Nath Das
Abstract:
The paradigm of Reinforcement Learning (RL) has become prominent in training intelligent agents to make decisions in environments that are both dynamic and uncertain. The primary objective of RL is to optimize the policy of an agent so as to maximize the cumulative reward it receives over a given period. Nevertheless, this optimization presents notable difficulties as a result of the inherent trade-off between exploration and exploitation, the presence of extensive state-action spaces, and the intricate nature of the dynamics involved. Evolutionary Optimization Methods (EOMs) have garnered considerable attention as a supplementary approach to tackle these challenges, providing distinct capabilities for optimizing RL policies and value functions. The ongoing advancement of research in both RL and EOMs presents an opportunity for significant progress in autonomous decision-making systems. The convergence of these two fields has the potential to have a transformative impact on various domains of artificial intelligence (AI) applications. This article highlights the considerable influence of EOMs in enhancing the capabilities of RL. Taking advantage of evolutionary principles enables RL algorithms to effectively traverse extensive action spaces and discover optimal solutions within intricate environments. Moreover, this paper emphasizes the practical implementations of EOMs in the field of RL, specifically in areas such as robotic control, autonomous systems, inventory problems, and multi-agent scenarios. The article also illustrates the utilization of EOMs in enabling RL agents to effectively adapt, evolve, and uncover proficient strategies for complex tasks that may pose challenges for conventional RL approaches.
Keywords: machine learning, reinforcement learning, loss function, optimization techniques, evolutionary optimization methods
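The core idea of the abstract, optimizing a policy by evolutionary principles rather than by gradients, can be sketched with a minimal (1+1) evolution strategy. The quadratic reward below is only a stand-in for an RL episode return; everything here is illustrative, not code from the paper:

```python
import random

def one_plus_one_es(reward, theta, sigma=0.1, iters=300, seed=0):
    """(1+1) evolution strategy: perturb the policy parameters with
    Gaussian noise and keep the offspring only if its reward is at
    least as good as the parent's."""
    rng = random.Random(seed)
    best = reward(theta)
    for _ in range(iters):
        offspring = [t + rng.gauss(0.0, sigma) for t in theta]
        r = reward(offspring)
        if r >= best:
            theta, best = offspring, r
    return theta, best

# Stand-in for episode return: peaks when parameters hit a target policy.
target = [0.5, -0.3]
reward = lambda th: -sum((t - g) ** 2 for t, g in zip(th, target))

theta, best = one_plus_one_es(reward, [0.0, 0.0])
```

Because selection only needs the scalar reward, this kind of search sidesteps credit assignment through long episodes, which is one reason EOMs complement gradient-based RL.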
Procedia PDF Downloads 81
145 Numerical Modeling of Air Shock Wave Generated by Explosive Detonation and Dynamic Response of Structures
Authors: Michał Lidner, Zbigniew Szcześniak
Abstract:
The ability to estimate blast load overpressure properly plays an important role in the safe design of buildings. The issue of blast loading on structural elements has been explored for many years. However, in many literature reports the shock wave overpressure is estimated with a simplified triangular or exponential distribution in time. This introduces errors when comparing the real and numerical responses of elements. Nonetheless, it is possible to approximate the assumed load more closely to the real blast overpressure function versus time. The paper presents a method of numerical analysis of the phenomenon of air shock wave propagation. It uses the Finite Volume Method and takes into account energy losses due to heat transfer, with respect to an adiabatic process rule. A system of three equations (conservation of mass, momentum and energy) describes the flow of a volume of gaseous medium in the area remote from building compartments that could inhibit the movement of gas. For validation, three cases of shock wave flow were analyzed: a free-field explosion, an explosion inside a rigid steel tube (the 1D case) and an explosion inside a rigid cube (the 3D case). The results of the numerical analysis were compared with literature reports. Values of impulse, pressure, and its duration were studied. Finally, an overall good convergence of the numerical results with experiments was achieved, and the most important parameters were well reflected. Additionally, analyses of the dynamic response of one of the considered structural elements were made.
Keywords: adiabatic process, air shock wave, explosive, finite volume method
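The triangular and exponential load shapes the abstract criticizes are commonly replaced by the modified Friedlander waveform, a standard analytical description of the positive phase of blast overpressure. The sketch below (an illustration of that waveform, not the paper's FVM solver) evaluates it and its impulse:

```python
import math

def friedlander(t, p_peak, t_d, b=1.0):
    """Modified Friedlander waveform for the positive phase:
    p(t) = p_peak * (1 - t/t_d) * exp(-b * t/t_d), where p_peak is the
    peak overpressure, t_d the positive-phase duration and b the decay
    coefficient. Zero outside [0, t_d]."""
    if t < 0.0 or t > t_d:
        return 0.0
    return p_peak * (1.0 - t / t_d) * math.exp(-b * t / t_d)

def impulse(p_peak, t_d, b=1.0, n=2000):
    """Positive-phase impulse (time integral of overpressure),
    approximated with the trapezoidal rule."""
    dt = t_d / n
    return sum(
        0.5 * dt * (friedlander(i * dt, p_peak, t_d, b)
                    + friedlander((i + 1) * dt, p_peak, t_d, b))
        for i in range(n)
    )
```

For b = 1 the impulse reduces analytically to p_peak * t_d / e, which makes the waveform easy to calibrate against measured peak pressure, duration, and impulse, the three parameters the abstract says were studied.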
Procedia PDF Downloads 192
144 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models
Authors: Azadeh Jafari, Robert G. Owens
Abstract:
In this study, a geometrical multiscale approach, which couples the 2-D Navier-Stokes equations and constitutive equations with 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. In this study, we introduce a new velocity correction scheme to decouple the velocity computation from the pressure computation. To evaluate the capability of the new scheme, a comparison has been performed between the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure, and those obtained using coupling with the lumped parameter model. Comprehensive studies have been carried out on the sensitivity of the numerical scheme to the initial conditions, elasticity, and number of spectral modes. Improvement of the computational algorithm, with stable convergence, has been demonstrated for at least moderate Weissenberg numbers. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluids problems whose application goes significantly beyond the one addressed in this work.
Keywords: geometrical multiscale models, haemorheology model, coupled 2-D Navier-Stokes/0-D lumped parameter modeling, computational fluid dynamics
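A concrete, deliberately minimal example of the 0-D lumped parameter models referred to above is the two-element Windkessel, which represents the downstream circulation at an outflow boundary by a resistance R and a compliance C. The sketch below integrates it with forward Euler; it is illustrative of the model class, not the authors' coupling scheme:

```python
def windkessel_2element(q_in, dt, R, C, p0=0.0):
    """Forward-Euler integration of the two-element Windkessel ODE
    C * dp/dt = q(t) - p / R, a 0-D stand-in for the vasculature
    downstream of an outflow boundary of the 2-D flow domain."""
    p = p0
    pressures = []
    for q in q_in:
        p += dt * (q - p / R) / C
        pressures.append(p)
    return pressures

# Constant inflow: the pressure relaxes toward the steady state p = R * q,
# which is the boundary pressure the 2-D solver would receive back.
pressures = windkessel_2element([2.0] * 5000, dt=0.01, R=1.5, C=2.0)
```

In a geometrical multiscale simulation, the flow rate q computed at the 2-D outlet feeds the 0-D model each time step, and the resulting pressure p is returned as the outflow boundary condition, replacing the ad hoc Neumann or Dirichlet conditions compared in the abstract.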
Procedia PDF Downloads 361
143 Disrupted or Discounted Cash Flow: Impact of Digitisation on Business Valuation
Authors: Matthias Haerri, Tobias Huettche, Clemens Kustner
Abstract:
This article discusses the impact of digitisation on business valuation. In order to become and remain ‘digital’, investments are necessary whose return on investment (ROI) often remains vague. This uncertainty sits uneasily with a valuation that relies on predictable cash flows, fixed capital structures and a steady state. However, digitisation does not make company valuation impossible; rather, traditional approaches must be reconsidered. The authors identify four areas that are changing: (1) Tools instead of intuition - In the future, company valuation will be neither art nor science, but craft. This requires not intuition, but experience and good tools. Digital valuation tools beyond Excel will therefore gain in importance. (2) Real-time instead of deadline - At present, company valuations are always carried out on a case-by-case basis and for a specific key date. This will change with digitalization and the introduction of web-based valuation tools. Company valuations can thus not only be carried out faster and more efficiently, but can also be offered more frequently. Instead of calculating the value for a previous key date, current and real-time valuations can be carried out. (3) Predictive planning instead of analysis of the past - Past data will still be needed in the future, but its use will not be limited to monovalent time series or key-figure analyses. The images of ‘black swans’ and the ‘turkey illusion’ have made clear that we build forecasts on too few data points from the past and underestimate the power of chance. Predictive planning can help here. (4) Convergence instead of residual value - Digital transformation shortens the lifespan of viable business models. If companies want to live forever, they have to change forever. For company valuation, this means that the business model valid on the valuation date has only a limited service life.
Keywords: business valuation, corporate finance, digitisation, disruption
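Point (4) has a direct arithmetic consequence: discounting an explicit, finite stream of cash flows instead of adding a perpetuity residual value. A hedged sketch with illustrative numbers (not figures from the article):

```python
def dcf_finite(cash_flows, r):
    """Discounted cash flow over an explicit finite horizon, with no
    residual value: the business model has a limited service life.
    cash_flows[0] is assumed to arrive at the end of year 1."""
    return sum(cf / (1.0 + r) ** t for t, cf in enumerate(cash_flows, start=1))

def dcf_perpetuity(cf, r):
    """Classic steady-state comparison: a level perpetuity, cf / r."""
    return cf / r

# The same 100 per year is worth far less once the model's life is capped:
finite_value = dcf_finite([100.0] * 10, r=0.10)   # ten-year lifespan: ~614
going_concern = dcf_perpetuity(100.0, r=0.10)     # steady state forever: 1000
```

The gap between the two figures is exactly the residual value that, in the authors' view, digital disruption calls into question.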
Procedia PDF Downloads 133
142 English Language Proficiency and Use as Determinants of Transactional Success in Gbagi Market, Ibadan, Nigeria
Authors: A. Robbin
Abstract:
Language selection can be an efficient negotiation strategy employed by both providers of services or products and their customers to achieve transactional success. The transactional scenario in Gbagi Market, Ibadan, Nigeria provides an appropriate setting for the exploration of the Nigerian multilingual situation, with its own interesting linguistic peculiarities that question the functionality of the ‘lingua franca’ in trade situations. This study examined English language proficiency among Yoruba traders in Gbagi Market, Ibadan, and its use as a determinant of transactional success during service encounters. Randomly selected Yoruba-English bilingual traders and customers were administered questionnaires, and the data were subjected to statistical and descriptive analysis using Giles’ Communication Accommodation Theory. Findings reveal that only fifty percent of the traders used for the study were proficient in spoken English. Traders with minimal proficiency in Standard English, however, resorted to the use of Nigerian Pidgin English. Both traders and customers select the mother tongue, Yoruba, during service encounters but are quick to converge to the other’s preferred language as the transactional exchange demands. English is selected not so much for its prestige or lingua franca status as for its functions, which include ease of communication, negotiation, and increased sales. The use of English during service encounters is mostly determined by the customer’s linguistic preference, which the trader accommodates for better negotiation, and never as a first choice. This convergence is found to be beneficial, as it ensures sales and return patronage.
Although the English language is not a preferred code choice in Gbagi Market, it serves as a functional trade strategy for transactional success during service encounters in the market.
Keywords: communication accommodation theory, language selection, proficiency, service encounter, transaction
Procedia PDF Downloads 158
141 Disparities Versus Similarities; WHO Good Practices for Pharmaceutical Quality Control Laboratories and ISO/IEC 17025:2017: International Standards for Quality Management Systems in Pharmaceutical Laboratories
Authors: Mercy Okezue, Kari Clase, Stephen Byrn, Paddy Shivanand
Abstract:
Medicines regulatory authorities expect pharmaceutical companies and contract research organizations to seek ways to certify that their laboratory control measurements are reliable. Establishing and maintaining laboratory quality standards are essential to ensuring the accuracy of test results. ‘ISO/IEC 17025:2017’ and the ‘WHO Good Practices for Pharmaceutical Quality Control Laboratories (GPPQCL)’ are two quality standards commonly employed in developing laboratory quality systems. A review was conducted of the two standards to elaborate on areas of convergence and divergence. The goal was to understand how differences in each standard's requirements may influence laboratories' choices as to which document is easier to adopt for quality systems. A qualitative review method compared similar items in the two standards while mapping out areas where there were specific differences in the requirements of the two documents. The review also provided a detailed description of the clauses and parts covering management and technical requirements in these laboratory standards. The review showed that both documents share requirements for over ten critical areas covering objectives, infrastructure, management systems, and laboratory processes. There were, however, differences in emphasis: the GPPQCL stresses system procedures for planning and future budgets that will ensure continuity, whereas ISO 17025 is more focused on a risk management approach to establishing laboratory quality systems. Elements in the two documents form common standard requirements to assure the validity of laboratory test results and promote mutual recognition. The ISO standard currently has more global patronage than the GPPQCL.
Keywords: ISO/IEC 17025:2017, laboratory standards, quality control, WHO GPPQCL
Procedia PDF Downloads 197
140 The Structure of Southern Tunisian Atlas Deformation Front: Integrated Geological and Geophysical Interpretation
Authors: D. Manai, J. Alvarez-Marron, M. Inoubli
Abstract:
The southern Tunisian Atlas is part of the wide Cenozoic intracontinental deformation that affected North Africa as a result of convergence between the African and Eurasian plates. The Southern Tunisian Atlas Front (STAF) corresponds to the chotts area, which covers several hundred km² and represents a 60 km wide transition between the deformed Tunisian Atlas to the north and the undeformed Saharan platform to the south. It includes three morphostructural alignments: a fold and thrust range in the north, a wide depression in the middle, and a monocline to horizontal zone in the south. Four cross-sections have been constructed across the chotts area to illustrate the structure of the Southern Tunisian Atlas Front, based on integrated geological and geophysical data including geological maps, petroleum wells, and seismic data. The fold and thrust zone of the northern chotts is interpreted as related to a detachment level near the Triassic-Jurassic contact. The displacement of the basal thrust seems to die out progressively under the Fejej antiform, and it is responsible for the southward dip of the southern chotts range. The restoration of the cross-sections indicates that the Southern Tunisian Atlas Front is a weakly deformed wide zone developed during the Cenozoic inversion, with a maximum calculated shortening on the order of 1000 m. The wide structure of the STAF was influenced by a pre-existing large thickness of Upper Jurassic-Aptian sediments related to the rifting episodes associated with the evolution of the Tethys in the Maghreb. During the Jurassic to Aptian period, the chotts area corresponded to a highly subsiding basin.
Keywords: Southern Tunisian Atlas Front, subsiding sub-basin, wide deformation, balanced cross-sections
Procedia PDF Downloads 149
139 Optimizing Super Resolution Generative Adversarial Networks for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation and Weight Pruning
Authors: Hussain Sajid, Jung-Hun Shin, Kum-Won Cho
Abstract:
Image super-resolution is one of the most common computer vision problems, with many important applications. Generative adversarial networks (GANs) have promoted remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, the high memory requirements of GAN-based SR models (mainly the generators) lead to performance degradation and increased energy consumption, making them difficult to deploy on resource-constrained devices. To relieve this problem, in this paper we introduce an optimized and highly efficient architecture for the SR-GAN (generator) model by utilizing model compression techniques, namely knowledge distillation and pruning, which work together to reduce the model's storage requirement while improving its performance. Our method begins by distilling knowledge from a large pre-trained model into a lightweight model using different loss functions. Then, iterative weight pruning is applied to the distilled model to remove less significant weights based on their magnitude, resulting in a sparser network. Knowledge distillation reduces the model size by 40%; pruning then reduces it further by 18%. To accelerate the learning process, we employ the Horovod framework for distributed training on a cluster of 2 nodes, each with 8 GPUs, resulting in improved training performance and faster convergence. Experimental results on various benchmarks demonstrate that the proposed compressed model significantly outperforms state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and image quality on x4 super-resolution tasks.
Keywords: single-image super-resolution, generative adversarial networks, knowledge distillation, pruning
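The magnitude-based pruning step can be sketched independently of any deep learning framework. The toy function below zeroes the smallest-magnitude fraction of a flat weight list; it is illustrative only, whereas the paper prunes a full SR-GAN generator:

```python
def prune_by_magnitude(weights, fraction):
    """One pruning iteration: zero out the `fraction` of weights with
    the smallest absolute value (ties at the threshold are also
    zeroed), yielding a sparser layer."""
    k = int(len(weights) * fraction)
    if k == 0:
        return list(weights)
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
sparse = prune_by_magnitude(weights, fraction=0.5)
```

"Iterative" pruning, as in the abstract, means applying such a step repeatedly with fine-tuning between rounds, so the network can recover accuracy before the next slice of weights is removed.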
Procedia PDF Downloads 96
138 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data
Authors: K. Sathishkumar, V. Thiagarasu
Abstract:
Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians understand pathophysiological mechanisms, make diagnoses and prognoses, and choose treatment plans. DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process for revealing natural structures and identifying interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal with gene expression data effectively. Existing algorithms such as Support Vector Machines (SVM), the K-means algorithm, and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations. A performance evaluation of the existing algorithms is carried out to determine the best approach.
In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior and processing time, a hybrid clustering-based optimization approach has been proposed.
Keywords: microarray technology, gene expression data, clustering, gene selection
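One of the clustering baselines named above, K-means, can be sketched in a few lines of numpy. This is an illustrative Lloyd's-algorithm implementation on a (samples × genes) expression matrix, not the survey's evaluated code; the deterministic farthest-point seeding is an assumption made to keep the sketch reproducible.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means on a (samples, genes) expression matrix.

    Centers are seeded deterministically with farthest-point selection,
    then refined by alternating assignment and mean-update steps.
    """
    # Farthest-point seeding: start from the first sample, then repeatedly
    # add the sample farthest from all chosen centers.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # Assign each sample to its nearest center.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(axis=2), axis=1)
        # Move each center to the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

For real microarray data, the same interface would be applied after gene selection, and cluster quality compared against the other algorithms the survey analyzes.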
Procedia PDF Downloads 323
137 Necessary Condition to Utilize Adaptive Control in Wind Turbine Systems to Improve Power System Stability
Authors: Javad Taherahmadi, Mohammad Jafarian, Mohammad Naser Asefi
Abstract:
The global capacity of wind power has dramatically increased in recent years. Therefore, improving the technology of wind turbines to take advantage of this enormous potential in the power grid could be an interesting subject for scientists. The doubly-fed induction generator (DFIG) wind turbine is a popular system due to its many advantages, such as improved power quality, high energy efficiency and controllability. With an increase in wind power penetration in the network, and with regard to the flexible control of wind turbines, the use of wind turbine systems to improve the dynamic stability of power systems has been of significant importance to researchers. Subsynchronous oscillations are one of the important issues in the stability of power systems. Damping subsynchronous oscillations by using wind turbines has been studied in various research efforts, mainly by adding an auxiliary control loop to the control structure of the wind turbine. In most of the studies, this control loop is composed of linear blocks. In this paper, simple adaptive control is used for this purpose. In order to use an adaptive controller, the convergence of the controller should be verified. Since adaptive control parameters tend toward optimum values in order to obtain optimum control performance, using this controller will help the wind turbines to make a positive contribution to damping the network subsynchronous oscillations at different wind speeds and system operating points. In this paper, the application of simple adaptive control in DFIG wind turbine systems to improve the dynamic stability of power systems is studied, and the essential condition for using this controller is considered. It is also shown that this controller has an insignificant effect on the dynamic stability of the wind turbine itself.
Keywords: almost strictly positive real (ASPR), doubly-fed induction generator (DFIG), simple adaptive control (SAC), subsynchronous oscillations, wind turbine
Procedia PDF Downloads 377
136 Punishment on top of Punishment - Impact of Inmate Misconduct
Authors: Nazirah Hassan, Andrew Kendrick
Abstract:
Punishment inside the penal institution has always been practiced in order to maintain discipline and keep order. Nonetheless, criminologists have long argued that the enforcement of discipline by punishing inmates is often ineffective and has a detrimental impact on inmates’ conduct. This paper uses data from a sample of 289 incarcerated young offenders to investigate the prevalence of institutional misconduct. It explores punitive cultural practices inside institutions and how this culture affects the inmates’ conduct during confinement. The project focused on male and female young offenders aged 12 to 21 years old, in eight juvenile justice institutions. The research collected quantitative and qualitative data using a mixed-method approach. All participants completed the Direct and Indirect Prisoner Behaviour Checklist-Scaled Version Revised (DIPC-SCALED-R). In addition, exploratory interviews were carried out with sixteen inmates and eight institutional staff. Results of the questionnaire survey show that almost half of the inmates reported a higher level of involvement in perpetration. They demonstrate a remarkable convergence on direct, rather than indirect, perpetration. Also, inmates reported a higher level of tobacco use and behaviour associated with negative attitudes towards staff and institutional rules. In addition to this, the qualitative data suggest that the punitive culture encourages the onset of misconduct by increasing the stressful and oppressive conditions within the institution. In general, physical exercise and locking up inmates are two forms of punishment that were ubiquitous throughout the institutions. Interestingly, physical exercise is not only enforced by institutional staff but also by inmates. These findings are discussed in terms of existing literature, and their practical implications are considered.
Keywords: institutional punishment, incarcerated young offenders, punitive culture, institutional misconduct
Procedia PDF Downloads 242
135 Finite Element Analysis for Earing Prediction Incorporating the BBC2003 Material Model with Fully Implicit Integration Method: Derivation and Numerical Algorithm
Authors: Sajjad Izadpanah, Seyed Hadi Ghaderi, Morteza Sayah Irani, Mahdi Gerdooei
Abstract:
In this research work, a sophisticated yield criterion known as BBC2003, capable of describing the planar anisotropic behavior of aluminum alloy sheets, was integrated into the commercial finite element code ABAQUS/Standard via a user subroutine. The complete formulation of the implementation process using a fully implicit integration scheme, i.e., the classic backward Euler method, is presented, and relevant aspects of the yield criterion are introduced. In order to solve the nonlinear differential and algebraic equations, a line-search algorithm was adopted in the user-defined material subroutine (UMAT) to expand the convergence domain of the iterative Newton-Raphson method. The developed subroutine was used to simulate a challenging computational problem with complex stress states, i.e., deep drawing of the anisotropic aluminum alloy AA3105. The accuracy and stability of the developed subroutine were confirmed by comparing the numerically predicted earing and thickness variation profiles with the experimental results, which showed excellent agreement. The integration of the BBC2003 yield criterion into ABAQUS/Standard represents a significant contribution to the field of computational mechanics and provides a useful tool for analyzing the mechanical behavior of anisotropic materials subjected to complex loading conditions.
Keywords: BBC2003 yield function, plastic anisotropy, fully implicit integration scheme, line-search algorithm, explicit and implicit integration schemes
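The Newton-Raphson iteration with a backtracking line search, the safeguard described above for enlarging the convergence domain of the local (return-mapping) solve, can be sketched on a small nonlinear system. The test system below is an illustrative stand-in, not the BBC2003 residual equations.

```python
import numpy as np

def newton_line_search(F, J, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson with simple backtracking on the residual norm.

    F: residual function, J: its Jacobian, x0: starting guess.
    The step length is halved until the residual shows sufficient
    decrease, which widens the basin of convergence of plain Newton.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        dx = np.linalg.solve(J(x), -f)
        t = 1.0
        # Backtrack until ||F(x + t dx)|| satisfies a sufficient-decrease test.
        while (np.linalg.norm(F(x + t * dx)) > (1 - 1e-4 * t) * np.linalg.norm(f)
               and t > 1e-8):
            t *= 0.5
        x = x + t * dx
    return x
```

In a UMAT, `F` would be the discretized backward-Euler residual of the stress update and `J` its consistent tangent; here a toy 2-by-2 system stands in for that solve.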
Procedia PDF Downloads 75
134 The Impact of Land Cover Change on Stream Discharges and Water Resources in Luvuvhu River Catchment, Vhembe District, Limpopo Province, South Africa
Authors: P. M. Kundu, L. R. Singo, J. O. Odiyo
Abstract:
The Luvuvhu River catchment in South Africa experiences floods resulting from heavy rainfall of intensities exceeding 15 mm per hour, associated with the Inter-tropical Convergence Zone (ITCZ). The generation of runoff is triggered by the rainfall intensity and soil moisture status. In this study, remote sensing and GIS techniques were used to analyze the hydrologic response to land cover changes. Runoff was calculated as a product of the net precipitation and a curve number coefficient. It was then routed with the Muskingum-Cunge method, using a diffusive wave transfer model that enabled the calculation of response functions between start and end points. Flood frequency analysis was carried out using theoretical probability distributions. Spatial data on land cover were obtained from multi-temporal Landsat images, while data on rainfall, soil type, runoff and stream discharges were obtained by direct measurements in the field and from the Department of Water. A digital elevation model was generated from contour maps available at http://www.ngi.gov.za. The results showed that land cover changes had negatively impacted the hydrology of the catchment. Peak discharges in the whole catchment were noted to have increased by at least 17% over the period, while flood volumes were noted to have increased by at least 11% over the same period. The flood time to peak indicated a decreasing trend, in the range of 0.5 to 1 hour, within the years. The synergism between remotely sensed digital data and GIS for land surface analysis and modeling was realized, and it was therefore concluded that hydrologic modeling has potential for determining the influence of changes in land cover on the hydrologic response of the catchment.
Keywords: catchment, digital elevation model, hydrological model, routing, runoff
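The curve-number runoff step mentioned above can be sketched with the standard SCS curve-number formulation. This is an assumption: the abstract does not state which variant of the method was used, so the classic form with initial abstraction Ia = 0.2S is shown.

```python
def scs_runoff(precip_mm, curve_number):
    """Direct runoff depth (mm) from the SCS curve-number method.

    Q = (P - Ia)^2 / (P - Ia + S), with S the potential maximum
    retention and Ia = 0.2 S the initial abstraction; Q = 0 when
    rainfall does not exceed the initial abstraction.
    """
    s = 25400.0 / curve_number - 254.0   # potential retention [mm]
    ia = 0.2 * s                         # initial abstraction [mm]
    if precip_mm <= ia:
        return 0.0
    return (precip_mm - ia) ** 2 / (precip_mm - ia + s)
```

In the study's workflow, the resulting runoff depths would feed the Muskingum-Cunge routing step to produce hydrographs at the catchment outlet.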
Procedia PDF Downloads 566
133 A Hybrid Block Multistep Method for Direct Numerical Integration of Fourth Order Initial Value Problems
Authors: Adamu S. Salawu, Ibrahim O. Isah
Abstract:
A direct solution to several forms of fourth-order ordinary differential equations is not easily obtained without first reducing them to a system of first-order equations. Thus, numerical methods are being developed, with the underlying techniques in the literature, which seek to approximate some classes of fourth-order initial value problems with admissible error bounds. Multistep methods present the great advantage of ease of implementation, but with the setback of several function evaluations for every stage of implementation. However, hybrid methods conventionally show a slightly higher order of truncation for any k-step linear multistep method, with the possibility of obtaining solutions at off-mesh points within the interval of solution. In the light of the foregoing, we propose the continuous form of a hybrid multistep method with a Chebyshev polynomial as basis function for the numerical integration of fourth-order initial value problems of ordinary differential equations. The basis function is interpolated and collocated at some points on the interval [0, 2] to yield a system of equations, which is solved to obtain the unknowns of the approximating polynomial. The continuous form obtained and its first and second derivatives are evaluated at carefully chosen points to obtain the proposed block method needed to directly approximate fourth-order initial value problems. The method is analyzed for convergence. Implementation of the method is done by conducting numerical experiments on some test problems. The outcome of the implementation suggests that the method performs well on problems with oscillatory or trigonometric terms, since the approximations at several points on the solution domain did not deviate too far from the theoretical solutions.
The method also shows better performance compared with an existing hybrid method when implemented on a larger interval of solution.
Keywords: Chebyshev polynomial, collocation, hybrid multistep method, initial value problems, interpolation
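The collocation step with a Chebyshev basis can be illustrated in isolation: build the Chebyshev Vandermonde system at collocation points on an interval and solve for the coefficients of the approximating polynomial. This toy sketch interpolates a given function on [0, 2]; the paper's full block method additionally imposes derivative (interpolation) conditions from the fourth-order ODE, which are omitted here.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb_collocate(f, deg, a=0.0, b=2.0):
    """Collocate f with a Chebyshev basis at Chebyshev points on [a, b].

    Solves the square collocation system V c = f(x) for the basis
    coefficients c and returns the resulting approximating polynomial.
    """
    # Chebyshev points of the first kind on [-1, 1], mapped to [a, b].
    nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))
    x = 0.5 * (b - a) * (nodes + 1) + a
    V = C.chebvander(nodes, deg)          # collocation (Vandermonde) matrix
    coef = np.linalg.solve(V, f(x))       # unknowns of the approximant
    return lambda t: C.chebval(2 * (t - a) / (b - a) - 1, coef)
```

In the block method, the same idea yields a system whose unknowns are fixed by interpolation and collocation conditions rather than by function values alone.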
Procedia PDF Downloads 123
132 Globally Convergent Sequential Linear Programming for Multi-Material Topology Optimization Using Ordered Solid Isotropic Material with Penalization Interpolation
Authors: Darwin Castillo Huamaní, Francisco A. M. Gomes
Abstract:
The aim of multi-material topology optimization (MTO) is to obtain the optimal topology of structures composed of many materials, according to a given set of constraints and cost criteria. In this work, we seek the optimal distribution of materials in a domain such that the flexibility of the structure is minimized, under certain boundary conditions and the intervention of external forces. In the case where we have only one material, each element of the discretized domain is represented by a binary value: 1 if the element belongs to the structure, or 0 if the element is empty. A common way to avoid the high computational cost of solving integer variable optimization problems is to adopt the Solid Isotropic Material with Penalization (SIMP) method. This method relies on a continuous power-law interpolation function, whose base variable represents a pseudo-density at each point of the domain. For proper exponent values, the SIMP method penalizes intermediate densities, since values other than 0 or 1 usually do not have a physical meaning for the problem. Several extensions of the SIMP method have been proposed for the multi-material case. The one that we explore here is the ordered SIMP method, which has the advantage of not being based on the addition of variables to represent material selection, so the computational cost is independent of the number of materials considered. Although the number of variables is not increased by this algorithm, the optimization subproblems that are generated at each iteration cannot be solved by methods that rely on second derivatives, due to the cost of calculating them. To overcome this, we apply a globally convergent version of the sequential linear programming method, which solves a sequence of linear approximations of the optimization problem.
Keywords: global convergence, multi-material design, ordered SIMP, sequential linear programming, topology optimization
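The power-law interpolation at the heart of SIMP can be written in one line. The sketch below shows the standard single-material form; the ordered SIMP variant discussed in the abstract replaces this with piecewise power functions over material intervals, which is not reproduced here.

```python
def simp_modulus(rho, e0=1.0, emin=1e-9, p=3.0):
    """Power-law SIMP interpolation of Young's modulus.

    E(rho) = Emin + rho^p (E0 - Emin). For p > 1, intermediate
    pseudo-densities contribute disproportionately little stiffness
    per unit material, driving the optimizer toward 0/1 designs.
    The small Emin keeps the stiffness matrix nonsingular in void regions.
    """
    return emin + rho ** p * (e0 - emin)
```

With p = 3, a half-dense element delivers only about 12.5% of the full stiffness, which is what makes intermediate densities uneconomical for the optimizer.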
Procedia PDF Downloads 315
131 Uncanny Orania: White Complicity as the Abject of the Discursive Construction of Racism
Authors: Daphne Fietz
Abstract:
This paper builds on a reflection on an autobiographical experience of uncanniness during fieldwork in the white Afrikaner settlement Orania in South Africa. Drawing on Kristeva’s theory of abjection to establish a theory of Whiteness which is based on boundary threats, it is argued that the uncanny experience as the emergence of the abject points to a moment of crisis of the author’s Whiteness. The emanating abject directs the author to her closeness or convergence with Orania's inhabitants, that is a reciprocity based on mutual Whiteness. The experienced confluence appeals to the author’s White complicity to racism. With recourse to Butler’s theory of subjectivation, the abject, White complicity, inhabits both the outside of a discourse on racism, and of the 'self', as 'I' establish myself in relation to discourse. In this view, the qualities of the experienced abject are linked to the abject of discourse on racism, or, in other words, its frames of intelligibility. It then becomes clear, that discourse on (overt) racism functions as a necessary counter-image through which White morality is established instead of questioned, because here, by White reasoning, the abject of complicity to racism is successfully repressed, curbed, as completely impossible in the binary construction. Hence, such discourse endangers a preservation of racism in its pre-discursive and structural forms as long as its critique does not encompass its own location and performance in discourse. Discourse on overt racism is indispensable to White ignorance as it covers underlying racism and pre-empts further critique. This understanding directs us towards a form of critique which does necessitate self-reflection, uncertainty, and vigilance, which will be referred to as a discourse of relationality. 
Such a discourse diverges from the presumption of a detached author as a point of reference, and instead departs from attachment, dependence, mutuality and embraces the visceral as a resource of knowledge of relationality. A discourse of relationality points to another possibility of White engagement with Whiteness and racism and further promotes a conception of responsibility, which allows for and highlights dispossession and relationality in contrast to single agency and guilt.
Keywords: abjection, discourse, relationality, the visceral, whiteness
Procedia PDF Downloads 158
130 Effects of Aerodynamics on Suspended Cables Using a Non-Linear Finite Element Approach
Authors: Justin Nwabanne, Sam Omenyi, Jeremiah Chukwuneke
Abstract:
This work presents a structural nonlinear static analysis of a horizontal taut cable using the finite element analysis (FEA) method. The FEA was first performed analytically to determine the tensions at each nodal point, and subsequently computationally, based on the finite element displacement method, using the FEA software ANSYS 14.0 to determine the cable's behaviour under the influence of the aerodynamic forces imposed on it. A convergence procedure is adapted into the method to prevent excessive displacements through the computations. The work compared the two FEA cases by examining the effectiveness of the analytical model in describing the response with few degrees of freedom, and the ability of the nonlinear finite element procedure adopted to capture the complex features of cable dynamics with reference to the external aerodynamic influence. Results obtained from this work show that the analytic FEM results without aerodynamic influence exhibit a parabolic response with an optimum deflection at nodal points 12 and 13, with the cable weight at nodes 12 and 13 having the value -1.002936 N, while the cable tension shows an optimum deflection value for nodes 12 and 13 at -189396.97 kg/km. The maximum displacement for the cable system was obtained from ANSYS 14.0 as 4483.83 mm for the X, Y and Z components of displacement at node number 2, while the maximum displacement obtained is 4218.75 mm for all the directional components. The dynamic behaviour of the taut cable investigated has application in a typical power transmission line. Aerodynamic influences on the cables, considered using the FEA approach in ANSYS 14.0, showed a complex modal behaviour, as expected.
Keywords: aerodynamics, cable tension and weight, finite element analysis, nodal, non-linear model, optimum deflection, suspended cable, transmission line
Procedia PDF Downloads 278
129 Fully Eulerian Finite Element Methodology for the Numerical Modeling of the Dynamics of Heart Valves
Authors: Aymen Laadhari
Abstract:
During the last decade, an increasing number of contributions have been made in the fields of scientific computing and numerical methodologies applied to the study of hemodynamics in the heart. In contrast, the numerical aspects concerning the interaction of pulsatile blood flow with highly deformable thin leaflets have been much less explored. This coupled problem remains extremely challenging, and numerical difficulties include, e.g., the resolution of the full fluid-structure interaction problem with large deformations of extremely thin leaflets, substantial mesh deformations, high transvalvular pressure discontinuities, and contact between leaflets. Although the Lagrangian description of the structural motion and strain measures is naturally used, many numerical complexities can arise when studying large deformations of thin structures. Eulerian approaches represent a promising alternative to readily model large deformations and handle contact issues. We present a fully Eulerian finite element methodology tailored for the simulation of pulsatile blood flow in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets. Our method enables the use of a fluid solver on a fixed mesh, whilst being able to easily model the mechanical properties of the valve. We introduce a semi-implicit time integration scheme based on a consistent Newton-Raphson linearization. A variant of the classical Newton method is introduced and guarantees third-order convergence. High-fidelity computational geometries are built and simulations are performed under physiological conditions. We address in detail the main features of the proposed method, and we report several experiments with the aim of illustrating its accuracy and efficiency.
Keywords: Eulerian, level set, Newton, valve
Procedia PDF Downloads 278
128 Partnering with Stakeholders to Secure Digitization of Water
Authors: Sindhu Govardhan, Kenneth G. Crowther
Abstract:
Modernisation of the water sector is leading to increased connectivity and integration of emerging technologies with traditional ones, leading to new security risks. The convergence of Information Technology (IT) with Operational Technology (OT) results in solutions that are spread across larger geographic areas, increasingly consist of interconnected Industrial Internet of Things (IIoT) devices and software, rely on the integration of legacy with modern technologies, and use complex supply chain components, leading to complex architectures and communication paths. The result is that multiple parties collectively own and operate these emergent technologies, threat actors find new paths to exploit, and traditional cybersecurity controls are inadequate. Our approach is to explicitly identify and draw data flows that cross trust boundaries between owners and operators of various aspects of these emerging and interconnected technologies. On these data flows, we layer potential attack vectors to create a frame of reference for evaluating possible risks against connected technologies. Finally, we identify where existing controls, mitigations, and other remediations exist across industry partners (e.g., suppliers, product vendors, integrators, water utilities, and regulators). From these, we are able to understand potential gaps in security, the roles in the supply chain that are most likely to effectively remediate those security gaps, and test cases to evaluate and strengthen security across these partners. This informs a “shared responsibility” solution that recognises that security is multi-layered and requires collaboration to be successful. This shared responsibility security framework improves visibility, understanding, and control across the entire supply chain, and particularly for those water utilities that are accountable for safe and continuous operations.
Keywords: cyber security, shared responsibility, IIoT, threat modelling
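The first step described above, identifying data flows that cross trust boundaries, can be sketched as a small model. All component and zone names below are hypothetical examples, not from the paper; the point is only the boundary-crossing test.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataFlow:
    """A directed data flow between two named components."""
    source: str
    dest: str

def crossing_flows(flows, zone_of):
    """Return the flows whose endpoints sit in different trust zones.

    `zone_of` maps each component to the trust zone of its owner/operator;
    a flow that crosses zones is where attack vectors get layered on next.
    """
    return [f for f in flows if zone_of[f.source] != zone_of[f.dest]]
```

In practice, each returned flow would then be annotated with candidate attack vectors and mapped to the supply-chain role best placed to remediate it.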
Procedia PDF Downloads 77
127 Information Visualization Methods Applied to Nanostructured Biosensors
Authors: Osvaldo N. Oliveira Jr.
Abstract:
The control of molecular architecture inherent in some experimental methods to produce nanostructured films has had great impact on devices of various types, including sensors and biosensors. The self-assembled monolayer (SAM) and electrostatic layer-by-layer (LbL) techniques, for example, are now routinely used to produce tailored architectures for biosensing, where biomolecules are immobilized with long-lasting preserved activity. Enzymes, antigens, antibodies, peptides and many other molecules serve as the molecular recognition elements for detecting an equally wide variety of analytes. The principles of detection are also varied, including electrochemical methods, fluorescence spectroscopy and impedance spectroscopy. In this presentation an overview will be provided of biosensors made with nanostructured films to detect antibodies associated with tropical diseases and HIV, in addition to detection of analytes of medical interest such as cholesterol and triglycerides. Because large amounts of data are generated in the biosensing experiments, use has been made of computational and statistical methods to optimize performance. Multidimensional projection techniques such as Sammon's mapping have been shown to be more efficient than traditional multivariate statistical analysis in identifying small concentrations of anti-HIV antibodies and for distinguishing between blood serum samples of animals infected with two tropical diseases, namely Chagas' disease and Leishmaniasis. Optimization of biosensing may include a combination of another information visualization method, the parallel coordinates technique, with artificial intelligence methods in order to identify the most suitable frequencies for reaching higher sensitivity using impedance spectroscopy.
Also discussed will be the possible convergence of technologies, through which machine learning and other computational methods may be used to treat data from biosensors within an expert system for clinical diagnosis.
Keywords: clinical diagnosis, information visualization, nanostructured films, layer-by-layer technique
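Sammon's mapping, the projection technique mentioned above, minimizes a stress that weights each pairwise distance error by the inverse of the original distance, so small distances (local structure) are preserved preferentially. A bare-bones gradient-descent sketch follows; the classic algorithm uses a Newton-like (pseudo-Newton) update, and the simple initialization from the first two input coordinates is an assumption made here for brevity.

```python
import numpy as np

def sammon(X, dim=2, iters=200, lr=0.02):
    """Sammon's mapping by plain gradient descent on the Sammon stress.

    Returns the low-dimensional layout Y and the stress value recorded
    at every iteration (useful for checking that descent is working).
    """
    n = len(X)
    dstar = np.linalg.norm(X[:, None] - X[None], axis=-1)
    np.fill_diagonal(dstar, 1.0)            # diagonal is never used
    tri = np.triu_indices(n, 1)
    c = dstar[tri].sum()                    # normalizing constant
    Y = np.asarray(X, dtype=float)[:, :dim].copy()
    stresses = []
    for _ in range(iters):
        d = np.linalg.norm(Y[:, None] - Y[None], axis=-1)
        np.fill_diagonal(d, 1.0)
        stresses.append(float(((dstar - d) ** 2 / dstar)[tri].sum() / c))
        # Gradient of the stress with respect to each projected point.
        w = (dstar - d) / (dstar * d)
        np.fill_diagonal(w, 0.0)
        grad = -2.0 / c * (w[:, :, None] * (Y[:, None] - Y[None])).sum(axis=1)
        Y -= lr * grad
    return Y, stresses
```

For biosensor data, X would hold the impedance or spectral features per sample, and clusters in Y would correspond to, e.g., infected versus control sera.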
Procedia PDF Downloads 337
126 Orbit Determination from Two Position Vectors Using Finite Difference Method
Authors: Akhilesh Kumar, Sathyanarayan G., Nirmala S.
Abstract:
A novel approach is developed to determine the orbits of satellites/space objects. The determination of orbits is considered a boundary value problem and has been solved using the finite difference method (FDM). Only the positions of the satellites/space objects are known, at two end times taken as boundary conditions. The technique of finite differences has been used to calculate the orbit between the end times. In this approach, the governing equation is defined as the satellite's equation of motion with a perturbed acceleration. Using the finite difference method, the governing equations and boundary conditions are discretized. The resulting system of algebraic equations is solved using the Tridiagonal Matrix Algorithm (TDMA) until convergence is achieved. Testing and evaluation of this methodology has been done using all GPS satellite orbits from the National Geospatial-Intelligence Agency (NGA) precise product for DOY 125, 2023. Towards this, twelve two-hour sets have been taken into consideration, with only the positions at the end times of each set considered as boundary conditions. This algorithm is applied to all GPS satellites. Results achieved using the FDM were compared with the NGA precise orbits. The maximum RSS error for position is 0.48 m and for velocity is 0.43 mm/sec. The present algorithm has also been applied to the IRNSS satellites for DOY 220, 2023. The maximum RSS error for position is 0.49 m, and for velocity is 0.28 mm/sec. Next, a simulation has been done for a highly elliptical orbit for DOY 63, 2023, for a duration of 6 hours. The RSS of the difference in position is 0.92 m and in velocity is 1.58 mm/sec for orbital speeds of more than 5 km/sec, whereas the RSS of the difference in position is 0.13 m and in velocity is 0.12 mm/sec for orbital speeds less than 5 km/sec. The results show that the newly created method is reliable and accurate.
Further applications of the developed methodology include missile and spacecraft targeting, orbit design (mission planning), space rendezvous and interception, space debris correlation, and navigation solutions.
Keywords: finite difference method, grid generation, NavIC system, orbit perturbation
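The TDMA (Thomas algorithm) solve at the core of the method above can be sketched directly. This is the standard tridiagonal solver applied to a generic system, not the paper's discretized equation of motion.

```python
import numpy as np

def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system A x = d.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    Forward elimination followed by back substitution, O(n).
    """
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In the orbit-determination setting, each sweep of this solver updates one coordinate of the trajectory between the two boundary positions, and the sweeps are repeated until convergence.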
Procedia PDF Downloads 85
125 Factors of Divergence of Shari’Ah Supervisory Opinions and Its Effects on the Harmonization of Islamic Banking Products and Services
Authors: Dlir Abdullah Ahmed
Abstract:
The overall aim of this study is to investigate the effects of differences of opinion among Shari’ah supervisory bodies on the standardization and internationalization of Islamic banking products and services. The study used semi-structured in-depth interviews in which five respondents, Shari’ah advisors from both the Middle East and Malaysia, participated. The data were analyzed by both manual and software techniques. The findings reveal that there are indeed differences of opinion among Shari’ah advisors in different jurisdictions. These differences are due to differences in educational background, schools of thought, the environment in which they operate, and legal requirements. Moreover, the findings also reveal that these differences in opinion among Shari’ah bodies create confusion among the public and bankers, and negatively affect the standardization of Islamic banking transactions. In addition, the study explored the possibility of developing Islamic-based products. However, the findings show that it is difficult for the industry to have Islamic-based products due to high competition from the conventional counterpart, legal constraints and moral hazard. Furthermore, the findings indicate that lack of political will and unity and lack of technology are the main constraints to the internationalization of Islamic banking products. Last but not least, the study found that there is the possibility of convergence of opinions and standardization of Islamic banking products and services if there is a unified international Shari’ah advisory council, international basic requirements for Islamic Shari’ah advisors, and increased training and education of Islamic bankers. This study has several implications for bankers, policymakers and researchers. Policymakers should be able to resolve their political differences and set up a unified international advisory council and an international research and development center.
Bankers should increase training and education of the workforce as well as improve their banking infrastructure to facilitate cross-border transactions.
Keywords: Shari’ah views, Islamic banking, products & services, standardization
Procedia PDF Downloads 69
124 Determining the Extent and Direction of Relief Transformations Caused by Ski Run Construction Using LIDAR Data
Authors: Joanna Fidelus-Orzechowska, Dominika Wronska-Walach, Jaroslaw Cebulski
Abstract:
Mountain areas are very often exposed to numerous transformations connected with the development of tourist infrastructure. In mountain areas in Poland, ski tourism is very popular, so agricultural areas are often transformed into tourist areas. The construction of new ski runs can change the direction and rate of slope development. The main aim of this research was to determine the geomorphological and hydrological changes within slopes caused by ski run construction. The study was conducted in the Remiaszów catchment in the Inner Polish Carpathians (southern Poland). The mean elevation of the catchment is 859 m a.s.l. and the maximum is 946 m a.s.l. The surface area of the catchment is 1.16 km², of which 16.8% is the area of the two studied ski runs. The studied ski runs were constructed in 2014 and 2015. In order to determine the relief transformations connected with new ski run construction, high-resolution LIDAR data were analyzed. The general relief changes in the studied catchment were determined on the basis of ALS (Airborne Laser Scanning) data obtained before (2013) and after (2016) ski run construction. Based on the two sets of ALS data, a digital elevation model of differences (DoD) was created, which made it possible to determine the quantitative relief changes in the entire studied catchment. Additionally, cross and longitudinal profiles were calculated within slopes where new ski runs were built. Detailed data on relief changes within selected test surfaces were obtained based on TLS (Terrestrial Laser Scanning). Hydrological changes within the analyzed catchment were determined based on the convergence and divergence index. The study shows that the construction of the new ski runs caused significant geomorphological and hydrological changes in the entire studied catchment; however, the most important changes were identified within the ski slopes. After the construction of the ski runs, the entire catchment area lowered by about 0.02 m.
Hydrological changes in the studied catchment mainly led to the interruption of surface runoff pathways and changes in runoff direction and geometry.
Keywords: hydrological changes, mountain areas, relief transformations, ski run construction
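The DEM-of-differences step described above reduces to a per-cell subtraction of the two ALS surfaces, usually with changes below a level of detection masked out. A minimal numpy sketch (grid values and the level-of-detection threshold are illustrative, not the study's):

```python
import numpy as np

def dem_of_difference(dem_after, dem_before, lod=0.0):
    """DEM of Differences: per-cell elevation change between two surveys.

    Positive values indicate deposition/raising, negative values erosion/
    lowering; changes smaller than the level of detection `lod` are zeroed.
    """
    dod = np.asarray(dem_after, dtype=float) - np.asarray(dem_before, dtype=float)
    dod[np.abs(dod) < lod] = 0.0
    return dod

def net_volume_change(dod, cell_area):
    """Net volume change (same length unit cubed) over the whole grid."""
    return float(dod.sum() * cell_area)
```

A uniform lowering of 0.02 m across the DoD, as reported for the catchment, would show up directly as the grid mean of the difference surface.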
Procedia PDF Downloads 143