Search results for: complex network approach
18084 Development of Coastal Inundation–Inland and River Flow Interface Module Based on 2D Hydrodynamic Model
Authors: Eun-Taek Sin, Hyun-Ju Jang, Chang Geun Song, Yong-Sik Han
Abstract:
Due to climate change, coastal urban areas repeatedly suffer losses of property and life from flooding. There are three main causes of inland submergence. First, when heavy rain of high intensity occurs, the inland water volume cannot be drained into rivers because of the increase in impervious surfaces caused by land development and because of defects in pumps and storm sewers. Second, river inundation occurs when the water surface level surpasses the top of the levee. Finally, coastal inundation occurs due to rising seawater. However, previous studies ignored the complex mechanism of flooding and showed discrepancies and inadequacies because they linearly summed the individual analysis results. In this study, inland flooding and river inundation were analyzed together with the HDM-2D model. The Petrov-Galerkin stabilizing method and a flux-blocking algorithm were applied to simulate the inland flooding. In addition, sink/source terms with an exponential growth rate attribute were added to the shallow water equations to include the inland flooding analysis module. The applications of the developed model gave satisfactory results and provided accurate predictions in comprehensive flooding analysis. To consider the coastal surge, another module was developed by adding seawater to the existing inland flooding-river inundation coupling module for comprehensive flooding analysis. Based on the combined modules, the Coastal Inundation-Inland and River Flow Interface was simulated by inputting flow rate and depth data in an artificial flume. Accordingly, the model was able to analyze the flood patterns of coastal cities over time. This study is expected to help identify the complex causes of flooding in coastal areas where complex flooding occurs and to assist in analyzing damage to coastal cities. Acknowledgements: This research was supported by a grant 'Development of the Evaluation Technology for Complex Causes of Inundation Vulnerability and the Response Plans in Coastal Urban Areas for Adaptation to Climate Change' [MPSS-NH-2015-77] from the Natural Hazard Mitigation Research Group, Ministry of Public Safety and Security of Korea. Keywords: flooding analysis, river inundation, inland flooding, 2D hydrodynamic model
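As a point of reference for the sink/source treatment described above, a generic 2D shallow water continuity equation with an added inflow term is sketched below in LaTeX; the notation (h, u, v, Q, α) is illustrative and not the authors' exact formulation.

```latex
% Generic 2D shallow water continuity equation with an added source term Q
% (illustrative notation; not the authors' exact formulation)
\frac{\partial h}{\partial t}
+ \frac{\partial (hu)}{\partial x}
+ \frac{\partial (hv)}{\partial y} = Q(x, y, t),
\qquad
Q(x, y, t) = Q_0\, e^{\alpha t}
```

Here h is the water depth, (u, v) the depth-averaged velocity, Q an inflow source (e.g., surcharge from storm sewers or incoming seawater), and α an assumed exponential growth rate parameter.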
Procedia PDF Downloads 362
18083 Personalize E-Learning System Based on Clustering and Sequence Pattern Mining Approach
Authors: H. S. Saini, K. Vijayalakshmi, Rishi Sayal
Abstract:
Network-based education has been growing rapidly in size and quality. Knowledge clustering becomes more important in personalized information retrieval for web learning. A personalized learning service is provided after the learners' knowledge has been classified with clustering. Through automatic analysis of learners' behaviors, partitions of learners with similar knowledge levels and interests may be discovered so as to provide learners with content that best matches their educational needs for collaborative learning. We present a specific mining tool and a recommender engine that we have integrated into the online learning environment to help the teacher carry out the whole e-learning process. We propose to use sequential pattern mining algorithms to discover the paths most used by students and, from this information, to recommend links to new students automatically while they browse the course. We have developed a specific authoring tool to help the teacher apply the whole data mining process. We report on many experiments with real data so as to indicate the quality of using clustering and sequential pattern mining algorithms together for building personalized e-learning systems. Keywords: e-learning, cluster, personalization, sequence, pattern
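To illustrate the kind of path-based recommendation the abstract proposes, the following minimal Python sketch counts which page most often follows the current one in students' navigation logs and recommends it. It is a first-order simplification of sequential pattern mining, and all function and page names are hypothetical, not the authors' tool.

```python
from collections import Counter, defaultdict

def build_transition_counts(sessions):
    """Count how often page b follows page a across all student sessions."""
    counts = defaultdict(Counter)
    for pages in sessions:                      # each session: ordered list of visited pages
        for a, b in zip(pages, pages[1:]):
            counts[a][b] += 1
    return counts

def recommend_next(counts, current_page, k=3):
    """Recommend the k most frequent successors of the current page."""
    return [page for page, _ in counts[current_page].most_common(k)]

# Hypothetical usage with toy navigation logs
sessions = [["intro", "lesson1", "quiz1"], ["intro", "lesson1", "lesson2"], ["intro", "lesson2"]]
counts = build_transition_counts(sessions)
print(recommend_next(counts, "intro"))          # -> ['lesson1', 'lesson2']
```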
Procedia PDF Downloads 428
18082 Multi-Vehicle Detection Using Histogram of Oriented Gradients Features and Adaptive Sliding Window Technique
Authors: Saumya Srivastava, Rina Maiti
Abstract:
In order to achieve a better performance of vehicle detection in a complex environment, we present an efficient approach for a multi-vehicle detection system using an adaptive sliding window technique. For a given frame, image segmentation is carried out to establish the region of interest. Gradient computation followed by thresholding, denoising, and morphological operations is performed to extract the binary search image. Near-region field and far-region field are defined to generate hypotheses using the adaptive sliding window technique on the resultant binary search image. For each vehicle candidate, features are extracted using a histogram of oriented gradients, and a pre-trained support vector machine is applied for hypothesis verification. Later, the Kalman filter is used for tracking the vanishing point. The experimental results show that the method is robust and effective on various roads and driving scenarios. The algorithm was tested on highways and urban roads in India.Keywords: gradient, vehicle detection, histograms of oriented gradients, support vector machine
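For readers unfamiliar with the hypothesis-verification step described above, the following minimal Python sketch extracts a HOG descriptor from a candidate window and verifies it with a linear SVM (scikit-image and scikit-learn); the window size and HOG parameters are assumptions, not the authors' settings.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

WINDOW = (64, 64)   # assumed candidate window size

def hog_features(patch):
    """HOG descriptor for one grayscale candidate patch."""
    patch = resize(patch, WINDOW, anti_aliasing=True)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_verifier(positive_patches, negative_patches):
    """Train a linear SVM on labelled vehicle / non-vehicle patches."""
    X = np.array([hog_features(p) for p in positive_patches + negative_patches])
    y = np.array([1] * len(positive_patches) + [0] * len(negative_patches))
    return LinearSVC().fit(X, y)

def verify(clf, candidate_patch):
    """Return True if the candidate window is classified as a vehicle."""
    return clf.predict([hog_features(candidate_patch)])[0] == 1
```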
Procedia PDF Downloads 124
18081 Thermodynamic Properties of Calcium-Containing DPPA and DPPC Liposomes
Authors: Tamaz Mdzinarashvili, Mariam Khvedelidze, Eka Shekiladze, Salome Chinchaladze, Mariam Mdzinarashvili
Abstract:
The work is about the preparation of calcium-containing 1,2-Dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) and 1,2-Dipalmitoyl-sn-glycero-3-phosphatidic acid (DPPA) and their calorimetric study. In order to prepare these complex liposomes, for the first stage it is necessary for ligands and lipids to directly interact, followed by the addition of pH-buffered water or solvent at temperatures slightly above the liposome phase transition temperature. The resulting mixture is briefly but vigorously shaken and then transformed into liposomes of the desired size using an extruder. Particle sizing and calorimetry were used to evaluate liposome formation. We determined the possible structure of calcium-containing liposomes made by our new technology and determined their thermostability. The paper provides calculations showing how many phospholipid molecules are required to make a 200 nm diameter liposome. Calculations showed that 33x10³ lipid molecules are needed to prepare one DPPA and DPPC liposome. Based on the calorimetric experiments, we determined that the structure of uncomplexed DPPA liposomes is unilaminar (one double layer), while DPPC liposome is a nanoparticle with a multilaminar (multilayer) structure. This was determined by the cooperativity of the heat absorption peak. Calorimetric studies of calcium liposomes made by our technology showed that calcium ions are placed in the multilaminar structure of the DPPC liposome. Calcium ions also formed a complex in the DPPA liposome structure, moreover, calcium made the DPPA liposome multilaminar, since the cooperative narrow heat absorption peak was transformed into a three-peak heat absorption peak. Since both types of liposomes in complex with calcium ions present a multilaminar structure, where the number of lipid heads in one particle is large, the number of calcium ions in one particle will also be increased. That makes it possible to use these nanoparticles as transporters of a large amount of calcium ions in a living organism.Keywords: calcium, liposomes, thermodynamic parameters, calorimetry
Procedia PDF Downloads 37
18080 A Methodological Approach to Digital Engineering Adoption and Implementation for Organizations
Authors: Sadia H. Syeda, Zain H. Malik
Abstract:
As systems continue to become more complex and the interdependencies of processes and sub-systems continue to grow and transform, the need for a comprehensive method of tracking and linking the lifecycle of systems in digital form becomes ever more critical. Digital Engineering (DE) provides an approach to managing an authoritative data source that links, tracks, and updates system data as it evolves and grows throughout the system development lifecycle. DE enables developing, tracking, and sharing system data, models, and other related artifacts in a digital environment accessible to all necessary stakeholders. The DE environment provides an integrated electronic repository that enables traceability between design, engineering, and sustainment artifacts. The primary objective of the DE activities is to develop a set of integrated, coherent, and consistent system models for the program. It is envisioned to provide a collaborative information-sharing environment for various stakeholders, including operational users, acquisition personnel, engineering personnel, and logistics and sustainment personnel. Examining the processes that DE can support in the systems engineering life cycle (SELC) is a primary step in the DE adoption and implementation journey. Through an analysis of the U.S. Department of Defense (DoD) Office of the Secretary of Defense (OSD) Digital Engineering Strategy and its implementation, along with examples of DE implementation by industry and technical organizations, this paper provides descriptions of current DE processes and best practices for implementing DE across an enterprise. This will help identify the capabilities, environment, and infrastructure needed to develop a potential roadmap for implementing DE practices consistent with an organization's business strategy. A capability maturity matrix is provided to assess the organization's DE maturity, emphasizing how all the SELC elements interlink to form a cohesive ecosystem. If implemented, DE can increase efficiency and improve the quality and outcomes of systems engineering processes. Keywords: digital engineering, digital environment, digital maturity model, single source of truth, systems engineering life-cycle
Procedia PDF Downloads 92
18079 Secure Optimized Ingress Filtering in Future Internet Communication
Authors: Bander Alzahrani, Mohammed Alreshoodi
Abstract:
Information-centric networking (ICN) using architectures such as the Publish-Subscribe Internet Technology (PURSUIT) has been proposed as a new networking model that aims at replacing the currently used end-centric networking model of the Internet. This emerging model focuses on what is being exchanged rather than which network entities are exchanging information, which gives control plane functions such as routing and host location the ability to be specified according to the content items. The forwarding plane of the PURSUIT ICN architecture uses a simple and light mechanism based on Bloom filter technologies to forward the packets. Although this forwarding scheme solves many problems of today's Internet, such as the growth of the routing table and scalability issues, it is vulnerable to brute-force attacks, which are a starting point for distributed denial-of-service (DDoS) attacks. In this work, we design and analyze a novel source-routing and information delivery technique that keeps the simplicity of Bloom filter-based forwarding while being able to deter different attacks, such as denial-of-service attacks, at the ingress of the network. To achieve this, special forwarding nodes called Edge-FW are directly attached to end-user nodes and used to perform a security test for maliciously injected random packets at the ingress of the path, preventing possible brute-force attacks at an early stage. In this technique, a core entity of the PURSUIT ICN architecture called the topology manager, which is responsible for finding the shortest path and creating the forwarding identifier (FId), uses a cryptographically secure hash function to create a 64-bit hash, h, over the formed FId for authentication purposes; this hash is included in the packet. Our proposal restricts the attacker from injecting packets carrying random FIds with a high filling factor ρ by optimizing and reducing the maximum allowed filling factor ρm in the network. We optimize the FId to the minimum possible filling factor where ρ ≤ ρm, while it still supports longer delivery trees, so the network scalability is not affected by the chosen ρm. With this scheme, the filling factor of any legitimate FId never exceeds ρm, while the filling factor of illegitimate FIds cannot exceed the chosen small value of ρm. Therefore, injecting a packet containing an FId with a large filling factor, to achieve a higher attack probability, is no longer possible. The preliminary analysis of this proposal indicates that, with the designed scheme, the forwarding function can detect and prevent malicious activities such as DDoS attacks at an early stage and with very high probability. Keywords: forwarding identifier, filling factor, information centric network, topology manager
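A minimal sketch of the kind of ingress check described above is given below in Python: it computes the filling factor of an FId and verifies a 64-bit keyed hash over it. The FId length, the ρm threshold, and the choice of BLAKE2b are assumptions for illustration, not the paper's exact construction.

```python
import hashlib

FID_BITS = 256        # assumed Bloom-filter FId length
RHO_MAX = 0.5         # assumed maximum allowed filling factor (rho_m)

def filling_factor(fid: int) -> float:
    """Fraction of bits set in the forwarding identifier."""
    return bin(fid).count("1") / FID_BITS

def fid_tag(fid: int, key: bytes) -> bytes:
    """64-bit keyed hash over the FId (stand-in for the topology manager's hash h)."""
    return hashlib.blake2b(fid.to_bytes(FID_BITS // 8, "big"), key=key, digest_size=8).digest()

def ingress_check(fid: int, tag: bytes, key: bytes) -> bool:
    """Edge-FW test: reject over-filled or unauthenticated FIds."""
    return filling_factor(fid) <= RHO_MAX and fid_tag(fid, key) == tag
```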
Procedia PDF Downloads 154
18078 Cloud Data Security Using Map/Reduce Implementation of Secret Sharing Schemes
Authors: Sara Ibn El Ahrache, Tajje-eddine Rachidi, Hassan Badir, Abderrahmane Sbihi
Abstract:
Recently, there has been increasing confidence in the favorable use of big data drawn from the huge amount of information deposited in cloud computing systems. Data kept on such systems can be retrieved through the network at the user's convenience. However, the data that users send include private information, and therefore, information leakage from these data is now a major social problem. The use of secret sharing schemes for cloud computing, in which users distribute their data among several servers, has lately proved to be relevant. Notably, in a (k,n) threshold scheme, data security is assured if and only if, throughout the whole life of the secret, the opponent cannot compromise k or more of the n servers. In fact, a number of secret sharing algorithms have been suggested to deal with these security issues. In this paper, we present a MapReduce implementation of Shamir's secret sharing scheme to increase its performance and to achieve optimal security for cloud data. Different tests were run, and they demonstrated the contributions of the proposed approach. These contributions are quite considerable in terms of both security and performance. Keywords: cloud computing, data security, MapReduce, Shamir's secret sharing
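For reference, a minimal serial Python sketch of Shamir's (k, n) scheme, the primitive the paper parallelizes with MapReduce, is shown below; the field modulus and the implementation details are assumptions, not the authors' code.

```python
import random

PRIME = 2**127 - 1  # assumed field modulus (a Mersenne prime)

def split(secret: int, k: int, n: int):
    """Create n shares; any k of them reconstruct the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```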
Procedia PDF Downloads 306
18077 An Agent-Based Modelling Simulation Approach to Calculate Processing Delay of GEO Satellite Payload
Authors: V. Vicente E. Mujica, Gustavo Gonzalez
Abstract:
The global coverage of broadband multimedia and internet-based services in terrestrial-satellite networks demands particular interest from satellite providers in order to enhance services with low latency and high signal quality for diverse users. In particular, the delay of on-board processing is an inherent source of latency in satellite communication that is sometimes neglected in the end-to-end delay of the satellite link. The framework for this paper includes modelling an on-orbit satellite payload using an agent model that can reproduce the properties of processing delays. In essence, a comparison of different spatial interpolation methods is carried out to evaluate physical data obtained by a GEO satellite in order to define a discretization function for determining that delay. Furthermore, the performance of the proposed agent and the development of a delay discretization function are together validated by simulating a hybrid satellite and terrestrial network. Simulation results show high accuracy according to the characteristics of the initial data points of processing delay for Ku band. Keywords: terrestrial-satellite networks, latency, on-orbit satellite payload, simulation
Procedia PDF Downloads 271
18076 NUX: A Lightweight Block Cipher for Security at Wireless Sensor Node Level
Authors: Gaurav Bansod, Swapnil Sutar, Abhijit Patil, Jagdish Patil
Abstract:
This paper proposes an ultra-lightweight cipher, NUX. NUX is a generalized Feistel network. It supports 128/80-bit key lengths and a block length of 64 bits. For the 128-bit key length, NUX needs only 1022 GEs, which is less than all existing cipher designs. The NUX design results in a small footprint area and minimal memory size. This paper presents a security analysis of the NUX cipher design, which shows the cipher's resistance against basic attacks like linear and differential cryptanalysis. An advanced attack, the biclique attack, is also mounted on the NUX cipher design. Two different F functions in the NUX cipher design result in a high-diffusion mechanism that generates a large number of active S-boxes in a minimum number of rounds. The NUX cipher has a total of 31 rounds. The NUX design is well suited for critical applications like smart grids, IoT, and wireless sensor networks, where memory size, footprint area, and power dissipation are the major constraints. Keywords: lightweight cryptography, Feistel cipher, block cipher, IoT, encryption, embedded security, ubiquitous computing
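To illustrate the generalized Feistel structure mentioned above, a minimal two-branch Feistel sketch in Python follows: encryption and decryption reuse the same round function with the key schedule reversed. The toy round function, key schedule, and constants are placeholders and are not NUX's actual F functions or specification.

```python
MASK32 = 0xFFFFFFFF

def toy_F(half: int, round_key: int) -> int:
    """Placeholder round function (NOT the NUX F functions)."""
    return ((half + round_key) & MASK32) ^ ((half << 3 | half >> 29) & MASK32)

def feistel_encrypt(block64: int, round_keys):
    left, right = block64 >> 32, block64 & MASK32
    for rk in round_keys:
        left, right = right, left ^ toy_F(right, rk)
    return (left << 32) | right

def feistel_decrypt(block64: int, round_keys):
    left, right = block64 >> 32, block64 & MASK32
    for rk in reversed(round_keys):
        left, right = right ^ toy_F(left, rk), left
    return (left << 32) | right

keys = [0x1A2B3C4D + i for i in range(31)]     # 31 illustrative round keys
ct = feistel_encrypt(0x0123456789ABCDEF, keys)
assert feistel_decrypt(ct, keys) == 0x0123456789ABCDEF
```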
Procedia PDF Downloads 373
18075 Arc Interruption Design for DC High Current/Low SC Fuses via Simulation
Authors: Ali Kadivar, Kaveh Niayesh
Abstract:
This report summarizes a simulation-based approach to estimating the current interruption behavior of a fuse element utilized in a DC network protecting battery banks under different stresses. Due to the internal resistance of the batteries, the short-circuit current is very close to the nominal current, which makes the fuse design tricky. The base configuration considered in this report consists of five fuse units in parallel. The simulations are performed using a multi-physics software package, COMSOL® 5.6, and the necessary material parameters have been calculated using two other software packages. The first phase of the simulation starts with the heating of the fuse elements resulting from the current flow through the fusing element. In this phase, the heat transfer between the metallic strip and the adjacent materials results in melting and evaporation of the filler and housing before the aluminum strip is evaporated and the current flow in the evaporated strip is cut off, or an arc is eventually initiated. The initiated arc starts to expand, so the entire metallic strip is ablated, and a long arc of around 20 mm is created within the first 3 milliseconds after arc initiation (v_elongation = 6.6 m/s). The final stage of the simulation is related to the arc simulation and its interaction with the external circuitry. Because of the strong ablation of the filler material and the venting of the arc caused by the melting and evaporation of the filler and housing before an arc initiates, the arc is assumed to burn in almost pure ablated material. To be able to precisely model this arc, one more step related to the derivation of the transport coefficients of the plasma in ablated urethane was necessary. The results indicate that an arc current interruption, in this case, will not be achieved within the first tens of milliseconds. In a further study, considering two series elements, the arc was interrupted within a few milliseconds. A very important aspect in this context is the potential impact of the many broken strips parallel to the one where the arc occurs. The generated arcing voltage is also applied to the other broken strips connected in parallel with the arcing path. As the gap between the other strips is very small, a large voltage of a few hundred volts generated during the current interruption may eventually lead to a breakdown of another gap. As two arcs in parallel are not stable, one of the arcs will extinguish, and the total current will be carried by one single arc again. This process may be repeated several times if the generated voltage is very large. The ultimate result would be that the current interruption may be delayed. Keywords: DC network, high current / low SC fuses, FEM simulation, parallel fuses
Procedia PDF Downloads 66
18074 A Novel Design in the Use of Planar Transformers for LDMOS Based Amplifiers in Bands II, III, DRM+, DVB-T and DAB+
Authors: Antonis Constantinides, Christos Yiallouras, Christakis Damianou
Abstract:
The coaxial transformer-coupled push-pull circuitry has been used widely in HF and VHF amplifiers for many decades without significant changes in the topology of the transformers. Basic changes over the years concerned the construction and turns ratio of the transformers, as imposed by the demands of newer-technology active devices. The balun transmission line transformers applied in push-pull amplifiers enable input/output impedance transformation but are mainly used to convert the balanced output into an unbalanced one and the unbalanced input into a balanced one. A simple and affordable alternative to the traditional coaxial transformer is the coreless planar balun. A key advantage over the traditional approach lies in the high repeatability of specifications, simplifying the amplifier construction requirements, as the planar balun constitutes an integral part of the PCB copper layout. This paper presents the performance analysis of a planar LDMOS MRFE6VP5600 push-pull amplifier that enables robust operation in Band III under the DVB-T and DVB-T2 standards but functions equally well in Band II for DRM+ new-generation transmitters. Keywords: amplifier, balun, complex impedance, LDMOS, planar-transformers
Procedia PDF Downloads 440
18073 The Problem of Suffering: Job, The Servant and Prophet of God
Authors: Barbara Pemberton
Abstract:
Now that people of all faiths are experiencing suffering due to many global issues, shared narratives may provide common ground in which true understanding of each other may take root. This paper will consider the all too common problem of suffering and address how adherents of the three great monotheistic religions seek understanding and the appropriate believer’s response from the same story found within their respective sacred texts. Most scholars from each of these three traditions—Judaism, Christianity, and Islam— consider the writings of the Tanakh/Old Testament to at least contain divine revelation. While they may not agree on the extent of the revelation or the method of its delivery, they do share stories as well as a common desire to glean God’s message for God’s people from the pages of the text. One such shared story is that of Job, the servant of Yahweh--called Ayyub, the prophet of Allah, in the Qur’an. Job is described as a pious, righteous man who loses everything—family, possessions, and health—when his faith is tested. Three friends come to console him. Through it, all Job remains faithful to his God who rewards him by restoring all that was lost. All three hermeneutic communities consider Job to be an archetype of human response to suffering, regarding Job’s response to his situation as exemplary. The story of Job addresses more than the distribution of the evil problem. At stake in the story is Job’s very relationship to his God. Some exegetes believe that Job was adapted into the Jewish milieu by a gifted redactor who used the original ancient tale as the “frame” for the biblical account (chapters 1, 2, and 4:7-17) and then enlarged the story with the complex center section of poetic dialogues creating a complex work with numerous possible interpretations. Within the poetic center, Job goes so far as to question God, a response to which Jews relate, finding strength in dialogue—even in wrestling with God. Muslims only embrace the Job of the biblical narrative frame, as further identified through the Qur’an and the prophetic traditions, considering the center section an errant human addition not representative of a true prophet of Islam. The Qur’anic injunction against questioning God also renders the center theologically suspect. Christians also draw various responses from the story of Job. While many believers may agree with the Islamic perspective of God’s ultimate sovereignty, others would join their Jewish neighbors in questioning God, not anticipating answers but rather an awareness of his presence—peace and hope becoming a reality experienced through the indwelling presence of God’s Holy Spirit. Related questions are as endless as the possible responses. This paper will consider a few of the many Jewish, Christian, and Islamic insights from the ancient story, in hopes adherents within each tradition will use it to better understand the other faiths’ approach to suffering.Keywords: suffering, Job, Qur'an, tanakh
Procedia PDF Downloads 186
18072 Posterior Thigh Compartment Syndrome Associated with Hamstring Avulsion and Antiplatelet Therapy
Authors: Andrea Gatti, Federica Coppotelli, Ma Primavera, Laura Palmieri, Umberto Tarantino
Abstract:
Aim of study: The scientific literature is scarce in studies and reviews evaluating the pros and cons of the paratricipital approach for the treatment of humeral shaft fractures; the lateral paratricipital approach is a valid alternative to the classical posterior approach to the humeral shaft as it preserves both the triceps muscle and the elbow extensor mechanism; based on our experience, this retrospective analysis aims to analyze the outcomes, risks, and benefits of the lateral paratricipital approach for humeral shaft fractures. Methods: Our study includes 14 patients treated between 2018 and 2019 for unilateral humeral shaft fractures: 13 with a B1 or B2 fracture type and one patient with a C fracture type (according to the AO/OTA classification); 6 of our patients identified as male and 8 as female; the average age was 57.8 years (range 21-73 years). A lateral paratricipital approach was performed on all 14 patients, sparing the triceps muscle by avoiding olecranon osteotomy and by assessing the integrity and preservation of the radial nerve; osteosynthesis of the humeral shaft fracture was performed by means of plates and screws. After surgery, all patients started functional elbow rehabilitation with acceptable pain management. Post-operative follow-up was carried out by assessing radiographs, the MEPS (Mayo Elbow Performance Score) and DASH (Disabilities of the Arm, Shoulder and Hand) functional scores, and the ROM of the affected joint. Results: All 14 patients had an optimal post-operative follow-up with adequate osteosynthesis and functional rehabilitation, with the operated elbow joint entirely preserved; the mean elbow ROM was 0-118.6 degrees (range 0-130), while the average MEPS score was 86 (range 75-100) and the average DASH score was 79.9 (range 21.7-86.1). Only 2 patients suffered from temporary radial nerve apraxia, which resolved at subsequent follow-ups. Conclusion: The lateral paratricipital approach preserves both the integrity of the triceps muscle and the elbow biomechanics, but we strongly recommend that additional studies be carried out to highlight differences between it and the classical posterior approach in treating humeral shaft fractures. Keywords: paratricipital approach, humerus shaft fracture, posterior approach humeral shaft, paratricipital postero-lateral approach
Procedia PDF Downloads 129
18071 Online Learning for Modern Business Models: Theoretical Considerations and Algorithms
Authors: Marian Sorin Ionescu, Olivia Negoita, Cosmin Dobrin
Abstract:
This scientific communication reports and discusses learning models adaptable to modern business problems, as well as models specific to digital concepts and paradigms. In the PAC (probably approximately correct) learning model approach, the learning process begins by receiving a batch of learning examples; this set of examples is used to acquire a hypothesis, and once the learning process is complete, the hypothesis is used to predict new operational examples. For complex business models, many models should be introduced and evaluated to estimate the induced results, so that the totality of the results is used to develop a predictive rule, which anticipates the choice of new models. In contrast, for online learning-type processes, there is no separation between the learning (training) and prediction phases. Every time a business model is approached, a test example is considered, from the beginning until the prediction of the appearance of a model considered correct from the point of view of the business decision. After a part of the business model is chosen, the label with the logical value "true" becomes known. Some of the business models are used as learning (training) examples, which helps to improve the prediction mechanisms for future business models. Keywords: machine learning, business models, convex analysis, online learning
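The batch-versus-online distinction discussed above can be made concrete with a minimal Python sketch of an online learner that predicts each example before its label is revealed and then updates immediately; the perceptron-style rule, features, and learning rate are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

class OnlinePerceptron:
    """Minimal online learner: predict first, then update from the revealed label."""
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, x):
        return 1 if self.w @ x >= 0 else 0

    def update(self, x, y_true):
        y_hat = self.predict(x)
        self.w += self.lr * (y_true - y_hat) * x   # no separate training phase

# Hypothetical stream of (features, label) pairs arriving one at a time
stream = [(np.array([1.0, 0.2]), 1), (np.array([0.1, 1.5]), 0), (np.array([0.9, 0.3]), 1)]
model = OnlinePerceptron(n_features=2)
for x, y in stream:
    print(model.predict(x))   # prediction made before the label is known
    model.update(x, y)        # label revealed, model updated immediately
```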
Procedia PDF Downloads 141
18070 Numerical Modelling of Shear Zone and Its Implications on Slope Instability at Letšeng Diamond Open Pit Mine, Lesotho
Authors: M. Ntšolo, D. Kalumba, N. Lefu, G. Letlatsa
Abstract:
Rock mass damage due to shear tectonic activity has been investigated largely in geoscience, where fluid transport is of major interest. However, little has been studied about the effect of shear zones on rock mass behavior and their impact on the stability of rock slopes. At the Letšeng Diamonds open pit mine in Lesotho, a shear zone composed of sheared kimberlite material, calcite, and altered basalt forms part of the haul ramp into cut 3 of the main pit. The alarming rate at which the shear zone is deteriorating has triggered concerns about both the local and global stability of the pit walls. This study presents the numerical modelling of the open pit slope affected by the shear zone at Letšeng Diamond Mine (LDM). Analysis of the slope involved development of the slope model using the two-dimensional finite element code RS2. Interfaces between the shear zone and the host rock were represented by special joint elements incorporated in the finite element code. The analysis of structural geological mapping data provided a good platform to understand the joint network. Major joints, including the shear zone, were incorporated into the model for simulation. This approach proved successful by demonstrating that continuum modelling can be used to evaluate the evolution of stresses, strains, plastic yielding, and failure mechanisms that are consistent with field observations. Structural control due to the geological shear zone structure proved to be important in terms of its location, size, and orientation. Furthermore, the model analyzed slope deformation and the possibility of sliding along shear zone interfaces. This type of approach can predict shear zone deformation and failure mechanisms, so mitigation strategies can be deployed for the safety of human lives and property within mine pits. Keywords: numerical modeling, open pit mine, shear zone, slope stability
Procedia PDF Downloads 299
18069 WHO Surgical Safety Checklist in a Rural Ugandan Hospital, Barriers and Drivers to Implementation
Authors: Lucie Litvack, Malaz Elsaddig, Kevin Jones
Abstract:
There is strong evidence to support the efficacy of the World Health Organization (WHO) Surgical Safety Checklist in improving patient safety; however, its use can be associated with difficulties. This study uses qualitative data collected in Kitovu Healthcare Complex, a rural Ugandan hospital, to identify factors that may influence the use of the checklist in a low-income setting. Potential barriers to and motivators for the hospital’s use of this checklist are identified and explored through observations of current patient safety practices; semi-structured interviews with theatre staff; a focus group with doctors; and trial implementation of the checklist. Barriers identified include the institutional context; knowledge and understanding; patient safety culture; resources and checklist contents. Motivators for correct use include prior knowledge; team attitudes; and a hospital advocate. Challenges are complex and unique to this socioeconomic context. Stepwise change to improve patient safety practices, local champions, whole team training, and checklist modification may assist the implementation and sustainable use of the checklist in an effective way.Keywords: anaesthesia, patient safety, Uganda, WHO surgical safety checklist
Procedia PDF Downloads 356
18068 Artificial Neural Networks Face to Sudden Load Change for Shunt Active Power Filter
Authors: Dehini Rachid, Ferdi Brahim
Abstract:
The shunt active power filter (SAPF) is intended not only to improve the power factor but also to compensate for the unwanted harmonic currents produced by nonlinear loads. This paper presents a SAPF with an identification and control method based on artificial neural networks (ANN). To identify harmonics, many techniques are used, among them the conventional p-q theory and, more recently, the artificial neural network method. It is difficult to obtain satisfactory identification and control characteristics using a standard ANN due to the nonlinearity of the system (SAPF plus fast nonlinear load variations). This work attempts a systematic study of the problem in order to equip the SAPF with an ANN-based harmonic identification and DC-link voltage control method. The latter has been applied to the SAPF under fast nonlinear load variations. The results of computer simulations and experiments are given, which confirm the feasibility of the proposed active power filter. Keywords: artificial neural networks (ANN), p-q theory, harmonics, total harmonic distortion
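As background to the conventional p-q theory mentioned above, the following Python sketch computes the instantaneous real and imaginary powers from Clarke-transformed three-phase voltages and currents; the sign convention and power-invariant scaling are one common choice and not necessarily the authors'.

```python
import numpy as np

def clarke(a, b, c):
    """Power-invariant Clarke (abc -> alpha-beta) transform."""
    alpha = np.sqrt(2 / 3) * (a - 0.5 * b - 0.5 * c)
    beta = np.sqrt(2 / 3) * (np.sqrt(3) / 2) * (b - c)
    return alpha, beta

def instantaneous_powers(v_abc, i_abc):
    """Instantaneous real power p and imaginary power q (one common sign convention)."""
    v_al, v_be = clarke(*v_abc)
    i_al, i_be = clarke(*i_abc)
    p = v_al * i_al + v_be * i_be
    q = v_be * i_al - v_al * i_be
    return p, q
```

In a SAPF, the oscillatory part of p (separated by a low-pass filter, or in this paper by the ANN) is typically used to build the compensating reference currents.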
Procedia PDF Downloads 386
18067 A New Reliability based Channel Allocation Model in Mobile Networks
Authors: Anujendra, Parag Kumar Guha Thakurta
Abstract:
Data transmission between mobile hosts and base stations (BSs) in mobile networks is often vulnerable to failure. Thus, efficient link connectivity, in terms of the services of both the base stations and the communication channels of the network, is required in wireless mobile networks to achieve highly reliable data transmission. In addition, it is observed that the number of blocked hosts increases due to an insufficient number of channels during heavy load in the network. Under such a scenario, the channels must be allocated accordingly to offer reliable communication at any given time. Therefore, a reliability-based channel allocation model with acceptable system performance is proposed as a multi-objective optimization (MOO) problem in this paper. Two conflicting parameters, the resource reuse factor (RRF) and the number of blocked calls, are optimized under a reliability constraint in this problem. The solution to this MOO problem is obtained through NSGA-II (Non-dominated Sorting Genetic Algorithm II). The effectiveness of the proposed model is shown with a set of experimental results. Keywords: base station, channel, GA, pareto-optimal, reliability
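NSGA-II ranks candidate allocations by Pareto dominance over the two conflicting objectives; the minimal Python sketch below shows the non-dominated filtering step on hypothetical (RRF, blocked calls) pairs. The encoding of solutions and the toy numbers are assumptions, not the paper's experimental setup.

```python
def dominates(a, b):
    """a, b are (rrf, blocked_calls); maximize RRF, minimize blocked calls."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(solutions):
    """Return solutions not dominated by any other (the first NSGA-II front)."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical candidate allocations as (RRF, number of blocked calls)
candidates = [(0.8, 12), (0.7, 5), (0.9, 20), (0.6, 5), (0.85, 11)]
print(pareto_front(candidates))   # -> [(0.7, 5), (0.9, 20), (0.85, 11)]
```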
Procedia PDF Downloads 408
18066 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine-learning-assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; yet many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs, the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and bond angles. The key to the accuracy in multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO. Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
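The Δ-ML strategy referenced above can be illustrated with a minimal Python sketch in which a generic regressor (standing in for the paper's GCN) learns the difference between low- and high-fidelity outputs and adds it back to the cheap result at prediction time; the descriptors, model choice, and synthetic data are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_delta_model(X_train, y_low_train, y_high_train):
    """Learn the correction y_high - y_low (a stand-in for the paper's GCN)."""
    delta = y_high_train - y_low_train
    return GradientBoostingRegressor().fit(X_train, delta)

def predict_high_fidelity(model, X, y_low):
    """Delta-ML prediction: low-fidelity result plus learned correction."""
    return y_low + model.predict(X)

# Hypothetical data: 200 molecules with 16 descriptors each
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y_low = X @ rng.normal(size=16)                 # cheap result (e.g. DFT-level)
y_high = y_low + 0.1 * np.sin(X[:, 0])          # systematic correction to be learned
model = fit_delta_model(X[:150], y_low[:150], y_high[:150])
print(predict_high_fidelity(model, X[150:], y_low[150:])[:3])
```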
Procedia PDF Downloads 41
18065 Clinical Pathway for Postoperative Organ Transplants
Authors: Tahsien Okasha
Abstract:
Transplantation medicine is one of the most challenging and complex areas of modern medicine. One of the key problems for medical management is transplant rejection, during which the body mounts an immune response to the transplanted organ, possibly leading to transplant failure and the need to immediately remove the organ from the recipient. When possible, transplant rejection can be reduced through serotyping to determine the most appropriate donor-recipient match and through the use of immunosuppressant drugs. Postoperative care actually begins before the surgery in terms of education, discharge planning, nutrition, pulmonary rehabilitation, and patient/family education. This also allows expectations to be managed. A multidisciplinary approach is the key, and collaborative team meetings are essential to ensuring that all team members are "on the same page." The following clinical pathway map and guidelines aim to decrease variation in clinical practice and are intended for those healthcare professionals who look after organ transplant patients. They are also intended to be useful to both medical and surgical trainees as well as nurse specialists and other associated healthcare professionals involved in the care of organ transplant patients. This pathway is a general pathway that includes guidelines applicable to all types of organ transplant, with special considerations for each organ. Keywords: organ transplant, clinical pathway, postoperative care, same page
Procedia PDF Downloads 437
18064 Optimizing Wind Turbine Blade Geometry for Enhanced Performance and Durability: A Computational Approach
Authors: Nwachukwu Ifeanyi
Abstract:
Wind energy is a vital component of the global renewable energy portfolio, with wind turbines serving as the primary means of harnessing this abundant resource. However, the efficiency and stability of wind turbines remain critical challenges in maximizing energy output and ensuring long-term operational viability. This study proposes a comprehensive approach utilizing computational aerodynamics and aeromechanics to optimize wind turbine performance across multiple objectives. The proposed research aims to integrate advanced computational fluid dynamics (CFD) simulations with structural analysis techniques to enhance the aerodynamic efficiency and mechanical stability of wind turbine blades. By leveraging multi-objective optimization algorithms, the study seeks to simultaneously optimize aerodynamic performance metrics such as lift-to-drag ratio and power coefficient while ensuring structural integrity and minimizing fatigue loads on the turbine components. Furthermore, the investigation will explore the influence of various design parameters, including blade geometry, airfoil profiles, and turbine operating conditions, on the overall performance and stability of wind turbines. Through detailed parametric studies and sensitivity analyses, valuable insights into the complex interplay between aerodynamics and structural dynamics will be gained, facilitating the development of next-generation wind turbine designs. Ultimately, this research endeavours to contribute to the advancement of sustainable energy technologies by providing innovative solutions to enhance the efficiency, reliability, and economic viability of wind power generation systems. The findings have the potential to inform the design and optimization of wind turbines, leading to increased energy output, reduced maintenance costs, and greater environmental benefits in the transition towards a cleaner and more sustainable energy future.Keywords: computation, robotics, mathematics, simulation
Procedia PDF Downloads 58
18063 Defects Estimation of Embedded Systems Components by a Bond Graph Approach
Authors: I. Gahlouz, A. Chellil
Abstract:
The paper concerns the estimation of system component faults by using an unknown input observer. To reach this goal, we used the Bond Graph approach to physical modelling. We showed that this graphical tool allows system component faults to be represented as unknown inputs within the state representation of the considered physical system. The study of the causal and structural features of the system (controllability, observability, finite structure, and infinite structure) based on the Bond Graph approach was therefore carried out in order to design an unknown input observer, which is used for the system component fault estimation. Keywords: estimation, bond graph, controllability, observability
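As a generic illustration of the state representation described above, the LaTeX sketch below shows faults entering as an unknown input d(t) and the usual structure of an unknown input observer; the matrices and gains are illustrative placeholders, not the bond-graph-derived model of the paper.

```latex
% State-space model with faults acting as unknown inputs d(t)
\dot{x}(t) = A\,x(t) + B\,u(t) + F\,d(t), \qquad y(t) = C\,x(t)

% Generic unknown input observer (illustrative structure; the gains N, G, L, E
% are chosen so that the estimation error dynamics are decoupled from d(t))
\dot{z}(t) = N\,z(t) + G\,u(t) + L\,y(t), \qquad \hat{x}(t) = z(t) + E\,y(t)
```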
Procedia PDF Downloads 413
18062 Doris Salcedo: Parameters of Political Commitment in Colombia
Authors: Diana Isabel Torres Silva
Abstract:
Doris Salcedo is the most prominent sculptor Colombia has ever produced and is currently one of the most prestigious Latin American artists in the world. Her artwork, intended as political art, has war as a background, in particular the Colombian civil conflict, and it addresses the way its violence affects victims' lives irreparably. While Salcedo is internationally recognized as a talented and politically committed artist, some Colombian critics consider her artwork propagandist and influenced by the interests of the multinational companies and organizations that fund it. This paper, as part of a more extended research project, attempts to demonstrate that Doris Salcedo's artwork makes visible the victims' suffering and mourning and compels the viewers' sympathy, although its approach is superficial. It does not achieve a complete or complex understanding of the social and historical causes underlying the war, and perhaps because of that it has become a successful commodity for the international art market. The paper considers, firstly, the influence that Colombian Nuevo Teatro, from the sixties, had on Salcedo's early political perspective and, secondly, analyzes in detail the first series of her artwork (1992-1998) and how those works address grieving. The focal point of this analysis is the domestic furniture sculptures, which are the main symbolic element of Salcedo's oeuvre. Keywords: arts and politics, Doris Salcedo, Colombian art, political art
Procedia PDF Downloads 350
18061 A Generic Approach to Reuse Unified Modeling Language Components Following an Agile Process
Authors: Rim Bouhaouel, Naoufel Kraïem, Zuhoor Al Khanjari
Abstract:
Unified Modeling Language (UML) is considered one of the most widespread modeling languages standardized by the Object Management Group (OMG). Therefore, the model-driven engineering (MDE) community attempts to promote the reuse of UML diagrams rather than constructing them from scratch. A UML model emerges according to a specific software development process. Existing model generation methods have focused on different transformation techniques without considering the development process. Our work aims to construct a UML component from fragments of UML diagrams based on an agile method. We define a UML fragment as a portion of a UML diagram that expresses a business target. To guide the generation of fragments of UML models using an agile process, we need a flexible approach that adapts to agile changes and covers all of its activities. We use a software product line (SPL) to derive a fragment of the agile method process. This paper explains our approach, named RECUP, for generating UML fragments following an agile process, and gives an overview of its different aspects. In this paper, we present the approach and define its different phases and artifacts. Keywords: UML, component, fragment, agile, SPL
Procedia PDF Downloads 397
18060 Deep Convolutional Neural Network for Detection of Microaneurysms in Retinal Fundus Images at Early Stage
Authors: Goutam Kumar Ghorai, Sandip Sadhukhan, Arpita Sarkar, Debprasad Sinha, G. Sarkar, Ashis K. Dhara
Abstract:
Diabetes mellitus is one of the most common chronic diseases in all countries, and its prevalence continues to increase significantly. Diabetic retinopathy (DR) is damage to the retina that occurs with long-term diabetes. DR is a major cause of blindness in the Indian population. Therefore, its early diagnosis is of utmost importance for preventing progression towards irreversible loss of vision, particularly in the huge population across rural India. The barriers to eye examination of all diabetic patients are socioeconomic factors, lack of referrals, poor access to the healthcare system, lack of knowledge, an insufficient number of ophthalmologists, and a lack of networking between physicians, diabetologists, and ophthalmologists. A few diabetic patients often visit a healthcare facility for their general checkup, but their eye condition remains largely undetected until the patient is symptomatic. This work focuses on the design and development of a fully automated intelligent decision system for screening retinal fundus images towards detection of the pathophysiology caused by microaneurysms in the early stage of the disease. Automated detection of microaneurysms is a challenging problem due to the variation in color and the variation introduced by the field of view, inhomogeneous illumination, and pathological abnormalities. We have developed a convolutional neural network for efficient detection of microaneurysms. A loss function is also developed to handle the severe class imbalance due to the very small size of microaneurysms compared to the background. The network is able to locate the salient region containing microaneurysms in the case of noisy images captured by non-mydriatic cameras. The ground truth of microaneurysms is created by expert ophthalmologists for the MESSIDOR database as well as a private database collected from Indian patients. The network is trained from scratch using the fundus images of the MESSIDOR database. The proposed method is evaluated on DIARETDB1 and the private database. The method is successful in detecting microaneurysms in dilated and non-dilated fundus images acquired from different medical centres. The proposed algorithm could be used for the development of an affordable and accessible AI-based system to provide service at grassroots-level primary healthcare units spread across the country, catering to the needs of rural people unaware of the severe impact of DR. Keywords: retinal fundus image, deep convolutional neural network, early detection of microaneurysms, screening of diabetic retinopathy
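The class-imbalance issue described above (a handful of microaneurysm pixels against a vast background) is commonly addressed with a weighted or focal cross-entropy; the Python sketch below shows an alpha-balanced focal loss as one such option. It is an illustrative stand-in, not the authors' actual loss function.

```python
import numpy as np

def focal_bce(pred, target, alpha=0.9, gamma=2.0, eps=1e-7):
    """Focal binary cross-entropy: down-weights easy background pixels.
    pred: per-pixel probabilities; target: {0,1} labels.
    alpha weights the rare positive (microaneurysm) class; gamma focuses on hard pixels."""
    pred = np.clip(pred, eps, 1 - eps)
    pos = -alpha * (1 - pred) ** gamma * target * np.log(pred)
    neg = -(1 - alpha) * pred ** gamma * (1 - target) * np.log(1 - pred)
    return (pos + neg).mean()

# Hypothetical example: 1 positive pixel among 10,000 background pixels
target = np.zeros(10_000); target[0] = 1.0
pred = np.full(10_000, 0.01); pred[0] = 0.3
print(focal_bce(pred, target))
```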
Procedia PDF Downloads 142
18059 A Theoretical Approach on Electoral Competition, Lobby Formation and Equilibrium Policy Platforms
Authors: Deepti Kohli, Meeta Keswani Mehra
Abstract:
The paper develops a theoretical model of electoral competition with purely opportunistic candidates and a uni-dimensional policy using the probability voting approach while focusing on the aspect of lobby formation to analyze the inherent complex interactions between centripetal and centrifugal forces and their effects on equilibrium policy platforms. There exist three types of agents, namely, Left-wing, Moderate and Right-wing who comprise of the total voting population. Also, it is assumed that the Left and Right agents are free to initiate a lobby of their choice. If initiated, these lobbies generate donations which in turn can be contributed to one (or both) electoral candidates in order to influence them to implement the lobby’s preferred policy. Four different lobby formation scenarios have been considered: no lobby formation, only Left, only Right and both Left and Right. The equilibrium policy platforms, amount of individual donations by agents to their respective lobbies and the contributions offered to the electoral candidates have been solved for under each of the above four cases. Since it is assumed that the agents cannot coordinate each other’s actions during the lobby formation stage, there exists a probability with which a lobby would be formed, which is also solved for in the model. The results indicate that the policy platforms of the two electoral candidates converge completely under the cases of no lobby and both (extreme) formations but diverge under the cases of only one (Left or Right) lobby formation. This is because in the case of no lobby being formed, only the centripetal forces (emerging from the election-winning aspect) are present while in the case of both extreme (Left-wing and Right-wing) lobbies being formed, centrifugal forces (emerging from the lobby formation aspect) also arise but cancel each other out, again resulting in a pure policy convergence phenomenon. In contrast, in case of only one lobby being formed, both centripetal and centrifugal forces interact strategically, leading the two electoral candidates to choose completely different policy platforms in equilibrium. Additionally, it is found that in equilibrium, while the donation by a specific agent type increases with the formation of both lobbies in comparison to when only one lobby is formed, the probability of implementation of the policy being advocated by that lobby group falls.Keywords: electoral competition, equilibrium policy platforms, lobby formation, opportunistic candidates
Procedia PDF Downloads 333
18058 Optimization of Lean Methodologies in the Textile Industry Using Design of Experiments
Authors: Ahmad Yame, Ahad Ali, Badih Jawad, Daw Al-Werfalli Mohamed Nasser, Sabah Abro
Abstract:
Industries in general produce a lot of waste. The wool textile company Baniwalid, Libya, has many complex problems that have led to enormous waste generated due to the lack of lean strategies, expertise, technical support, and commitment. To successfully address waste at the wool textile company, this study attempts to develop a methodical approach that integrates lean manufacturing tools to optimize performance characteristics such as lead time and delivery. This methodology utilizes Value Stream Mapping (VSM) techniques to identify the process variables that affect production. Once these variables are identified, Design of Experiments (DOE) methodology is used to determine the significantly influential process variables; these variables are then controlled and set at their optimal levels to achieve optimal productivity, quality, agility, efficiency, and delivery, and the outputs of the simulation model are analyzed for different lean configurations. The goal of this research is to investigate how the tools of lean manufacturing can be adapted from the discrete to the continuous manufacturing environment and to evaluate their benefits in a specific industrial setting. Keywords: lean manufacturing, DOE, value stream mapping, textiles
Procedia PDF Downloads 455
18057 Functional to Business Process Orientation in Business Schools
Authors: Sunitha Thappa
Abstract:
The business environment is a set of complex, interdependent dimensions, and corporates have to be ever vigilant in identifying the influential waves. Over the years, the business environment has evolved into a basket of uncertainties. Every organization strives to counter this dynamic nature of the business environment by recurrently evaluating the primary and support activities of its value chain. This has led to companies redesigning their business models and reinventing business processes and operating procedures on an unremitting basis. A few specific issues placed before present-day managers are breaking down the functional interpretation of any challenge that an organization confronts, reducing organizational hierarchy, and tackling the components of the value chain to retain competitive advantage. It is how effectively managers detect changes and swiftly reorient themselves to these changes that defines their success or failure. Given the complexity of decision-making in this dynamic environment, two important questions are placed before the B-schools of today. Firstly, are they grooming and nurturing managerial talent proficient enough to thrive in this multifaceted business environment? Secondly, are the management graduates walking through their portals able to view challenges from a cross-functional perspective, with emphasis on customer and process rather than hierarchy and functions? This paper focuses on the need for a process-oriented approach to management education. Keywords: management education, pedagogy, functional, process
Procedia PDF Downloads 332
18056 The Predictive Value of Serum Bilirubin in the Post-Transplant De Novo Malignancy: A Data Mining Approach
Authors: Nasim Nosoudi, Amir Zadeh, Hunter White, Joshua Conrad, Joon W. Shim
Abstract:
De novo malignancy has become one of the major causes of death after transplantation, so early cancer diagnosis and detection can drastically improve post-transplant survival rates. Most previous work focuses on using artificial intelligence (AI) to predict transplant success or failure outcomes. In this work, we focused on predicting de novo malignancy after liver transplantation using AI. We selected patients who developed malignancy after liver transplantation and had no history of malignancy pre-transplant. Their donors were cancer-free as well. We analyzed 254,200 patient profiles with post-transplant malignancy from the US Organ Procurement and Transplantation Network (OPTN). Several popular data mining methods were applied to the resultant dataset to build predictive models to characterize de novo malignancy after liver transplantation. The recipient's bilirubin, creatinine, weight, gender, number of days on the transplant waiting list, Epstein-Barr virus (EBV) status, international normalized ratio (INR), and ascites are among the most important factors affecting de novo malignancy after liver transplantation. Keywords: de novo malignancy, bilirubin, data mining, transplantation
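To make the data-mining step concrete, the minimal Python sketch below trains a classifier on the kinds of recipient features the abstract names and reports discrimination and feature importances; the file name, column names, and model choice are hypothetical and do not reflect the OPTN schema or the authors' pipeline.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical feature table; column names are illustrative, not the OPTN schema
features = ["bilirubin", "creatinine", "weight", "gender", "days_on_waitlist", "ebv", "inr", "ascites"]
df = pd.read_csv("transplant_cohort.csv")          # assumed pre-extracted cohort file

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["de_novo_malignancy"], test_size=0.2,
    stratify=df["de_novo_malignancy"], random_state=0)

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
print(dict(zip(features, clf.feature_importances_)))   # which factors matter most
```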
Procedia PDF Downloads 105
18055 CFD Simulation of the Inlet Pressure Effects on the Cooling Capacity Enhancement for Vortex Tube with Couple Vortex Chambers
Authors: Nader Pourmahmoud, Amir Hassanzadeh
Abstract:
This article investigates the effects of inlet pressure in a newly introduced vortex tube that has been equipped with an additional vortex chamber. A 3-D compressible turbulent flow computation has been carried out to analyze the complex flow field in this apparatus. Numerical results are derived by utilizing the standard k-ε turbulence model for analyzing the highly rotating, complex flow field. The present research focuses on the cooling effect and gives a characteristic curve for the minimum cold temperature. In addition, the effect of inlet pressure for both chambers has been studied in detail. The presented numerical results show that the inlet pressure of the second chamber plays a more important role in improving the performance of the vortex tube than that of the first one. By increasing the pressure in the second chamber, the cold outlet temperature decreases further. When both chambers are fed with high-pressure fluid, the best operating condition of the vortex tube occurs. However, it is not possible to feed both chambers with high pressure due to the conditions of the working environment. Keywords: energy separation, inlet pressure, numerical simulation, vortex chamber, vortex tube
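For reference, the standard k-ε turbulence model cited above solves two transport equations; the textbook form with the usual model constants is sketched below in LaTeX. This is the generic model form, not the paper's specific boundary conditions or solver settings.

```latex
% Standard k-epsilon model (textbook form, usual constants)
\frac{\partial(\rho k)}{\partial t} + \frac{\partial(\rho k u_j)}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right]
  + P_k - \rho\varepsilon

\frac{\partial(\rho \varepsilon)}{\partial t} + \frac{\partial(\rho \varepsilon u_j)}{\partial x_j}
  = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]
  + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k}

\mu_t = \rho\,C_\mu \frac{k^2}{\varepsilon},\qquad
C_\mu = 0.09,\; C_{1\varepsilon} = 1.44,\; C_{2\varepsilon} = 1.92,\; \sigma_k = 1.0,\; \sigma_\varepsilon = 1.3
```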
Procedia PDF Downloads 371