Search results for: delays resulting from two separate causes at the same time
21140 Heat Exchanger Optimization of a Domestic Refrigerator with Separate Cooling Circuits
Authors: Tugba Tosun, Mert Tosun
Abstract:
Cooling system performance and energy consumption in the bypass two-circuit cycle have been studied experimentally to find the optimum evaporator type and geometry, capillary tube diameter, and capillary length. Two types of evaporators, wire-on-tube and finned-tube, were used for the experiments in the fresh food compartment. Capillary tube inner diameters of 0.66 mm and 0.8 mm and total lengths of 3000 mm and 3500 mm were selected as parameters. Experiments were performed at 25 °C ambient temperature while the average temperature of the fresh food compartment was kept at 5 °C and the highest package temperature of the freezer compartment was kept at -18 °C, as defined in the IEC 62552 European standard. The Design of Experiments (DOE) technique, a Six Sigma method, was used to identify the effective parameters of the bypass two-circuit cycle. The experimental results revealed that the most effective parameter of the system is the evaporator type. A finned-tube evaporator with 12 tube passes was found to be the best option for the bypass two-circuit refrigeration cycle among the eight candidate configurations. The optimum cooling performance and the lowest energy consumption were obtained with a 0.66 mm capillary tube inner diameter and a 3500 mm capillary tube length.
Keywords: capillary tube, energy consumption, heat exchanger, refrigerator, separate cooling circuits
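The main-effect screening behind such a DOE study can be sketched as follows: a toy 2^3 factorial with illustrative coded factors and energy figures, not the paper's data.

```python
# Hypothetical 2^3 full-factorial screening. Factors (coded -1/+1):
#   A: evaporator type, B: capillary inner diameter, C: capillary length
# Response: daily energy consumption (kWh/day) -- illustrative values only.
design = [(-1, -1, -1), (1, -1, -1), (-1, 1, -1), (1, 1, -1),
          (-1, -1, 1), (1, -1, 1), (-1, 1, 1), (1, 1, 1)]
energy = [1.10, 0.92, 1.08, 0.95, 1.05, 0.88, 1.02, 0.90]

def main_effect(j):
    # Main effect = mean response at the +1 level minus mean at the -1 level.
    hi = [e for row, e in zip(design, energy) if row[j] == 1]
    lo = [e for row, e in zip(design, energy) if row[j] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(j) for j, name in enumerate("ABC")}
strongest = max(effects, key=lambda k: abs(effects[k]))
```

With these made-up numbers, factor A (evaporator type) dominates, mirroring the kind of conclusion a DOE screening delivers.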
Procedia PDF Downloads 168
21139 Poly(S/DVB)HIPE Filled with Cellulose from Water Hyacinth
Authors: Metinee Kawsomboon, Thanchanok Tulaphol, Manit Nithitanakul, Jitima Preechawong
Abstract:
PolyHIPE is a porous polymeric material produced by polymerization of a high internal phase emulsion (HIPE), which contains 74% internal (dispersed) phase and 26% external (continuous) phase. Typically, polyHIPE is prepared from styrene (S) and divinylbenzene (DVB) and is used in various applications, such as catalyst supports, gas adsorption, separation membranes, and tissue engineering scaffolds, owing to its high specific surface area, high porosity, and ability to adsorb large quantities of liquid. In this research, cellulose from water hyacinth (Eichhornia crassipes), an aquatic plant that grows and spreads rapidly in rivers and waterways in Thailand, was added to polyHIPE to improve its mechanical properties. Addition of unmodified and modified cellulose to poly(S/DVB)HIPE resulted in a decrease in the surface area and thermal stability of the resulting materials. However, the polyHIPEs filled with unmodified and modified cellulose exhibited compressive strength and Young’s modulus higher by 146.3% and 162.5%, respectively, compared with unfilled polyHIPEs. The water adsorption capacity of the filled polyHIPE was also improved.
Keywords: porous polymer, PolyHIPE, cellulose, surface modification, water hyacinth
Procedia PDF Downloads 142
21138 Cost Overruns in Mega Projects: Project Progress Prediction with Probabilistic Methods
Authors: Yasaman Ashrafi, Stephen Kajewski, Annastiina Silvennoinen, Madhav Nepal
Abstract:
Mega projects, whether in construction, urban development, or the energy sector, are among the key drivers that build the foundation of wealth and modern civilization in regions and nations. Such projects require economic justification and substantial capital investment, often derived from individual and corporate investors as well as governments. Cost overruns and time delays in these mega projects demand a new approach to predict project costs more accurately and establish realistic financial plans. The significance of this paper is that it improves the cost efficiency of megaprojects and reduces cost overruns. This research will assist project managers (PMs) in making timely and appropriate decisions about both the cost and the outcomes of ongoing projects. It therefore examines the oil and gas industry, where most mega projects apply the classic methods of the Cost Performance Index (CPI) and Schedule Performance Index (SPI) and rely on project data to forecast cost and time. Because these projects routinely overrun in cost and time even in the early phases, the probabilistic methods of Monte Carlo Simulation (MCS) and Bayesian adaptive forecasting were used to predict project cost at completion. The current theoretical and mathematical models that forecast the total expected cost and the project completion date during the execution phase of an ongoing project will be evaluated. The Earned Value Management (EVM) method is unable to predict cost at completion accurately because of the lack of sufficiently detailed project information, especially in the early phase of a project. During the execution phase, the Bayesian adaptive forecasting method incorporates predictions into the actual performance data from earned value management and revises pre-project cost estimates, making full use of the available information. The outcome of this research is improved accuracy of both cost prediction and final duration. This research will also provide a warning method to identify when current project performance deviates from planned performance and creates an unacceptable gap between preliminary planning and actual performance, supporting project managers in taking corrective actions on time.
Keywords: cost forecasting, earned value management, project control, project management, risk analysis, simulation
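A minimal sketch (not the authors' model) of the Monte Carlo estimate-at-completion idea: sample the cost performance index around its observed value and propagate it through the standard EVM formula EAC = AC + (BAC - EV) / CPI. All figures below are hypothetical.

```python
import random

random.seed(42)

BAC = 1_000.0   # budget at completion (hypothetical, $M)
AC = 400.0      # actual cost to date
EV = 350.0      # earned value to date

def simulate_eac(n=100_000):
    draws = []
    for _ in range(n):
        # Sample a future CPI around the observed EV/AC ratio (assumed spread).
        cpi = max(random.gauss(EV / AC, 0.08), 0.4)  # guard against tiny draws
        draws.append(AC + (BAC - EV) / cpi)
    draws.sort()
    mean = sum(draws) / n
    p80 = draws[int(0.8 * n)]   # 80th-percentile figure for contingency planning
    return mean, p80

mean_eac, p80_eac = simulate_eac()
```

Because the observed CPI is below 1, the simulated estimate at completion exceeds the budget, and the 80th percentile gives a risk-adjusted figure rather than a single point forecast.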
Procedia PDF Downloads 403
21137 Loading and Unloading Scheduling Problem in a Multiple-Multiple Logistics Network: Modelling and Solving
Authors: Yasin Tadayonrad
Abstract:
Most supply chain networks have many nodes, from the suppliers' side up to the customers' side, in which each node sends/receives raw materials/products to/from the other nodes. One of the major concerns in this kind of supply chain network is finding the best schedule for loading/unloading the shipments through the whole network such that all the constraints in the source and destination nodes are met and all the shipments are delivered on time. One of the main constraints in this problem is the loading/unloading capacity of each source/destination node in each time slot (e.g., per week/day/hour). Because of the different characteristics of different products/groups of products, the capacity of each node might differ for each group of products. In most supply chain networks (especially in the fast-moving consumer goods industry), different planners/planning teams work separately in different nodes to determine the loading/unloading timeslots in source/destination nodes to send/receive the shipments. In this paper, a mathematical model is proposed to find the best timeslots for loading/unloading the shipments, minimizing the overall delays subject to the loading/unloading capacity of each node, the required delivery date of each shipment (considering the lead times), and the working days of each node. This model was implemented in Python and solved using Python-MIP on a sample data set. Finally, the idea of a heuristic algorithm is proposed as a way of improving the solution method, which helps to apply the model to larger data sets in real business cases, including more nodes and shipments.
Keywords: supply chain management, transportation, multiple-multiple network, timeslots management, mathematical modeling, mixed integer programming
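The scheduling objective can be illustrated with a toy earliest-due-date heuristic (a simplification, not the paper's MIP model): each shipment takes the earliest loading timeslot with spare capacity at its node, and delay is counted against its due slot. Node names, capacities, and shipments below are hypothetical.

```python
from collections import defaultdict

def schedule(shipments, capacity):
    """shipments: list of (shipment_id, node, due_slot);
    capacity[node][slot] = max loads in that slot. Returns (plan, total delay)."""
    used = defaultdict(int)          # (node, slot) -> loads already booked
    plan, total_delay = {}, 0
    # Earliest-due-date order is the classic delay-reducing dispatch rule.
    for sid, node, due in sorted(shipments, key=lambda s: s[2]):
        slot = 0
        # take the earliest slot that still has spare capacity at this node
        while used[(node, slot)] >= capacity[node].get(slot, 0):
            slot += 1
        used[(node, slot)] += 1
        plan[sid] = slot
        total_delay += max(0, slot - due)
    return plan, total_delay
```

With two shipments due in slot 0 but only one load per slot allowed, one of them slips by a slot; an exact MIP would minimize such slippage globally instead of greedily.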
Procedia PDF Downloads 91
21136 Digital Transformation as the Subject of the Knowledge Model of the Discursive Space
Authors: Rafal Maciag
Abstract:
The development of contemporary civilization calls for suitable models of its pervasive, massive phenomena. One such phenomenon is the digital transformation, which has a substantial number of disciplined, methodical interpretations forming a diversified body of reflection. This reflection can be understood pragmatically as the current, temporally and locally differentiated state of knowledge. The model of the discursive space is proposed for the analysis and description of this knowledge. Discursive space is understood as an autonomous multidimensional space in which separate discourses traverse specific trajectories, which can be presented in a multidimensional parallel-coordinate system. Discursive space, built on the world of facts, preserves the complex character of that world. Digital transformation as a discursive space has a relativistic character, meaning that it is simultaneously created by the dynamic discourses while those discourses are molded by the shape of this space.
Keywords: complexity, digital transformation, discourse, discursive space, knowledge
Procedia PDF Downloads 192
21135 Internet-Of-Things and Ergonomics, Increasing Productivity and Reducing Waste: A Case Study
Authors: V. Jaime Contreras, S. Iliana Nunez, S. Mario Sanchez
Abstract:
Inside a manufacturing facility, we can find innumerable automatic and manual operations, all of which are relevant to the production process. Some of these processes add more value to the product than others. Manual operations tend to add value to the product since they are found in the final assembly area or the final operations of the process, areas where a mistake or accident can increase the cost of waste exponentially. To reduce or mitigate these costly mistakes, one approach is to rely on automation to eliminate the operator from the production line, which requires a hefty investment and the development of specialized machinery. In our approach, the center of the solution is the operator, supported through sufficient and adequate instrumentation, real-time reporting, and ergonomics. Efficiency and reduced cycle time can be achieved through the integration of Internet-of-Things (IoT) ready technologies into assembly operations to enhance the ergonomics of the workstations. Augmented reality visual aids, RFID-triggered personalized workstation dimensions, and real-time data transfer and reporting can help achieve these goals. In this case study, a standard work cell is used for real-life data acquisition, and simulation software extends the data points beyond the test cycle. Three comparison scenarios are run in the work cell, each introducing one dimension of the ergonomics to measure its impact independently. Furthermore, the separate tests determine the limitations of the technology and provide a reference for operating costs and the investment required. With the ability to monitor costs, productivity, cycle time, and scrap/waste in real time, the ROI (return on investment) can be determined at the different levels of integration. This case study helps to show that ergonomics in assembly lines can make a significant impact when IoT technologies are introduced. Ergonomics can effectively reduce waste and increase productivity with minimal investment compared with setting up a custom machine.
Keywords: augmented reality visual aids, ergonomics, real-time data acquisition and reporting, RFID triggered workstation dimensions
Procedia PDF Downloads 214
21134 Structural Health Monitoring and Damage Structural Identification Using Dynamic Response
Authors: Reza Behboodian
Abstract:
Monitoring structural health and diagnosing damage in its early stages have always been topics of concern. Nowadays, research on structural damage detection methods based on vibration analysis is very extensive. Moreover, these methods can be used for permanent and timely inspection of structures and can prevent further damage. Non-destructive methods are low-cost, economical means of determining structural damage. In this research, a non-destructive method is proposed for detecting and identifying the failure location in structures based on dynamic responses obtained from time history analysis. When a structure is damaged, its stiffness is reduced and, under the applied loads, the displacements in different parts of the structure increase. In the proposed method, the damage position is determined by calculating the difference in strain energy of each member between the damaged structure and the healthy structure at any time. Defective members of the structure are indicated by their strain energy relative to the healthy state. The results indicated the proper accuracy and performance of the proposed method for identifying failure in structures.
Keywords: failure, time history analysis, dynamic response, strain energy
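The strain-energy comparison can be illustrated for a one-dimensional chain of spring members (a sketch, not the paper's formulation); all stiffness and displacement values are illustrative.

```python
# Member strain energy for a spring chain: U_i = 0.5 * k_i * (u_{i+1} - u_i)^2.
# The damaged member shows the largest energy change relative to the healthy state.
def member_energies(stiffness, displacements):
    return [0.5 * k * (displacements[i + 1] - displacements[i]) ** 2
            for i, k in enumerate(stiffness)]

k_healthy = [100.0, 100.0, 100.0]
u_healthy = [0.0, 0.01, 0.02, 0.03]
# Damage scenario: member 1 loses 40% stiffness, so it deforms more
# under the same load (hypothetical measured displacements).
k_damaged = [100.0, 60.0, 100.0]
u_damaged = [0.0, 0.01, 0.0267, 0.0367]

dU = [abs(a - b) for a, b in zip(member_energies(k_damaged, u_damaged),
                                 member_energies(k_healthy, u_healthy))]
suspect = dU.index(max(dU))   # index of the member flagged as damaged
```

The undamaged members keep their healthy-state energy, so the strain-energy difference peaks at the damaged member and localizes the failure.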
Procedia PDF Downloads 133
21133 Application of Simulation of Discrete Events in Resource Management of Massive Concreting
Authors: Mohammad Amin Hamedirad, Seyed Javad Vaziri Kang Olyaei
Abstract:
Project planning and control are among the most critical issues in the management of construction projects. Traditional methods of project planning and control, such as the critical path method or the Gantt chart, are not well suited to planning projects with discrete and repetitive activities, and one of the problems of project managers is planning the implementation process and the optimal allocation of its resources. Massive concreting is likewise a project with discrete and repetitive activities. This study uses discrete-event simulation to manage resources, which includes finding the optimal number of resources under various limitations, such as those of machinery, equipment, and human resources, and even technical, time, and implementation constraints, using analysis of the resource consumption rate, project completion time, and critical points of the implementation process. For this purpose, discrete-event simulation has been used to model the different stages of implementation. After reviewing various scenarios, the optimal allocation for each resource is determined to reach the maximum utilization rate and to reduce the project completion time or its cost according to the existing constraints. The results showed that with the optimal allocation of resources, the project completion time could be reduced by 90%, and the resulting costs by up to 49%. Thus, allocating the optimal number of project resources using this method will reduce both its time and its cost.
Keywords: simulation, massive concreting, discrete event simulation, resource management
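The resource trade-off such a discrete-event model explores can be sketched with a minimal event loop: a pool of crews serves a queue of pours, and the completion time is tracked as crews free up. Crew counts and durations below are illustrative, not the study's data.

```python
import heapq

def completion_time(n_pours, n_crews, duration):
    """Toy discrete-event sketch: n_pours identical pours served by n_crews,
    each pour taking `duration` hours; returns the overall finish time."""
    crews = [0.0] * n_crews          # next-free time of each crew
    heapq.heapify(crews)
    finish = 0.0
    for _ in range(n_pours):
        free_at = heapq.heappop(crews)   # earliest-available crew takes the pour
        done = free_at + duration
        heapq.heappush(crews, done)
        finish = max(finish, done)
    return finish
```

Sweeping `n_crews` against completion time (and crew cost) is exactly the scenario comparison the abstract describes, just at toy scale: four crews finish twelve 4-hour pours in 12 hours versus 48 hours for a single crew.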
Procedia PDF Downloads 148
21132 Numerical Simulation and Experimental Validation of the Tire-Road Separation in Quarter-car Model
Authors: Quy Dang Nguyen, Reza Nakhaie Jazar
Abstract:
The paper investigates the vibration dynamics of tire-road separation for a quarter-car model; the separation model is developed to be close to the real situation by allowing the tire to separate from the ground plane. A set of piecewise linear mathematical models is developed, matching the in-contact and no-contact states, to serve as mother models for further investigation. The bounce dynamics are numerically simulated as time responses and phase portraits. The separation analysis may determine which values of the suspension parameters can delay or avoid the no-contact phenomenon, thereby improving ride comfort and eliminating potentially dangerous oscillations. Finally, model verification is carried out in the MSC ADAMS environment.
Keywords: quarter-car vibrations, tire-road separation, separation analysis, separation dynamics, ride comfort, ADAMS validation
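The switching between the in-contact and no-contact "mother models" hinges on a one-sided tire force: the tire spring-damper can push the wheel but cannot pull it back onto the road. A minimal sketch, with hypothetical stiffness and damping values:

```python
def tire_force(k_t, c_t, deflection, deflection_rate):
    """Vertical tire force; positive deflection means the tire is compressed.
    Below zero deflection the tire has left the road: no force is transmitted."""
    if deflection <= 0.0:
        return 0.0                      # separated (no-contact model applies)
    return k_t * deflection + c_t * deflection_rate  # in-contact model

in_contact = tire_force(180_000.0, 50.0, 0.01, 0.0)   # compressed tire
separated = tire_force(180_000.0, 50.0, -0.005, 0.0)  # tire off the ground
```

Integrating the quarter-car equations with this piecewise force is what produces the switching time responses and phase portraits described above.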
Procedia PDF Downloads 92
21131 Investigating the Impact of Task Demand and Duration on Passage of Time Judgements and Duration Estimates
Authors: Jesika A. Walker, Mohammed Aswad, Guy Lacroix, Denis Cousineau
Abstract:
There is a fundamental disconnect between the experience of time passing and the chronometric units by which time is quantified. Specifically, there appears to be no relationship between passage of time judgments (PoTJs) and verbal duration estimates at short durations (e.g., < 2000 milliseconds). When a duration is longer than several minutes, however, evidence suggests that a slower feeling of time passing is predictive of overestimation. Might the length of a task moderate the relation between PoTJs and duration estimates? Similarly, the estimation paradigm (prospective vs. retrospective) and the mental effort demanded by a task (task demand) have both been found to influence duration estimates. However, only a handful of experiments have investigated these effects for tasks of long duration, and the results have been mixed. Thus, might the length of a task also moderate the effects of the estimation paradigm and task demand on duration estimates? To investigate these questions, 273 participants performed either an easy or a difficult visual and memory search task for either eight or 58 minutes, under prospective or retrospective instructions. Afterward, participants provided a duration estimate in minutes, followed by a PoTJ on a Likert scale (1 = very slow, 7 = very fast). A 2 (prospective vs. retrospective) × 2 (eight minutes vs. 58 minutes) × 2 (high vs. low difficulty) between-subjects ANOVA revealed a two-way interaction between task demand and task duration on PoTJs, p = .02. Specifically, time felt faster in the more challenging task, but only in the eight-minute condition, p < .01. Duration estimates were transformed into ratios (estimate/actual duration) to standardize estimates across durations. An ANOVA revealed a two-way interaction between estimation paradigm and task duration, p = .03. Specifically, participants overestimated the task more when given prospective instructions, but only in the eight-minute task. Surprisingly, there was no effect of task difficulty on duration estimates. Thus, the demands of a task may influence the 'feeling of time' and the 'estimation of time' differently, supporting the existing theory that these two forms of time judgment rely on separate underlying cognitive mechanisms. Finally, a significant main effect of task duration was found for both PoTJs and duration estimates (ps < .001). Participants underestimated the 58-minute task (M = 42.5 minutes) and overestimated the eight-minute task (M = 10.7 minutes). Yet they reported the 58-minute task as passing significantly more slowly on the Likert scale (M = 2.5) compared with the eight-minute task (M = 4.1). In fact, a significant correlation was found between PoTJs and duration estimates (r = .27, p < .001). This experiment thus provides evidence for a compensatory effect at longer durations, in which people underestimate a 'slow-feeling' condition and overestimate a 'fast-feeling' condition. The results are discussed in relation to heuristics that might alter the relationship between these two variables when conditions range from several minutes up to almost an hour.
Keywords: duration estimates, long durations, passage of time judgements, task demands
Procedia PDF Downloads 130
21130 A Simulation Study on the Applicability of Overbooking Strategies in Inland Container Transport
Authors: S. Fazi, B. Behdani
Abstract:
The inland transportation of maritime containers entails the use of different modalities whose capacity is typically booked in advance. Containers may miss their scheduled departure time at a terminal for several reasons, such as delays, changes of transport mode, or multiple pending bookings. In those cases, it may be difficult for transport service providers to find last-minute containers to fill the vacant capacity. As in other industries, overbooking could potentially limit these drawbacks, at the cost of a lower service level in case of an actual excess of capacity on overbooked rides. However, the presence of multiple modalities may provide the required flexibility in rescheduling and limit the dissatisfaction of shippers whose containers are overbooked. This flexibility is known as 'synchromodality'. In this paper, we evaluate the application of overbooking via discrete event simulation. Results show that under certain conditions overbooking can significantly increase the profit and utilization of high-capacity means of transport, such as barges and trains. On the other hand, in the case of high penalty costs and limited no-shows, overbooking may lead to excessive use of expensive trucks.
Keywords: discrete event simulation, flexibility, inland shipping, multimodality, overbooking
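The overbooking trade-off can be sketched with a small Monte Carlo model (an illustration, not the paper's simulation): each booked container shows up with some probability, capacity caps the served load, and excess shows incur a penalty such as last-minute trucking. All parameters below are hypothetical.

```python
import random

random.seed(7)

def expected_profit(booked, capacity=10, p_show=0.85, revenue=100.0,
                    penalty=180.0, n=20_000):
    """Average profit of booking `booked` slots on a barge of given capacity."""
    total = 0.0
    for _ in range(n):
        # Each booked container independently shows up with probability p_show.
        shows = sum(random.random() < p_show for _ in range(booked))
        served = min(shows, capacity)
        # Excess shows must be rescheduled/trucked at a penalty per container.
        total += served * revenue - max(0, shows - capacity) * penalty
    return total / n

p10 = expected_profit(10)   # no overbooking
p12 = expected_profit(12)   # moderate overbooking
p20 = expected_profit(20)   # aggressive overbooking
```

Moderate overbooking recovers revenue lost to no-shows, while aggressive overbooking is wiped out by penalties, which is the qualitative pattern the simulation study reports.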
Procedia PDF Downloads 134
21129 Moving Target Defense against Various Attack Models in Time Sensitive Networks
Authors: Johannes Günther
Abstract:
Time Sensitive Networking (TSN), standardized in the IEEE 802.1 family, has received increasing attention in the context of mission-critical systems. Such systems, e.g., in the automotive, aviation, industrial, and smart factory domains, are responsible for coordinating complex functionalities in real time. In many of these contexts, reliable data exchange fulfilling hard time constraints and quality of service (QoS) conditions is of critical importance. TSN standards are able to provide guarantees for deterministic communication behaviour, in contrast to common best-effort approaches. Therefore, the superior QoS guarantees of TSN may aid in the development of new technologies that rely on low latencies and specific bandwidth demands being fulfilled. TSN extends existing Ethernet protocols with numerous standards, providing means for synchronization, management, and overall real-time-focused capabilities. These additional QoS guarantees, as well as management mechanisms, lead to an increased attack surface for potential malicious attackers. As TSN guarantees certain deadlines for priority traffic, an attacker may degrade the QoS by delaying a packet beyond its deadline, or even execute a denial-of-service (DoS) attack if the delays lead to packets being dropped. Thus far, however, security concerns have not played a major role in the design of such standards. So while TSN does provide valuable additional characteristics to existing Ethernet protocols, it introduces new attack vectors and allows for a range of potential attacks. One answer to these security risks is to deploy defense mechanisms according to a moving target defense (MTD) strategy. The core idea relies on reducing the attacker's knowledge about the network. Typically, mission-critical systems suffer from an asymmetric disadvantage: DoS or QoS-degradation attacks may be preceded by long periods of reconnaissance, during which the attacker learns about the network topology, its characteristics, traffic patterns, priorities, bandwidth demands, periodic characteristics of links and switches, and so on. Here, we implemented and tested several MTD-like defense strategies against attacker models of varying capabilities and budgets, as well as collaborative attacks by multiple attackers within a network, all in the context of TSN networks. We modelled the networks and tested our defense strategies on an OMNeT++ testbench, with networks of different sizes and topologies, ranging from a couple dozen hosts and switches to significantly larger set-ups.
Keywords: network security, time sensitive networking, moving target defense, cyber security
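One simple MTD-like strategy of the kind described here is periodic re-randomization of which time slot each critical flow occupies, so that reconnaissance from earlier cycles goes stale. The sketch below is illustrative only; the flow names and slot counts are hypothetical and not taken from the testbench.

```python
import random

def reshuffle_schedule(flows, n_slots, rng):
    """Assign each flow a distinct time slot, freshly randomized per epoch."""
    slots = rng.sample(range(n_slots), len(flows))   # distinct slots per flow
    return dict(zip(flows, slots))

rng = random.Random(1)                    # per-epoch randomness source
flows = ["brake_ctrl", "steer_ctrl", "lidar"]   # hypothetical critical flows
epoch1 = reshuffle_schedule(flows, 8, rng)
epoch2 = reshuffle_schedule(flows, 8, rng)   # next epoch: mapping changes
```

An attacker who mapped a flow to its slot during epoch 1 must re-learn the mapping each epoch, raising the cost of targeted delay attacks at the price of schedule-update overhead.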
Procedia PDF Downloads 73
21128 Performance Comparison of Wideband Covariance Matrix Sparse Representation (W-CMSR) with Other Wideband DOA Estimation Methods
Authors: Sandeep Santosh, O. P. Sahu
Abstract:
In this paper, the performance of the wideband covariance matrix sparse representation (W-CMSR) method is compared with that of other existing wideband direction of arrival (DOA) estimation methods. W-CMSR relies less on a priori information about the number of incident signals than ordinary subspace-based methods. Consider the perturbation-free covariance matrix of the wideband array output. The diagonal covariance elements are contaminated by unknown noise variance. The covariance matrix of the array output is conjugate symmetric, i.e., its upper-right triangular elements can be represented by the lower-left triangular ones. As the main diagonal elements are contaminated by unknown noise variance, we slide over them and align the lower-left triangular elements column by column to obtain a measurement vector. Simulation results for W-CMSR are compared with those of other wideband DOA estimation methods, namely the coherent signal subspace method (CSSM), Capon, l1-SVD, and JLZA-DOA. W-CMSR separates the two signals very clearly, whereas CSSM, Capon, l1-SVD, and JLZA-DOA fail to separate them clearly, and a number of pseudo-peaks appear in the l1-SVD spectrum.
Keywords: W-CMSR, wideband direction of arrival (DOA), covariance matrix, electrical and computer engineering
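The measurement-vector construction described above can be sketched directly (an illustration with a toy matrix, not the authors' code): skip the noise-contaminated main diagonal and stack the strictly lower-left triangular entries column by column.

```python
def measurement_vector(R):
    """Stack the strictly lower-triangular entries of R column by column,
    skipping the noise-contaminated main diagonal."""
    m = len(R)
    return [R[i][j] for j in range(m - 1) for i in range(j + 1, m)]

# Toy 4x4 "covariance" placeholder with R[r][c] = 4r + c, to show the ordering.
R = [[float(4 * r + c) for c in range(4)] for r in range(4)]
y = measurement_vector(R)
```

For an M-element array this yields an M(M-1)/2-length vector; by conjugate symmetry it carries the full off-diagonal covariance information without the unknown noise variance on the diagonal.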
Procedia PDF Downloads 471
21127 Waste Management Option for Bioplastics Alongside Conventional Plastics
Authors: Dan Akesson, Gauthaman Kuzhanthaivelu, Martin Bohlen, Sunil K. Ramamoorthy
Abstract:
Bioplastics can be defined as polymers derived partly or completely from biomass. Bioplastics can be biodegradable, such as polylactic acid (PLA) and polyhydroxyalkanoates (PHA), or non-biodegradable, such as biobased polyethylene (bio-PE), polypropylene (bio-PP), and polyethylene terephthalate (bio-PET). The usage of such bioplastics is expected to increase in the future due to newfound interest in sustainable materials. At the same time, these plastics become a new type of waste in the recycling stream. Most countries do not have separate collection of bioplastics for recycling or composting. After a brief introduction of bioplastics such as PLA in the UK, these plastics were once again replaced by conventional plastics in many establishments due to the lack of commercial composting. Recycling companies fear the contamination of conventional plastics in the recycling stream, saying they would have to invest in expensive new equipment to separate bioplastics and recycle them separately. This project studies what happens when bioplastics contaminate conventional plastics. Three commonly used conventional plastics were selected for this study: polyethylene (PE), polypropylene (PP), and polyethylene terephthalate (PET). To simulate contamination, two biopolymers, either polyhydroxyalkanoate (PHA) or thermoplastic starch (TPS), were blended with the conventional polymers at either 1% or 5%. The blended plastics were processed again to observe the effect of degradation. The contamination results showed that the tensile strength and modulus of PE were almost unaffected, whereas the elongation was clearly reduced, indicating increased brittleness of the plastic. Generally, PP is slightly more sensitive to the contamination than PE. This can be explained by the fact that the melting point of PP is higher than that of PE, so the biopolymer degrades more quickly. However, the reduction of the tensile properties of PP is relatively modest. Impact strength is generally a more sensitive test method for contamination: again, PE is relatively unaffected, but for PP there is a relatively large reduction of the impact properties already at 1% contamination. PET is a polyester and is, by its very nature, more sensitive to degradation than PE and PP. PET also has a much higher melting point than PE and PP, so the biopolymer degrades quickly at the processing temperature of PET. As for tensile strength, PET can tolerate 1% contamination without any reduction; when the impact strength is examined, however, it is clear that already at 1% contamination there is a strong reduction of the properties. The thermal properties show the change in crystallinity. The blends were also characterized by SEM: a biphasic morphology can be seen, as the two polymers are not truly miscible, which also contributes to the reduced mechanical properties. The study shows that PE is relatively robust against contamination, while polypropylene (PP) is sensitive and polyethylene terephthalate (PET) can be quite sensitive to contamination.
Keywords: bioplastics, contamination, recycling, waste management
Procedia PDF Downloads 225
21126 Patterns of Positive Feedback Formation in the System of Online Action
Authors: D. Gvozdikov
Abstract:
The purpose of this study is to describe an online action as a system that combines disjointed events and behavioral chains into a whole. The research focuses on patterns of naturally formed chains of actions united by an orientation towards the online environment. A key characteristic of the system of online action is that it acts as an attractor for separate actions and chains of the behavioral repertoire, accumulating the time and effort invested by users. The article demonstrates how chains of online-offline actions are combined into a single pattern by a simple, identifiable mechanism: a positive feedback system. Using methods of digital ethnography and analyzing the content of the Instagram application and media blogs, the research reveals how, through the positive feedback mechanism, the entire system of online action acquires stability and can expand to involve new spheres of human activity.
Keywords: digital anthropology, internet studies, systems theory, social media
Procedia PDF Downloads 133
21125 Solutions of Thickening the Sludge from the Wastewater Treatment by a Rotor with Bars
Authors: Victorita Radulescu
Abstract:
Introduction: In the second stage, sewage treatment plants consist of tanks whose main purpose is to form suspensions with the highest possible solid concentration. The paper presents a solution for rapid concentration of slurry and sludge, with the main purpose of minimizing the size of the tanks as much as possible. The solution is based on a rotor with bars, tested in two different areas of industrial activity: the remediation of wastewater from the oil industry and, in the last year, the mining industry. Basic methods: A thickening system with vertical bars was designed, built, and tested; it reduces the sludge moisture content from 94% to 87%. The design was based on the hypothesis that the streamlines of the vortices detached from the rotor with vertical bars accelerate, under certain conditions, the sludge thickening. The sludge is moved towards the lateral sides and, in time, becomes sediment. The vortices formed with a vertical axis in the viscous fluid, under the action of lift, drag, weight, and inertia forces, participate in a rapid aggregation of the particles, thus accelerating the sludge concentration. An interdependence appears between the Reynolds number of the vortex flow induced by the vertical bars and the magnitude of the hydraulic compaction phenomenon, which results from an accelerated sedimentation process; the rotor's dimensions are therefore designed according to the physico-chemical characteristics of the resulting sludge. Major findings/results: Based on the experimental measurements, a numerical simulation of the hydraulic rotor was performed to ensure the necessary vortices. The experimental measurements determined the optimal height and density of the bars for the sludge thickening system, so as to keep the tank dimensions as small as possible. The thickening/settling time was reduced by 24% compared with conventionally used systems. At present, thickeners aim to shorten the intermediate stage of water treatment using primary and secondary settling, but they require quite a long time, on the order of 10-15 hours. With this system there are no intermediate steps; thickening occurs automatically once the vortices are created. Conclusions: The experimental tests were carried out in the wastewater treatment plant of the oil refinery at Brazi, near the city of Ploiesti. The results prove its efficiency in reducing the sludge compaction time and the lower humidity of the evacuated sediments. The use of this equipment has now been extended, and it is being tested in the mining industry, with significant results, at the Lupeni mine in the Jiu Valley.
Keywords: experimental tests, hydrodynamic modeling, rotor efficiency, wastewater treatment
Procedia PDF Downloads 11821124 Study on Acoustic Source Detection Performance Improvement of Microphone Array Installed on Drones Using Blind Source Separation
Authors: Youngsun Moon, Yeong-Ju Go, Jong-Soo Choi
Abstract:
Most drones that currently fly surveillance/reconnaissance missions are equipped primarily with optical equipment, but a microphone array can also be used to estimate the location of an acoustic source, providing additional information in the absence of optical equipment. The purpose of this study is to estimate the Direction of Arrival (DOA) of an acoustic source from the drone, based on Time Difference of Arrival (TDOA) estimation. The problem is that the target acoustic source cannot be measured cleanly because of the drone's own noise. We overcome this by separating the drone noise from the target acoustic source using Blind Source Separation (BSS) based on Independent Component Analysis (ICA). ICA can be performed by assuming that the drone noise and the target acoustic source are independent and that each signal is non-Gaussian. To maximize the non-Gaussianity of each signal, we use negentropy and kurtosis measures from probability theory. As a result, TDOA and DOA estimation of the target source can be improved in the noisy environment. We simulated the performance of the DOA algorithm with the BSS algorithm applied, and validated the simulation through experiments in an anechoic wind tunnel.Keywords: aeroacoustics, acoustic source detection, time difference of arrival, direction of arrival, blind source separation, independent component analysis, drone
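A minimal numeric sketch of the separation step, assuming a generic deflation-type FastICA with the cubic (kurtosis) nonlinearity rather than the authors' exact implementation; the two synthetic sources (a tone and heavy-tailed noise) and the mixing matrix are illustrative stand-ins for the target signal and drone noise:

```python
import numpy as np

def whiten(x):
    # Decorrelate the mixtures and scale them to unit variance
    x = x - x.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(x))
    return (E @ np.diag(d ** -0.5) @ E.T) @ x

def fastica_kurtosis(x, n_iter=300, seed=0):
    # Deflation FastICA with the cubic (kurtosis-seeking) nonlinearity:
    # w <- E{z (w.z)^3} - 3 E{(w.z)^2} w, then orthogonalize and normalize
    z = whiten(x)
    n = z.shape[0]
    rng = np.random.default_rng(seed)
    W = np.zeros((n, n))
    for p in range(n):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wz = w @ z
            w_new = (z * wz ** 3).mean(axis=1) - 3.0 * (wz ** 2).mean() * w
            w_new -= W[:p].T @ (W[:p] @ w_new)   # deflate found components
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < 1e-10
            w = w_new
            if converged:
                break
        W[p] = w
    return W @ z

# Two synthetic, independent, non-Gaussian stand-ins: a tone (target)
# and heavy-tailed Laplacian noise (drone), linearly mixed at two "mics"
t = np.linspace(0.0, 1.0, 5000)
S = np.vstack([np.sin(2 * np.pi * 13 * t),
               np.random.default_rng(1).laplace(size=5000)])
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S
Y = fastica_kurtosis(X)   # recovered components, up to order and scale
```

Matching each row of `Y` to a source by correlation checks the separation; TDOA estimation would then run on the de-noised component rather than the raw microphone signals.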
Procedia PDF Downloads 16221123 Prevalence of Cyp2d6 and Its Implications for Personalized Medicine in Saudi Arabs
Authors: Hamsa T. Tayeb, Mohammad A. Arafah, Dana M. Bakheet, Duaa M. Khalaf, Agnieszka Tarnoska, Nduna Dzimiri
Abstract:
Background: CYP2D6 is a member of the cytochrome P450 mixed-function oxidase system. The enzyme is responsible for the metabolism and elimination of approximately 25% of clinically used drugs, especially in breast cancer and psychiatric therapy. Different phenotypes have been described, displaying alleles that lead to a complete loss of enzyme activity or reduced function (poor metabolizers, PM) or to hyperfunctionality (ultrarapid metabolizers, UM), and therefore to drug intoxication or loss of drug effect. The prevalence of these variants may vary among different ethnic groups. Furthermore, the xTAG system has been developed to categorize patients into different groups based on their CYP2D6 substrate metabolism. Aim of the study: To determine the prevalence of the different CYP2D6 variants in our population, and to evaluate their clinical relevance in personalized medicine. Methodology: We used the Luminex xMAP genotyping system to genotype 305 Saudi individuals visiting the Blood Bank of our institution and determine which polymorphisms of the CYP2D6 gene are prevalent in our region. Results: xTAG genotyping showed that 36.72% (112 out of 305 individuals) carried CYP2D6_*2. Of these 112 individuals, 6.23% (19 out of 305) had multiple copies of the *2 allele, resulting in a UM phenotype. About 33.44% carried CYP2D6_*41, which leads to decreased activity of the CYP2D6 enzyme. 19.67% had the wild-type alleles and thus normal enzyme function. Furthermore, 15.74% carried CYP2D6_*4, the most common nonfunctional form of the CYP2D6 enzyme worldwide. 6.56% carried CYP2D6_*17, resulting in decreased enzyme activity. Approximately 5.73% carried CYP2D6_*10, which decreases enzyme activity, resulting in a PM phenotype.
2.30% carried CYP2D6_*29, leading to decreased metabolic activity of the enzyme, and 2.30% carried CYP2D6_*35, resulting in a UM phenotype. 1.64% had the whole-gene deletion CYP2D6_*5, resulting in the loss of CYP2D6 enzyme production, and 0.66% carried the CYP2D6_*6 variant. One individual carried CYP2D6_*3(B), which produces an inactive form of the enzyme, resulting in a PM phenotype. Finally, one individual carried CYP2D6_*9, which decreases enzyme activity. Conclusions: Our study demonstrates that different CYP2D6 variants are highly prevalent in ethnic Saudi Arabs. This finding sets a basis for informed genotyping for these variants in personalized medicine. The study also suggests that xTAG is an appropriate procedure for genotyping CYP2D6 variants in personalized medicine.Keywords: CYP2D6, hormonal breast cancer, pharmacogenetics, polymorphism, psychiatric treatment, Saudi population
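Translating a genotype into one of the metabolizer groups named above is commonly done with an activity-score scheme. The sketch below uses illustrative per-allele activity values and CPIC-style cut-offs as assumptions; they are not values reported by this study:

```python
# Assumed per-allele activity values (illustrative, CPIC-style):
# fully functional alleles score 1.0, reduced-function 0.25-0.5,
# nonfunctional alleles and the whole-gene deletion score 0.0
ACTIVITY = {'*1': 1.0, '*2': 1.0, '*41': 0.5, '*10': 0.25,
            '*4': 0.0, '*5': 0.0, '*6': 0.0}

def phenotype(allele_a, allele_b, copies_a=1):
    # Activity score = sum of allele activities; duplications multiply
    score = ACTIVITY[allele_a] * copies_a + ACTIVITY[allele_b]
    if score == 0:
        return 'PM'    # poor metabolizer
    if score < 1.25:
        return 'IM'    # intermediate metabolizer (assumed cut-off)
    if score <= 2.25:
        return 'NM'    # normal metabolizer
    return 'UM'        # ultrarapid metabolizer

print(phenotype('*4', '*4'))               # PM: two null alleles
print(phenotype('*2', '*2', copies_a=2))   # UM: duplicated *2, score 3.0
```

In this scheme a *2 carrier with multiple gene copies lands in the UM group and a *4/*4 carrier in the PM group, mirroring the phenotype assignments described in the abstract.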
Procedia PDF Downloads 57221122 Combination between Intrusion Systems and Honeypots
Authors: Majed Sanan, Mohammad Rammal, Wassim Rammal
Abstract:
Today, security is a major concern. Intrusion detection systems (IDS), intrusion prevention systems and honeypots can all be used to mitigate attacks. Researchers have proposed many IDSs over time; some combine the features of two or more IDSs and are called hybrid intrusion detection systems, most often pairing signature-based and anomaly-based detection methodologies. For a signature-based IDS, if an attacker proceeds slowly and in an organized way, the attack may pass through undetected: signatures include factors based on the duration of events, and the attacker's actions fail to match them. Sometimes no signature has been issued for an unknown attack, or the attacker strikes while the signature database is being updated; thus signature-based IDSs fail to detect unknown attacks. Anomaly-based IDSs, in turn, suffer from many false-positive readings. There is therefore a need to hybridize IDSs so that they can overcome each other's shortcomings. In this paper we propose a modified hybrid intrusion detection system (HIDS) that combines the positive features of two different detection methodologies, honeypot technology and anomaly-based detection, and is more efficient than a traditional IDS. We designed the architecture for the IDS in Packet Tracer and then implemented it in real time. The experiments show that the honeypot and the anomaly-based IDS each have shortcomings, but the hybridized HIDS is capable of overcoming them with much enhanced performance.
In the experiment, we first ran each intrusion detection system individually and then ran them together, recording data throughout. From the data we conclude that the resulting IDS is much better at detecting intrusions than the existing IDSs.Keywords: security, intrusion detection, intrusion prevention, honeypot, anomaly-based detection, signature-based detection, cloud computing, kfsensor
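A minimal sketch of the hybrid decision logic, assuming a single traffic feature (connection rate) profiled by a z-score anomaly detector plus a boolean honeypot-touch signal; the feature choice and threshold are illustrative assumptions, not the paper's design:

```python
import statistics

def fit_profile(normal_rates):
    # Baseline profile of one traffic feature, learned from attack-free traffic
    return statistics.mean(normal_rates), statistics.stdev(normal_rates)

def hybrid_alert(rate, touched_honeypot, profile, z_thresh=3.0):
    # A honeypot hit is treated as near-certain evidence (honeypots see no
    # legitimate traffic); otherwise fall back to the anomaly score
    mu, sigma = profile
    z = abs(rate - mu) / sigma
    return touched_honeypot or z > z_thresh

profile = fit_profile([9, 10, 11, 10, 9, 11, 10])
print(hybrid_alert(100, False, profile))  # True: far outside the baseline
print(hybrid_alert(10, False, profile))   # False: normal traffic
print(hybrid_alert(10, True, profile))    # True: honeypot contact alone alerts
```

The complementarity claimed in the abstract shows up directly: the anomaly branch catches novel high-volume behaviour without signatures, while the honeypot branch catches slow, low-volume probing that stays within the statistical baseline.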
Procedia PDF Downloads 38321121 An Amended Method for Assessment of Hypertrophic Scars Viscoelastic Parameters
Authors: Iveta Bryjova
Abstract:
Recording viscoelastic strain-vs-time curves with the aid of the suction method, followed by an analysis yielding standard viscoelastic parameters, is a significant technique for non-invasive contact diagnostics of the mechanical properties of skin and assessment of its condition, particularly in acute burns, hypertrophic scarring (the most common complication of burn trauma) and reconstructive surgery. To eliminate the contribution of skin thickness, the usable viscoelastic parameters deduced from the strain-vs-time curves are restricted to relative ones (i.e. those expressed as a ratio of two dimensional parameters), such as gross elasticity, net elasticity, biological elasticity or Qu's area parameters, conventionally referred to in the literature and in practice as R2, R5, R6, R7, Q1, Q2 and Q3. With the exception of R2 and Q1, these parameters depend substantially on the position of the inflection point separating the elastic linear and viscoelastic segments of the strain-vs-time curve. The standard algorithm implemented in commercially available devices relies heavily on the experimental observation that the inflection occurs about 0.1 s after the suction is switched on or off, which undermines the credibility of the parameters thus obtained. Although Qu's US 7,556,605 patent suggests a method for improving the precision of the inflection determination, there is still room for non-negligible improvement. In this contribution, a novel method of inflection point determination utilizing the advantageous properties of Savitzky–Golay filtering is presented. The method allows computation of derivatives of the smoothed strain-vs-time curve, more exact location of the inflection and consequently more reliable values of the aforementioned viscoelastic parameters.
An improved applicability of the five inflection-dependent relative viscoelastic parameters is demonstrated by recasting a former study with the new method and comparing its results with those provided by the methods used so far.Keywords: Savitzky–Golay filter, scarring, skin, viscoelasticity
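The core idea can be sketched with a generic Savitzky–Golay second-derivative filter locating the kink between the linear elastic segment and the viscoelastic creep. The synthetic strain curve, window length and polynomial order below are illustrative assumptions, not the paper's data or settings:

```python
import math
import numpy as np

def savgol_coeffs(window, polyorder, deriv):
    # Least-squares polynomial fit over a centered window; row `deriv`
    # of the pseudoinverse gives the derivative filter (times deriv!)
    half = window // 2
    x = np.arange(-half, half + 1)
    V = np.vander(x, polyorder + 1, increasing=True)
    return np.linalg.pinv(V)[deriv] * math.factorial(deriv)

def sg_second_derivative(y, dt, window=21, polyorder=3):
    # Symmetric filter for the 2nd derivative of the smoothed curve
    c = savgol_coeffs(window, polyorder, deriv=2)
    return np.convolve(y, c[::-1], mode='same') / dt ** 2

# Synthetic strain-vs-time curve (assumed shape): linear elastic segment,
# then exponential viscoelastic creep, with the inflection at t0 = 0.1 s
dt, t0 = 0.005, 0.1
t = np.arange(0.0, 1.0, dt)
u = np.where(t <= t0, 5.0 * t, 5.0 * t0 + 0.8 * (1 - np.exp(-(t - t0) / 0.5)))
u += np.random.default_rng(0).normal(0.0, 0.005, t.size)   # measurement noise

d2 = sg_second_derivative(u, dt)
idx = 10 + np.argmin(d2[10:-10])   # skip unreliable window edges
print(t[idx])                       # close to the true inflection at 0.1 s
```

The slope break produces a sharp negative spike in the smoothed second derivative that stands well above the noise floor, which is exactly why a derivative of the filtered curve localizes the inflection more reliably than a fixed 0.1 s assumption.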
Procedia PDF Downloads 30421120 Triplex Detection of Pistacia vera, Arachis hypogaea and Pisum sativum in Processed Food Products Using Probe Based PCR
Authors: Ergün Şakalar, Şeyma Özçirak Ergün, Emrah Yalazi, Emine Altinkaya, Cengiz Ataşoğlu
Abstract:
In recent years, food allergies, which cause serious health problems, have affected public health around the world. Allergens are either intentionally used as ingredients in foodstuffs or occur in food products as contaminants. The prevalence of clinical allergy to peanuts and tree nuts is estimated at about 0.4%-1.1% of the adult population, with pistachio accounting for 7% of the cases of tree nuts causing allergic reactions. In order to protect public health and enforce the legislation, methods for sensitive analysis of pistachio and peanut contents in food are required. Pea, pistachio and peanut are used together to reduce the cost in the production of foods such as baklava and snacks. DNA technology-based methods in food analysis are well-established and well-rounded tools for species differentiation and allergen detection. In particular, the probe-based TaqMan real-time PCR assay can amplify target DNA with efficiency, specificity and sensitivity. In this study, pistachio, peanut and pea were finely ground, and three separate series of triplet mixtures containing 0.1, 1, 10, 100, 1000, 10,000 and 100,000 mg kg-1 of each sample were prepared for each series, to a final weight of 100 g. DNA from reference samples and industrial products was successfully extracted with the GIDAGEN® Multi-Fast DNA Isolation Kit. TaqMan probes were designed for triplex determination of the ITS, Ara h 3 and pea lectin genes, which are specific regions for the identification of pistachio, peanut and pea, respectively. The quantitative real-time PCR detected pistachio, peanut and pea in these mixtures down to the lowest investigated levels of 0.1, 0.1 and 1 mg kg-1, respectively. The methods reported here are also capable of detecting as little as a 0.001% level of peanut DNA, a 0.000001% level of pistachio DNA and a 0.000001% level of pea DNA.
We conclude that the quantitative triplex real-time PCR method developed in this study can be applied to detect pistachio, peanut and pea traces for three allergens at once in commercial food products.Keywords: allergens, DNA, real-time PCR, TaqMan probe
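Quantification in assays of this kind typically rests on a Cq-vs-log(concentration) standard curve fitted to a dilution series like the one described above. The sketch below fits such a curve by least squares; the Cq values and the near-ideal slope of about -3.3 are illustrative assumptions, not measurements from this study:

```python
import numpy as np

# Assumed dilution series: concentrations in mg/kg and measured Cq values
conc = np.array([0.1, 1, 10, 100, 1000, 10000, 100000], dtype=float)
cq = np.array([35.3, 32.0, 28.7, 25.4, 22.1, 18.8, 15.5])

# Fit Cq = slope * log10(conc) + intercept
slope, intercept = np.polyfit(np.log10(conc), cq, 1)
efficiency = 10 ** (-1 / slope) - 1   # ~1.0 corresponds to 100% PCR efficiency

def quantify(cq_unknown):
    # Invert the standard curve to estimate an unknown concentration
    return 10 ** ((cq_unknown - intercept) / slope)

print(round(slope, 2))   # -3.3 for this synthetic series
print(quantify(25.4))    # ~100 mg/kg
```

A slope near -3.32 (one Cq step per two-fold dilution, ten per decade) is the usual sanity check that amplification efficiency is close to 100% before trusting quantities read off the curve.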
Procedia PDF Downloads 25621119 Strategic Model of Implementing E-Learning Using Funnel Model
Authors: Mohamed Jama Madar, Oso Wilis
Abstract:
E-learning is the application of information technology in the teaching and learning process. This paper presents the Funnel model as a solution to the problems of implementing e-learning in tertiary education institutions. While existing models such as TAM, theory-based e-learning and pedagogical models have been used over time, they have generally been found inadequate because of their tendency to treat materials development, instructional design, technology, delivery and governance as separate and isolated entities; yet it is the matching of these components that provides a framework for strategic e-learning implementation. The Funnel model unifies them all and applies both synchronously and asynchronously to e-learning implementation, where the only difference is the modality. Such a model for e-learning implementation has been lacking. The proposed Funnel model avoids the ad-hoc approaches that have left other systems unused or inefficient and have compromised educational quality. Therefore, the proposed Funnel model should help tertiary education institutions adopt and develop an effective and efficient e-learning system which meets users' requirements.Keywords: e-learning, pedagogical, technology, strategy
Procedia PDF Downloads 45221118 Using E-learning in a Tertiary Institution during Community Outbreak of COVID-19 in Hong Kong
Authors: Susan Ka Yee Chow
Abstract:
The Coronavirus disease (COVID-19) reached Hong Kong in 2019, resulting in an epidemic in late January 2020. In view of the epidemic's development, tertiary institutions announced that all on-campus classes were suspended from 29 January 2020. In Tung Wah College, e-learning was adopted in all courses for all programmes. For the undergraduate nursing students, the contact hours and curriculum are bound by the Nursing Council of Hong Kong to ensure core competence after graduation. Unlike usual e-learning, where students are allowed flexibility of time and place in their learning, a real-time learning mode using Blackboard was adopted to mimic the actual classroom learning environment. Students were required to attend classes according to the timetable using the online platform. For lectures, voice-over PowerPoint files were the initial step for mass lecturing; real-time lectures were then adopted to improve interaction between teacher and students. Post-lecture quizzes were developed to monitor the effectiveness of lecture delivery. The seminars and tutorials were conducted in real-time mode, with students separated into small groups for interactive discussion with the teacher within each group. Real-time demonstrations were conducted during laboratory sessions. All teaching sessions were audio/video recorded for students' later reference. The assessments, including seminar presentations and debates, were retained. The learning mode creates an atmosphere for students to display visual, audio and written work in a non-threatening setting; other students could comment by text or voice as they desired. Real-time online learning is the pedagogy to replace classroom contact in emergent and unforeseeable circumstances, while the learning pace and the interaction between students and with the teacher are maintained.
The learning mode has the advantage of creating an effective and beneficial learning experience.Keywords: e-learning, nursing curriculum, real time mode, teaching and learning
Procedia PDF Downloads 11521117 The Impact of the Board of Directors’ Characteristics on Tax Aggressiveness in USA Companies
Authors: Jihen Ayadi Sellami
Abstract:
The rapid evolution of the global financial landscape has led to increased attention to corporate tax policies and the need to understand the factors that influence corporate tax behavior. In order to mitigate any residual loss for shareholders resulting from tax aggressiveness and to resolve the agency problem, appropriate systems that separate the function of management from that of control are needed. In this context of growing concern to limit aggressive corporate taxation practices through governance, this study examines the influence of six key characteristics of the board of directors (board size, diligence, CEO duality, presence of audit committees, gender diversity and independence of directors), taken as governance mechanisms, on the tax decisions of non-financial corporations in the United States. Using a sample of 90 non-financial US firms from the S&P 500 over a period of 4 years, from 2014 to 2017, the results, based on a multivariate linear regression, highlight significant associations between these characteristics and corporate tax policy. Notably, larger boards, gender diversity, diligence and increased director independence appear to play an important role in reducing aggressive taxation, while duality shows a positive and significant correlation with tax aggressiveness, which can be explained by a manager exploiting the specific position that duality confers within the company. These findings contribute to a deeper understanding of how board characteristics can influence corporate tax management, providing avenues for more effective corporate governance and more responsible tax decision-making.Keywords: tax aggressiveness, board of directors, board size, CEO duality, audit committees, gender diversity, director independence, diligence, corporate governance, united states
Procedia PDF Downloads 6121116 The Role Collagen VI Plays in Heart Failure: A Tale Untold
Authors: Summer Hassan, David Crossman
Abstract:
Myocardial fibrosis (MF) has been loosely defined as the process occurring in the pathological remodeling of the myocardium due to excessive production and deposition of extracellular matrix (ECM) proteins, including collagen. This reduces tissue compliance and accelerates progression to heart failure, as well as affecting the electrical properties of the myocytes, resulting in arrhythmias. Microscopic interrogation of MF is key to understanding the molecular orchestrators of disease. It is well established that recruitment and stimulation of myofibroblasts result in collagen deposition and the resulting expansion of the ECM. Many types of collagen have been identified and implicated in tissue scarring. In a series of experiments conducted in our lab, we aim to elucidate the role collagen VI plays in the development of myocardial fibrosis and its direct impact on myocardial function. This was investigated in a rat model comprising collagen VI knockout and wild-type animals, each with diseased and healthy groups. Echocardiographic assessment of these rats ensued at four time points, followed by microscopic interrogation of the myocardium, aiming to correlate collagen VI with myocardial function. Our results demonstrate a deterioration in cardiac function, as represented by the ejection fraction, in the knockout healthy and diseased rats. This points to a potential protective role that collagen VI plays following a myocardial insult. Current work is dedicated to the microscopic characterisation of the fibrotic process in all rat groups, with the results to follow.Keywords: heart failure, myocardial fibrosis, collagen, echocardiogram, confocal microscopy
Procedia PDF Downloads 8221115 Two-Step Inversion Method for Multi-mode Surface Waves
Authors: Ying Zhang
Abstract:
Surface waves provide critical constraints on the Earth's structure in the crust and upper mantle. However, different modes of Love waves with close group velocities often arrive at similar times and interfere with each other; this problem is typical for Love waves at intermediate periods that travel through the oceanic lithosphere. We therefore developed a two-step inversion approach to separate the waveforms of the fundamental and first higher modes of Love waves. We first solve for the phase velocities of the two modes and their amplitude ratio, with a misfit function based on the sum of phase differences among station pairs. We then solve for the absolute amplitudes of the two modes and their initial phases using the obtained phase velocities and amplitude ratio. The separated waveforms of each mode from the two-step inversion can be further used in surface wave tomography to improve model resolution.Keywords: surface wave inversion, waveform separation, love waves, higher-mode interference
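Once the phase velocities (hence wavenumbers) are known from the first step, the second step reduces, for a single frequency, to a linear least-squares problem in the in-phase and quadrature amplitudes of each mode. The sketch below demonstrates this on noise-free synthetic two-mode records; the frequency, wavenumbers and station offsets are assumed values for illustration:

```python
import numpy as np

# Assumed single-frequency, two-mode model:
#   u_j(t) = sum_m A_m * cos(w*t - k_m*x_j + phi_m)
# With k_m known (step 1), writing P_m = A_m cos(phi_m) and
# Q_m = A_m sin(phi_m) makes the model linear in (P_m, Q_m).
w, k = 2 * np.pi * 0.05, np.array([0.08, 0.11])   # rad/s, rad/km (assumed)
x = np.array([0.0, 10.0, 25.0, 40.0])             # station offsets, km
t = np.arange(0.0, 100.0, 1.0)                    # sample times, s

A_true, phi_true = np.array([1.0, 0.5]), np.array([0.3, -1.0])

# Synthetic records: propagation phase per (station, time, mode)
phase = w * t[None, :, None] - k[None, None, :] * x[:, None, None]
u = np.sum(A_true * np.cos(phase + phi_true), axis=2).ravel()

# Design matrix columns: cos and -sin of each mode's propagation phase,
# since A cos(theta + phi) = P cos(theta) - Q sin(theta)
G = np.concatenate([np.cos(phase), -np.sin(phase)], axis=2).reshape(-1, 4)
coef, *_ = np.linalg.lstsq(G, u, rcond=None)
P, Q = coef[:2], coef[2:]
A_est, phi_est = np.hypot(P, Q), np.arctan2(Q, P)
print(A_est, phi_est)   # recovers A_true and phi_true
```

The wavenumber separation across the array is what keeps the two modes' columns linearly independent; with closely spaced stations or nearly equal phase velocities the system becomes ill-conditioned, which is why step 1 must resolve the velocities first.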
Procedia PDF Downloads 7021114 The Incidence of Postoperative Atrial Fibrillation after Coronary Artery Bypass Grafting in Patients with Local and Diffuse Coronary Artery Disease
Authors: Kamil Ganaev, Elina Vlasova, Andrei Shiryaev, Renat Akchurin
Abstract:
De novo atrial fibrillation (AF) after coronary artery bypass grafting (CABG) is a common complication. To date, there are no data on the possible effect of diffuse coronary artery lesions on the incidence of postoperative AF. Methods: Patients operated on-pump under hypothermic conditions during the calendar year 2020 were studied. Inclusion criteria: isolated CABG and achievement of complete myocardial revascularization. Patients with a history of AF, moderate or severe valve dysfunction, hormonal thyroid pathology or initial congestive heart failure (CHF), as well as patients with perioperative complications (MI, acute heart failure, massive blood loss) and deceased patients, were excluded. Thus 227 patients were included; mean age 65±9 years; 69% were men. 89% of patients had 3-vessel coronary artery disease; the remainder had 2-vessel disease. Mean LV size: 3.9±0.3 cm; indexed LV volume: 29.4±5.3 mL/m2. Two groups were considered: D (n=98), patients with diffuse coronary artery disease, and L (n=129), patients with local coronary artery disease. Clinical and demographic characteristics of the groups were comparable. Rhythm assessment: continuous bedside ECG monitoring for up to 5 days; ECG CT at 5-7 days after CABG; daily routine ECG registration. Follow-up period: the postoperative hospital stay. Results: The median follow-up period was 9 (7;11) days. Postoperative AF (POAF) was detected in 61/227 (27%) patients: 34/98 (35%) in group D versus 27/129 (21%) in group L; p<0.05. Moreover, while the revascularization indices in groups D and L (3.9±0.7 and 3.8±0.5, respectively) were equal, the mean cardiopulmonary bypass (CPB) time (107±27 and 80±13 min) and the mean ischemic time (67±17 and 55±11 min) were significantly longer in group D (p<0.05).
However, a separate analysis of these parameters in patients with and without developed AF did not reveal any significant differences either in group D (CPB time 99±21.2 min, ischemic time 63±12.2 min) or in group L (CPB time 88±13.1 min, ischemic time 58.7±13.2 min). Conclusion: With diffuse coronary lesions, the incidence of AF in the hospital period after isolated CABG clearly increases. To better understand the role of severe coronary atherosclerosis in the development of POAF, it is necessary to distinguish the influence of organic features of the atrial and ventricular myocardium (as a consequence of chronic coronary disease) from the features of surgical correction in diffuse coronary lesions.Keywords: atrial fibrillation, diffuse coronary artery disease, coronary artery bypass grafting, local coronary artery disease
Procedia PDF Downloads 21221113 Tabu Random Algorithm for Guiding Mobile Robots
Authors: Kevin Worrall, Euan McGookin
Abstract:
The use of optimization algorithms is common across a large number of diverse fields. This work presents a hybrid optimization algorithm applied to a mobile robot tasked with searching an unknown environment. The algorithm is then applied to the multiple-robot case, which reduces the time taken to carry out the search. The hybrid algorithm is a random search algorithm fused with a tabu mechanism. The work shows that the algorithm locates the desired points more quickly than a brute-force search. The Tabu Random algorithm is shown to work within a simulated environment using a validated mathematical model. The simulation was run using three different environments with varying numbers of targets. As an algorithm, the Tabu Random is small, clear and can be implemented with minimal resources. Its power lies in the speed at which it locates points of interest and its robustness to the number of robots involved: the number of robots can vary with no changes to the algorithm, making it flexible.Keywords: algorithms, control, multi-agent, search and rescue
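A minimal single-robot sketch of the idea: a random walk over a grid, with a short tabu list forbidding recently visited cells so the robot keeps exploring new ground. The grid size, tabu tenure and target location are illustrative assumptions, not the paper's simulation setup:

```python
import random

def tabu_random_search(n, start, target, tabu_len=8, max_steps=20000, seed=0):
    # Random search over an n x n grid: at each step pick a random
    # 4-neighbour, preferring cells not on the tabu list
    rng = random.Random(seed)
    pos, tabu = start, [start]
    for step in range(max_steps):
        if pos == target:
            return True, step
        x, y = pos
        moves = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < n and 0 <= y + dy < n]
        # Aspiration: if every neighbour is tabu, allow all of them
        allowed = [m for m in moves if m not in tabu] or moves
        pos = rng.choice(allowed)
        tabu = (tabu + [pos])[-tabu_len:]   # fixed-length tabu memory
    return False, max_steps

found, steps = tabu_random_search(10, (0, 0), (7, 7))
print(found, steps)
```

The tabu memory is what separates this from a plain random walk: by discouraging immediate backtracking it pushes the robot outward, which is why the hybrid covers the environment faster than brute-force or purely random search. Extending to multiple robots only means running several instances with shared or separate tabu lists, with no change to the step logic.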
Procedia PDF Downloads 23921112 Optimization of Lubricant Distribution with Alternative Coordinates and Number of Warehouses Considering Truck Capacity and Time Windows
Authors: Taufik Rizkiandi, Teuku Yuri M. Zagloel, Andri Dwi Setiawan
Abstract:
Distribution and growth in the transportation and warehousing business sector decreased by 15.04%: the sector's contribution to Gross Domestic Product (GDP) fell from 4.41% (rank 7) in 2019 to 3.81% (rank 8) in 2020. This decline has led oil and gas companies to implement efficient supply chain strategies to ensure the availability of goods, especially lubricants. Fluctuating demand for lubricants and warehouse service time limits are essential considerations in determining an efficient route. Adding depot points is one solution to ensure that demand for lubricants is fulfilled (no stock-outs); however, adding a depot increases operating and storage costs. It is therefore necessary to optimize the addition of depots using the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). This case study was conducted at an oil and gas company that produces lubricants, covering 2019 to 2021. The study obtained the optimal route and the depot addition with the minimum additional cost; the total cost remains efficient with the added depot when compared to the single depot in Jakarta.Keywords: CVRPTW, optimal route, depot, tabu search algorithm
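The constraints named in the abstract (truck capacity and customer time windows) can be made concrete with a greedy CVRPTW route construction, of the kind typically used to seed a tabu-search improvement phase. The toy instance below (one depot, three customers) is an illustrative assumption, not the company's data:

```python
import math

def build_routes(depot, customers, capacity, speed=1.0):
    # Greedy CVRPTW construction: repeatedly start a truck at the depot
    # and append the earliest-reachable feasible customer (capacity and
    # due-time respected) until none remains feasible for that truck
    unserved = set(customers)
    routes = []
    while unserved:
        load, time, pos, route = 0, 0.0, depot, []
        while True:
            feasible = []
            for c in unserved:
                cu = customers[c]
                arrive = max(time + math.dist(pos, cu['pos']) / speed, cu['ready'])
                if load + cu['demand'] <= capacity and arrive <= cu['due']:
                    feasible.append((arrive, c))
            if not feasible:
                break
            arrive, c = min(feasible)       # serve the earliest-reachable one
            cu = customers[c]
            time, pos = arrive + cu['service'], cu['pos']
            load += cu['demand']
            route.append(c)
            unserved.discard(c)
        if not route:                        # remaining customers unreachable
            break
        routes.append(route)
    return routes

# Assumed toy instance: one depot, three customers, truck capacity 10
customers = {
    'A': dict(pos=(1, 0), demand=6, ready=0, due=100, service=1),
    'B': dict(pos=(2, 0), demand=5, ready=0, due=100, service=1),
    'C': dict(pos=(3, 0), demand=4, ready=0, due=100, service=1),
}
routes = build_routes((0, 0), customers, capacity=10)
print(routes)   # capacity forces a second truck: [['A', 'C'], ['B']]
```

Adding a candidate depot amounts to rerunning this construction with customers reassigned to their nearest depot and comparing total routing cost plus the depot's operating and storage cost, which is the trade-off the study optimizes.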
Procedia PDF Downloads 13621111 A Stepwise Approach to Automate the Search for Optimal Parameters in Seasonal ARIMA Models
Authors: Manisha Mukherjee, Diptarka Saha
Abstract:
Reliable forecasts of univariate time series data are often necessary in several contexts, and ARIMA models are quite popular among practitioners in this regard. Hence, choosing correct parameter values for ARIMA is a challenging yet imperative task. Thus, a stepwise algorithm is introduced to provide automatic and robust estimates of the parameters (p, d, q)(P, D, Q) used in seasonal ARIMA models. The process focuses on improving the overall quality of the estimates and alleviates the problems induced by the unidimensional nature of currently used methods such as auto.arima. The fast, automated search of the parameter space also ensures reliable estimates of the parameters with several desirable qualities, consequently resulting in higher test accuracy, especially on noisy data. After vigorous testing on real as well as simulated data, the algorithm not only performs better than current state-of-the-art methods but also completely obviates the need for human intervention owing to its automated nature.Keywords: time series, ARIMA, auto.arima, ARIMA parameters, forecast, R function
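The flavour of such a stepwise search can be illustrated on the simpler problem of choosing only the AR order p by AIC; the full (p, d, q)(P, D, Q) search follows the same climb-until-no-improvement pattern. Everything below is a generic sketch, not the authors' algorithm, and the AR(2) test series is simulated for illustration:

```python
import numpy as np

def aic_ar(y, p, max_p):
    # OLS fit of an AR(p) with intercept on a fixed effective sample,
    # so AIC values are comparable across candidate orders
    N, n = len(y), len(y) - max_p
    target = y[max_p:]
    X = np.column_stack([np.ones(n)] +
                        [y[max_p - 1 - i: N - 1 - i] for i in range(p)])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    rss = float(np.sum((target - X @ beta) ** 2))
    return n * np.log(rss / n) + 2 * (p + 1)

def stepwise_order(y, max_p=6):
    # Hill-climb: increase p while AIC keeps improving, stop otherwise
    best_p, best_aic = 0, aic_ar(y, 0, max_p)
    for p in range(1, max_p + 1):
        a = aic_ar(y, p, max_p)
        if a >= best_aic:
            break
        best_p, best_aic = p, a
    return best_p

# Simulated AR(2) series: y_t = 0.6 y_{t-1} - 0.3 y_{t-2} + e_t
rng = np.random.default_rng(42)
e = rng.standard_normal(2100)
y = np.zeros(2100)
for t in range(2, 2100):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + e[t]
y = y[100:]               # drop burn-in
print(stepwise_order(y))  # selected AR order, near the true value of 2
```

The unidimensional weakness the abstract criticizes is visible here: a hill-climb that varies one parameter at a time can stop at a local AIC minimum, which is what a joint search over the whole (p, d, q)(P, D, Q) space aims to avoid.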
Procedia PDF Downloads 165