Search results for: linear systems
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12212

10832 A Preliminary Kinematic Comparison of Vive and Vicon Systems for the Accurate Tracking of Lumbar Motion

Authors: Yaghoubi N., Moore Z., Van Der Veen S. M., Pidcoe P. E., Thomas J. S., Dexheimer B.

Abstract:

Optoelectronic 3D motion capture systems, such as the Vicon kinematic system, are widely utilized in biomedical research to track joint motion. These systems are considered powerful and accurate measurement tools with <2 mm average error. However, these systems are costly and may be difficult to implement and utilize in a clinical setting. 3D virtual reality (VR) is gaining popularity as an affordable and accessible tool to investigate motor control and perception in a controlled, immersive environment. The HTC Vive VR system includes puck-style trackers that seamlessly integrate into its VR environments. These affordable, wireless, lightweight trackers may be more feasible for clinical kinematic data collection. However, the accuracy of HTC Vive Trackers (3.0), when compared to optoelectronic 3D motion capture systems, remains unclear. In this preliminary study, we compared the HTC Vive Tracker system to a Vicon kinematic system in a simulated lumbar flexion task. A 6-DOF robot arm (SCORBOT ER VII, Eshed Robotec/RoboGroup, Rosh Ha’Ayin, Israel) completed various reaching movements to mimic increasing levels of hip flexion (15°, 30°, 45°). Light reflective markers, along with one HTC Vive Tracker (3.0), were placed on the rigid segment separating the elbow and shoulder of the robot. We compared position measures simultaneously collected from both systems. Our preliminary analysis shows no significant differences between the Vicon motion capture system and the HTC Vive tracker in the Z axis, regardless of hip flexion. In the X axis, we found no significant differences between the two systems at 15 degrees of hip flexion but minimal differences at 30 and 45 degrees, ranging from .047 cm ± .02 SE (p = .03) at 30 degrees hip flexion to .194 cm ± .024 SE (p < .0001) at 45 degrees of hip flexion. In the Y axis, we found a minimal difference for 15 degrees of hip flexion only (.743 cm ± .275 SE; p = .007). This preliminary analysis shows that the HTC Vive Tracker may be an appropriate, affordable option for gross motor motion capture when the Vicon system is not available, such as in clinical settings. Further research is needed to compare these two motion capture systems in different body poses and for different body segments.
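
A minimal sketch of the per-axis agreement check described above (mean difference, RMSE and a paired t-test), assuming each system exports synchronized 3D positions at a common sample rate; the arrays below are synthetic placeholders, not study data.

```python
# Per-axis comparison of two synchronized position traces (cm). The "vicon" and
# "vive" arrays here are simulated stand-ins for exported trajectories.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 500  # samples in one simulated reaching trial
vicon = np.cumsum(rng.normal(0, 0.05, size=(n, 3)), axis=0)        # reference trajectory
vive = vicon + rng.normal(0, 0.1, size=(n, 3)) + [0.05, 0.2, 0.0]  # tracker with noise/offset

for axis, name in enumerate("XYZ"):
    diff = vive[:, axis] - vicon[:, axis]
    rmse = np.sqrt(np.mean(diff ** 2))
    t, p = ttest_rel(vive[:, axis], vicon[:, axis])  # paired comparison per axis
    print(f"{name}-axis: mean diff {diff.mean():+.3f} cm, RMSE {rmse:.3f} cm, p = {p:.4f}")
```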

Keywords: lumbar, Vive Tracker, Vicon system, 3D motion, ROM

Procedia PDF Downloads 101
10831 Modeling and Simulation of Ship Structures Using Finite Element Method

Authors: Javid Iqbal, Zhu Shifan

Abstract:

The construction of unconventional ships and the adoption of lightweight materials have given a strong impulse to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents modeling and analysis techniques for ship structures under complex boundary conditions that are difficult to analyze with existing Ship Classification Society rules. During operation, all ships experience complex loading conditions; these loads are generally categorized into thermal, linear static, dynamic and non-linear loads. The overall strength of the ship structure is analyzed using static FE analysis. The FE method is also well suited to capturing the local loads generated by ballast tanks and cargo, in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed with the FE method, which helps establish the dynamic stability of the ship, and FE techniques for calculating natural frequencies and mode shapes of the structure help avoid resonance both globally and locally. Over the past few years the ship industry has moved considerably toward such model-based design, solving complex engineering problems with the data stored in the FE model. This paper provides an overview of ship modeling methodology for FE analysis and its general application. Historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
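
The vibration step mentioned above reduces to the generalized eigenvalue problem K x = w^2 M x; a minimal sketch for an illustrative 3-DOF lumped model follows (the matrices are placeholders, not a real ship structure).

```python
# Illustrative modal analysis for a toy 3-DOF lumped mass-spring chain, showing the
# generalized eigenvalue problem behind the FE vibration analysis described above.
import numpy as np
from scipy.linalg import eigh

k = 1.0e6  # N/m, illustrative stiffness
m = 2.0e3  # kg, illustrative mass
K = k * np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]], dtype=float)
M = m * np.eye(3)

w2, modes = eigh(K, M)                # symmetric generalized eigenproblem
freqs_hz = np.sqrt(w2) / (2 * np.pi)  # natural frequencies

for i, f in enumerate(freqs_hz, start=1):
    print(f"mode {i}: {f:.2f} Hz, shape {np.round(modes[:, i - 1], 3)}")
```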

Keywords: dynamic analysis, finite element methods, ship structure, vibration analysis

Procedia PDF Downloads 136
10830 Systems Thinking in Practice Supporting Competence and Sustainable Development Goal Implementation Capability in Student Teaching

Authors: Anette Hay, Zama Simamane

Abstract:

Capacity-building and the integration of practical activities are among the key aims of the 2030 Agenda for Sustainable Development. This paper focuses on SDG #17 – “the means of implementation” – and on the role of systems thinking in practice (STiP) in supporting both competence and SDG implementation capability in teacher education curricula at North-West University, South Africa. The “Environmental Management for Sustainability” module (EDTM 312), which is compulsory for all students enrolled in the education programme at North-West University, is used as a case study. Higher education needs to implement and practically integrate the SDGs into its curricula, and one way to achieve this is through the development of competencies. Education for Sustainable Development (ESD) has the potential to offer approaches that are useful in developing capacity-building activities to foster sustainability. The methodological approach adopted is based on a participatory paradigm carried out over two cycles with reflection. The paper demonstrates how students apply competencies to situations and reflect on them, and how praxis captures their actual experiences. The results indicate how to re-orientate the EDTM 312 curriculum to include an environmental justice focus. This research shares practical knowledge of systems thinking as a sustainability competency.

Keywords: education for sustainable development, environmental justice competencies, sustainable development goals, systems thinking in practice

Procedia PDF Downloads 64
10829 A New Family of Integration Methods for Nonlinear Dynamic Analysis

Authors: Shuenn-Yih Chang, Chiu-LI Huang, Ngoc-Cuong Tran

Abstract:

A new family of structure-dependent integration methods, in which the coefficients of the difference equation for the displacement increment are functions of the initial structural properties and the time-integration step size, is proposed in this work. This family combines controllable numerical dissipation, an explicit formulation and unconditional stability. Its numerical dissipation can be continuously controlled by a parameter, and zero damping is attainable. In addition, it can provide high-frequency damping to suppress or even remove the spurious oscillations of high-frequency modes, whereas the low-frequency modes are integrated very accurately because the damping applied to them is almost zero. It is shown herein that the proposed family can have exactly the same numerical properties as the HHT-α method for linear elastic systems. In addition, it preserves the most important property of a structure-dependent integration method, namely an explicit formulation at each time step. Consequently, it can save a huge amount of computational effort in solving inertial problems when compared to the HHT-α method. In fact, numerical experiments reveal that the CPU time consumed by the proposed family method is only about 1.6% of that consumed by the HHT-α method for a 125-DOF system, and this reduces to 0.16% for a 1000-DOF system. The saving in computational effort is therefore very significant.
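
For reference, a minimal sketch of the implicit Newmark average-acceleration scheme, the α = 0 member of the HHT-α family that the proposed explicit methods are benchmarked against, applied to a linear SDOF oscillator; all values are illustrative and this is not the structure-dependent family itself.

```python
# Implicit Newmark average-acceleration (the alpha = 0 member of the HHT-alpha
# family) for the linear SDOF system m*a + c*v + k*d = f(t). This is the baseline
# the proposed explicit family is compared against; m, c, k, f are illustrative.
import numpy as np

m, c, k = 1.0, 0.1, 100.0
dt, n_steps = 0.01, 1000
beta, gamma = 0.25, 0.5                      # average acceleration
f = lambda t: np.sin(5.0 * t)                # illustrative load

d = np.zeros(n_steps + 1); v = np.zeros(n_steps + 1); a = np.zeros(n_steps + 1)
a[0] = (f(0.0) - c * v[0] - k * d[0]) / m

k_eff = k + gamma / (beta * dt) * c + m / (beta * dt ** 2)
for i in range(n_steps):
    t1 = (i + 1) * dt
    p_eff = (f(t1)
             + m * (d[i] / (beta * dt ** 2) + v[i] / (beta * dt) + (1 / (2 * beta) - 1) * a[i])
             + c * (gamma / (beta * dt) * d[i] + (gamma / beta - 1) * v[i]
                    + dt * (gamma / (2 * beta) - 1) * a[i]))
    d[i + 1] = p_eff / k_eff
    v[i + 1] = (gamma / (beta * dt) * (d[i + 1] - d[i])
                + (1 - gamma / beta) * v[i] + dt * (1 - gamma / (2 * beta)) * a[i])
    a[i + 1] = ((d[i + 1] - d[i]) / (beta * dt ** 2)
                - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])

print("displacement at final step:", d[-1])
```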

Keywords: structure-dependent integration method, nonlinear dynamic analysis, unconditional stability, numerical dissipation, accuracy

Procedia PDF Downloads 639
10828 Ballast Water Management Triad: Administration, Ship Owner and the Seafarer

Authors: Rajoo Balaji, Omar Yaakob

Abstract:

The Ballast Water Convention requires ratification by less than 5% of the world tonnage to enter into force. Consequently, ships will soon have to comply with its requirements, and compliance evaluation and enforcement will become mandatory. Ship owners have to invest in treatment systems, shipboard personnel have to operate them and ensure compliance, and monitoring and enforcement will be the responsibility of the Administrations. Herein, the current status of ballast water management is reviewed and the issues faced by each of these parties are outlined. The issues range from the efficacy and economics of treatment systems to sampling and testing; the health hazards of chemical systems and the paucity of data for decision support are further concerns. It is emphasized that the management of ballast water must be extended ashore and that sustainable solutions must be researched. An exemplar treatment system based on the ship’s waste heat is also suggested.

Keywords: Ballast Water Management, compliance evaluation, compliance enforcement, sustainability

Procedia PDF Downloads 439
10827 The Primitive Code-Level Design Patterns for Distributed Programming

Authors: Bing Li

Abstract:

The primitive code-level design patterns (PDP) are the rudimentary programming elements for developing any distributed system in the generic distributed programming environment GreatFree. The PDP work together with the primitive distributed application programming interfaces (PDA), distributed modeling, and distributed concurrency for scaling up. They not only hide the underlying technical details from developers but also provide sufficient adaptability to a variety of distributed computing environments. Programming with them, the simplest distributed system, the lightweight messaging two-node client/server (TNCS) system, is constructed rapidly through straightforward and repeatable copy-paste-replace (CPR) behaviors. Since any distributed system is composed of such simplest systems, the PDAs, as well as the PDP, are generic for distributed programming.
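
As a language-agnostic illustration of what a TNCS system amounts to, a minimal two-node client/server messaging sketch with plain TCP sockets follows; GreatFree's own PDAs are Java APIs and are not shown here, and the endpoint below is hypothetical.

```python
# Minimal two-node client/server (TNCS-style) messaging over plain TCP sockets.
# This generic stand-in only illustrates the request/response behaviour such a
# pattern wraps; it is not GreatFree's PDA API.
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # hypothetical local endpoint
ready = threading.Event()

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                                   # tell the client the node is up
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()        # receive one message
            conn.sendall(f"ack: {request}".encode())  # respond to the client node

def client(message):
    ready.wait()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(message.encode())
        print("client received:", cli.recv(1024).decode())

t = threading.Thread(target=server)
t.start()
client("hello from the client node")
t.join()
```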

Keywords: primitive APIs, primitive code-level design patterns, generic distributed programming, distributed systems, highly patterned development environment, messaging

Procedia PDF Downloads 191
10826 The Need for Innovation Management in the Context of Integrated Management Systems

Authors: Adela Mariana Vadastreanu, Adrian Bot, Andreea Maier, Dorin Maier

Abstract:

This paper addresses the need for innovation management in the context of an integrated management system already implemented in an organization. The road to success in today’s economic environment is more demanding than ever, and the capacity to adapt to rapid change is what allows companies to survive on the market. Managers struggle daily with increasingly complex problems caused by fierce competition and by the rising demands of customers, and innovation appears to be the solution to these problems. During the last decade almost all companies have been certified against various management systems, such as quality, environmental, and health and safety management systems; furthermore, many companies have implemented an integrated management system (IMS) by integrating two or more of these systems. The question arising today is how to integrate innovation into such integrated management systems, and the challenge is that the development of innovation management systems is still at an early stage. In this paper we study the possibility of integrating innovation requirements into an existing management system, identify innovation performance requirements, and propose recommendations regarding innovation management and its implementation as part of an IMS. The paper thereby lays the basis for a model of integrated management systems that includes innovation as a main component. Organizations are becoming more aware of the importance of integrated management systems, and integrating two or more management systems can bring many advantages. The paper examines various models of management system integration based on ISO 9001, ISO 14001 and OHSAS 18001, highlighting their strengths and weaknesses and creating a basis for the future development of integrated management systems and for their involvement in other processes within the organization, such as innovation management. The increasingly demanding economic context underlines the importance of innovation for organizations, and the paper offers practical suggestions for improving the overall success of the business through a better approach to innovation. Various standards have been developed to certify that organizations meet their requirements, and applying an integrated standards model is shown to be more effective than applying the standards independently. Adopting the integrated version of the standards, however, requires changes at the organizational level, and every such change affects the organization’s activity; the paper therefore also deals with the changes needed to adopt an integrated management system and with whether those changes influence performance. The analysis of the results leads to the conclusion that a necessary step for improving performance is the implementation of innovation within the existing integrated management system.

Keywords: innovation, integrated management systems, innovation management, quality

Procedia PDF Downloads 315
10825 Biometric Recognition Techniques: A Survey

Authors: Shabir Ahmad Sofi, Shubham Aggarwal, Sanyam Singhal, Roohie Naaz

Abstract:

Biometric recognition refers to the automatic recognition of individuals based on feature vectors derived from their physiological and/or behavioral characteristics. Biometric recognition systems should provide reliable personal recognition schemes to either confirm or determine the identity of an individual. These features are used to provide authentication for computer-based security systems. Applications of such systems include computer system security, secure electronic banking, mobile phones, credit cards, secure access to buildings, and health and social services. By using biometrics, a person can be identified based on 'who she/he is' rather than 'what she/he has' (card, token, key) or 'what she/he knows' (password, PIN). In this paper, a brief overview of biometric methods, both unimodal and multimodal, and their advantages and disadvantages will be presented.

Keywords: biometric, DNA, fingerprint, ear, face, retina scan, gait, iris, voice recognition, unimodal biometric, multimodal biometric

Procedia PDF Downloads 756
10824 Identification, Isolation and Characterization of Unknown Degradation Products of Cefprozil Monohydrate by HPTLC

Authors: Vandana T. Gawande, Kailash G. Bothara, Chandani O. Satija

Abstract:

The present research work was aimed at determining the stability of cefprozil monohydrate (CEFZ) under the stress degradation conditions recommended by the International Conference on Harmonization (ICH) guideline Q1A (R2). Forced degradation studies were carried out under hydrolytic, oxidative, photolytic and thermal stress conditions, and the drug was found to be susceptible to degradation under all of them. Separation was carried out using a High Performance Thin Layer Chromatography (HPTLC) system. Aluminum plates pre-coated with silica gel 60F254 were used as the stationary phase. The mobile phase consisted of ethyl acetate: acetone: methanol: water: glacial acetic acid (7.5:2.5:2.5:1.5:0.5 v/v). Densitometric analysis was carried out at 280 nm. The system was found to give a compact spot for cefprozil monohydrate (Rf 0.45). The linear regression analysis showed a good linear relationship in the concentration range 200–5,000 ng/band for cefprozil monohydrate. Percent recovery for the drug was found to be in the range of 98.78–101.24%. The method was found to be reproducible, with a relative standard deviation (%RSD) for intra- and inter-day precision of < 1.5% over the stated concentration range. The method was validated for precision, accuracy, specificity and robustness, and it has been successfully applied to the analysis of the drug in tablet dosage form. Three unknown degradation products formed under the various stress conditions were isolated by preparative HPTLC and characterized by mass spectroscopic studies.

Keywords: cefprozil monohydrate, degradation products, HPTLC, stress study, stability indicating method

Procedia PDF Downloads 299
10823 Optimal Tuning of a Fuzzy Immune PID Parameters to Control a Delayed System

Authors: S. Gherbi, F. Bouchareb

Abstract:

This paper deals with intelligent bio-inspired control strategies and presents a novel approach based on optimal tuning of fuzzy immune PID parameters: a PID controller inspired by the human immune mechanism and combined with fuzzy logic. Such a controller offers more possibilities for dealing with the control difficulties that the delay term introduces in delayed systems. We use an optimization approach to tune the four parameters of the controller in addition to the fuzzy function, and the resulting controller is implemented in a modified Smith predictor structure, which is well known to be the most efficient structure for the control of delayed systems. The application of the presented approach to the control of a three-tank delay system shows good performance and demonstrates the efficiency of the method.
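
A common way to write the immune-inspired part of such a controller is to modulate the proportional gain by a nonlinear suppression term; the sketch below uses that form with a tanh stand-in for the fuzzy function on an assumed first-order-plus-delay plant. The gains, the plant and the surrogate function are illustrative assumptions, not the paper's optimally tuned controller or its Smith predictor implementation.

```python
# Sketch of an immune-inspired PID loop on a first-order-plus-delay plant. The
# immune mechanism is modelled, as is common in the literature, by modulating the
# proportional gain with 1 - eta*f(u, du); tanh stands in for the fuzzy nonlinear
# function f. All numbers are illustrative, not the paper's optimal tuning.
import numpy as np

# plant: y[k+1] = a*y[k] + b*u[k-d]  (discrete first-order process with dead time)
a, b, d = 0.95, 0.05, 10
kp, ki, kd = 1.2, 0.05, 0.5          # assumed PID gains
eta = 0.3                            # assumed immune suppression factor

n = 400
r = np.ones(n)                        # unit-step setpoint
y, u = np.zeros(n), np.zeros(n)
e_prev, e_int = 0.0, 0.0

for k in range(n - 1):
    e = r[k] - y[k]
    e_int += e
    de = e - e_prev
    # immune-inspired modulation: Kp' = Kp * (1 - eta * f(u, du))
    u1 = u[k - 1] if k >= 1 else 0.0
    u2 = u[k - 2] if k >= 2 else 0.0
    kp_immune = kp * (1.0 - eta * np.tanh(u1 * (u1 - u2)))
    u[k] = kp_immune * e + ki * e_int + kd * de
    e_prev = e
    y[k + 1] = a * y[k] + b * (u[k - d] if k >= d else 0.0)

print("final output:", round(y[-1], 3))
```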

Keywords: delayed systems, fuzzy immune PID, optimization, Smith predictor

Procedia PDF Downloads 433
10822 Bartlett Factor Scores in Multiple Linear Regression Equation as a Tool for Estimating Economic Traits in Broilers

Authors: Oluwatosin M. A. Jesuyon

Abstract:

In order to provide a simpler tool that eliminates the age-long problems associated with the traditional index method for the selection of multiple traits in broilers, the Bartlett factor regression equation is proposed as an alternative selection tool. 100 day-old chicks each of the Arbor Acres (AA) and Annak (AN) broiler strains were obtained from two rival hatcheries in Ibadan, Nigeria. These were raised in a deep-litter system in a 56-day feeding trial at the University of Ibadan Teaching and Research Farm, located in south-west tropical Nigeria. Body weight and body dimensions were measured and recorded during the trial period. Eight (8) zoometric measurements, namely live weight (g), abdominal circumference, abdominal length, breast width, leg length, height, wing length and thigh circumference (all in cm), were recorded from 20 randomly chosen birds within each strain, at a fixed time on the first day of each new week, with a 5-kg capacity Camry scale. These records were analyzed and compared in a completely randomized design (CRD) using SPSS analytical software, with the means procedure and with factor scores (FS) entered into a stepwise multiple linear regression (MLR) procedure for the initial live-weight equations. Bartlett factor score (BFS) analysis extracted two factors for each strain, termed the body-length and thigh-meatiness factors for AA and the breast-size and height factors for AN. These derived orthogonal factors assisted in deducing and comparing the trait combinations that best describe body conformation and meatiness in the experimental broilers. The BFS procedure yielded different body conformation traits for the two strains, thus indicating the strains' different economic traits and advantages, and these factors could be useful as selection criteria for improving desired economic traits. The final Bartlett factor regression equations for the prediction of body weight were highly significant, with P < 0.0001, R² of 0.92 and above, VIF of 1.00, and DW of 1.90 and 1.47 for Arbor Acres and Annak, respectively. These factor-score regression equations could be used as a simple and potent selection tool during poultry flock improvement; they could also be used to estimate the selection index of flocks in order to discriminate between strains and to evaluate consumer preference traits in broilers.
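
Bartlett factor scores are the weighted least-squares scores F = (L' Psi^-1 L)^-1 L' Psi^-1 (x - mean). A hedged sketch of that computation followed by a factor-score regression for live weight is given below; the loadings, uniquenesses and measurements are synthetic stand-ins for the study's factor analysis output.

```python
# Bartlett (weighted least squares) factor scores followed by a factor-score
# regression for live weight. Loadings, uniquenesses and measurements are synthetic
# placeholders; in the study they would come from a factor analysis of the eight
# zoometric traits.
import numpy as np

rng = np.random.default_rng(1)
n_birds, n_traits, n_factors = 20, 8, 2

L = rng.uniform(0.3, 0.9, size=(n_traits, n_factors))    # assumed factor loadings
psi = np.diag(rng.uniform(0.2, 0.5, size=n_traits))      # assumed unique variances
X = rng.normal(size=(n_birds, n_factors)) @ L.T + rng.normal(0, 0.4, (n_birds, n_traits))
Xc = X - X.mean(axis=0)                                   # centre the measurements

# Bartlett scores: F = (L' Psi^-1 L)^-1 L' Psi^-1 (x - mean)
psi_inv = np.linalg.inv(psi)
W = np.linalg.inv(L.T @ psi_inv @ L) @ L.T @ psi_inv
F = Xc @ W.T                                              # n_birds x n_factors scores

# factor-score regression: live weight on the two orthogonal factors
weight = 1500 + 120 * F[:, 0] + 60 * F[:, 1] + rng.normal(0, 20, n_birds)
A = np.column_stack([np.ones(n_birds), F])
coef, *_ = np.linalg.lstsq(A, weight, rcond=None)
print("intercept and factor coefficients:", np.round(coef, 2))
```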

Keywords: alternative selection tool, Bartlet factor regression model, consumer preference trait, linear and body measurements, live body weight

Procedia PDF Downloads 203
10821 Nonlinear Optics of Dirac Fermion Systems

Authors: Vipin Kumar, Girish S. Setlur

Abstract:

Graphene has been recognized as a promising 2D material with many new properties. However, pristine graphene is gapless, which hinders its direct application in graphene-based semiconducting devices: it is a zero-gap, linearly dispersing semiconductor. Massless charge carriers (quasi-particles) in graphene obey the relativistic Dirac equation, and these Dirac fermions show very unusual electronic, optical and transport properties. Graphene is analogous to two-level atomic systems and to conventional semiconductors, so we may expect graphene-based systems to exhibit phenomena that are well known in those systems. Rabi oscillation is a nonlinear optical phenomenon well known in the context of two-level atomic systems and also in conventional semiconductors; it is the periodic exchange of energy between the system of interest and the electromagnetic field. The present work describes the phenomenon of Rabi oscillations in graphene-based systems. Rabi oscillations have already been described theoretically and experimentally in the extensive literature on this topic, which relies on an approximation known as the rotating wave approximation (RWA), well known in studies of two-level systems. The RWA is valid only near conventional resonance (small detuning), when the frequency of the external field is nearly equal to the particle-hole excitation frequency, and the Rabi frequency goes through a minimum close to conventional resonance as a function of detuning. Far from conventional resonance the RWA becomes rather less useful, and some other technique is needed to describe the phenomenon of Rabi oscillation. In conventional systems there is no second minimum; the only minimum is at conventional resonance. In graphene, however, we find anomalous Rabi oscillations far from conventional resonance, where the Rabi frequency goes through a minimum that is much smaller than the conventional Rabi frequency. This is known as the anomalous Rabi frequency and is unique to graphene systems; we have shown that it is attributable to the pseudo-spin degree of freedom in graphene. A new technique, an alternative to the RWA called the asymptotic RWA (ARWA), has been invoked by our group to discuss the phenomenon of Rabi oscillation. The experimentally accessible current density shows different types of threshold behaviour in the frequency domain close to the anomalous Rabi frequency, depending on the system chosen. For single-layer graphene the exponent at threshold is equal to 1/2, while for bilayer graphene it is computed to be equal to 1. Bilayer graphene shows harmonic (anomalous) resonances that are absent in single-layer graphene. The effect of asymmetry and trigonal warping (a weak direct inter-layer hopping in bilayer graphene) on these oscillations is also studied. Asymmetry has a remarkable effect only on the anomalous Rabi oscillations, whereas the Rabi frequency near conventional resonance is not significantly affected by the asymmetry parameter. In the presence of asymmetry, these graphene systems show Rabi-like oscillations (offset oscillations) even for vanishingly small applied field strengths (less than the gap parameter), and the frequency of the offset oscillations may be identified with the asymmetry parameter.
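
For orientation, a minimal sketch of conventional Rabi oscillations in a generic driven two-level system, integrated with the full cosine drive rather than the RWA, is given below; it does not model graphene's pseudo-spin or the anomalous Rabi oscillations obtained with the asymptotic RWA, and the parameters are illustrative.

```python
# Conventional Rabi oscillations in a generic driven two-level system, integrated
# with the full (non-RWA) coupling term. This only illustrates the textbook effect
# the abstract builds on, not the graphene-specific anomalous Rabi oscillations.
import numpy as np
from scipy.integrate import solve_ivp

w0 = 1.0          # transition frequency (arbitrary units, hbar = 1)
w = 1.0           # drive frequency (resonant case)
omega_R = 0.05    # Rabi frequency ~ dipole coupling * field amplitude

def schrodinger(t, y):
    # y = [Re(c_g), Im(c_g), Re(c_e), Im(c_e)] for ground/excited amplitudes
    cg = y[0] + 1j * y[1]
    ce = y[2] + 1j * y[3]
    drive = omega_R * np.cos(w * t)          # full cosine drive, no RWA
    dcg = -1j * (drive * ce)
    dce = -1j * (w0 * ce + drive * cg)
    return [dcg.real, dcg.imag, dce.real, dce.imag]

t_end = 4 * np.pi / omega_R                  # roughly one Rabi cycle of the envelope
sol = solve_ivp(schrodinger, (0, t_end), [1, 0, 0, 0], max_step=0.05, dense_output=True)
t = np.linspace(0, t_end, 2000)
states = sol.sol(t)
ce = states[2] + 1j * states[3]
print("peak excited-state population:", round(float(np.max(np.abs(ce) ** 2)), 3))
```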

Keywords: graphene, Bilayer graphene, Rabi oscillations, Dirac fermion systems

Procedia PDF Downloads 298
10820 A Method for Reconfigurable Manufacturing Systems Customization Measurement

Authors: Jesus Kombaya, Nadia Hamani, Lyes Kermad

Abstract:

Preserving a company’s place on the market under such aggressive competition is becoming a survival challenge for manufacturers. In this context, the survivors are only those who succeed in satisfying their customers’ needs as quickly as possible. The production system should be endowed with a certain level of flexibility that eliminates or reduces its rigidity, so as to facilitate the conversion and/or change of the system’s features to produce different products. It is therefore essential to guarantee quality, speed and flexibility to survive in this competition. In the literature, this adaptability is referred to as the notion of "change". Indeed, companies are trying to establish more flexible and agile manufacturing systems through various reconfiguration actions. Reconfiguration contributes to extending the manufacturing system life cycle by modifying its physical, organizational and computational characteristics according to changing market conditions. Reconfigurability is characterized by six key elements: modularity, integrability, diagnosability, convertibility, scalability and customization. To control their production systems, it is essential for manufacturers to make good use of this capability and to ensure that the system has an optimal and adapted level of reconfigurability that allows it to produce in accordance with the set requirements. This paper develops a measure of the customization of reconfigurable production systems. Such a measure does not only concern the production system but also the product design and the process design, and it can therefore serve as a guide for the customization of manufactured products. A case study is presented to show the use of the proposed approach.

Keywords: reconfigurable manufacturing systems, customization, measure, flexibility

Procedia PDF Downloads 128
10819 Evaluation of Different Cropping Systems under Organic, Inorganic and Integrated Production Systems

Authors: Sidramappa Gaddnakeri, Lokanath Malligawad

Abstract:

Research on the production technology of any individual crop, commodity or breed has not, by itself, brought sustainability or stability to crop production; the sustainability of a system over the years depends on the maintenance of soil health. An organic production system, which includes the use of organic manures, biofertilizers and green manuring for nutrient supply and biopesticides for plant protection, helps sustain productivity even under adverse climatic conditions. This study was initiated to evaluate the performance of different cropping systems under organic, inorganic and integrated production systems at the Institute of Organic Farming, University of Agricultural Sciences, Dharwad (Karnataka, India), under the ICAR Network Project on Organic Farming. The trial was conducted for four years (2013-14 to 2016-17) on a fixed site. Five cropping systems, viz., sequence cropping of cowpea–safflower, greengram–rabi sorghum and maize–bengalgram, sole cropping of pigeonpea, and intercropping of groundnut + cotton, were evaluated under six nutrient management practices. The nutrient management practices were NM1 (100% organic farming: organic manures equivalent to 100% N for cereals/cotton or 100% P2O5 for legumes), NM2 (75% organic farming: organic manures equivalent to 75% N for cereals/cotton or 100% P2O5 for legumes, plus cow urine and vermi-wash application), NM3 (integrated farming: 50% organic + 50% inorganic nutrients), NM4 (integrated farming: 75% organic + 25% inorganic nutrients), NM5 (100% inorganic farming: recommended dose of inorganic fertilizers) and NM6 (recommended dose of inorganic fertilizers + recommended rate of farmyard manure, FYM). Among the cropping systems evaluated, the groundnut + hybrid cotton (2:1) intercropping system was found to be more remunerative than the sole pigeonpea, greengram–sorghum, maize–chickpea and cowpea–safflower systems, irrespective of the production system. Production practices involving the application of recommended rates of fertilizers plus recommended rates of organic manures (farmyard manure) produced higher net monetary returns and a higher B:C ratio than the integrated production systems (50% organic + 50% inorganic and 75% organic + 25% inorganic) and the organic production systems. The two organic production systems, viz., 100% organic (organic manures equivalent to 100% N for cereals/cotton or 100% P2O5 for legumes) and 75% organic (organic manures equivalent to 75% N for cereals or 100% P2O5 for legumes, plus cow urine and vermi-wash application), were found to be on par. Further, the integrated production system involving the application of both organic manures and inorganic fertilizers was found to be more beneficial than the organic production systems.

Keywords: cropping systems, production systems, cowpea, safflower, greengram, pigeonpea, groundnut, cotton

Procedia PDF Downloads 199
10818 Modelling and Simulation of the Freezing Systems and Heat Pumps Using Unisim® Design

Authors: C. Patrascioiu

Abstract:

The paper describes the modeling and simulation of processes in the heat pump domain. The main objective of the study is the use of a heat pump in propene–propane distillation processes. The modeling and simulation instrument is the Unisim® Design simulator. The paper is structured in three parts: an overview of gas compression, the modeling and simulation of freezing systems, and the modeling and simulation of heat pumps. For each of these systems, the Unisim® Design simulation diagrams, the input–output system structure and the numerical results are presented. Future studies will consider the modeling and simulation of the propene–propane distillation process with a heat pump.

Keywords: distillation, heat pump, simulation, unisim design

Procedia PDF Downloads 363
10817 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as enough processors, a number depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we derive the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also study secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
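
A toy sketch of the linear-precoding idea follows: the row blocks of X are encoded with a Vandermonde (MDS-style) code so that W = XY can be recovered from any k of the n worker products; the privacy/PIR layer is omitted and this is not the paper's PSGPD/SGPD construction.

```python
# Toy straggler-tolerant coded matrix multiplication: row blocks of X are linearly
# precoded with a Vandermonde generator, each "worker" multiplies one encoded block
# by Y, and W = XY is recovered from ANY k of the n results. Privacy is ignored.
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 5                            # k data blocks, n workers (tolerates n - k stragglers)
p, q, r = 4, 6, 5                      # each block of X is p x q, Y is q x r
X_blocks = [rng.normal(size=(p, q)) for _ in range(k)]
Y = rng.normal(size=(q, r))

# Vandermonde generator: any k of its n rows are linearly independent
alphas = np.arange(1, n + 1, dtype=float)
G = np.vander(alphas, k, increasing=True)          # shape (n, k)

# worker i computes (sum_j G[i, j] * X_j) @ Y
worker_results = [sum(G[i, j] * X_blocks[j] for j in range(k)) @ Y for i in range(n)]

# pretend workers 1 and 3 straggle; decode from the remaining k results
finished = [0, 2, 4]
Gs = G[finished, :]                                # k x k, invertible
stacked = np.stack([worker_results[i] for i in finished])       # (k, p, r)
decoded = np.tensordot(np.linalg.inv(Gs), stacked, axes=1)      # recovers each X_j @ Y
W_coded = np.vstack(list(decoded))
W_exact = np.vstack([B @ Y for B in X_blocks])
print("max reconstruction error:", np.max(np.abs(W_coded - W_exact)))
```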

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 122
10816 Model-Based Automotive Partitioning and Mapping for Embedded Multicore Systems

Authors: Robert Höttger, Lukas Krawczyk, Burkhard Igel

Abstract:

This paper introduces novel approaches to partitioning and mapping for model-based embedded multicore system engineering and further discusses their benefits, industrial relevance and features in common with existing approaches. To assess and evaluate the results, both approaches have been applied to a real industrial application as well as to various prototypical demonstrator applications that were developed and implemented for different purposes. Evaluations show that, by using the AMALTHEA platform and the implemented approaches, such applications improve significantly in terms of performance, energy efficiency, meeting timing constraints and handling maintenance issues. Furthermore, the model-based design provides an open, expandable, platform-independent and scalable exchange format between OEMs, suppliers and developers at different levels. Our proposed mechanisms provide meaningful multicore system utilization, since load balancing by means of partitioning and mapping is performed effectively with regard to the modeled systems, including hardware, software, operating system, scheduling, constraints, configuration and further data.
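
As a minimal stand-in for the load-balancing objective, a greedy longest-processing-time mapping of runnables onto cores is sketched below; the runnable costs and core count are invented, and the actual AMALTHEA-based partitioning and mapping use much richer model information.

```python
# Minimal load-balancing sketch: longest-processing-time (LPT) greedy mapping of
# runnables onto cores. A generic stand-in for model-driven partitioning/mapping;
# the runnable execution times (microseconds) and core count are invented.
import heapq

runnable_us = {"r1": 420, "r2": 300, "r3": 280, "r4": 150, "r5": 140, "r6": 90, "r7": 60}
n_cores = 3

# min-heap of (accumulated load, core id, assigned runnables)
cores = [(0, c, []) for c in range(n_cores)]
heapq.heapify(cores)

for name, cost in sorted(runnable_us.items(), key=lambda kv: kv[1], reverse=True):
    load, core_id, assigned = heapq.heappop(cores)   # least-loaded core so far
    heapq.heappush(cores, (load + cost, core_id, assigned + [name]))

for load, core_id, assigned in sorted(cores, key=lambda c: c[1]):
    print(f"core {core_id}: {load} us -> {assigned}")
```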

Keywords: partitioning, mapping, distributed systems, scheduling, embedded multicore systems, model-based, system analysis

Procedia PDF Downloads 620
10815 The Interoperability between CNC Machine Tools and Robot Handling Systems Based on an Object-Oriented Framework

Authors: Pouyan Jahanbin, Mahmoud Houshmand, Omid Fatahi Valilai

Abstract:

A flexible manufacturing system (FMS) is a manufacturing system having the capability of handling the variations of products features that is the result of ever-changing customer demands. The flexibility of the manufacturing systems help to utilize the resources in a more effective manner. However, the control of such systems would be complicated and challenging. FMS needs CNC machines and robots and other resources for establishing the flexibility and enhancing the efficiency of the whole system. Also it needs to integrate the resources to reach required efficiency and flexibility. In order to reach this goal, an integrator framework is proposed in which the machining data of CNC machine tools is received through a STEP-NC file. The interoperability of the system is achieved by the information system. This paper proposes an information system that its data model is designed based on object oriented approach and is implemented through a knowledge-based system. The framework is connected to a database which is filled with robot’s control commands. The framework programs the robots by rules embedded in its knowledge based system. It also controls the interactions of CNC machine tools for loading and unloading actions by robot. As a result, the proposed framework improves the integration of manufacturing resources in Flexible Manufacturing Systems.

Keywords: CNC machine tools, industrial robots, knowledge-based systems, manufacturing recourses integration, flexible manufacturing system (FMS), object-oriented data model

Procedia PDF Downloads 455
10814 The Emancipatory Methodological Approach to the Organizational Problems Management

Authors: Slavica P. Petrovic

Abstract:

One of the key dimensions of management problems in organizations refers to the relations between stakeholders. The management problems that are characterized by conflict and coercion, in which participants do not agree on the ends and means, in which different groups, i.e., individuals, strive to – using the power they have – impose on others their favoured strategy and decisions represent the relevant research subject. Creatively managing the coercive problems in organizations, in which the sources of power can be identified, implies the emancipatory paradigm and the use of corresponding systems methodology. The main research aim is to critically reassess the theoretical foundations and methodological and methodical development of Critical Systems Heuristics (CSH) – as a valid representative of the emancipatory paradigm – in order to determine the conditions, ways, and achievements of its application in managing the coercive problems in organizations. The basic hypothesis is that CSH, as the emancipatory methodology, given its own theoretical foundations and methodological-methodical development, can be employed in a scientifically based and practically useful manner in creative addressing the coercive problems. The scientific instrumentarium corresponding to this research aim is critical systems thinking with its three key commitments to: a) Critical awareness of the strengths and weaknesses of each research instrument (theory, methodology, method, technique, model) for structuring the problem situations in organizations, b) Improvement of managing the coercive problems in organizations, and c) Pluralism – respect the different perceptions and interpretations of problem situations, and enable the combined use of research instruments. The relevant research result is that CSH – considering its theoretical foundations, methodological and methodical development – enables to reveal the normative content of the proposed or existing designs of organizational systems. Accordingly, it can be concluded that through the use of critically heuristic categories and dialectical debate between those involved and those affected by the designs, but who are not included in designing organizational systems, CSH endeavours to – in the application – support the process of improving position of all stakeholders.

Keywords: coercion and conflict in organizations, creative management, critical systems heuristics, the emancipatory systems methodology

Procedia PDF Downloads 442
10813 Study and Solving High Complex Non-Linear Differential Equations Applied in the Engineering Field by Analytical New Approach AGM

Authors: Mohammadreza Akbari, Sara Akbari, Davood Domiri Ganji, Pooya Solimani, Reza Khalili

Abstract:

In this paper, three complicated nonlinear differential equations (PDEs and ODEs) from engineering and non-vibration fields are analyzed and solved completely by a new method that we have named the Akbari-Ganji Method (AGM). As previously published papers show, investigating this kind of equation is a very hard task, and the solutions obtained by earlier approaches are not always accurate and reliable; this becomes apparent when the achieved solutions are compared with a numerical method. Based on the comparisons made between the solutions obtained by AGM and the numerical method (fourth-order Runge-Kutta), it can be stated that AGM can be successfully applied to various differential equations, particularly difficult ones. The advantages of the method over other approaches can be summarized as follows: the results indicate that the approach is very effective and easy to use, and it can therefore be applied to other kinds of nonlinear equations, not only in vibrations but also in a wide variety of scientific fields such as fluid mechanics, solid mechanics and chemical engineering, yielding solutions of high precision. In view of these points, the process of solving nonlinear equations with AGM is very easy and convenient in comparison with other methods. A further important point explored in this paper is that, for trigonometric and exponential terms in the differential equation, AGM does not need a Taylor series expansion to enhance the precision of the result.
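
The numerical baseline named above is the fourth-order Runge-Kutta method; a minimal RK4 sketch applied to a representative nonlinear oscillator (chosen here purely for illustration, not one of the paper's three equations) is given below.

```python
# Minimal fourth-order Runge-Kutta (RK4) integrator, the numerical baseline that the
# AGM solutions are compared against. The damped Duffing-type oscillator used here
# is purely illustrative and is not one of the equations solved in the paper.
import numpy as np

def f(t, y):
    x, v = y                                   # x'' + 0.2 x' + x + x^3 = 0
    return np.array([v, -0.2 * v - x - x ** 3])

def rk4(f, y0, t0, t1, n_steps):
    h = (t1 - t0) / n_steps
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

print("state at t = 20:", np.round(rk4(f, [1.0, 0.0], 0.0, 20.0, 4000), 6))
```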

Keywords: new method (AGM), complex non-linear partial differential equations, damping ratio, energy lost per cycle

Procedia PDF Downloads 469
10812 Application of Fourier Series Based Learning Control on Mechatronic Systems

Authors: Sandra Baßler, Peter Dünow, Mathias Marquardt

Abstract:

A Fourier series based learning control (FSBLC) algorithm for tracking trajectories of mechanical systems with unknown nonlinearities is presented. Two processes are introduced to which the FSBLC with a PD controller is applied: a simplified service robot capable of climbing stairs thanks to special wheels, and a propeller-driven pendulum with nearly the same control requirements. In addition to investigating how the feedforward for the desired trajectories is learned, some considerations on implementing such an algorithm on low-cost microcontroller hardware are given. Simulations of the service robot as well as practical experiments on the pendulum show the capability of the FSBLC algorithm to improve the control behavior of such mechanical systems in repetitive tasks.
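
A simplified trial-to-trial sketch of such a scheme follows: the feedforward is a truncated Fourier series whose coefficients are corrected after each repetition from the tracking error projected onto the same basis, alongside a PD loop; the toy plant, gains and update rate are assumptions for illustration, not the authors' implementation.

```python
# Simplified Fourier-series-based learning control: a truncated Fourier series
# feedforward is updated every trial from the tracking error projected onto the
# basis, on top of a PD feedback loop. Plant, gains and update rate are invented.
import numpy as np

N, T = 200, 2.0                                   # samples per trial, trial duration (s)
t = np.linspace(0, T, N, endpoint=False)
ref = 0.5 * np.sin(2 * np.pi * t / T)             # periodic desired trajectory
n_harm, gain = 5, 2.0                             # Fourier harmonics, learning gain
a = np.zeros(n_harm); b = np.zeros(n_harm)        # feedforward Fourier coefficients

def basis(k):
    return np.cos(2 * np.pi * (k + 1) * t / T), np.sin(2 * np.pi * (k + 1) * t / T)

def run_trial(u_ff):
    """Toy first-order plant with an unknown nonlinearity, under PD feedback + feedforward."""
    y = np.zeros(N); e_prev = 0.0
    for i in range(N - 1):
        e = ref[i] - y[i]
        u = 8.0 * e + 0.5 * (e - e_prev) / (T / N) + u_ff[i]
        y[i + 1] = y[i] + (T / N) * (-2.0 * y[i] + u + 0.8 * np.sin(5 * y[i]))
        e_prev = e
    return ref - y

for trial in range(15):
    u_ff = sum(a[k] * basis(k)[0] + b[k] * basis(k)[1] for k in range(n_harm))
    e = run_trial(u_ff)
    for k in range(n_harm):
        c, s = basis(k)
        a[k] += gain * (2.0 / N) * np.dot(e, c)   # project the error onto each harmonic
        b[k] += gain * (2.0 / N) * np.dot(e, s)
    print(f"trial {trial + 1:2d}: RMS tracking error = {np.sqrt(np.mean(e ** 2)):.4f}")
```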

Keywords: climbing stairs, FSBLC, ILC, service robot

Procedia PDF Downloads 314
10811 Evaluation of the Photo Neutron Contamination inside and outside of Treatment Room for High Energy Elekta Synergy® Linear Accelerator

Authors: Sharib Ahmed, Mansoor Rafi, Kamran Ali Awan, Faraz Khaskhali, Amir Maqbool, Altaf Hashmi

Abstract:

Medical linear accelerators (LINACs) used in radiotherapy produce undesired neutrons when they are operated at energies above 8 MeV, in both electron and photon modes. Neutrons are produced by high-energy photons and electrons through electronuclear (e, n) and photonuclear giant dipole resonance (GDR) reactions. These reactions occur when the incoming photons or electrons strike the various materials of the target, flattening filter, collimators and other shielding components in the LINAC structure. The neutrons may reach the patient directly, or they may interact with the surrounding materials until they become thermalized. A study was set up to examine the effect of different parameters on the production of neutrons around the treatment room by photonuclear reactions induced by photons above ~8 MeV. A commercially available neutron detector (Ludlum Model 42-31H) was used for the detection of thermal and fast neutrons (0.025 eV to approximately 12 MeV) inside and outside the treatment room. Measurements were performed for different field sizes at 100 cm source-to-surface distance (SSD) of the detector, at different distances from the isocenter, and at the primary and secondary walls. Further measurements were performed at the door and at the treatment console, addressing the radiation safety of the therapists who must walk in and out of the room between treatments. Exposures were delivered by Elekta Synergy® linear accelerators at two energies (10 MV and 18 MV) for 200 MU at a dose rate of 600 MU per minute. The results indicate that neutron doses at 100 cm SSD depend on accelerator characteristics, i.e., the jaw settings, since the jaws are made of high-atomic-number material and therefore provide significant photon interactions that produce neutrons, while doses at larger distances from the isocenter are strongly influenced by the treatment room geometry, with backscattering from the walls causing greater doses than at 100 cm from the isocenter. In the treatment room, the ambient dose equivalent due to photons produced during the decay of activation nuclei varies from 4.22 mSv.h−1 to 13.2 mSv.h−1 (at the isocenter), 6.21 mSv.h−1 to 29.2 mSv.h−1 (primary wall) and 8.73 mSv.h−1 to 37.2 mSv.h−1 (secondary wall) for 10 and 18 MV, respectively. The ambient dose equivalent for neutrons is 5 μSv.h−1 to 2 μSv.h−1 at the door and 2 μSv.h−1 to 0 μSv.h−1 at the treatment console for 10 and 18 MV, respectively, which shows that a 2 m thick and 5 m long concrete maze provides sufficient shielding against neutrons at the door as well as at the treatment console for 10 and 18 MV photons.

Keywords: equivalent doses, neutron contamination, neutron detector, photon energy

Procedia PDF Downloads 449
10810 Generalized Linear Modeling of HCV Infection Among Medical Waste Handlers in Sidama Region, Ethiopia

Authors: Birhanu Betela Warssamo

Abstract:

Background: There is limited evidence on the prevalence and risk factors for hepatitis C virus (HCV) infection among waste handlers in the Sidama region, Ethiopia; however, this knowledge is necessary for the effective prevention of HCV infection in the region. Methods: A cross-sectional study was conducted among randomly selected waste collectors from October 2021 to 30 July 2022 in different public hospitals in the Sidama region of Ethiopia. Serum samples were collected from participants and screened for anti-HCV using a rapid immunochromatography assay. Socio-demographic and risk factor information of waste handlers was gathered by pretested and well-structured questionnaires. The generalized linear model (GLM) was conducted using R software, and P-value < 0.05 was declared statistically significant. Results: From a total of 282 participating waste handlers, 16 (5.7%) (95% CI, 4.2 – 8.7) were infected with the hepatitis C virus. The educational status of waste handlers was the significant demographic variable that was associated with the hepatitis C virus (AOR = 0.055; 95% CI = 0.012 – 0.248; P = 0.000). More married waste handlers, 12 (75%), were HCV positive than unmarried, 4 (25%) and married waste handlers were 2.051 times (OR = 2.051, 95%CI = 0.644 –6.527, P = 0.295) more prone to HCV infection, compared to unmarried, which was statistically insignificant. The GLM showed that exposure to blood (OR = 8.26; 95% CI = 1.878–10.925; P = 0.037), multiple sexual partners (AOR = 3.63; 95% CI = 2.751–5.808; P = 0.001), sharp injury (AOR = 2.77; 95% CI = 2.327–3.173; P = 0.036), not using PPE (AOR = 0.77; 95% CI = 0.032–0.937; P = 0.001), contact with jaundiced patient (AOR = 3.65; 95% CI = 1.093–4.368; P = 0 .0048) and unprotected sex (AOR = 11.91; 95% CI = 5.847–16.854; P = 0.001) remained statistically significantly associated with HCV positivity. Conclusions: The study revealed that there was a high prevalence of hepatitis C virus infection among waste handlers in the Sidama region, Ethiopia. This demonstrated that there is an urgent need to increase preventative efforts and strategic policy orientations to control the spread of the hepatitis C virus.
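
A sketch of the kind of binomial GLM described above, fitted with statsmodels on synthetic stand-ins for the questionnaire variables, is shown below; the study itself used R, and the predictors and data here are invented for illustration only.

```python
# Binomial GLM (logistic link) on simulated stand-ins for the questionnaire
# variables; nothing here is study data, and the coefficients are arbitrary.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 282
X = np.column_stack([
    rng.integers(0, 2, n),   # exposure to blood
    rng.integers(0, 2, n),   # sharp injury
    rng.integers(0, 2, n),   # consistent PPE use
])
logit = -3.0 + 1.8 * X[:, 0] + 1.0 * X[:, 1] - 1.2 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))      # simulated HCV status

model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial())
result = model.fit()
print(result.summary())
print("odds ratios:", np.round(np.exp(result.params), 2))
```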

Keywords: Hepatitis C virus, risk factors, waste handlers, prevalence, Sidama Ethiopia

Procedia PDF Downloads 14
10809 Hybrid Artificial Bee Colony and Least Squares Method for Rule-Based Systems Learning

Authors: Ahcene Habbi, Yassine Boudouaoui

Abstract:

This paper deals with the problem of automatic rule generation for fuzzy systems design. The proposed approach is based on hybrid artificial bee colony (ABC) optimization and weighted least squares (LS) method and aims to find the structure and parameters of fuzzy systems simultaneously. More precisely, two ABC based fuzzy modeling strategies are presented and compared. The first strategy uses global optimization to learn fuzzy models, the second one hybridizes ABC and weighted least squares estimate method. The performances of the proposed ABC and ABC-LS fuzzy modeling strategies are evaluated on complex modeling problems and compared to other advanced modeling methods.
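
A sketch of the weighted least-squares step such hybrid schemes typically use to estimate TSK rule consequents, once the antecedent memberships have been fixed (here by hand rather than by the ABC search), is given below; the rule centres, widths and data are invented.

```python
# Weighted least-squares estimation of first-order TSK rule consequents, with the
# Gaussian antecedents fixed by hand instead of by the ABC search. Rule centres,
# widths and the target function are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-2, 2, 200)
y = np.sin(2 * x) + 0.05 * rng.normal(size=x.size)   # data to be modelled

centres = np.array([-1.5, 0.0, 1.5])                 # assumed Gaussian antecedents
sigma = 0.8
firing = np.exp(-0.5 * ((x[:, None] - centres[None, :]) / sigma) ** 2)

theta = []
for r in range(centres.size):
    W = np.diag(firing[:, r])                        # weight each sample by rule activation
    Phi = np.column_stack([np.ones_like(x), x])      # consequent y_r = a_r + b_r * x
    theta_r = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ y)
    theta.append(theta_r)
theta = np.array(theta)

# weighted-average defuzzification of the rule outputs
norm_firing = firing / firing.sum(axis=1, keepdims=True)
y_hat = np.sum(norm_firing * (theta[:, 0] + np.outer(x, theta[:, 1])), axis=1)
print("consequent parameters per rule:\n", np.round(theta, 3))
print("RMSE of the fuzzy model:", round(float(np.sqrt(np.mean((y - y_hat) ** 2))), 4))
```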

Keywords: automatic design, learning, fuzzy rules, hybrid, swarm optimization

Procedia PDF Downloads 437
10808 Laboratory Findings as Predictors of St2 and NT-Probnp Elevations in Heart Failure Clinic, National Cardiovascular Centre Harapan Kita, Indonesia

Authors: B. B. Siswanto, A. Halimi, K. M. H. J. Tandayu, C. Abdillah, F. Nanda , E. Chandra

Abstract:

Nowadays, modern cardiac biomarkers such as ST2 and NT-proBNP play important roles in predicting morbidity and mortality in heart failure patients. Abnormalities of serum electrolytes, sepsis or infection, and deteriorating renal function worsen the condition of patients with heart failure. It is therefore intriguing to know whether cardiac biomarker elevations are affected by laboratory findings in heart failure patients. We recruited 65 patients from the heart failure clinic at NCVC Harapan Kita in 2014-2015. All of them consented to laboratory examination, including cardiac biomarkers. The findings were recorded in our Research and Development Centre and analyzed using linear regression to determine whether there is a relationship between laboratory findings (sodium, potassium, creatinine, and leukocytes) and ST2 or NT-proBNP. Of the 65 patients, 26.9% are female and 73.1% are male; 69.4% were classified as NYHA I-II and 31.6% as NYHA III-IV. The mean age is 55.7±11.4 years; mean sodium level 136.1±6.5 mmol/l; mean potassium level 4.7±1.9 mmol/l; mean leukocyte count 9184.7±3622.4 /ul; mean creatinine level 1.2±0.5 mg/dl. From the linear regression analysis, the relationships between NT-proBNP and sodium level (p<0.001) and leukocyte count (p=0.002) are significant, while the relationships between NT-proBNP and potassium level (p=0.05) and creatinine level (p=0.534) are not. The relationships between ST2 and sodium level (p=0.501), potassium level (p=0.76), leukocyte count (p=0.897), and creatinine level (p=0.817) are not significant. In conclusion, laboratory findings are more sensitive in predicting NT-proBNP elevation than ST2 elevation. Larger studies are needed to confirm that the correlation of NT-proBNP with laboratory findings is superior to that of ST2.

Keywords: heart failure, laboratory, NT-proBNP, ST2

Procedia PDF Downloads 340
10807 Timetabling for Interconnected LRT Lines: A Package Solution Based on a Real-world Case

Authors: Huazhen Lin, Ruihua Xu, Zhibin Jiang

Abstract:

In this real-world case, timetabling the LRT network as a whole is rather challenging for the operator: a timetable has to be created manually that avoids various route conflicts while satisfying a given interval and a given number of rolling stock units, and the outcome is not satisfactory. The operator therefore adopts a computerised timetabling tool, the Train Plan Maker (TPM), to cope with this problem. However, with the various constraints in the dual-line network, it is still difficult to find an adequate pairing of turnback time, interval and fleet size, which requires extra manual intervention. Aiming at these problems, a one-off timetabling model is presented in this paper to simplify the timetabling procedure. Before the timetabling starts, the paper shows how the dual-line system, with a ring and several branches, is turned into a simpler structure. A non-linear programming model is then presented in two stages. In the first stage, the model sets a series of constraints to calculate a proper timing for coordinating the two lines by adjusting the turnback times at the termini. Then, based on the result of the first stage, the model introduces a series of inequality constraints to avoid various route conflicts. With this model, an analysis is conducted to reveal the relation between the ratio of trains in different directions and the attainable minimum interval, showing that the more imbalanced the ratio is, the harder it becomes to provide frequent service under such strict constraints.

Keywords: light rail transit (LRT), non-linear programming, railway timetabling, timetable coordination

Procedia PDF Downloads 87
10806 A Neurofeedback Learning Model Using Time-Frequency Analysis for Volleyball Performance Enhancement

Authors: Hamed Yousefi, Farnaz Mohammadi, Niloufar Mirian, Navid Amini

Abstract:

Investigating possible capacities of visual functions where adapted mechanisms can enhance the capability of sports trainees is a promising area of research, not only from the cognitive viewpoint but also in terms of unlimited applications in sports training. In this paper, the visual evoked potential (VEP) and event-related potential (ERP) signals of amateur and trained volleyball players in a pilot study were processed. Two groups of amateur and trained subjects are asked to imagine themselves in the state of receiving a ball while they are shown a simulated volleyball field. The proposed method is based on a set of time-frequency features using algorithms such as Gabor filter, continuous wavelet transform, and a multi-stage wavelet decomposition that are extracted from VEP signals that can be indicative of being amateur or trained. The linear discriminant classifier achieves the accuracy, sensitivity, and specificity of 100% when the average of the repetitions of the signal corresponding to the task is used. The main purpose of this study is to investigate the feasibility of a fast, robust, and reliable feature/model determination as a neurofeedback parameter to be utilized for improving the volleyball players’ performance. The proposed measure has potential applications in brain-computer interface technology where a real-time biomarker is needed.
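
A compressed stand-in for the feature/classification part of this pipeline, multi-stage wavelet decomposition features fed to a linear discriminant classifier, is sketched below; the Gabor and CWT branches are omitted and the "amateur versus trained" epochs are simulated, not study recordings.

```python
# Multi-stage wavelet decomposition features fed to a linear discriminant classifier.
# The epochs are simulated noise with a small class-dependent evoked component;
# nothing here is study data, and the Gabor/CWT feature branches are omitted.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
fs, n_epochs = 256, 40

def simulate_epoch(trained):
    t = np.arange(fs) / fs
    amp = 1.5 if trained else 1.0                        # class-dependent evoked component
    return amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, fs)

def wavelet_features(epoch):
    coeffs = pywt.wavedec(epoch, "db4", level=4)         # multi-stage decomposition
    return [np.log(np.var(c) + 1e-12) for c in coeffs]   # log-variance per sub-band

X = np.array([wavelet_features(simulate_epoch(trained=i % 2 == 0)) for i in range(n_epochs)])
y = np.array([i % 2 for i in range(n_epochs)])

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.round(2), "mean:", scores.mean().round(2))
```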

Keywords: visual evoked potential, time-frequency feature extraction, short-time Fourier transform, event-related spectrum potential classification, linear discriminant analysis

Procedia PDF Downloads 138
10805 Comparing Double-Stranded RNA Uptake Mechanisms in Dipteran and Lepidopteran Cell Lines

Authors: Nazanin Amanat, Alison Tayler, Steve Whyard

Abstract:

While chemical insecticides effectively control many insect pests, they also harm many non-target species. Double-stranded RNA (dsRNA) pesticides, in contrast, can be designed to target unique gene sequences and thus act in a species-specific manner. DsRNA insecticides do not, however, work equally well for all insects, and for some species that are considered refractory to dsRNA, a primary factor affecting efficacy is the relative ease by which dsRNA can enter a target cell’s cytoplasm. In this study, we are examining how different structured dsRNAs (linear, hairpin, and paperclip) can enter mosquito and lepidopteran cells, as they represent dsRNA-sensitive and refractory species, respectively. To determine how the dsRNAs enter the cells, we are using chemical inhibitors and RNA interference (RNAi)-mediated knockdown of key proteins associated with different endocytosis processes. Understanding how different dsRNAs enter cells will ultimately help in the design of molecules that overcome refractoriness to RNAi or develop resistance to dsRNA-based insecticides. To date, we have conducted chemical inhibitor experiments on both cell lines and have evidence that linear dsRNAs enter the cells using clathrin-mediated endocytosis, while the paperclip dsRNAs (pcRNAs) can enter both species’ cells in a clathrin-independent manner to induce RNAi. An alternative uptake mechanism for the pcRNAs has been tentatively identified, and the outcomes of our RNAi-mediated knockdown experiments, which should provide corroborative evidence of our initial findings, will be discussed.

Keywords: dsRNA, RNAi, uptake, insecticides, dipteran, lepidopteran

Procedia PDF Downloads 73
10804 Sphere in Cube Grid Approach to Modelling of Shale Gas Production Using Non-Linear Flow Mechanisms

Authors: Dhruvit S. Berawala, Jann R. Ursin, Obrad Slijepcevic

Abstract:

Shale gas is one of the most rapidly growing forms of natural gas. Unconventional natural gas deposits are difficult to characterize overall, but in general they are lower in resource concentration and dispersed over large areas. Moreover, gas is densely packed into the matrix through adsorption, which accounts for a large volume of the gas reserves. Gas production from tight shale deposits is made possible by extensive and deep well fracturing, which contacts large fractions of the formation. Conventional reservoir modelling and production forecasting methods, which rely on fluid-flow processes dominated by viscous forces, have proved to be very pessimistic and inaccurate. This paper presents a new approach to forecasting shale gas production by detailed modeling of gas desorption, diffusion and non-linear flow mechanisms in combination with a statistical representation of these processes. The model is represented by a cube as a porous medium in which free gas is present and, inside it, a sphere (SiC: Sphere in Cube model) where gas is adsorbed onto the kerogen or organic matter. The sphere is further considered to consist of many layers of adsorbed gas in an onion-like structure. As the pressure declines, gas first desorbs from the outermost layer of the sphere, causing a decrease in its molecular concentration. The newly available surface area and the change in concentration trigger the diffusion of gas from the kerogen. The process continues until all the gas present internally diffuses out of the kerogen, adsorbs onto the available surface area and then desorbs into the nanopores and micro-fractures in the cube. Each SiC idealizes a gas pathway and is characterized by the sphere diameter and the length of the cube: the diameter allows gas storage, diffusion and desorption to be modelled, while the cube length accounts for the flow pathway through nanopores and micro-fractures. Many of these representative but general cells of the reservoir are put together and linked to a well or hydraulic fracture. The paper quantitatively describes these processes and clarifies the geological conditions under which successful shale gas production could be expected. A numerical model has been derived and implemented in FORTRAN to develop a simulator for shale gas production, with the spheres treated as a source term in each of the grid blocks. By applying SiC to field data, we demonstrate that the model provides an effective way to quickly assess gas production rates from shale formations. We also examine the effect of the model input properties on gas production.

Keywords: adsorption, diffusion, non-linear flow, shale gas production

Procedia PDF Downloads 165
10803 Students’ Perception of E-Learning Systems at Hashemite University

Authors: Muneer Abbad

Abstract:

In the search for something better than traditional learning alone, universities have expanded the ways in which they deliver knowledge and have integrated cost-effective e-learning systems. Universities’ use of information and communication technologies has grown tremendously over the last decade. To ensure efficient use of the e-learning system, this project aimed to evaluate good and bad practices, detect errors and determine areas for further improvement in usage. The project critically evaluated students’ perception of the e-learning system and recommended changes to improve students’ e-learning usage, based on a questionnaire given to students with experience of e-learning systems. The results of the study indicated that, in general, students have favourable perceptions of using the e-learning system. They seemed to value the resources tool and its contribution to building their knowledge more than other e-learning tools; however, they perceived limited value in the audio and video podcasts. The study has shown that technology acceptance is the factor that contributes most to students’ perception of, and satisfaction with, the e-learning system.

Keywords: e-learning, perception, Jordan, universities

Procedia PDF Downloads 489