Search results for: translation processes
2481 Human Gesture Recognition for Real-Time Control of Humanoid Robot
Authors: S. Aswath, Chinmaya Krishna Tilak, Amal Suresh, Ganesh Udupa
Abstract:
A humanoid robot can be controlled in many ways, but the use of electromyogram (EMG) electrodes has particular importance in setting up the control system: an EMG-based control system allows robotic devices to be controlled with greater fidelity and precision. In this paper, the development of an electromyogram-based interface for human gesture recognition for the control of a humanoid robot is presented. To recognize control signs in the gestures, a single-channel EMG sensor is positioned on the muscles of the human body, so that the humanoid robot is controlled by gestures performed by the human rather than by a remote control unit. The EMG electrodes attached to the muscles generate an analog signal produced by the nerve impulses of the moving muscles. These analog signals are supplied to a differential muscle sensor, which processes them into a signal suitable for the microcontroller. The microcontroller converts this signal to digital form using its ADC and sends its decision to the CM-530 humanoid robot controller through a Zigbee wireless interface. The CM-530 processor in turn drives the servo motors through a motor driver in the required direction to produce human-like actions. This method of controlling a humanoid robot could be used to perform actions with greater accuracy and ease. In addition, a study has been conducted to investigate the controllability and ease of use of the interface and the employed gestures.
Keywords: electromyogram, gesture, muscle sensor, humanoid robot, microcontroller, Zigbee
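The decision step in this pipeline (digitized muscle-sensor output mapped to a robot command) can be pictured with a minimal amplitude-threshold classifier. The Python sketch below is only an illustration of that idea: the sampling window, threshold values, and command codes are assumptions, not taken from the paper.

```python
import statistics

# Hypothetical command codes forwarded to the robot controller over Zigbee.
CMD_IDLE, CMD_WALK, CMD_WAVE = 0x00, 0x01, 0x02

def classify_window(adc_samples, low_thr=150, high_thr=400):
    """Map a window of 10-bit ADC readings (0-1023) from the muscle sensor
    to a robot command using amplitude thresholds (assumed values)."""
    # Mean rectified amplitude of the window, relative to a mid-scale baseline.
    amplitude = statistics.mean(abs(s - 512) for s in adc_samples)
    if amplitude < low_thr:
        return CMD_IDLE        # weak contraction: no gesture
    elif amplitude < high_thr:
        return CMD_WALK        # moderate contraction
    return CMD_WAVE            # strong contraction

# Example: a simulated window of samples from a moderate contraction.
window = [512 + (-1) ** i * 200 for i in range(64)]
print(hex(classify_window(window)))  # -> 0x1
```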
Procedia PDF Downloads 407
2480 The Checkout and Separation of Environmental Hazards of the Range Overlooking the Meshkin City
Authors: F. Esfandyari Darabad, Z. Samadi
Abstract:
Natural environments have always been affected by one of the most important natural hazards: mass movements, which cause slope instability. Identifying and delineating unstable regions in order to detect and assess environmental risk factors is an important issue in the development of mountainous areas. In this study, the northwestern hillsides of Sabalan overlooking Meshkin city and its surrounding area were delimited in order to analyze slope processes such as landslides and debris flows on the basis of structural and geomorphological conditions, using GIS. Because of the steep hillsides, the elevation of the region, and poorly located roads that destabilize the slopes, the area is in an unfavourable situation. The study was carried out with the purpose of identifying the factors that drive slope movements and of zoning the areas with high potential for such movements using GIS. The results showed that the most common slope movements in the area are debris flows, rock falls, and landslides. The factors contributing to each type of mass movement were identified, a weight was assigned to each factor, weight maps were produced for each factor, and finally a risk-zoning map of slope movements was prepared. Based on the resulting zoning map, the study area was classified into four zones of very high, high, medium, and low risk; the very high and high risk zones are located near the road, along the Khyav river, and in the mountainous district.
Keywords: debris flow, environmental hazards, GIS, landslide
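The zoning step described above (weighting each factor, combining the weight maps, and classifying the result into four risk classes) amounts to a weighted overlay. The sketch below is schematic only: the factor rasters, weights, and class breaks are invented for illustration, not taken from the study's GIS layers.

```python
import numpy as np

# Hypothetical normalized factor maps (0-1), e.g. slope, distance to road, lithology score.
rng = np.random.default_rng(0)
slope, road_dist, lithology = (rng.random((4, 4)) for _ in range(3))

# Assumed relative weights for each factor (must sum to 1).
weights = {"slope": 0.5, "road_dist": 0.3, "lithology": 0.2}

# Weighted overlay: susceptibility index for every grid cell.
susceptibility = (weights["slope"] * slope
                  + weights["road_dist"] * road_dist
                  + weights["lithology"] * lithology)

# Classify into the four zones used in the study (break values are illustrative).
zones = np.digitize(susceptibility, bins=[0.25, 0.5, 0.75])  # 0 = low ... 3 = very high
labels = np.array(["low", "medium", "high", "very high"])
print(labels[zones])
```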
Procedia PDF Downloads 525
2479 Application of Molecular Materials in the Manufacture of Flexible and Organic Devices for Photovoltaic Applications
Authors: Mariana Gomez Gomez, Maria Elena Sanchez Vergara
Abstract:
Many sustainable approaches to generating electric energy have emerged in the last few decades; one of them is solar cells. Yet this approach has the disadvantage of highly polluting inorganic semiconductor manufacturing processes, so the use of molecular semiconductors must be considered. In this work, the allene compounds C24H26O4 and C24H26O5 were used as dopants to manufacture semiconductor films based on PbPc by a high-vacuum evaporation technique. IR spectroscopy was carried out to determine the phase and any significant chemical changes that may occur during thermal evaporation. According to UV-visible spectroscopy and Tauc's model, the deposition process generated thin films with an activation energy range of 1.47 to 1.55 eV for direct transitions and 1.29 to 1.33 eV for indirect transitions. These values place the manufactured films within the range of low-bandgap semiconductors. Flexible devices were manufactured with the structure polyethylene terephthalate (PET)/indium tin oxide (ITO)/organic semiconductor/cubic close packed (CCP) contact. The devices were characterized by evaluating the electrical conductivity using the four-probe collinear method, and I-V curves were obtained under different lighting conditions at room temperature. OS1 (PbPc/C24H26O4) showed Ohmic behavior, while OS2 (PbPc/C24H26O5) reached higher current values at lower voltages. The results show that semiconductor devices doped with allene compounds can be used in the manufacture of optoelectronic devices.
Keywords: electrical properties, optical gap, phthalocyanine, thin film
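Tauc's model, used above to extract the optical gaps, relates the absorption coefficient α and photon energy hν through (αhν)^m = A(hν − Eg), with m = 2 for direct allowed transitions and m = 1/2 for indirect allowed transitions; Eg is read off by extrapolating the linear region to zero. A minimal sketch of that extrapolation is given below; the absorbance data are synthetic, and the fit window is an assumption to be matched against the actual spectra.

```python
import numpy as np

def tauc_gap(h_nu, alpha, m=2.0, fit_window=(1.6, 2.0)):
    """Estimate the optical gap Eg (eV) by fitting the linear region of a
    Tauc plot (alpha*h_nu)**m vs. h_nu and extrapolating to zero.
    m = 2 for direct allowed, m = 0.5 for indirect allowed transitions."""
    y = (alpha * h_nu) ** m
    mask = (h_nu >= fit_window[0]) & (h_nu <= fit_window[1])
    slope, intercept = np.polyfit(h_nu[mask], y[mask], 1)
    return -intercept / slope  # x-intercept of the linear fit

# Synthetic absorption data with a nominal direct gap of 1.5 eV.
h_nu = np.linspace(1.2, 2.2, 200)
alpha = 1e4 * np.sqrt(np.clip(h_nu - 1.5, 0.0, None)) / h_nu
print(round(tauc_gap(h_nu, alpha), 2))  # ~1.5
```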
Procedia PDF Downloads 249
2478 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain
Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma
Abstract:
In this paper, we propose the implementation of an optimization-based extreme learning machine (ELM) for watermarking of the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularization algorithm, is based on generalized single-hidden-layer feed-forward neural networks (SLFNs), in which the hidden-layer parameters, generally called the feature mapping in the context of ELM, need not be tuned every time. This paper shows the watermark embedding and extraction processes with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and the DWT is applied to each block to transform it into the low-frequency sub-band domain. ELM provides a unified learning platform in which a feature mapping, that is, the mapping between the hidden layer and the output layer of the SLFN, is used for watermark embedding and extraction in a cover image. ELM has widespread applications ranging from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very low complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on a watermarked image, even when the image is subjected to different types of geometrical and conventional attacks.
Keywords: BER, DWT, extreme learning machine (ELM), PSNR
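As background for the ELM used above: in its basic form the hidden-layer weights are drawn at random and only the output weights β are learned, typically via a regularized least-squares solution β = (HᵀH + I/C)⁻¹HᵀT. The sketch below shows that training step on toy regression data; it is a generic ELM, not the authors' optimization-based watermark embedder, and all sizes and constants are illustrative.

```python
import numpy as np

class SimpleELM:
    """Minimal single-hidden-layer ELM regressor: random feature mapping plus
    ridge-regularized least-squares output weights."""

    def __init__(self, n_hidden=50, c=1e3, seed=0):
        self.n_hidden, self.c, self.rng = n_hidden, c, np.random.default_rng(seed)

    def _hidden(self, x):
        return np.tanh(x @ self.w + self.b)          # feature mapping H

    def fit(self, x, t):
        self.w = self.rng.normal(size=(x.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        h = self._hidden(x)
        # beta = (H^T H + I/C)^-1 H^T T  (regularized ELM solution)
        a = h.T @ h + np.eye(self.n_hidden) / self.c
        self.beta = np.linalg.solve(a, h.T @ t)
        return self

    def predict(self, x):
        return self._hidden(x) @ self.beta

# Toy regression example: learn y = sin(x).
x = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(x).ravel()
model = SimpleELM().fit(x, y)
print(round(float(np.mean((model.predict(x) - y) ** 2)), 4))  # small MSE expected
```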
Procedia PDF Downloads 311
2477 Fault Tolerant (n,k)-star Power Network Topology for Multi-Agent Communication in Automated Power Distribution Systems
Authors: Ning Gong, Michael Korostelev, Qiangguo Ren, Li Bai, Saroj K. Biswas, Frank Ferrese
Abstract:
This paper investigates the joint effect of the interconnected (n,k)-star network topology and multi-agent automated control on the restoration and reconfiguration of power systems. With the increasing development of multi-agent control technologies applied to power system reconfiguration in the presence of faulty components or nodes, fault tolerance is becoming an important challenge in the design of distributed power system topologies. Since the reconfiguration of a power system is performed by agent communication, the (n,k)-star interconnected network topology is studied and modeled in this paper to optimize the process of power reconfiguration. We discuss the recently proposed (n,k)-star topology and examine its properties and advantages compared to traditional multi-bus power topologies, and we design and simulate the topology model for distributed power system test cases. A related lemma based on the fault tolerance and conditional diagnosability properties is presented and proved both theoretically and practically. The conclusion is reached that the (n,k)-star topology model has measurable advantages over standard bus power systems while exhibiting fault tolerance in power restoration, as well as efficiency when applied to power system route discovery.
Keywords: (n, k)-star topology, fault tolerance, conditional diagnosability, multi-agent system, automated power system
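For readers unfamiliar with the topology discussed above: the (n,k)-star graph S_{n,k} has the k-permutations of {1, …, n} as vertices, and two vertices are adjacent if one is obtained from the other either by swapping the first symbol with the symbol in position i (2 ≤ i ≤ k) or by replacing the first symbol with a symbol not present in the permutation. The sketch below builds this graph from that standard definition so its basic properties (node count, regularity) can be checked; it is not the authors' power-network model.

```python
from itertools import permutations

def nk_star(n, k):
    """Return adjacency lists of the (n,k)-star graph S_{n,k}."""
    vertices = list(permutations(range(1, n + 1), k))
    adj = {v: [] for v in vertices}
    for v in vertices:
        symbols = set(v)
        # i-edges: swap the first symbol with the symbol in position i.
        for i in range(1, k):
            u = (v[i],) + v[1:i] + (v[0],) + v[i + 1:]
            adj[v].append(u)
        # 1-edges: replace the first symbol with an unused symbol.
        for s in range(1, n + 1):
            if s not in symbols:
                adj[v].append((s,) + v[1:])
    return adj

adj = nk_star(4, 2)                      # 4!/(4-2)! = 12 vertices
degrees = {len(nbrs) for nbrs in adj.values()}
print(len(adj), degrees)                 # 12 {3}: S_{n,k} is (n-1)-regular
```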
Procedia PDF Downloads 512
2475 Precarious ID Cards: Studying Documentary Practices in India through the Lens of Internal Migration
Authors: Ambuja Raj
Abstract:
This research will attempt to understand how documents are materially indispensable civic artifacts for migrants in their encounters with the state. Documents such as ID cards are sites of mediation and bureaucratic manifestation which reveal the inherent dynamics of power between the state and a delocalized people. While ID cards allow the holder to retain a different identity and articulate their demands as a citizen, they at the same time transform subjects into ‘objects’ in the exercise of governmental power. The research is based on the study of internal migrants in India, who are ‘visible’ to the state through its host of ID documents such as the ‘Aadhaar card’, electoral IDs, Ration cards, and a variety of region-specific documents, without the possession of which, not only are they unable to access jobs, public goods and services, and accommodation, but are liable to exploitation from state forces and mediators. Through semi-structured interviews with social actors in the processes of documentation and welfare of migrants, as well as with settlements of migrants themselves located in the state of Kerala in India, the thesis will attempt to understand the salience of documentary practices in the lives of inter-state migrants who move within Indian states in the hope of bettering their economic conditions. The research will trace the material and evolving significance of ID cards in the tenacity of states dealing with these ‘illegible’ populations. It will try to bring theories of governmentality, biopolitics and Weberian bureaucracy into the migrant issue while critically grounding itself on secondary literature by scholars who have worked on South Asian ‘governments of paper’.
Keywords: migration, historiography of documents, anthropology of state, documentary practices
Procedia PDF Downloads 189
2474 Establishing a Change Management Model for Precision Machinery Industry in Taiwan
Authors: Feng-Tsung Cheng, Shu-Li Wang, Mei-Fang Wu, Hui-Yu Chuang
Abstract:
Due to the rapid development of modern technology, the widespread use of the Internet makes the business environment change quickly. In order to survive and to lead in the globally competitive market, "changing" has become an unspoken rule that companies need to follow. The purpose of this paper is to build a change model using SWOT analysis, strategy maps, the balanced scorecard, KPIs, and change management theory. The research findings indicate that the organizational change plan formulated by the case company should take employee resistance-to-change factors and performance management system issues into consideration, and must set up related organizational change programs, such as a performance appraisal and reward system and consulting and counseling mechanisms, to improve motivation and reduce negative staff emotions. The strategy maps and performance indicators were then revised according to the model proposed in this paper; for example, the strategy maps add and modify corporate culture, improve internal process management, and increase the growth rate of net income, among other strategies. The performance indicators based on the new and modified strategy maps include the net income growth rate, the achievement of target production rates, and manpower training achievement rates, among other indicators. Through these amendments, the company can achieve its goal of becoming a leading brand in the precision machinery industry.
Keywords: organizational change, SWOT analysis, strategy maps, performance indicators
Procedia PDF Downloads 284
2473 Information Technology Impacts on the Supply Chain Performance: Case Study Approach
Authors: Kajal Zarei
Abstract:
Supply chain management is becoming an increasingly important issue in many businesses today. In such circumstances, a number of factors, such as management deficiencies in different segments of the supply chain, a lack of streamlined processes, resistance to changing current systems and technologies, and the lack of advanced information systems, have paved the way for innovative research studies. To this end, information technology (IT) is becoming a major driver for overcoming supply chain limitations and deficiencies. The emergence of IT has provided an excellent opportunity for redefining the supply chain to be more effective and competitive. This paper investigates the impact of IT on two-digit industry codes of the International Standard Industrial Classification (ISIC) that operate in four groups of supply chains. First, the primary fields of the supply chain were investigated, and then paired comparisons of different industry parts were performed. Using experts' judgments and the Analytical Hierarchy Process (AHP), the status of industrial activities in Kurdistan Province, Iran, was determined. The results revealed that the manufacturing and inventory fields are more important than other fields of the supply chain. In addition, IT has had a greater impact on the food and beverage industry, the chemical industry, the wood industry, wood products, and the production of basic metals. The results indicate the need for IT awareness in supply chain management; in other words, IT applications need to be developed for the identified industries.
Keywords: supply chain, information technology, analytical hierarchy process, two-digit codes, international standard industrial classification
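To make the AHP step above concrete, the sketch below computes a priority vector and consistency ratio for a hypothetical 3x3 pairwise-comparison matrix using the principal-eigenvector method; the matrix entries are invented for illustration, while the random-index values are the standard Saaty constants.

```python
import numpy as np

# Hypothetical pairwise comparisons of three supply chain fields
# (e.g. manufacturing vs. inventory vs. distribution) on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                         # priority vector

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)                  # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]              # Saaty random index
cr = ci / ri                                     # consistency ratio (< 0.10 is acceptable)

print(np.round(weights, 3), round(cr, 3))
```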
Procedia PDF Downloads 281
2472 Dehalogenation of Aromatic Compounds in Wastewater by Bacterial Cultures
Authors: Anne Elain, Magali Le Fellic
Abstract:
Halogenated aromatic compounds (HAC) are major organic pollutants that are detected in several environmental compartments as a result of their widespread use as solvents, pesticides and other industrial chemicals. The ability to degrade HAC simultaneously at low temperature and under saline conditions would be useful for the remediation of polluted sites. Hence, microbial processes based on the metabolic activities of anaerobic bacteria are especially attractive from an economic and environmental point of view: metabolites are generally less toxic, less likely to bioaccumulate and more susceptible to further degradation. Studies on biological reductive dehalogenation have largely been restricted to chlorinated compounds, while relatively few have focused on other HAC, i.e., fluorinated, brominated or iodinated compounds. The objective of the present work was to investigate the biodegradation of a mixture of triiodoaromatic molecules in industrial wastewater by an enriched bacterial consortium. Biodegradation of the mixture was studied during batch experiments in an anaerobic reactor. The degree of mineralization and the recovery of halogen were monitored by HPLC-UV, TOC analysis and potentiometric titration. Providing ethanol as an electron donor was found to stimulate the anaerobic reductive dehalogenation of HAC, with a deiodination rate of up to 12.4 mg.L-1 per day. Sodium chloride, even at high concentration (10 mM), was found to have no influence on the degradation rates or on microbial viability. An analysis of the 16S rDNA (MicroSeq®) revealed that at least 6 bacteria were predominant in the enrichment, including Pseudomonas aeruginosa, Pseudomonas monteilii, Kocuria rhizophila, Ochrobactrum anthropi, Ralstonia pickettii and Rhizobium rhizogenes.
Keywords: halogenated aromatics, anaerobic biodegradation, deiodination, bacterial consortium
Procedia PDF Downloads 177
2471 Sulfamethoxazole Degradation by Conventional Fenton and Microwave-Assisted Fenton Reaction
Authors: Derradji Chebli, Abdallah Bouguettoucha, Zoubir Manaa, Amrane Abdeltif
Abstract:
Pharmaceutical products, such as sulfamethoxazole (SMX), are released into the environment at trace levels (ng/L to mg/L) by humans and animals, in their original form or as byproducts. Antibiotics are toxic contaminants for the aquatic environment, owing to their adverse effects on aquatic life and humans. Even at low concentrations, they can negatively impact biological water treatment, leading to the proliferation of antibiotic-resistant pathogens. It is therefore of major importance to develop efficient methods to limit their presence in the aquatic environment. To this aim, advanced oxidation processes (AOP) appear relevant compared to other methods, since they are based on the production of highly reactive free radicals, especially ●OH. The objective of this work was to evaluate the degradation of SMX by the microwave-assisted Fenton reaction (MW/Fe/H2O2). The hydrogen peroxide and ferrous ion concentrations, as well as the microwave power, were optimized. The results showed that SMX degradation by MW/Fe/H2O2 followed pseudo-first-order kinetics. The treatment of an initial 20 mg/L of SMX by the Fenton reaction in the presence of microwaves showed the positive impact of the latter, owing to the higher degradation yields obtained in a reduced reaction time compared to the conventional Fenton reaction: less than 5 min for total degradation. In addition, increasing the microwave power increased the degradation kinetics. Irrespective of the application of microwaves, the optimal pH for the Fenton reaction remained 3. Examination of the impact of the ionic strength showed that carbonate and sulfate anions increased the rate of SMX degradation.
Keywords: antibiotic, degradation, elimination, Fenton, microwave, pollutant
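The pseudo-first-order kinetics reported above correspond to C(t) = C0·exp(−k_app·t), i.e. ln(C0/C) = k_app·t. The snippet below shows how an apparent rate constant could be extracted from concentration-time data by a linear fit; the data points are synthetic, not the paper's measurements.

```python
import numpy as np

# Synthetic SMX concentrations (mg/L) at sampling times (min),
# consistent with near-total degradation within ~5 min.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
c = np.array([20.0, 8.1, 3.3, 1.3, 0.55, 0.22])

# Pseudo-first-order model: ln(C0/C) = k_app * t  ->  the slope of the line is k_app.
y = np.log(c[0] / c)
k_app, _ = np.polyfit(t, y, 1)

half_life = np.log(2) / k_app
print(f"k_app = {k_app:.2f} 1/min, t_1/2 = {half_life:.2f} min")
```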
Procedia PDF Downloads 399
2470 Estimating of Groundwater Recharge Value for Al-Najaf City, Iraq
Authors: Hayder H. Kareem
Abstract:
Groundwater recharge is a crucial parameter for any groundwater management system. The variability of recharge rates and the difficulty of estimating this factor by direct observation in many processes make the estimation of the recharge value complex. Various methods exist to estimate groundwater recharge, each with some limitations on its applicability. This paper focuses on a real study area, Al-Najaf City, Iraq. In this city there are a few groundwater aquifers, but the aquifer considered in this study is the closest one to the ground surface, the Dibdibba aquifer. According to the aridity index estimated in the paper, Al-Najaf City is classified as a region with an arid climate, which indicates that the most appropriate method for estimating the groundwater recharge is Thornthwaite's formula (Thornthwaite's method). From the calculations, the estimated average groundwater recharge over the period 1980-2014 for Al-Najaf City is 40.32 mm/year. Groundwater recharge directly affects the groundwater table level (groundwater head). Therefore, to verify this recharge value, the MODFLOW program has been used to apply it and examine the relationship between the calculated and observed heads; a groundwater model of the Al-Najaf City study area was built in MODFLOW to simulate the area for different purposes, one of which is to simulate the groundwater recharge. The MODFLOW results show that this recharge value is extremely high and needs to be reduced. Therefore, a further sensitivity test was carried out for the Al-Najaf City study area with MODFLOW by changing the recharge value; the best estimate of the groundwater recharge for this city is 16.5 mm/year, as this value gives the best fit between the calculated and observed heads, with minimum values of RMSE (13.175%) and RSS (1454 m²).
Keywords: Al-Najaf City, groundwater modelling, recharge estimation, visual MODFLOW
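Thornthwaite's method referred to above estimates monthly potential evapotranspiration from mean monthly temperature as PET = 16·(10·T/I)^a (mm per 30-day month of 12-hour days), where I = Σ(T_i/5)^1.514 is the annual heat index and a = 6.75·10⁻⁷I³ − 7.71·10⁻⁵I² + 1.792·10⁻²I + 0.49239. The sketch below implements this formula with illustrative temperatures; the recharge line at the end (monthly rainfall surplus over PET, floored at zero) is a deliberate simplification of the full water-balance bookkeeping, not the paper's procedure.

```python
def thornthwaite_pet(monthly_t):
    """Uncorrected monthly PET (mm) from mean monthly temperatures (deg C)."""
    heat_index = sum((t / 5.0) ** 1.514 for t in monthly_t if t > 0)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * max(t, 0.0) / heat_index) ** a for t in monthly_t]

# Illustrative mean monthly temperatures (deg C) and rainfall (mm) for an arid site.
temps = [8, 11, 16, 22, 29, 34, 37, 36, 32, 25, 16, 10]
rain  = [15, 14, 18, 12,  3,  0,  0,  0,  1,  5, 12, 16]

pet = thornthwaite_pet(temps)
# Crude recharge proxy: monthly surplus of rainfall over PET, never negative.
recharge = sum(max(p - e, 0.0) for p, e in zip(rain, pet))
print(round(sum(pet)), round(recharge))  # annual PET (mm), annual recharge proxy (mm)
```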
Procedia PDF Downloads 135
2469 Deficits in Perceptual and Musical Memory in Individuals with Major Depressive Disorder
Authors: Toledo-Fernandez Aldebaran
Abstract:
Introduction: One of the least explored cognitive functions in relation to depression is the processing of musical stimuli; music perception and memory can become impaired as well. The term amusia defines a type of agnosia, caused by damage to basic processes, that creates a general inability to perceive music. Therefore, the main objective is to explore performance-based and self-reported deficits in music perception and memory in people with major depressive disorder (MDD). Method: Data were collected from April to October 2021 by recruiting people who met the eligibility criteria and using the Montreal Battery of Evaluation of Amusia (MBEA) to evaluate performance-based music perception and memory, along with the depression module of the Mini International Neuropsychiatric Interview and the Amusic Dysfunction Inventory (ADI), which evaluates the participants' self-report of their abilities in music perception. Results: 64 participants were evaluated. The main analysis, examining the differences between people with MDD and the control group, showed only one statistically significant difference, on the Interval subtest of the MBEA. No difference was found in the dimensions assessed by the ADI. Conclusion: Deficits in interval perception can be explained by mental fatigue, to which people with depression are more vulnerable, rather than by specific deficits in musical perception and memory associated with depressive disorder. Additionally, significant associations were found between musical deficits observed in the performance-based evidence and music dysfunction according to self-report, which could suggest that some people with depression are capable of detecting these deficits in themselves.
Keywords: depression, amusia, music, perception, memory
Procedia PDF Downloads 64
2468 Electrochemical Sensor Based on Poly(Pyrogallol) for the Simultaneous Detection of Phenolic Compounds and Nitrite in Wastewater
Authors: Majid Farsadrooh, Najmeh Sabbaghi, Seyed Mohammad Mostashari, Abolhasan Moradi
Abstract:
Phenolic compounds are chief environmental contaminants on account of their hazardous and toxic nature to human health. The preparation of sensitive and potent chemosensors to monitor emerging pollutants in water and effluent samples has received great consideration. A novel and versatile nanocomposite sensor based on poly(pyrogallol) is presented for the first time in this study, and its electrochemical behavior for the simultaneous detection of hydroquinone (HQ), catechol (CT), and resorcinol (RS) in the presence of nitrite is evaluated. The physicochemical characteristics of the fabricated nanocomposite were investigated by field-emission scanning electron microscopy (FE-SEM), energy-dispersive X-ray spectroscopy (EDS), and Brunauer-Emmett-Teller (BET) analysis. The electrochemical response of the proposed sensor for the detection of HQ, CT, RS, and nitrite was studied using cyclic voltammetry (CV), chronoamperometry (CA), differential pulse voltammetry (DPV), and electrochemical impedance spectroscopy (EIS). The kinetic characterization of the prepared sensor showed that both adsorption and diffusion processes can control the reactions at the electrode. Under optimized conditions, the new chemosensor provides wide linear ranges of 0.5-236.3, 0.8-236.3, 0.9-236.3, and 1.2-236.3 μM with low limits of detection of 21.1, 51.4, 98.9, and 110.8 nM (S/N = 3) for HQ, CT, RS, and nitrite, respectively. Remarkably, the electrochemical sensor has outstanding selectivity, repeatability, and stability and was successfully employed for the detection of RS, CT, HQ, and nitrite in real water samples with recoveries of 96.2%-102.4%, 97.8%-102.6%, 98.0%-102.4%, and 98.4%-103.2% for RS, CT, HQ, and nitrite, respectively. These outcomes illustrate that poly(pyrogallol) is a promising candidate for the effective electrochemical detection of dihydroxybenzene isomers in the presence of nitrite.
Keywords: electrochemical sensor, poly(pyrogallol), phenolic compounds, simultaneous determination
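The detection limits quoted above follow the usual S/N = 3 criterion, LOD = 3·s_blank/m, where m is the slope of the calibration line. A schematic calculation with invented DPV calibration data (concentrations, currents, and blank noise are all assumptions) is shown below.

```python
import numpy as np

# Hypothetical DPV calibration data for one analyte: concentration (uM) vs. peak current (uA).
conc = np.array([0.5, 5, 25, 50, 100, 200])
current = np.array([0.031, 0.29, 1.47, 2.95, 5.9, 11.8])

slope, intercept = np.polyfit(conc, current, 1)      # sensitivity m (uA/uM)

s_blank = 0.0004                                      # assumed std. dev. of the blank signal (uA)
lod_uM = 3 * s_blank / slope                          # S/N = 3 criterion
print(f"sensitivity = {slope:.4f} uA/uM, LOD = {lod_uM * 1000:.1f} nM")
```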
Procedia PDF Downloads 68
2467 Energy Consumption Statistic of Gas-Solid Fluidized Beds through Computational Fluid Dynamics-Discrete Element Method Simulations
Authors: Lei Bi, Yunpeng Jiao, Chunjiang Liu, Jianhua Chen, Wei Ge
Abstract:
Two energy paths are proposed from thermodynamic viewpoints. Energy consumption means the total power input to the specific system, and it can be decomposed into energy retention and energy dissipation. Energy retention is the variation of the accumulated mechanical energy in the system, and energy dissipation is the energy converted to heat by irreversible processes. Based on the computational fluid dynamics-discrete element method (CFD-DEM) framework, the different energy terms are quantified from the specific flow elements of fluid cells and particles, as well as their interactions with the wall. Direct energy consumption statistics are carried out for both cold and hot flow in gas-solid fluidization systems. To clarify the statistical method, it is necessary to identify which system is studied: the particle-fluid system or the particle sub-system. For the cold flow, the total energy consumption of the particle sub-system can predict the onset of bubbling and turbulent fluidization, while the trends of local energy consumption can reflect the dynamic evolution of mesoscale structures. For the hot flow, different heat transfer mechanisms are analyzed, and the original solver is modified to reproduce the experimental results. The influence of the heat transfer mechanisms and the heat source on energy consumption is also investigated. The proposed statistical method has proven to be energy-conservative and easy to conduct, and it can hopefully be applied to other multiphase flow systems.
Keywords: energy consumption statistic, gas-solid fluidization, CFD-DEM, regime transition, heat transfer mechanism
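The decomposition stated above can be written compactly as an energy balance for the chosen system (particle-fluid or particle sub-system). The form below paraphrases the abstract rather than reproducing the authors' exact notation:

```latex
% Schematic energy balance: total power input = rate of change of
% accumulated mechanical energy + dissipation to heat.
\[
  \underbrace{\dot{E}_{\mathrm{consumption}}}_{\text{power input}}
  = \underbrace{\frac{\mathrm{d}E_{\mathrm{mech}}}{\mathrm{d}t}}_{\text{energy retention}}
  + \underbrace{\dot{E}_{\mathrm{dissipation}}}_{\text{irreversible conversion to heat}}
\]
```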
Procedia PDF Downloads 68
2466 Observation of the Flow Behavior for a Rising Droplet in a Mini-Slot
Authors: H. Soltani, J. Hadfield, M. Redmond, D. S. Nobes
Abstract:
The passage of oil droplets through a vertical mini-slot was investigated in this study. Oil-in-water emulsions can undergo coalescence of finer oil droplets, forming droplets of a size that needs to be considered individually. This occurs in a number of industrial processes and has important consequences at a scale where both body and surface forces are relevant. In the study, droplets of two diameters were generated: one smaller than the slot width and one relatively larger, for which the oil droplet can interact directly with the slot wall. To monitor fluid motion, a particle shadow velocimetry (PSV) imaging technique was used to study the fluid flow both inside and around a single oil droplet rising in a net co-flow. The droplet was a transparent canola oil, and the surrounding working fluid was glycerol, adjusted to allow matching of the refractive index between the two fluids. Particles seeded in both fluids were observed with the PSV system, allowing the capture of the velocity field both within the droplet and in the surroundings. The effect of droplet size on the internal circulation of the droplet was observed. Part of the study was related to the potential generation of flow structures, such as the von Karman vortex shedding already observed in droplets rising in infinite reservoirs, and their interaction with the mini-channel. Results show that two counter-rotating vortices exist inside the droplets as they pass through the slot. The vorticity map analysis shows that the droplet of relatively larger size has a stronger internal circulation.
Keywords: rising droplet, rectangular orifice, particle shadow velocimetry, matched refractive index
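The vorticity maps mentioned above are derived from the measured velocity fields; for a planar field (u, v), the out-of-plane vorticity is ω_z = ∂v/∂x − ∂u/∂y. A minimal way to compute it on a gridded velocity field (synthetic here, standing in for PSV data) is:

```python
import numpy as np

# Synthetic planar velocity field on a regular grid:
# a solid-body-like rotation, u = -y, v = x, whose vorticity is exactly 2.
x = np.linspace(-1, 1, 50)
y = np.linspace(-1, 1, 50)
X, Y = np.meshgrid(x, y, indexing="xy")
u, v = -Y, X

# Out-of-plane vorticity: omega_z = dv/dx - du/dy.
dv_dx = np.gradient(v, x, axis=1)
du_dy = np.gradient(u, y, axis=0)
omega_z = dv_dx - du_dy

print(float(omega_z.mean()))  # ~2.0 for solid-body rotation
```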
Procedia PDF Downloads 171
2465 Applying Cationic Porphyrin Derivative 5, 10-Dihexyl-15, 20bis Porphyrin, as Transfection Reagent for Gene Delivery into Mammalian Cells
Authors: Hajar Hosseini Khorami
Abstract:
Porphyrins are organic, aromatic compounds found in heme, cytochromes, cobalamin, chlorophyll, and many other natural products with essential roles in biological processes; their cationic forms have recently been used as a group of favorable non-viral vectors. Cationic porphyrins are self-chromogenic reagents with a high capacity for modification, strong interaction with DNA, and protection of DNA from nucleases during its delivery into the cell, with low toxicity. In order to achieve highly efficient gene transfection into the cell while causing low toxicity, modifications of the non-viral vector, the cationic porphyrin, would be useful. In this study, the newly modified cationic porphyrin derivative 5,10-dihexyl-15,20-bis(N-methyl-4-pyridyl) porphyrin was applied. The cytotoxicity of the synthesized cationic porphyrin on Chinese hamster ovary (CHO) cells was evaluated using the MTT assay. This cationic derivative is dose-dependent, with low cytotoxicity in the range from 100 μM to 0.01 μM, and it was taken up by cells at high concentration. A direct non-viral gene transfection method and different concentrations of the cationic porphyrin were tested for transfection of CHO cells, with X-tremeGENE HP DNA transfection reagent as a positive control. However, no transfection was observed with the porphyrin derivative under any of the parameters tested, except for the positive control. The results of this study suggest that applying a different protocol, and also trying other concentrations of cationic porphyrins and DNA to form a stronger complex, would increase the possibility of efficient gene transfection using cationic porphyrins.
Keywords: cationic porphyrins, gene delivery, non-viral vectors, transfection reagents
Procedia PDF Downloads 200
2464 Helicopter Exhaust Gases Cooler in Terms of Computational Fluid Dynamics (CFD) Analysis
Authors: Mateusz Paszko, Ksenia Siadkowska
Abstract:
Due to their low-altitude and relatively low-speed flight, helicopters are easy targets for modern combat assets, e.g., infrared-guided missiles. Current techniques aim to increase the combat effectiveness of military helicopters. Protection of the helicopter in flight from early detection, tracking, and finally destruction can be realized in many ways. One of them is cooling the hot exhaust gases emitted from the engines to the atmosphere in special heat exchangers. Nowadays, this process is realized in ejective coolers, where strong heat and momentum exchange between the hot exhaust gases and cold air ejected from the atmosphere takes place. The flow of air, exhaust gases, and the mixture of the two, and the heat transfer between the cold air and hot exhaust gases, are described by differential equations of mass transport (flow continuity), ejection of cold air by the expanding exhaust gases, conservation of momentum and energy, and physical relationship equations. Calculation of these processes in an ejective cooler by means of classical mathematical analysis is extremely hard or even impossible. Because of this, it is necessary to apply a numerical approach with modern numerical computer programs. The paper discusses the general usability of computational fluid dynamics (CFD) in the process of designing the ejective exhaust gas cooler cooperating with a helicopter turbine engine. In this work, CFD calculations have been performed for an ejective-based cooler cooperating with the PA W3 helicopter's engines.
Keywords: aviation, CFD analysis, ejective-cooler, helicopter techniques
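For reference, the conservation statements listed above (continuity, momentum, energy) for a steady flow of the air/exhaust-gas mixture can be written in the generic textbook form below; this is not the specific closure or turbulence model used by the authors.

```latex
% Steady compressible flow of the air/exhaust-gas mixture (index notation):
\begin{align}
  \frac{\partial (\rho u_j)}{\partial x_j} &= 0
    && \text{(mass / continuity)} \\
  \frac{\partial (\rho u_i u_j)}{\partial x_j} &=
    -\frac{\partial p}{\partial x_i} + \frac{\partial \tau_{ij}}{\partial x_j}
    && \text{(momentum)} \\
  \frac{\partial (\rho u_j h_0)}{\partial x_j} &=
    \frac{\partial}{\partial x_j}\!\left(k \frac{\partial T}{\partial x_j}\right)
    + \frac{\partial (u_i \tau_{ij})}{\partial x_j}
    && \text{(energy, total enthalpy } h_0\text{)}
\end{align}
```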
Procedia PDF Downloads 332
2463 Measurement of Project Success in Construction Using Performance Indices
Authors: Annette Joseph
Abstract:
Background: The construction industry is dynamic in nature owing to increasing uncertainties in technology, budgets, and development processes, which make projects more complex. Thus, predicting project performance and the likelihood of success has become difficult. The goal of all parties involved in a construction project is to complete it successfully on schedule, within the planned budget, with the highest quality, and in the safest manner. However, the concept of project success has remained ambiguously defined in the minds of construction professionals. Purpose: This paper aims to analyze a project in terms of its performance and measure its success. Methodology: The parameters for evaluating project success and the indices for measuring the success/performance of a project are identified through a literature study. Through questionnaire surveys aimed at the stakeholders in the projects, data are collected from two live case studies (an ongoing and a completed project) on the overall performance in terms of success/failure. Finally, with the help of the SPSS tool, the data collected from the surveys are analyzed and applied to the selected performance indices. Findings: The score calculated using the indices and models helps in assessing the overall performance of the project and in interpreting whether the project will be a success or a failure. This study acts as a reference for firms to carry out performance evaluation and success measurement on a regular basis, helping projects to identify the areas which are performing well and those that require improvement. Originality & Value: The study shows that by measuring project performance, a project's deviation towards success or failure can be assessed, thus helping to suggest early remedial measures to bring it back on track and ensure that the project is completed successfully.
Keywords: project, performance, indices, success
Procedia PDF Downloads 191
2462 Configuration as a Service in Multi-Tenant Enterprise Resource Planning System
Authors: Mona Misfer Alshardan, Djamal Ziani
Abstract:
Enterprise resource planning (ERP) systems are organizations' tickets to the global market. With the implementation of ERP, organizations can manage and coordinate all functions, processes, resources, and data from different departments with a single software system. However, many organizations consider the cost of traditional ERP to be expensive and look for alternative, affordable solutions within their budget. One of these alternative solutions is providing ERP over a software-as-a-service (SaaS) model, which could be considered a cost-effective solution compared to the traditional ERP system. A key feature of any SaaS system is the multi-tenancy architecture, where multiple customers (tenants) share the system software. However, different organizations have different requirements, so SaaS developers accommodate each tenant's unique requirements by allowing tenant-level customization or configuration. While customization requires source code changes and, in most cases, programming experience, the configuration process allows users to change many features within a predefined scope in an easy and controlled manner. The literature provides many techniques to accomplish the configuration process in different SaaS systems. However, the nature and complexity of SaaS ERP require more attention to the details of the configuration process, which is only briefly described in previous research. Thus, this research builds on strong knowledge regarding configuration in SaaS to define specifically the configuration borders in SaaS ERP and to design a configuration service that takes the different configuration aspects into consideration. The proposed architecture ensures the ease of the configuration process by using wizard technology, while privacy and performance are guaranteed by adopting a database isolation technique.
Keywords: configuration, software as a service, multi-tenancy, ERP
Procedia PDF Downloads 393
2461 Monitoring the Drying and Grinding Process during Production of Celitement through a NIR-Spectroscopy Based Approach
Authors: Carolin Lutz, Jörg Matthes, Patrick Waibel, Ulrich Precht, Krassimir Garbev, Günter Beuchle, Uwe Schweike, Peter Stemmermann, Hubert B. Keller
Abstract:
Online measurement of product quality is a challenging task in cement production, especially in the production of Celitement, a novel environmentally friendly hydraulic binder. The mineralogy and chemical composition of clinker in ordinary Portland cement production are measured by X-ray diffraction (XRD) and X-ray fluorescence (XRF), where only crystalline constituents can be detected. However, only a small part of the Celitement components can be measured via XRD, because most constituents have an amorphous structure. This paper describes the development of algorithms suitable for online monitoring of the final processing step of Celitement based on NIR data. For calibration, intermediate products were dried at different temperatures and ground for variable durations. The products were analyzed using XRD and thermogravimetric analyses together with NIR spectroscopy to investigate the dependency between the drying and milling processes on the one hand and the NIR signal on the other. As a result, different characteristic parameters have been defined. A short overview of the Celitement process and the challenging tasks of online measurement and evaluation of product quality will be presented. Subsequently, methods for the systematic development of near-infrared calibration models and the determination of the final calibration model will be introduced. The application of the model to experimental data illustrates that NIR spectroscopy allows for a quick and sufficiently exact determination of crucial process parameters.
Keywords: calibration model, Celitement, cementitious material, NIR spectroscopy
Procedia PDF Downloads 500
2460 The Integration of Digital Humanities into the Sociology of Knowledge Approach to Discourse Analysis
Authors: Gertraud Koch, Teresa Stumpf, Alejandra Tijerina García
Abstract:
Discourse analysis research approaches belong to the central research strategies applied throughout the humanities; they focus on the countless forms and ways digital texts and images shape present-day notions of the world. Despite the constantly growing number of relevant digital, multimodal discourse resources, digital humanities (DH) methods are thus far not systematically developed and accessible for discourse analysis approaches. Specifically, the significance of multimodality and meaning plurality modelling are yet to be sufficiently addressed. In order to address this research gap, the D-WISE project aims to develop a prototypical working environment as digital support for the sociology of knowledge approach to discourse analysis and new IT-analysis approaches for the use of context-oriented embedding representations. Playing an essential role throughout our research endeavor is the constant optimization of hermeneutical methodology in the use of (semi)automated processes and their corresponding epistemological reflection. Among the discourse analyses, the sociology of knowledge approach to discourse analysis is characterised by the reconstructive and accompanying research into the formation of knowledge systems in social negotiation processes. The approach analyses how dominant understandings of a phenomenon develop, i.e., the way they are expressed and consolidated by various actors in specific arenas of discourse until a specific understanding of the phenomenon and its socially accepted structure are established. This article presents insights and initial findings from D-WISE, a joint research project running since 2021 between the Institute of Anthropological Studies in Culture and History and the Language Technology Group of the Department of Informatics at the University of Hamburg. As an interdisciplinary team, we develop central innovations with regard to the availability of relevant DH applications by building up a uniform working environment, which supports the procedure of the sociology of knowledge approach to discourse analysis within open corpora and heterogeneous, multimodal data sources for researchers in the humanities. We are hereby expanding the existing range of DH methods by developing contextualized embeddings for improved modelling of the plurality of meaning and the integrated processing of multimodal data. The alignment of this methodological and technical innovation is based on the epistemological working methods according to grounded theory as a hermeneutic methodology. In order to systematically relate, compare, and reflect the approaches of structural-IT and hermeneutic-interpretative analysis, the discourse analysis is carried out both manually and digitally. Using the example of current discourses on digitization in the healthcare sector and the associated issues regarding data protection, we have manually built an initial data corpus of which the relevant actors and discourse positions are analysed in conventional qualitative discourse analysis. At the same time, we are building an extensive digital corpus on the same topic based on the use and further development of entity-centered research tools such as topic crawlers and automated newsreaders. In addition to the text material, this consists of multimodal sources such as images, video sequences, and apps. 
In a blended reading process, the data material is filtered, annotated, and finally coded with the help of NLP tools such as dependency parsing, named entity recognition, co-reference resolution, entity linking, sentiment analysis, and other project-specific tools that are being adapted and developed. The coding process is carried out (semi-)automatically by programs that propose coding paradigms based on the calculated entities and their relationships. Simultaneously, these can be specifically trained by manual coding in a closed reading process and specified according to the content issues. Overall, this approach enables purely qualitative, fully automated, and semi-automated analyses to be compared and reflected upon.
Keywords: entanglement of structural IT and hermeneutic-interpretative analysis, multimodality, plurality of meaning, sociology of knowledge approach to discourse analysis
Procedia PDF Downloads 226
2459 Risk Analysis in Off-Site Construction Manufacturing in Small to Medium-Sized Projects
Authors: Atousa Khodadadyan, Ali Rostami
Abstract:
The objective of off-site construction manufacturing is to utilise the workforce and machinery in a controlled environment, without external interference, for higher productivity and quality. The use of prefabricated components can save up to 14% of the total energy consumption in comparison with an equivalent number of cast-in-place components. Despite the benefits of prefabricated construction, current project practices encompass technical and managerial issues. Building design, precast component production, logistics, and prefabrication installation processes are still mostly discontinuous and fragmented. Furthermore, collaboration among prefabrication manufacturers, transportation parties, and on-site assemblers relies on real-time information, such as the status of precast components, delivery progress, and the location of components. From a technical point of view, geometric variability is still prevalent in this industry and can arise during the transportation or production of components. These issues indicate that there are still many aspects of prefabricated construction that can be developed using disruptive technologies. Practical real-time risk analysis can be used to address these issues as well as the management of safety, quality, and construction environment issues. On the other hand, the lack of research on risk assessment and the absence of standards and tools hinder risk management modelling in prefabricated construction. It is essential to note that no risk management standard has been established explicitly for prefabricated construction projects, and most software packages do not provide tailor-made functions for this type of project.
Keywords: project risk management, risk analysis, risk modelling, prefabricated construction projects
Procedia PDF Downloads 173
2458 Thermochemical Modelling for Extraction of Lithium from Spodumene and Prediction of Promising Reagents for the Roasting Process
Authors: Allen Yushark Fosu, Ndue Kanari, James Vaughan, Alexandre Changes
Abstract:
Spodumene is a lithium-bearing mineral of great interest due to the increasing demand for lithium for emerging electric and hybrid vehicles. The conventional method of processing the mineral for the metal requires an unavoidable thermal transformation of the α-phase to the β-phase, followed by roasting with suitable reagents to produce lithium salts for downstream processes. The selection of an appropriate reagent for roasting is key to the success of the process and the overall lithium recovery. Several studies have been conducted to identify good reagents for process efficiency, leading to sulfation, alkaline, chlorination, fluorination, and carbonizing routes of lithium recovery from the mineral. HSC Chemistry is a thermochemical software package that can be used to model the feasibility of metallurgical processes and predict possible reaction products prior to experimental investigation. The software was employed to investigate and explain the characteristics of the various reagents reported in the literature for spodumene roasting up to 1200°C. The simulation indicated that all reagents used for sulfation and alkaline roasting were feasible in the direction of lithium salt production. Chlorination was only feasible when Cl2 and CaCl2 were used as chlorination agents, but not with NaCl or KCl. Depending on the kind of lithium salt formed during carbonizing and fluorination, the process was either spontaneous or non-spontaneous throughout the temperature range investigated. The HSC software was further used to simulate and predict some promising reagents which may be equally good for roasting the mineral for efficient lithium extraction but have not yet been considered by researchers.
Keywords: thermochemical modelling, HSC Chemistry software, lithium, spodumene, roasting
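The feasibility screening described above rests on the sign of the Gibbs energy change, ΔG(T) = ΔH(T) − T·ΔS(T): a roasting reaction is thermodynamically favourable (spontaneous) when ΔG < 0. The toy illustration below applies that criterion with invented, temperature-independent ΔH and ΔS values; these are not HSC data or the paper's reactions.

```python
# Screen hypothetical roasting reactions by the sign of dG = dH - T*dS.
# dH in kJ/mol, dS in kJ/(mol*K); values are invented for illustration only.
reactions = {
    "sulfation (illustrative)":    {"dH": -180.0, "dS": -0.10},
    "chlorination (illustrative)": {"dH":   60.0, "dS":  0.12},
}

for name, r in reactions.items():
    for temp_c in (200, 600, 1000):
        temp_k = temp_c + 273.15
        dg = r["dH"] - temp_k * r["dS"]          # Gibbs energy change, kJ/mol
        verdict = "feasible" if dg < 0 else "not feasible"
        print(f"{name:28s} {temp_c:>5} degC  dG = {dg:7.1f} kJ/mol  -> {verdict}")
```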
Procedia PDF Downloads 159
2457 Managing Uncertainty in Unmanned Aircraft System Safety Performance Requirements Compliance Process
Authors: Achim Washington, Reece Clothier, Jose Silva
Abstract:
System Safety Regulations (SSR) are a central component of the airworthiness certification of Unmanned Aircraft Systems (UAS). There is significant debate on the setting of appropriate SSR for UAS. Putting this debate aside, the challenge lies in how to apply the system safety process to UAS, which lack the data and operational heritage of conventionally piloted aircraft. The limited knowledge and lack of operational data result in uncertainty in the system safety assessment of UAS. This uncertainty can lead to incorrect compliance findings and the potential certification and operation of UAS that do not meet minimum safety performance requirements. The existing system safety assessment and compliance processes, as used for conventional piloted aviation, do not adequately account for this uncertainty, limiting the suitability of their application to UAS. This paper discusses the challenges of undertaking system safety assessments for UAS and presents current and envisaged research towards addressing these challenges. It aims to highlight the main advantages of adopting a risk-based framework for the System Safety Performance Requirement (SSPR) compliance process that is capable of taking the uncertainty associated with each of the outputs of the system safety assessment process into consideration. Based on this study, it is made clear that developing a framework tailored to UAS would allow for a more rational, transparent, and systematic approach to decision making. This would reduce the need for conservative assumptions and take the risk posed by each UAS into consideration while determining its state of compliance with the SSR.
Keywords: Part 1309 regulations, risk models, uncertainty, unmanned aircraft systems
Procedia PDF Downloads 187
2456 Singular Perturbed Vector Field Method Applied to the Problem of Thermal Explosion of Polydisperse Fuel Spray
Authors: Ophir Nave
Abstract:
In our research, we present the concept of the singularly perturbed vector field (SPVF) method and its application to the thermal explosion of diesel spray combustion. Given a system of governing equations that contains hidden multi-scale variables, the SPVF method transfers and decomposes such a system into fast and slow singularly perturbed subsystems (SPS). The SPVF method enables us to understand the complex system and simplify the calculations. Powerful analytical, numerical, and asymptotic methods (e.g., the method of integral (invariant) manifolds (MIM), the homotopy analysis method (HAM), etc.) can then be applied to each subsystem. We compare the results obtained by the method of integral invariant manifolds and by the SPVF applied to a spray droplet combustion model. The research deals with the development of an innovative method for extracting fast and slow variables in physical-mathematical models. The method we developed, called the singularly perturbed vector field, is based on a numerical algorithm that applies global quasi-linearization to a given physical model. The SPVF method has been applied successfully to combustion processes, and our results were compared to experimental results. The SPVF is a general numerical and asymptotic method that reveals the hierarchy (multi-scale structure) of a given system.
Keywords: polydisperse spray, model reduction, asymptotic analysis, multi-scale systems
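The fast/slow decomposition that the SPVF method aims to recover can be summarized in the standard singularly perturbed form: after a (numerically identified) change of coordinates, the governing system splits as below, where 0 < ε ≪ 1 and y collects the fast variables. This is the generic form of such a decomposition, not the specific spray-combustion model.

```latex
% Standard fast/slow (singularly perturbed) form of the decomposed system:
\begin{align}
  \frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} &= \mathbf{f}(\mathbf{x},\mathbf{y}),
    && \text{slow subsystem} \\
  \varepsilon\,\frac{\mathrm{d}\mathbf{y}}{\mathrm{d}t} &= \mathbf{g}(\mathbf{x},\mathbf{y}),
    && \text{fast subsystem}, \qquad 0 < \varepsilon \ll 1 .
\end{align}
```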
Procedia PDF Downloads 220
2455 Development of Multilayer Capillary Copper Wick Structure using Microsecond CO₂ Pulsed Laser
Authors: Talha Khan, Surendhar Kumaran, Rajeev Nair
Abstract:
The development of economical, efficient, and reliable next-generation thermal and water management systems is being pursued to provide efficient cooling and water management technologies for application in compact and lightweight spacecraft. Liquid-vapor phase-change-based thermal and water management systems are being eliminated due to issues with the reliability and robustness of this technology. To address this, an innovative evaporator and condenser design utilizing bimodal wicks manufactured using a microsecond pulsed CO₂ laser is proposed in this study. Cylindrical, multilayered capillary copper wicks with a substrate diameter of 39 mm are additively manufactured using a pulsed laser. The copper particles used for layer-by-layer addition on the substrate measure 225 to 450 micrometers in diameter. The primary objective is to develop a novel, high-quality, fast-turnaround, laser-based additive manufacturing process that will eliminate the current technical challenges involved in traditional manufacturing processes for nano/micro-sized powders, such as particle agglomeration. A raster-scanned, pulsed-laser sintering process has been developed to manufacture 3D wicks with controlled porosity and permeability.
Keywords: liquid-vapor phase change, bimodal wicks, multilayered, capillary, raster-scanned, porosity, permeability
Procedia PDF Downloads 191
2454 Characterization of Heterotrimeric G Protein α Subunit in Tomato
Authors: Thi Thao Ninh, Yuri Trusov, José Ramón Botella
Abstract:
Heterotrimeric G proteins, comprising three subunits, α, β, and γ, are involved in signal transduction pathways that mediate a vast number of processes across the eukaryotic kingdom. 23 Gα subunits are present in humans, whereas most plant genomes encode only one canonical Gα. The disparity observed among the Arabidopsis, rice, and maize Gα-deficient mutant phenotypes suggests that Gα functions have diversified between eudicots and monocots during evolution. Alternatively, since the only Gα mutations available in dicots have been produced in Arabidopsis, the possibility exists that this species might be an exception to the rule. In order to test this hypothesis, we studied the G protein α subunit (TGA1) in tomato. Four tga1 knockout lines were generated in the tomato cultivar Moneymaker using CRISPR/Cas9. The tga1 mutants exhibit a number of auxin-related phenotypes, including changes in leaf shape and reduced plant height, fruit size, and number of seeds per fruit. In addition, tga1 mutants have increased sensitivity to abscisic acid during seed germination and reduced sensitivity to exogenous auxin during adventitious root formation from cotyledons and excised hypocotyl explants. Our results suggest that Gα mutant phenotypes in tomato are very similar to those observed in monocots, i.e., rice and maize, and cast doubt on the validity of using Arabidopsis as a model system for plant G protein studies.
Keywords: auxin-related phenotypes, CRISPR/Cas9, G protein α subunit, heterotrimeric G proteins, tomato
Procedia PDF Downloads 137
2453 Assessment of Conventional Drinking Water Treatment Plants as Removal Systems of Virulent Microsporidia
Authors: M. A. Gad, A. Z. Al-Herrawy
Abstract:
Microsporidia comprise various pathogenic species that can infect humans by means of water. Moreover, chlorine disinfection of drinking water has limitations against this protozoan pathogen. A total of 48 water samples were collected from two drinking water treatment plants having two different filtration systems (slow sand filter and rapid sand filter) over a one-year period. Samples were collected from the inlet and outlet of each plant and were separately filtered through a nitrocellulose membrane (142 mm, 0.45 µm), then eluted and centrifuged. The pellet obtained from each sample was subjected to DNA extraction and then amplification using a genus-specific primer for microsporidia. Each microsporidia-PCR-positive sample was analyzed with two species-specific primers for Enterocytozoon bieneusi and Encephalitozoon intestinalis. The results of the present study showed that the percentage removal of microsporidia through the different treatment processes reached its highest rate in the plant using slow sand filters (100%), while the removal by the rapid sand filter system was 81.8%. Statistically, the two different drinking water treatment plants (slow and rapid) had a significant effect on the removal of microsporidia. Molecular identification of microsporidia-PCR-positive samples using the two different primers for Enterocytozoon bieneusi and Encephalitozoon intestinalis showed the presence of both species in the inlet water of the two plants, while Encephalitozoon intestinalis was detected in the outlet water only. In conclusion, the appearance of virulent microsporidia in treated drinking water may pose a potential health threat.
Keywords: removal, efficacy, microsporidia, drinking water treatment plants, PCR
Procedia PDF Downloads 212
2452 A Glycerol-Free Process of Biodiesel Production through Chemical Interesterification of Jatropha Oil
Authors: Ratna Dewi Kusumaningtyas, Riris Pristiyani, Heny Dewajani
Abstract:
Biodiesel is commonly produced via two main routes, i.e., the transesterification of triglycerides and the esterification of free fatty acids (FFA) using short-chain alcohols. Both routes have the drawback of the side products yielded during the reaction: transesterification of triglycerides produces glycerol as a side product, while FFA esterification produces water. Both glycerol and water are managed as waste in biodiesel production, so a separation process is necessary to obtain high-purity biodiesel. Meanwhile, separation processes are generally the most capital- and energy-intensive part of an industrial process. Therefore, to reduce the separation burden, it is essential to produce biodiesel via an alternative route that eliminates the glycerol or water side products. In this work, biodiesel synthesis was performed using a glycerol-free process through the chemical interesterification of jatropha oil with ethyl acetate in the presence of a sodium acetate catalyst. By using this method, triacetin, which is known as a fuel bio-additive, is yielded instead of glycerol. This research studied the effect of catalyst concentration on the jatropha oil interesterification process in the range of 0.5-1.25% w/w oil. The reaction temperature and molar ratio of oil to ethyl acetate were varied at 50, 60, and 70°C, and 1:6, 1:9, 1:15, 1:30, and 1:60, respectively. The reaction time was evaluated from 0 to 8 hours. It was revealed that the best yield was obtained with a catalyst concentration of 0.5%, a reaction temperature of 70°C, and an oil-to-ethyl acetate molar ratio of 1:60, at a reaction time of 6 hours.
Keywords: biodiesel, interesterification, glycerol-free, triacetine, jatropha oil
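The glycerol-free route described above can be summarized by the overall interesterification scheme below, in which the triglyceride exchanges acyl groups with ethyl acetate to give fatty acid ethyl esters (biodiesel) and triacetin instead of glycerol; the stoichiometry shown is the idealized overall reaction, written generically rather than for a specific jatropha triglyceride.

```latex
% Overall chemical interesterification of a triglyceride with ethyl acetate
% (sodium acetate catalyst), yielding fatty acid ethyl esters and triacetin:
\[
  \underbrace{\mathrm{TG}}_{\text{triglyceride}}
  \;+\; 3\,\mathrm{CH_3COOC_2H_5}
  \;\xrightarrow{\;\mathrm{CH_3COONa}\;}\;
  3\,\underbrace{\mathrm{RCOOC_2H_5}}_{\text{FAEE (biodiesel)}}
  \;+\;
  \underbrace{\mathrm{C_3H_5(OCOCH_3)_3}}_{\text{triacetin}}
\]
```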
Procedia PDF Downloads 425