Search results for: real time locating system (RTLS)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 32530

30610 Applying an Automatic Speech Intelligent System to the Health Care of Patients Undergoing Long-Term Hemodialysis

Authors: Kuo-Kai Lin, Po-Lun Chang

Abstract:

Research Background and Purpose: Following the development of the Internet and multimedia, the Internet and information technology have become crucial avenues of modern communication and knowledge acquisition. The advantages of using mobile devices for learning include making learning borderless and accessible. Mobile learning has become a trend in disease management and health promotion in recent years. End-stage renal disease (ESRD) is an irreversible chronic disease, and patients who do not receive kidney transplants can only rely on hemodialysis or peritoneal dialysis to survive. Due to the complexities in caregiving for patients with ESRD that stem from their advanced age and other comorbidities, the patients’ incapacity of self-care leads to an increase in the need to rely on their families or primary caregivers, although whether the primary caregivers adequately understand and implement patient care is a topic of concern. Therefore, this study explored whether primary caregivers’ health care provisions can be improved through the intervention of an automatic speech intelligent system, thereby improving the objective health outcomes of patients undergoing long-term dialysis. Method: This study developed an automatic speech intelligent system with healthcare functions such as health information voice prompt, two-way feedback, real-time push notification, and health information delivery. Convenience sampling was adopted to recruit eligible patients from a hemodialysis center at a regional teaching hospital as research participants. A one-group pretest-posttest design was adopted. Descriptive and inferential statistics were calculated from the demographic information collected from questionnaires answered by patients and primary caregivers, and from a medical record review, a health care scale (recorded six months before and after the implementation of intervention measures), a subjective health assessment, and a report of objective physiological indicators. The changes in health care behaviors, subjective health status, and physiological indicators before and after the intervention of the proposed automatic speech intelligent system were then compared. Conclusion and Discussion: The preliminary automatic speech intelligent system developed in this study was tested with 20 pretest patients at the recruitment location, and their health care capacity scores improved from 59.1 to 72.8; comparisons through a nonparametric test indicated a significant difference (p < .01). The average score for their subjective health assessment rose from 2.8 to 3.3. A survey of their objective physiological indicators discovered that the compliance rate for the blood potassium level was the most significant indicator; its average compliance rate increased from 81% to 94%. The results demonstrated that this automatic speech intelligent system yielded a higher efficacy for chronic disease care than did conventional health education delivered by nurses. Therefore, future efforts will continue to increase the number of recruited patients and to refine the intelligent system. Future improvements to the intelligent system can be expected to enhance its effectiveness even further.
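
A minimal illustration of the kind of nonparametric pretest-posttest comparison reported above (a Wilcoxon signed-rank test on paired scores). The paired scores below are hypothetical placeholders; the abstract reports only the group means (59.1 before, 72.8 after).

```python
from scipy import stats

# Hypothetical paired health care capacity scores for the same patients
pretest  = [55, 62, 58, 60, 57, 64, 59, 61, 56, 63]   # before the intervention
posttest = [70, 75, 69, 74, 71, 78, 72, 73, 70, 76]   # after the intervention

stat, p_value = stats.wilcoxon(pretest, posttest)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
# p < .01 would indicate a significant pre/post difference, as reported in the abstract.
```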

Keywords: automatic speech intelligent system for health care, primary caregiver, long-term hemodialysis, health care capabilities, health outcomes

Procedia PDF Downloads 99
30609 Smoker Recognition from Lung X-Ray Images Using Convolutional Neural Network

Authors: Moumita Chanda, Md. Fazlul Karim Patwary

Abstract:

Smoking is one of the most common recreational drug use behaviors, and it contributes to birth defects, COPD, heart attacks, and erectile dysfunction. To curb this habit, it is imperative that it be identified and addressed. Numerous smoking cessation programs have been created, and they demonstrate how beneficial it can be to help someone stop smoking at the right time. Tomography can serve as an effective indicator of smoking. Wearables, such as RF-based proximity sensors worn on the collar and wrist to detect when the hand is close to the mouth, have been proposed in the past, but they are not impervious to confounding factors. In this study, we build a system that can discriminate between smokers and non-smokers in real time with high sensitivity and specificity by imaging the human lung and analyzing the X-ray data using machine learning. If sufficiently accurate, this system could be used in hospitals, in the screening of candidates for the army or police, or in university admission.
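
For illustration, a small binary-classification CNN of the general kind described above could be sketched as follows; the layer sizes, input resolution and training setup are assumptions, not the authors' model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative CNN for grayscale chest X-ray images, smoker vs. non-smoker
model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # probability of "smoker"
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # train_ds/val_ds are hypothetical datasets
```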

Keywords: CNN, smoker detection, non-smoker detection, OpenCV, artificial intelligence, X-ray image detection

Procedia PDF Downloads 62
30608 Investigation into the Optimum Hydraulic Loading Rate for Selected Filter Media Packed in a Continuous Upflow Filter

Authors: A. Alzeyadi, E. Loffill, R. Alkhaddar

Abstract:

Continuous upflow filters can combine nutrient (nitrogen and phosphate) and suspended solid removal in one unit process. The contaminant removal can be achieved chemically or biologically; in both processes the filter removal efficiency depends on the interaction between the packed filter media and the influent. In this paper, a residence time distribution (RTD) study was carried out to understand and compare the transfer behaviour of contaminants through selected filter media packed in a laboratory-scale continuous upflow filter; the selected filter media are limestone and white dolomite. The experimental work was conducted by injecting a tracer (red drain dye tracer, RDD) into the filtration system and then measuring the tracer concentration at the outflow as a function of time; the tracer injection was applied at hydraulic loading rates (HLRs) of 3.8 to 15.2 m h-1. The results were analysed using the cumulative distribution function F(t) to estimate the residence time of the tracer molecules inside the filter media. The mean residence time (MRT) and variance σ2 are the two moments of the RTD that were calculated to compare the RTD characteristics of limestone with those of white dolomite. The results showed that the exit-age distribution of the tracer was most favourable at HLRs of 3.8 to 7.6 m h-1 for limestone and at 3.8 m h-1 for white dolomite. At these HLRs, the cumulative distribution function F(t) revealed that the residence time of the tracer inside the limestone was longer than inside the white dolomite: all the tracer took 8 minutes to leave the white dolomite at 3.8 m h-1, whereas the same amount of tracer took 10 minutes to leave the limestone at the same HLR. In conclusion, determining the optimal hydraulic loading rate, which achieves a better influent distribution over the filtration system, helps to assess the applicability of a material as filter media. Further work will examine the efficiency of the limestone and white dolomite for phosphate removal by pumping a phosphate solution into the filter at HLRs of 3.8 to 7.6 m h-1.
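
The RTD moments mentioned above (mean residence time and variance) are obtained from the measured outlet tracer concentration; a short sketch of that calculation, with hypothetical time and concentration values, is given below.

```python
import numpy as np

# Hypothetical outlet tracer measurements (time in minutes, concentration in arbitrary units)
t = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)
c = np.array([0, 1, 5, 9, 8, 6, 4, 2, 1, 0.5, 0], dtype=float)

E = c / np.trapz(c, t)                                                  # exit-age distribution E(t)
F = np.array([np.trapz(E[:i + 1], t[:i + 1]) for i in range(len(t))])   # cumulative distribution F(t)
mrt = np.trapz(t * E, t)                                                # mean residence time
var = np.trapz((t - mrt) ** 2 * E, t)                                   # variance sigma^2
t90 = t[np.searchsorted(F, 0.9)]                                        # time by which 90% of the tracer has exited

print(f"MRT = {mrt:.2f} min, variance = {var:.2f} min^2, t90 = {t90:.0f} min")
```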

Keywords: filter media, hydraulic loading rate, residence time distribution, tracer

Procedia PDF Downloads 262
30607 Dynamic Two-Way FSI Simulation for a Blade of a Small Wind Turbine

Authors: Alberto Jiménez-Vargas, Manuel de Jesús Palacios-Gallegos, Miguel Ángel Hernández-López, Rafael Campos-Amezcua, Julio Cesar Solís-Sanchez

Abstract:

An optimal wind turbine blade design must be capable of capturing as much energy as possible from the wind source available at the area of interest. Many times, an optimal design means the use of large quantities of material and complicated processes that make the wind turbine more expensive and, therefore, less cost-effective. In the construction and installation of a wind turbine, the blades may account for up to 20% of the overall price, and they are all the more important because they are part of the rotor system, which is in charge of transmitting the energy from the wind to the power train and where the static and dynamic design loads for the whole wind turbine are produced. The aim of this work is the development of a blade fluid-structure interaction (FSI) simulation that allows the identification of the major damage zones during normal production, so that better design and optimization decisions can be taken. The simulation is a dynamic case, since a time-history wind velocity is used as the inlet condition instead of a constant wind velocity. The process begins with the free-to-use software NuMAD (NREL) to model the blade and assign material properties; the 3D model is then exported to the ANSYS Workbench platform where, before setting up the FSI system, a modal analysis is performed to identify the natural frequencies and mode shapes. The FSI analysis is carried out with the two-way technique, which begins with a CFD simulation to obtain the pressure distribution on the blade surface; these results are then used as the boundary condition for the FEA simulation to obtain the deformation levels for the first time step. For the second time step, the CFD simulation is reconfigured automatically with the next time-step inlet wind velocity and the deformation results from the previous time step. The analysis continues this iterative cycle, solving time step by time step until the entire load case is completed. This work is part of a set of projects managed by a national consortium called “CEMIE-Eólico” (Mexican Center in Wind Energy Research), created to strengthen technological and scientific capacities, promote the creation of specialized human resources, and link academia with the private sector in the national territory. The analysis belongs to the design of a rotor system for a 5 kW wind turbine intended to be installed at the Isthmus of Tehuantepec, Oaxaca, Mexico.
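
The iterative two-way coupling workflow described above can be summarised in the schematic sketch below; the one-number aerodynamic and structural models are toy placeholders for the ANSYS CFD and FEA solves, and only the time-step-by-time-step exchange of loads and deformations is illustrated.

```python
def run_cfd(wind_velocity, tip_deflection):
    """Toy CFD stand-in: the aerodynamic load grows with wind speed and drops
    slightly as the blade deflects away from the flow."""
    return 0.6 * wind_velocity ** 2 * (1.0 - 0.02 * tip_deflection)

def run_fea(pressure_load):
    """Toy FEA stand-in: tip deflection proportional to the applied load."""
    return 0.005 * pressure_load

def two_way_fsi(wind_series):
    deflection = 0.0
    history = []
    for wind_velocity in wind_series:                    # time-history inlet condition
        pressure = run_cfd(wind_velocity, deflection)    # CFD on the current deformed geometry
        deflection = run_fea(pressure)                   # FEA driven by the CFD pressures
        history.append((wind_velocity, pressure, deflection))
    return history

for v, p, d in two_way_fsi([6.0, 8.0, 11.0, 9.0, 7.0]):  # sample wind speeds, m/s
    print(f"wind {v:4.1f} m/s -> load {p:6.1f} (arb.) -> tip deflection {d:5.3f} (arb.)")
```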

Keywords: blade, dynamic, FSI, wind turbine

Procedia PDF Downloads 463
30606 Business Intelligence Dashboard Solutions for Improving Decision Making Process: A Focus on Prostate Cancer

Authors: Mona Isazad Mashinchi, Davood Roshan Sangachin, Francis J. Sullivan, Dietrich Rebholz-Schuhmann

Abstract:

Background: Decision-making processes are nowadays driven by data, data analytics and Business Intelligence (BI). BI as a software platform can provide a wide variety of capabilities such as organizational memory, information integration, insight creation and presentation capabilities. Visualizing data through dashboards is one of the BI solutions (for a variety of areas) that helps managers in the decision-making process to expose the most informative information at a glance. In the healthcare domain to date, dashboard presentations are more frequently used to track performance-related metrics and less frequently used to monitor those quality parameters which relate directly to patient outcomes. Providing effective and timely care for patients and improving health outcomes are highly dependent on presenting and visualizing data and information. Objective: In this research, the focus is on the presentation capabilities of BI to design a dashboard for prostate cancer (PC) data that allows better decision making for the patients, the hospital and the healthcare system related to a cancer dataset. The aim of this research is to present a retrospective PC dataset in a dashboard interface to give a better understanding of the data in the categories (risk factors, treatment approaches, disease control and side effects) which matter most to patients as well as other stakeholders. By presenting the outcome in the dashboard, we address one of the major targets of the value-based health care (VBHC) delivery model, which is measuring value and presenting the outcome to different actors in the healthcare industry (such as patients and doctors) for better decision making. Method: For visualizing the stored data, three interactive dashboards based on the PC dataset were developed (using the Tableau software) to provide better views of the risk factors, treatment approaches, and side effects. Results: Many benefits derived from the interactive graphs and tables in the dashboards, which helped to easily visualize the patients at risk, to better understand the relationship between a patient's status after treatment and their initial status before treatment, and to choose treatments with fewer side effects for a given patient status. Conclusions: Building a well-designed and informative dashboard depends on three important factors: the users, the goals and the data types. A dashboard's hierarchies, drilling, and graphical features can guide doctors to navigate through the information more easily. The features of the interactive PC dashboard not only let doctors ask specific questions and filter the results based on key performance indicators (KPIs) such as Gleason grade and patient age and status, but may also help patients to better understand different treatment outcomes, such as side effects over time, and to take an active role in their treatment decisions. Currently, we are extending the results to a real-time interactive dashboard in which users (either patients or doctors) can easily explore the data by choosing preferred attributes in order to make better near real-time decisions.

Keywords: business intelligence, dashboard, decision making, healthcare, prostate cancer, value-based healthcare

Procedia PDF Downloads 128
30605 Importance of Developing a Decision Support System for Diagnosis of Glaucoma

Authors: Murat Durucu

Abstract:

Glaucoma is a condition that leads to irreversible blindness; early diagnosis and appropriate interventions can help patients preserve their vision for a longer time. This study addresses the importance of developing a decision support system for glaucoma diagnosis. Glaucoma occurs when elevated pressure within the eye damages the optic nerve and causes deterioration of vision. The disease progresses through different levels of severity, up to blindness. Diagnosis at an early stage allows a chance for therapies that slow the progression of the disease. In recent years, imaging technologies such as Heidelberg Retinal Tomography (HRT), Stereoscopic Disc Photo (SDP) and Optical Coherence Tomography (OCT) have been used for the diagnosis of glaucoma. Owing to its better accuracy and faster imaging, OCT has become the most common method used by experts. Despite the precision and speed of OCT and HRT imaging, difficulties and mistakes still occur in the diagnosis of glaucoma, especially in the early stages, and it is difficult for clinicians to obtain objective results during diagnosis and staging. It therefore seems very important to develop an objective decision support system for diagnosing and grading glaucoma. By using OCT images and pattern recognition systems, it is possible to develop a support system that helps doctors make their decisions on glaucoma. Thus, in this study, we develop an evaluation and support system for the use of doctors. Pattern-recognition-based computer software would help doctors make an objective evaluation of their patients. It is intended that, after the development and evaluation of the software, the system will serve doctors in different hospitals.
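
As a rough illustration of the proposed pattern-recognition approach, the sketch below classifies feature vectors extracted from OCT images with a standard classifier; the feature matrix, labels and classifier choice are hypothetical, not the system described in the abstract.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))      # e.g. RNFL thickness and cup/disc measurements per eye (dummy values)
y = rng.integers(0, 2, size=200)    # 0 = healthy, 1 = glaucoma (dummy labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```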

Keywords: decision support system, glaucoma, image processing, pattern recognition

Procedia PDF Downloads 276
30604 Re-Os Application to Petroleum System: Implications from the Geochronology and Oil-Source Correlation of Duvernay Petroleum System, Western Canadian Sedimentary Basin

Authors: Junjie Liu, David Selby, Mark Obermajer, Andy Mort

Abstract:

The inaugural application of Re-Os dating, which is based on the beta decay of 187Re to 187Os with a long half-life of 41.577 ± 0.12 Byr and initially used for sulphide minerals and organic rich rocks, to petroleum systems was performed on bitumen of the Polaris Mississippi Valley Type Pb-Zn deposit, Canada. To further our understanding of the Re-Os system and its application to petroleum systems, here we present a study on Duvernay Petroleum System, Western Canadian Sedimentary Basin. The Late Devonian Duvernay Formation organic-rich shales are the only source of the petroleum system. The Duvernay shales reached maturation only during the Laramide Orogeny (80 – 35 Ma) and the generated oil migrated short distances into the interfingering Leduc reefs and overlying Nisku carbonates with no or little secondary alteration post oil-generation. Although very low in Re and Os, the asphaltenes of Duvernay-sourced Leduc and Nisku oils define a Laramide Re-Os age. In addition, the initial Os isotope compositions of the oil samples are similar to that of the Os isotope composition of the Duvernay Formation at the time of oil generation, but are very different to other oil-prone intervals of the basin, showing the ability of the Os isotope composition as an inorganic oil-source correlation tool. In summary, the ability of the Re-Os geochronometer to record the timing of oil generation and trace the source of an oil is confirmed in the Re-Os study of Duvernay Petroleum System.
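
A worked sketch of the Re-Os age equation underlying this kind of dating, 187Os/188Os = (187Os/188Os)i + (187Re/188Os)(e^(λt) − 1), is given below; the isochron slope is a hypothetical value chosen to give a Laramide-scale age and is not data from the Duvernay study.

```python
import math

HALF_LIFE_187RE = 41.577e9                       # years, 187Re -> 187Os beta decay
decay_const = math.log(2) / HALF_LIFE_187RE      # lambda, ~1.67e-11 per year

isochron_slope = 1.0e-3                          # hypothetical slope of 187Os/188Os vs 187Re/188Os
age_years = math.log(isochron_slope + 1.0) / decay_const
print(f"Re-Os model age: {age_years / 1e6:.1f} Ma")   # ~60 Ma for this slope, i.e. Laramide-scale
```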

Keywords: Duvernay petroleum system, oil generation, oil-source correlation, Re-Os

Procedia PDF Downloads 293
30603 Acclimatation of Bacterial Communities for Biohydrogen Production by Co-Digestion Process in Batch and Continuous Systems

Authors: Gómez Romero Jacob, García Peña Elvia Inés

Abstract:

The co-digestion process of crude cheese whey (CCW) with fruit vegetable waste (FVW) for biohydrogen production was investigated in batch and continuous systems, in stirred 1.8 L bioreactors at 37°C. Five different C/N ratios (7, 17, 21, 31, and 46) were tested in batch systems. While, in continuous system eight conditions were evaluated, hydraulic retention time (from 60 to 10 h) and organic load rate (from 21.96 to 155.87 g COD/L d). Data in batch tests showed a maximum specific biohydrogen production rate of 10.68 mmol H2/Lh and a biohydrogen yield of 449.84 mL H2/g COD at a C/N ratio of 21. In continuous co-digestion system, the optimum hydraulic retention time and organic loading rate were 17.5 h and 80.02 g COD/L d, respectively. Under these conditions, the highest volumetric production hydrogen rate (VPHR) and hydrogen yield were 11.02 mmol H2/L h, 800 mL H2/COD, respectively. A pyrosequencing analysis showed that the main acclimated microbial communities for co-digestion studies consisted of Bifidobacterium, with 85.4% of predominance. Hydrogen producing bacteria such as Klebsiella (9.1%), Lactobacillus (0.97%), Citrobacter (0.21%), Enterobacter (0.27%), and Clostridium (0.18%) were less abundant at this culture period. The microbial population structure was correlated with the lactate, acetate, and butyrate profiles obtained. Results demonstrated that the co-digestion of CCW with FVW improves biohydrogen production due to a better nutrient balance and improvement of the system’s buffering capacity.

Keywords: acclimatation, biohydrogen, co-digestion, microbial community

Procedia PDF Downloads 540
30602 A Method and System for Secure Authentication Using One Time QR Code

Authors: Divyans Mahansaria

Abstract:

User authentication is an important security measure for protecting confidential data and systems. However, the vulnerability while authenticating into a system has significantly increased. Thus, necessary mechanisms must be deployed during the process of authenticating a user to safeguard him/her from the vulnerable attacks. The proposed solution implements a novel authentication mechanism to counter various forms of security breach attacks including phishing, Trojan horse, replay, key logging, Asterisk logging, shoulder surfing, brute force search and others. QR code (Quick Response Code) is a type of matrix barcode or two-dimensional barcode that can be used for storing URLs, text, images and other information. In the proposed solution, during each new authentication request, a QR code is dynamically generated and presented to the user. A piece of generic information is mapped to plurality of elements and stored within the QR code. The mapping of generic information with plurality of elements, randomizes in each new login, and thus the QR code generated for each new authentication request is for one-time use only. In order to authenticate into the system, the user needs to decode the QR code using any QR code decoding software. The QR code decoding software needs to be installed on handheld mobile devices such as smartphones, personal digital assistant (PDA), etc. On decoding the QR code, the user will be presented a mapping between the generic piece of information and plurality of elements using which the user needs to derive cipher secret information corresponding to his/her actual password. Now, in place of the actual password, the user will use this cipher secret information to authenticate into the system. The authentication terminal will receive the cipher secret information and use a validation engine that will decipher the cipher secret information. If the entered secret information is correct, the user will be provided access to the system. Usability study has been carried out on the proposed solution, and the new authentication mechanism was found to be easy to learn and adapt. Mathematical analysis of the time taken to carry out brute force attack on the proposed solution has been carried out. The result of mathematical analysis showed that the solution is almost completely resistant to brute force attack. Today’s standard methods for authentication are subject to a wide variety of software, hardware, and human attacks. The proposed scheme can be very useful in controlling the various types of authentication related attacks especially in a networked computer environment where the use of username and password for authentication is common.
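
A much-simplified sketch of the one-time mapping idea is shown below: each login generates a fresh substitution table that is carried in the QR code, and the user submits the substituted string instead of the real password. QR encoding, transport and server-side storage are omitted, and all names are illustrative.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def new_challenge():
    """Server side: build a fresh one-time substitution table for this login."""
    shuffled = list(ALPHABET)
    secrets.SystemRandom().shuffle(shuffled)
    return dict(zip(ALPHABET, shuffled))        # this table is what the QR code carries

def encode_password(password, table):
    """User side (after decoding the QR): derive the cipher secret to type in."""
    return "".join(table[ch] for ch in password)

def verify(submitted, stored_password, table):
    """Server side: re-derive the expected cipher secret and compare."""
    return secrets.compare_digest(submitted, encode_password(stored_password, table))

table = new_challenge()
cipher = encode_password("S3cretPwd", table)    # what the user actually enters
print(verify(cipher, "S3cretPwd", table))       # True; the table is discarded after this login
```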

Keywords: authentication, QR code, cipher / decipher text, one time password, secret information

Procedia PDF Downloads 253
30601 A Unique Exact Approach to Handle a Time-Delayed State-Space System: The Extraction of Juice Process

Authors: Mohamed T. Faheem Saidahmed, Ahmed M. Attiya Ibrahim, Basma GH. Elkilany

Abstract:

This paper discusses the application of a Time Delay Control (TDC) compensation technique to the juice extraction process in a sugar mill. The objective is to improve the control performance of the process and increase extraction efficiency. The paper presents the mathematical model of the juice extraction process and the design of the TDC compensation controller. Simulation results show that the TDC compensation technique can effectively suppress the time delay effect in the process and improve control performance. The extraction efficiency is also significantly increased with the application of the TDC compensation technique. The proposed approach, simulated in MATLAB, provides a practical solution for improving the juice extraction process in sugar mills.
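
The keywords point to a Smith-predictor-style compensation; the sketch below shows the general idea for a first-order plant with dead time under PI control. The plant parameters, delay and gains are illustrative assumptions, not the juice-extraction model.

```python
from collections import deque

a, b, delay = 0.9, 0.1, 10          # plant: y[k] = a*y[k-1] + b*u[k-1-delay]
kp, ki = 2.0, 0.15                  # PI gains tuned for the delay-free model
setpoint, steps = 1.0, 300

u_buffer = deque([0.0] * delay, maxlen=delay)   # models the transport dead time
y = y_model = y_model_delayed = 0.0
integral = 0.0

for _ in range(steps):
    # Smith predictor: feed back the delay-free model plus the plant/model mismatch
    feedback = y_model + (y - y_model_delayed)
    error = setpoint - feedback
    integral += error
    u = kp * error + ki * integral

    u_delayed = u_buffer.popleft()
    u_buffer.append(u)
    y = a * y + b * u_delayed                               # true plant sees the delayed input
    y_model = a * y_model + b * u                           # internal model without the delay
    y_model_delayed = a * y_model_delayed + b * u_delayed   # internal model with the delay

print(f"plant output after {steps} steps: {y:.3f} (setpoint {setpoint})")
```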

Keywords: time delay control (TDC), exact and unique state space model, delay compensation, Smith predictor

Procedia PDF Downloads 65
30600 Exploring the Success of Live Streaming Commerce in China: A Literature Analysis

Authors: Ming Gao, Matthew Tingchi Liu, Hoi Ngan Loi

Abstract:

Live streaming refers to the video contents generated by broadcasters and shared with viewers in real-time by uploading them to short-video platforms. In recent years, individual KOL broadcasters have successfully made use of live streams to sell a large amount of goods to the consumers. For example, Wei Ya, the Number 1 broadcaster in Taobao Live, sold products worth RMB 2.7 billion (USD 0.38 billion) in 2018. Regarding the success of live streaming commerce (LSC) in China, this study explores the elements of the booming LSC industry and attempts to explain the reasons behind its prosperity. A systematic review of industry reports and academic papers was conducted to summarize the latest findings in this field. And the results of this investigation showed that a live streaming eco-system has been established by the LSC players, namely, the platform, the broadcaster, the product supplier, and the viewer. In this eco-system, all players have complementary advantages and needs, and their close cooperation leads to a win-win situation. For instance, platforms and broadcasters have abundant internet traffic, which needs to be monetized, while product suppliers have mature supply chains and the need of promoting the products. In addition, viewers are attached to the LSC platforms to get product information, bargains, and entertainment. This study highlights the importance of the mass-personal hybrid communication nature of live streaming because its interpersonal communication feature increases consumers’ positive experiences, while its mass media broadcasting feature facilitates product promotion. Another innovative point of this study lies in its inclusion of the special characteristic of Chinese Internet culture - entertainment. The entertaining genres of the live streams created by broadcasters serve as down-to-earth approaches to reach their audiences easily. Further, the nature of video, i.e., the dynamic and salient stimulus, is emphasized in this study. Since video is more engaging, it can attract viewers in a quick and easy way. Meanwhile, the abundant, interesting, high-quality, and free short videos have added “stickiness” to platforms by retaining users and prolonging their staying time on the platforms. In addition, broadcasters’ important characters, such as physical attractiveness, humor, sex appeal, kindness, communication skills, and interactivity, are also identified as important factors that influence consumers’ engagement and purchase intention. In conclusion, all players have their own proper places in this live streaming eco-system, in which they work seamlessly to give full play to their respective advantages, with each player taking what it needs and offering what it has. This has contributed to the success of live streaming commerce in China.

Keywords: broadcasters, communication, entertainment, live streaming commerce, viewers

Procedia PDF Downloads 106
30599 Identification of Membrane Foulants in Direct Contact Membrane Distillation for the Treatment of Reject Brine

Authors: Shefaa Mansour, Hassan Arafat, Shadi Hasan

Abstract:

Management of reverse osmosis (RO) brine has become a major area of research due to the environmental concerns associated with it. This study worked on studying the feasibility of the direct contact membrane distillation (DCMD) system in the treatment of this RO brine. The system displayed great potential in terms of its flux and salt rejection, where different operating conditions such as the feed temperature, feed salinity, feed and permeate flow rates were varied. The highest flux of 16.7 LMH was reported with a salt rejection of 99.5%. Although the DCMD has displayed potential of enhanced water recovery from highly saline solutions, one of the major drawbacks associated with the operation is the fouling of the membranes which impairs the system performance. An operational run of 77 hours for the treatment of RO brine of 56,500 ppm salinity was performed in order to investigate the impact of fouling of the membrane on the overall operation of the system over long time operations. Over this time period, the flux was observed to have reduced by four times its initial flux. The fouled membrane was characterized through different techniques for the identification of the organic and inorganic foulants that have deposited on the membrane surface. The Infrared Spectroscopy method (IR) was used to identify the organic foulants where SEM images displayed the surface characteristics of the membrane. As for the inorganic foulants, they were identified using X-ray Diffraction (XRD), Ion Chromatography (IC) and Energy Dispersive Spectroscopy (EDS). The major foulants found on the surface of the membrane were inorganic salts such as sodium chloride and calcium sulfate.

Keywords: brine treatment, membrane distillation, fouling, characterization

Procedia PDF Downloads 419
30598 Development, Characterization and Performance Evaluation of a Weak Cation Exchange Hydrogel Using Ultrasonic Technique

Authors: Mohamed H. Sorour, Hayam F. Shaalan, Heba A. Hani, Eman S. Sayed, Amany A. El-Mansoup

Abstract:

Heavy metals (HMs) present an increasing threat to aquatic and soil environments. Thus, techniques should be developed for the removal and/or recovery of those HMs from point sources in the generating industries. This paper reports our work on in-house developed weak cation exchange polyacrylate hydrogel kaolin composites for heavy metals removal. This type of composite offers desirable characteristics and functions, including mechanical strength, bed porosity and cost advantages. The paper emphasizes the effect of varying the crosslinker (methylenebis(acrylamide)) concentration. The prepared cation exchanger has been subjected to intensive characterization using X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF) and the Brunauer-Emmett-Teller (BET) method. Moreover, the performance was investigated using synthetic wastewater and real wastewater from an industrial complex east of Cairo. The simulated and real wastewater compositions addressed Cr, Co, Ni and Pb in the ranges of 92-115, 91-103, 86-88 and 99-125, respectively. Adsorption experiments have been conducted in both batch and column modes. In general, batch tests revealed enhanced cation exchange capacities of 70, 72, 78.2 and 99.9 mg/g from single synthetic wastes, while removal efficiencies of 82.2, 86.4, 44.4 and 96% were obtained for Cr, Co, Ni and Pb, respectively, from mixed synthetic wastes. It is concluded that the mixed synthetic and real wastewaters yield lower adsorption capacities than single-metal solutions. It is worth mentioning that Pb attained higher adsorption capacities, with comparable results at all tested concentrations of synthetic and real wastewaters. Pilot-scale experiments were also conducted for mixed synthetic waste in a fluidized bed column over a 48-hour cycle time, which revealed removal efficiencies of 86.4%, 58.5%, 66.8% and 96.9% for Cr, Co, Ni and Pb, respectively. Regeneration was also conducted using saline and acid regenerants; maximum regeneration efficiencies for the column studies were about 30% to 60% higher than those of the batch studies. Studies are currently under way to enhance the regeneration efficiency to enable successful scaling up of the adsorption column.
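
For reference, the batch removal efficiencies and uptake capacities quoted above are typically computed as follows; the concentrations, solution volume and adsorbent mass in the sketch are hypothetical illustration values.

```python
def batch_performance(c0, ce, volume_l, mass_g):
    """Removal efficiency (%) and uptake q (mg per g of adsorbent) from a batch test."""
    removal_pct = (c0 - ce) / c0 * 100.0
    capacity_mg_g = (c0 - ce) * volume_l / mass_g
    return removal_pct, capacity_mg_g

# Hypothetical Pb(II) batch test: 100 mg/L feed, 4 mg/L residual, 0.5 L solution, 0.5 g composite
removal, q = batch_performance(c0=100.0, ce=4.0, volume_l=0.5, mass_g=0.5)
print(f"removal = {removal:.1f} %, capacity = {q:.1f} mg/g")   # 96.0 %, 96.0 mg/g
```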

Keywords: polyacrylate hydrogel kaolin, ultrasonic irradiation, heavy metals, adsorption and regeneration

Procedia PDF Downloads 106
30597 Greenhouse Controlled with Graphical Plotting in Matlab

Authors: Bruno R. A. Oliveira, Italo V. V. Braga, Jonas P. Reges, Luiz P. O. Santos, Sidney C. Duarte, Emilson R. R. Melo, Auzuir R. Alexandria

Abstract:

This project aims to build a controlled greenhouse: a structure in which a given range of temperature values (°C), produced by the radiation emitted by an incandescent light, can be maintained around a previously defined setpoint, characterizing a kind of on-off control. Its differential is the plotting of temperature-versus-time graphs in MATLAB via serial communication, which makes it possible to connect the greenhouse to a computer and monitor its parameters. The control is performed with a PIC 16F877A microcontroller, which converts analog signals to digital, performs serial communication with the MAX232 IC and drives the switching transistors. The language used to program the PIC is Basic. There is also a cooling system comprising two 12 V coolers mounted on the lateral structure, one used for ventilation and the other for exhaust. An LM35DZ sensor is used to measure the temperature inside. Another mechanism used in the greenhouse construction comprises a reed switch and a magnet; their function is to recognize the door position, so that a signal is sent to a buzzer when the door is open. In addition, LEDs help to identify the operating state of the greenhouse. To facilitate human-machine communication, an LCD display shows the real-time temperature and other information. The average operating range of the design without any major problems, taking into account the limitations of the construction material and of the electrical current conduction, is approximately 65 to 70 °C. The project is efficient under these conditions, that is, when one wishes to obtain information from a given material to be tested at temperatures that are not very high. The automation of the greenhouse facilitates temperature control and provides a structure that establishes the correct environment for the most diverse applications.
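
The on-off (hysteresis) temperature control logic described above can be sketched as follows, written here in Python for illustration only; the actual firmware runs in Basic on the PIC, and the setpoint band and simulated LM35 readings are assumptions.

```python
def lamp_command(temp_c, lamp_on, setpoint=67.5, band=2.5):
    """Return True to keep the incandescent lamp on, False to switch it off."""
    if temp_c < setpoint - band:
        return True            # too cold: turn the heat source on
    if temp_c > setpoint + band:
        return False           # too hot: turn it off (the coolers handle venting)
    return lamp_on             # inside the band: keep the previous state

lamp = False
for reading in [60.0, 63.0, 66.0, 69.5, 71.0, 68.0, 64.0]:   # simulated LM35 samples, deg C
    lamp = lamp_command(reading, lamp)
    print(f"T = {reading:.1f} C -> lamp {'ON' if lamp else 'OFF'}")
```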

Keywords: greenhouse, microcontroller, temperature, control, MATLAB

Procedia PDF Downloads 385
30596 A Theoretical Study of Multi-Leaf Spring in Seismic Response Control

Authors: M. Ezati Kooshki, H. Pourmohamad

Abstract:

Leaf spring dampers are used in commercial vehicles and heavy trucks. The main function of this damper in these vehicles is protection against damage and providing comfort for drivers by creating suspension between the road and the vehicle. This paper presents a new device, a circular leaf spring damper based on the spring frequently used on vehicles, aiming to provide seismic protection of structures. Finite element analyses were conducted on several one-story structures using finite element software (Abaqus, v6.10-1). Time history analyses were conducted with the records of the Kobe (1995) and San Fernando (1971) ground motions to demonstrate the advantages of using leaf springs in structures as compared to a simple bracing system. This paper also suggests extending the use of this damper in structures, considering its large control force, high-cycle fatigue resistance and low price.

Keywords: bracing system, finite element analysis, leaf spring, seismic protection, time history analysis

Procedia PDF Downloads 388
30595 Multi-Level Framework for Effective Use of Stock Ordering System: Case Study of Small Enterprises in Kgautswane

Authors: Lethamaga Tladi, Ray Kekwaletswe

Abstract:

This study sought to conceptualise a multi-level framework for the effective use of a stock ordering system in small enterprises in a rural area context. An interpretive research methodology was used to enable the researcher to analyse, in depth, the subjective meanings of small enterprises' employees in using the stock ordering system. The empirical data were collected from 13 small enterprises' employees as participants through semi-structured interviews and observations. The Interpretive Phenomenological Analysis (IPA) approach was used to analyse the employees' own accounts of their lived experiences of stock ordering system use, in terms of their relatedness to, and cognitive engagement with, the system. A case study of Kgautswane, a rural area in Limpopo Province, South Africa, served as the social context where the phenomenon manifested. Technology-Organisation-Environment theory (TOE), the Technology-to-Performance Chain model (TPC), and Representation Theory (RT) underpinned this study. In this multi-level study, the findings revealed that, at the organisational level, the effective use of the stock ordering system was associated with organisational performance gains such as efficiency, productivity, quality, competitiveness, and market share. Equally, at the individual level, the effective use of the stock ordering system minimised the end-users' effort and time to accomplish their tasks, which yields improved individual performance. A multi-level framework for the effective use of the stock ordering system is presented.

Keywords: effective use, multi-dimensions of use, multi-level of use, multi-level research, small enterprises, stock ordering system

Procedia PDF Downloads 151
30594 Alterations in the Abundance of Ruminal Microbial Species during the Peripartal Period in Dairy Cows

Authors: S. Alqarni, J. C. McCann, A. Palladino, J. J. Loor

Abstract:

Seven fistulated Holstein cows were used from 3 weeks prepartum to 4 weeks postpartum to determine the relative abundance of 7 different species of ruminal microorganisms. The prepartum diet was based on corn silage. In the postpartum, diet included ground corn, grain by-products, and alfalfa haylage. Ruminal digesta were collected at five times: -14, -7, 10, 20, and 28 days around parturition. Total DNA from ruminal digesta was isolated and real-time quantitative PCR was used to determine the relative abundance of bacterial species. Eubacterium ruminantium and Selenomonas ruminantium were not affected by time (P>0.05). Megasphaera elsdenii and Prevotella bryantii increased significantly postpartum (P<0.001). Conversely, Butyrivibrio proteoclasticus decreased gradually from -14 through 28 days (P<0.001). Fibrobacter succinogenes was affected by time being lowest at day 10 (P=0.02) while Anaerovibrio lipolytica recorded the lowest abundance at -7 d followed by an increase by 20 days postpartum (P<0.001). Overall, these results indicate that changes in diet after parturition affect the abundance of ruminal bacteria, particularly M. elsdenii (a lactate-utilizing bacteria) and P. bryantii (a starch-degrading bacteria) which increased markedly after parturition likely as a consequence of a higher concentrate intake.

Keywords: rumen bacteria, transition cows, rumen metabolism, peripartal period

Procedia PDF Downloads 549
30593 Information Literacy Skills of Legal Practitioners in Khyber Pakhtunkhwa-Pakistan: An Empirical Study

Authors: Saeed Ullah Jan, Shaukat Ullah

Abstract:

Purpose of the study: The main theme of this study is to explore the information literacy skills of the law practitioners in Khyber Pakhtunkhwa-Pakistan under the heading "Information Literacy Skills of Legal Practitioners in Khyber Pakhtunkhwa-Pakistan: An Empirical Study." Research Method and Procedure: To conduct this quantitative study, the simple random sample approach is used. An adapted questionnaire is distributed among 254 lawyers of Dera Ismail Khan through personal visits and electronic means. The data collected is analyzed through SPSS (Statistical Package for Social Sciences) software. Delimitations of the study: The study is delimited to the southern district of Khyber Pakhtunkhwa: Dera Ismael Khan. Key Findings: Most of the lawyers of District Dera Ismail Khan of Khyber Pakhtunkhwa can recognize and understand the needed information. A large number of lawyers are capable of presenting information in both written and electronic forms. They are not comfortable with different legal databases and using various searching and keyword techniques. They have less knowledge of Boolean operators for locating online information. Conclusion and Recommendations: Efforts should be made to arrange refresher courses and training workshops on the utilization of different legal databases and different search techniques for retrieval of information sources. This practice will enhance the information literacy skills of lawyers, which will ultimately result in a better legal system in Pakistan. Practical implication(s): The findings of the study will motivate the policymakers and authorities of legal forums to restructure the information literacy programs to fulfill the lawyers' information needs. Contribution to the knowledge: No significant work has been done on the lawyers' information literacy skills in Khyber Pakhtunkhwa-Pakistan. It will bring a clear picture of the information literacy skills of law practitioners and address the problems faced by them during the seeking process.

Keywords: information literacy-Pakistan, information literacy-lawyers, information literacy-lawyers-KP, law practitioners-Pakistan

Procedia PDF Downloads 131
30592 Fatigue Analysis of Spread Mooring Line

Authors: Chanhoe Kang, Changhyun Lee, Seock-Hee Jun, Yeong-Tae Oh

Abstract:

Offshore floating structure under the various environmental conditions maintains a fixed position by mooring system. Environmental conditions, vessel motions and mooring loads are applied to mooring lines as the dynamic tension. Because global responses of mooring system in deep water are specified as wave frequency and low frequency response, they should be calculated from the time-domain analysis due to non-linear dynamic characteristics. To take into account all mooring loads, environmental conditions, added mass and damping terms at each time step, a lot of computation time and capacities are required. Thus, under the premise that reliable fatigue damage could be derived through reasonable analysis method, it is necessary to reduce the analysis cases through the sensitivity studies and appropriate assumptions. In this paper, effects in fatigue are studied for spread mooring system connected with oil FPSO which is positioned in deep water of West Africa offshore. The target FPSO with two Mbbls storage has 16 spread mooring lines (4 bundles x 4 lines). The various sensitivity studies are performed for environmental loads, type of responses, vessel offsets, mooring position, loading conditions and riser behavior. Each parameter applied to the sensitivity studies is investigated from the effects of fatigue damage through fatigue analysis. Based on the sensitivity studies, the following results are presented: Wave loads are more dominant in terms of fatigue than other environment conditions. Wave frequency response causes the higher fatigue damage than low frequency response. The larger vessel offset increases the mean tension and so it results in the increased fatigue damage. The external line of each bundle shows the highest fatigue damage by the governed vessel pitch motion due to swell wave conditions. Among three kinds of loading conditions, ballast condition has the highest fatigue damage due to higher tension. The riser damping occurred by riser behavior tends to reduce the fatigue damage. The various analysis results obtained from these sensitivity studies can be used for a simplified fatigue analysis of spread mooring line as the reference.
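
The damage summation that typically follows this kind of time-domain tension analysis (binned tension ranges accumulated with a T-N curve and Miner's rule) is sketched below; the T-N coefficients correspond to a commonly quoted studless-chain curve, and the tension ranges and cycle counts are placeholders, not the FPSO results.

```python
def cycles_to_failure(range_ratio, k=316.0, m=3.0):
    """T-N curve of the form N * R^m = k, with R = tension range / MBL (studless chain example)."""
    return k / range_ratio ** m

# Hypothetical rainflow output: (tension range / MBL, cycles per year)
bins = [(0.02, 1.0e6), (0.05, 1.0e5), (0.10, 3.0e3)]

annual_damage = sum(n / cycles_to_failure(r) for r, n in bins)   # Miner's rule summation
print(f"annual fatigue damage = {annual_damage:.4f}, fatigue life ~ {1.0 / annual_damage:.0f} years")
```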

Keywords: mooring system, fatigue analysis, time domain, non-linear dynamic characteristics

Procedia PDF Downloads 320
30591 Simulation of Wind Solar Hybrid Power Generation for Pumping Station

Authors: Masoud Taghavi, Gholamreza Salehi, Ali Lohrasbi Nichkoohi

Abstract:

Despite the growing use of renewable energies in different fields, the application of this technology to water supply has received less attention. Photovoltaic-wind hybrid systems are a relatively new topic in renewable energy; such a system includes photovoltaic arrays, wind turbines, a battery bank as a storage system and a diesel generator as a backup system. In this investigation, climate data, including the average wind speed and solar radiation at each time of the year, are first collected and analysed. Four wind turbine models, photovoltaic panels at six levels of relative power, and battery bank and diesel generator capacities in seven states are then combined in two models, and the hours of operation on renewables, on the diesel generator and on the battery bank are checked. A hybrid wind-solar power generation system for the pumping station, under optimized conditions, is presented.
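
A highly simplified hourly energy-balance sketch of this kind of hybrid sizing logic is given below: PV and wind supply the pumping load first, the battery absorbs surpluses or covers deficits, and the diesel generator acts as backup. All profiles and component sizes are illustrative assumptions.

```python
pv_kw   = [0.0, 0.0, 1.5, 3.0, 4.0, 3.5, 2.0, 0.5]   # sample hourly PV output, kW
wind_kw = [2.0, 1.8, 1.0, 0.8, 1.2, 2.5, 3.0, 2.8]   # sample hourly wind output, kW
load_kw = [3.0] * 8                                  # constant pumping-station demand, kW

battery_kwh, battery_cap = 1.0, 10.0                 # state of charge / capacity
diesel_hours = 0

for pv, wind, load in zip(pv_kw, wind_kw, load_kw):
    net = pv + wind - load                           # + surplus, - deficit
    if net >= 0:
        battery_kwh = min(battery_cap, battery_kwh + net)   # charge with the surplus
    elif battery_kwh + net >= 0:
        battery_kwh += net                                  # battery covers the deficit
    else:
        diesel_hours += 1                                   # backup generator steps in
        battery_kwh = 0.0

print(f"diesel generator needed in {diesel_hours} of {len(load_kw)} hours, "
      f"final battery state {battery_kwh:.1f} kWh")
```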

Keywords: renewable energy, wind and solar energy, hybrid systems, pumping station

Procedia PDF Downloads 379
30590 Towards Positive Identity Construction for Japanese Non-Native English Language Teachers

Authors: Yumi Okano

Abstract:

The low level of English proficiency among Japanese people has been a problem for a long time. Japanese non-native English language teachers, under social or ideological constraints, feel a gap between government policy and their language proficiency and cannot maintain high self-esteem. This paper focuses on current Japanese policies and the social context in which teachers are placed and examines the measures necessary for their positive identity formation from a macro-meso-micro perspective. Some suggestions for achieving this are: 1) teachers should free themselves from the idea of the native speaker and embrace local needs and accents; 2) teachers should be involved in student discussions as facilitators and individuals so that they can be good role models for their students; 3) teachers should invest in their classrooms; 4) guidelines and training should be provided to help teachers gain confidence, in addition to reducing their workload to make more time available; and 5) opportunities for investment outside the classroom, in the real world, should be expanded.

Keywords: language teacher identity, native speakers, government policy, critical pedagogy, investment

Procedia PDF Downloads 88
30589 Travel Time Estimation of Public Transport Networks Based on Commercial Incidence Areas in Quito Historic Center

Authors: M. Fernanda Salgado, Alfonso Tierra, David S. Sandoval, Wilbert G. Aguilar

Abstract:

Public transport buses usually vary their speed depending on the place and the number of passengers. They require efficient travel planning, a plan that will help them choose the fastest route. Initially, an estimation tool is necessary to determine the travel time of each route, clearly establishing the possibilities. In this work, we give a practical solution that makes use of a concept we define as commercial incidence areas. These areas are based on the hypothesis that in commercial places there is a greater flow of people and therefore the buses remain longer at the stops. Each area covers one or more route segments, which have an incidence factor that allows the travel times to be estimated. In addition, initial results are presented that verify the hypothesis and estimate the travel times adequately. In future work, we will use this approach to build an efficient travel planning system.
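
The estimation idea can be illustrated with a short sketch: a route is a sequence of segments, each with a free-flow travel time and a commercial incidence factor that inflates the time spent where passenger flow is higher. The segment names, times and factor values below are hypothetical.

```python
route = [
    # (segment name, free-flow time in minutes, commercial incidence factor)
    ("Av. 24 de Mayo",    4.0, 1.6),   # inside a commercial incidence area
    ("Calle residencial", 3.0, 1.0),   # no commercial incidence
    ("Plaza del Teatro",  5.0, 1.4),
]

def estimated_travel_time(segments):
    return sum(base_time * factor for _, base_time, factor in segments)

print(f"estimated travel time: {estimated_travel_time(route):.1f} min")
# 4.0*1.6 + 3.0*1.0 + 5.0*1.4 = 16.4 min
```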

Keywords: commercial incidence, planning, public transport, speed travel, travel time

Procedia PDF Downloads 225
30588 Development of 3D Laser Scanner for Robot Navigation

Authors: Ali Emre Öztürk, Ergun Ercelebi

Abstract:

Autonomous robotic systems need equipment analogous to a human eye for their movement. Robotic camera systems, distance sensors and 3D laser scanners have been used in the literature. In this study, a 3D laser scanner has been produced for such autonomous robotic systems. In general, 3D laser scanners use two-dimensional laser range finders that are moved along one axis (1D) to generate the model. In this study, the model is obtained by a one-dimensional laser range finder that is moved along two axes (2D), and because of this the laser scanner has been produced more cheaply. Furthermore, a motor driver and an embedded system control board have been used for the laser scanner, and a user interface card handles the communication between those boards and the computer. With this laser scanner, the density of the objects, the distance between the objects and the necessary pathways for the robot can be calculated. The data collected by the laser scanner system are converted into Cartesian coordinates to be modeled in the AutoCAD program. This study also shows the synchronization between the computer user interface, AutoCAD and the embedded systems. As a result, it makes the solution cheaper for such systems. The scanning results are sufficient for an autonomous robot, but the scan cycle time should be improved. This study also contributes to further work on the hardware and software requirements, since it offers powerful performance at a low cost.
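
The coordinate conversion step mentioned above, turning each 1D range reading taken at a known pan and tilt angle into Cartesian coordinates for export to AutoCAD, is sketched below with hypothetical sample readings.

```python
import math

def to_cartesian(distance_m, pan_deg, tilt_deg):
    """Convert a range reading at given pan/tilt angles to (x, y, z) in metres."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    x = distance_m * math.cos(tilt) * math.cos(pan)
    y = distance_m * math.cos(tilt) * math.sin(pan)
    z = distance_m * math.sin(tilt)
    return x, y, z

scan = [(2.50, 0.0, 0.0), (2.52, 5.0, 0.0), (3.10, 5.0, 10.0)]  # (range, pan, tilt) samples
for r, pan, tilt in scan:
    x, y, z = to_cartesian(r, pan, tilt)
    print(f"pan={pan:5.1f} tilt={tilt:5.1f} -> ({x:.3f}, {y:.3f}, {z:.3f}) m")
```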

Keywords: 3D laser scanner, embedded system, 1D laser range finder, 3D model

Procedia PDF Downloads 260
30587 The Concept of Neurostatistics as a Neuroscience

Authors: Igwenagu Chinelo Mercy

Abstract:

This study is on the concept of Neurostatistics in relation to neuroscience. Neuroscience, also known as neurobiology, is the scientific study of the nervous system. In the study of neuroscience, it has been noted that brain function and its relation to the processes of acquiring knowledge and behaviour can be better explained by the use of various interrelated methods. The scope of neuroscience has broadened over time to include different approaches used to study the nervous system at different scales. Neurostatistics, as viewed in this study, is a statistical concept that uses techniques similar to neuron mechanisms to solve problems, especially in the field of life science. This study is relevant in the era of artificial intelligence and machine learning in the sense that a clear understanding of the technique and its proper application could assist in solving some medical disorders that are mainly associated with the nervous system. It will also help a layman's understanding of how the nervous system works, in order to overcome some of the health challenges associated with it. For the concept to be well understood, an illustrative example using a brain-associated disorder was used for demonstration. Structural equation modelling was adopted in the analysis. The results clearly show the link between the techniques of statistical modelling and the nervous system. Hence, based on this study, the appropriate application of Neurostatistics in relation to neuroscience can be based on an understanding of the behavioural patterns of both concepts.

Keywords: brain, neurons, neuroscience, neurostatistics, structural equation modeling

Procedia PDF Downloads 53
30586 Limits Problem Solving in Engineering Careers: Competences and Errors

Authors: Veronica Diaz Quezada

Abstract:

In this article, performance and errors in solving limit problems of a real-valued function are characterised and analysed, in correspondence with competency-based education in engineering careers in the south of Chile. The methodological component is contextualised in qualitative research with a descriptive and exploratory design, including the elaboration, content validation and application of quantitative instruments consisting of two parallel forms of open-answer tests based on limit application problems. The mathematical competences and the errors made by students from five engineering careers at a public university are identified and characterised. Results show better performance only in the routine-context problem-solving competence; in such problems, students are oriented towards a rational solution or use a suitable problem-solving method, achieving the correct solution. Regarding errors, most of them are related to techniques and the incorrect use of theorems and definitions of limits of real-valued functions of a real variable.

Keywords: engineering education, errors, limits, mathematics competences, problem solving

Procedia PDF Downloads 133
30585 Comparison of Different DNA Extraction Platforms with FFPE tissue

Authors: Wang Yanping Karen, Mohd Rafeah Siti, Park MI Kyoung

Abstract:

Formalin-fixed paraffin-embedded (FFPE) tissue is important in the area of oncological diagnostics. This method of preserving tissues enables them to be stored easily at ambient temperature for a long time, decreasing the risk of losing DNA quantity and quality after extraction, reducing sample wastage, and making FFPE more cost-effective. However, extracting DNA from FFPE tissue is a challenge, as the purified DNA is often highly cross-linked, fragmented, and degraded, which causes problems for many downstream processes. In this study, the DNA extraction efficiency of One BioMed's Xceler8 automated platform is compared with that of commercially available extraction kits (Qiagen and Roche). The FFPE tissue slices were subjected to deparaffinization, pretreatment and then DNA extraction using the three platforms mentioned. The DNA quantity was determined with real-time PCR (BioRad CFX) and gel electrophoresis. The amount of DNA extracted with One BioMed's X8 platform was found to be comparable with that of the other two manual extraction kits.

Keywords: DNA extraction, FFPE tissue, qiagen, roche, one biomed X8

Procedia PDF Downloads 90
30584 A Method for Clinical Concept Extraction from Medical Text

Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg

Abstract:

Natural Language Processing (NLP) has made a major leap in the last few years, in practical integration into medical solutions; for example, extracting clinical concepts from medical texts such as medical condition, medication, treatment, and symptoms. However, training and deploying those models in real environments still demands a large amount of annotated data and NLP/Machine Learning (ML) expertise, which makes this process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that does not require costly labeled data nor ML expertise. The method includes three steps: Step 1- the user injects a large in-domain text corpus (e.g., PubMed). Then, the system builds a contextual model containing vector representations of concepts in the corpus, in an unsupervised manner (e.g., Phrase2Vec). Step 2- the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of the symptoms, the user may provide: ‘dry mouth,’ ‘itchy skin,’ and ‘blurred vision’). Then, the system matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3 –in production, there is a need to extract medical concepts from the unseen medical text. The system extracts key-phrases from the new text, then matches them against the complete set of terms from step 2, and the most semantically similar will be annotated with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: “The patient complaints on fatigue [symptom], dry skin [symptom], and Weight loss [symptom], which can be an early sign for Diabetes.” Our evaluations show promising results for extracting concepts from medical corpora. The method allows medical analysts to easily and efficiently build taxonomies (in step 2) representing their domain-specific concepts, and automatically annotate a large number of texts (in step 3) for classification/summarization of medical reports.
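
Step 2 of the method (expanding a seed set of concept terms by similarity in a phrase-embedding space) can be sketched as follows; the embeddings dictionary is a toy stand-in for a Phrase2Vec-style model trained in step 1, and the vectors and vocabulary are hypothetical.

```python
import numpy as np

embeddings = {                        # phrase -> vector (toy 3-d stand-ins for learned embeddings)
    "dry mouth":      np.array([0.90, 0.10, 0.00]),
    "itchy skin":     np.array([0.80, 0.20, 0.10]),
    "blurred vision": np.array([0.85, 0.15, 0.05]),
    "weight loss":    np.array([0.80, 0.10, 0.20]),
    "metformin":      np.array([0.10, 0.90, 0.30]),
    "insulin pump":   np.array([0.00, 0.95, 0.20]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def expand_seed(seed_terms, top_k=2):
    """Rank the remaining vocabulary by similarity to the centroid of the seed set."""
    centroid = np.mean([embeddings[t] for t in seed_terms], axis=0)
    candidates = [(t, cosine(centroid, v)) for t, v in embeddings.items() if t not in seed_terms]
    return sorted(candidates, key=lambda x: x[1], reverse=True)[:top_k]

print(expand_seed(["dry mouth", "itchy skin", "blurred vision"]))
# 'weight loss' scores far above the medication phrases, so it joins the "symptom" term set
```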

Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization

Procedia PDF Downloads 117
30583 Aggregation of Electric Vehicles for Emergency Frequency Regulation of Two-Area Interconnected Grid

Authors: S. Agheb, G. Ledwich, G. Walker, Z. Tong

Abstract:

Frequency control has become more of a concern for the reliable operation of interconnected power systems due to the integration of low-inertia, volatile renewable energy sources into the grid. Also, in case of a sudden fault, the system has less time to recover before widespread blackouts. Electric vehicles (EVs) have the potential to cooperate in Emergency Frequency Regulation (EFR) through nonlinear control of the power system in case of large disturbances. There is not enough time to communicate with each individual EV in emergency cases, and thus an aggregate model is necessary for a quick response that prevents excessive frequency deviation and the occurrence of a blackout. In this work, an aggregate of EVs is modelled as a large virtual battery in each area, considering various aspects of uncertainty, such as the number of connected EVs and their initial State of Charge (SOC), as stochastic variables. A control law is proposed and applied to the aggregate model using a Lyapunov energy function to maximize the rate of reduction of the total kinetic energy in a two-area network after the occurrence of a fault. The control methods are primarily based on the charging/discharging control of the available EVs as shunt capacity in the distribution system. Three different cases were studied considering the locational aspect of the model, with the virtual EV either in the center of the two areas or in the corners. The simulation results showed that EVs can help the generators dissipate their kinetic energy in a short time after a contingency. Earlier estimation of the possible contributions of EVs can help the supervisory control level to transmit a prompt control signal to subsystems such as the aggregator agents and the grid. Thus, the percentage of EV contribution to EFR will be characterized in future work as the goal of this study.
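
A very simplified single-area sketch of the underlying idea is given below: after a generation loss, an aggregated EV fleet injects power proportional to the frequency deviation, which limits the frequency excursion. This is a plain droop-style response used only for illustration, not the paper's two-area Lyapunov controller, and all system constants are assumptions.

```python
import numpy as np

H, D = 5.0, 1.0            # inertia constant (s) and load damping (p.u.)
dP_loss = 0.1              # sudden generation deficit (p.u.)
K_ev = 15.0                # aggregate EV response gain (p.u. power per p.u. frequency deviation)
dt, t_end = 0.01, 10.0

def simulate(ev_enabled):
    df = 0.0               # frequency deviation (p.u.)
    trace = []
    for _ in np.arange(0.0, t_end, dt):
        p_ev = -K_ev * df if ev_enabled else 0.0        # EVs discharge when frequency drops
        ddf = (-dP_loss - D * df + p_ev) / (2.0 * H)    # swing equation (no governor modelled)
        df += ddf * dt
        trace.append(df)
    return min(trace)                                   # deepest frequency dip

print(f"max frequency dip without EVs:     {simulate(False):.4f} p.u.")
print(f"max frequency dip with EV support: {simulate(True):.4f} p.u.")
# the dip is markedly smaller when the aggregated EV response is enabled
```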

Keywords: emergency frequency regulation, electric vehicle, EV, aggregation, Lyapunov energy function

Procedia PDF Downloads 87
30582 A Machine Learning Approach for Detecting and Locating Hardware Trojans

Authors: Kaiwen Zheng, Wanting Zhou, Nan Tang, Lei Li, Yuanhang He

Abstract:

The integrated circuit industry has become a cornerstone of the information society, finding widespread application in areas such as industry, communication, medicine, and aerospace. However, with the increasing complexity of integrated circuits, Hardware Trojans (HTs) implanted by attackers have become a significant threat to their security. In this paper, we proposed a hardware trojan detection method for large-scale circuits. As HTs introduce physical characteristic changes such as structure, area, and power consumption as additional redundant circuits, we proposed a machine-learning-based hardware trojan detection method based on the physical characteristics of gate-level netlists. This method transforms the hardware trojan detection problem into a machine-learning binary classification problem based on physical characteristics, greatly improving detection speed. To address the problem of imbalanced data, where the number of pure circuit samples is far less than that of HTs circuit samples, we used the SMOTETomek algorithm to expand the dataset and further improve the performance of the classifier. We used three machine learning algorithms, K-Nearest Neighbors, Random Forest, and Support Vector Machine, to train and validate benchmark circuits on Trust-Hub, and all achieved good results. In our case studies based on AES encryption circuits provided by trust-hub, the test results showed the effectiveness of the proposed method. To further validate the method’s effectiveness for detecting variant HTs, we designed variant HTs using open-source HTs. The proposed method can guarantee robust detection accuracy in the millisecond level detection time for IC, and FPGA design flows and has good detection performance for library variant HTs.
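
A compact sketch of the classification pipeline described above (netlist-derived features, SMOTETomek resampling to correct the class imbalance, and a standard classifier) is shown below; the synthetic feature matrix and labels stand in for the real gate-level physical features.

```python
import numpy as np
from imblearn.combine import SMOTETomek
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 8))                   # e.g. fan-in/fan-out, area, switching-activity features
y = (rng.random(2000) < 0.05).astype(int)        # ~5% "Trojan" samples: heavily imbalanced labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
X_bal, y_bal = SMOTETomek(random_state=0).fit_resample(X_tr, y_tr)   # rebalance the training set only

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), target_names=["clean", "trojan"]))
```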

Keywords: hardware trojans, physical properties, machine learning, hardware security

Procedia PDF Downloads 126
30581 Detection of Voltage Sag and Voltage Swell in Power Quality Using Wavelet Transforms

Authors: Nor Asrina Binti Ramlee

Abstract:

Voltage sags, voltage swells, high-frequency noise and voltage transients are kinds of disturbances in power quality, also known as power quality events. Equipment used in industry nowadays has become more sensitive to these events as its complexity increases. This makes it important to deliver clean power to the consumer. To provide better service, good power quality analysis is vital. Thus, this paper presents event detection focusing on voltage sag and swell. The method is developed by applying time-domain signal analysis using the wavelet transform approach in MATLAB. Four types of mother wavelet, namely Haar, Dmey, Daubechies, and Symlet, are used to detect the events. This project analyzed a real interrupted signal obtained from a 22 kV transmission line in Skudai, Johor Bahru, Malaysia. The signals are decomposed with the mother wavelets, and the best mother wavelet is the one that is capable of detecting the time location of the event accurately.
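
The detection principle can be sketched briefly: the level-1 detail coefficients of a discrete wavelet transform spike at the instants where a sag begins and ends. The 50 Hz test waveform and the 60% sag below are synthetic stand-ins for the recorded 22 kV signals, and db4 is used here in place of the four mother wavelets compared in the paper.

```python
import numpy as np
import pywt

fs = 3200                                    # sampling rate, samples per second
t = np.arange(0, 0.4, 1 / fs)
amplitude = np.where((t >= 0.1525) & (t < 0.2525), 0.6, 1.0)   # 60% sag lasting ~5 cycles
v = amplitude * np.sin(2 * np.pi * 50 * t)   # synthetic 50 Hz voltage waveform

cA, cD = pywt.dwt(v, "db4", mode="periodization")   # single-level DWT; cD holds the detail coefficients
threshold = 5 * np.median(np.abs(cD))               # detail coefficients spike only at the sag boundaries
event_idx = np.where(np.abs(cD) > threshold)[0]
event_times = event_idx * 2 / fs                    # each level-1 coefficient spans ~2 samples

print(f"disturbance boundaries detected near t = {event_times.min():.3f} s "
      f"and t = {event_times.max():.3f} s")
# expected near the sag start (~0.15 s) and the recovery (~0.25 s)
```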

Keywords: power quality, voltage sag, voltage swell, wavelet transform

Procedia PDF Downloads 353