Search results for: maximal data sets
24954 Combined Aerobic-Resistance Exercise Training and Broccoli Supplementation on Plasma Dectin-1 and Insulin Resistance in Men with Type 2 Diabetes
Authors: Mohammad Soltani, Ayoub Saeidi, Nikoo Khosravi, Hanieh Nohbaradar, Seyedeh Parya Barzanjeh, Hassane Zouhal
Abstract:
Exercise training and herbal supplementation both play a role in the treatment of patients with type 2 diabetes (T2D). However, the combined effects of exercise training and herbal supplements on diabetic risk markers remain unclear. This study aimed to determine the effect of 12 weeks of combined exercise and broccoli supplementation on Dectin-1 and insulin resistance in men with type 2 diabetes. Forty-four men with type 2 diabetes (age 48.52 ± 4.36 years) were randomly allocated to training-supplement (TS, n = 11), training-placebo (TP, n = 11), supplement (S, n = 11), and control-placebo (CP, n = 11) groups. The combined exercise program lasted 12 weeks, with three sessions per week; each session comprised 45 minutes of resistance training at an intensity of 60-70% of one-repetition maximum and 30 minutes of aerobic training (running) at an intensity of 60-70% of maximum heart rate. In addition, the supplement groups consumed 10 grams of broccoli per day for 12 weeks. Plasma Dectin-1, HOMA-IR, insulin, glucose, and body composition were assessed before and after training. Plasma Dectin-1, HOMA-IR, glucose, and BMI decreased significantly in the TS, TP, and S groups compared with the CP group (P < .05). In addition, insulin and skeletal muscle mass increased significantly in the TS and TP groups compared with the S and CP groups (P < .05). It is concluded that both combined (aerobic-resistance) exercise training and broccoli supplementation can improve plasma Dectin-1 and insulin resistance in type 2 diabetic patients; however, the combination of exercise training and broccoli supplementation is more effective on these markers.
Keywords: broccoli supplements, combined training, dectin-1, insulin resistance, type 2 diabetes
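The insulin-resistance index reported above, HOMA-IR, is computed from fasting glucose and fasting insulin with a standard formula; a minimal sketch (the example values are illustrative, not the study's data):

```python
def homa_ir(glucose_mmol_l: float, insulin_uu_ml: float) -> float:
    """Homeostatic Model Assessment of Insulin Resistance.

    Standard formula: fasting glucose (mmol/L) x fasting insulin (uU/mL) / 22.5.
    With glucose in mg/dL, divide by 405 instead of 22.5.
    """
    return glucose_mmol_l * insulin_uu_ml / 22.5
```

A decrease in this index after the intervention corresponds to the improvement in insulin resistance reported for the TS, TP, and S groups.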
Procedia PDF Downloads 134

24953 Estimating Destinations of Bus Passengers Using Smart Card Data
Authors: Hasik Lee, Seung-Young Kho
Abstract:
Nowadays, automatic fare collection (AFC) systems are widely used in many countries. However, smart card data from many cities do not contain alighting information, which is necessary to build origin-destination (OD) matrices. Therefore, in order to utilize smart card data, the destinations of passengers must be estimated. In this paper, kernel density estimation was used to forecast the probabilities of the alighting stations of bus passengers; the method was applied to smart card data from Seoul, Korea, which contains both boarding and alighting information, so it could be validated against actual data. In some cases, the stochastic method was more accurate than the deterministic method. It is therefore sufficiently accurate to be used to build OD matrices.
Keywords: destination estimation, kernel density estimation, smart card data, validation
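The estimation step can be illustrated with a small sketch: a Gaussian kernel density is fitted to the alighting positions observed for passengers who boarded at a given stop, and candidate stations are scored by the resulting density. The station names, positions, and bandwidth below are illustrative assumptions, not values from the paper.

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a 1-D Gaussian kernel density function estimated from samples."""
    norm = len(samples) * bandwidth * math.sqrt(2 * math.pi)
    def density(x):
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                   for s in samples) / norm
    return density

# Observed alighting positions (km along the route) for passengers who
# boarded at one stop -- hypothetical data for illustration.
observed = [2.1, 2.3, 2.2, 5.0, 5.1]
density = gaussian_kde(observed, bandwidth=0.5)

# Score candidate alighting stations by estimated density; the argmax is
# the most likely destination for a new passenger from this stop.
stations = {"A": 2.2, "B": 5.0, "C": 8.0}
best = max(stations, key=lambda name: density(stations[name]))
```

The same scores, normalized, give the alighting probabilities used to fill an OD matrix row.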
Procedia PDF Downloads 352

24952 Evaluated Nuclear Data Based Photon Induced Nuclear Reaction Model of GEANT4
Authors: Jae Won Shin
Abstract:
We develop an evaluated nuclear data based photonuclear reaction model of GEANT4 for a more accurate simulation of photon-induced neutron production. The evaluated photonuclear data libraries from ENDF/B-VII.1 are taken as input. Incident photon energies up to 140 MeV, the threshold energy for pion production, are considered. To check the validity of the data-based model, we calculate the photoneutron production cross-sections and yields and compare them with experimental data. The results obtained from the developed model are found to be in good agreement with the experimental data for (γ,xn) reactions.
Keywords: ENDF/B-VII.1, GEANT4, photoneutron, photonuclear reaction
Procedia PDF Downloads 275

24951 Optimizing Communications Overhead in Heterogeneous Distributed Data Streams
Authors: Rashi Bhalla, Russel Pears, M. Asif Naeem
Abstract:
In this 'Information Explosion Era', data is a critical commodity, and mining knowledge from vertically distributed data streams incurs a huge communication cost. However, efforts to decrease communication in a distributed environment can adversely affect classification accuracy; the research challenge therefore lies in maintaining a balance between transmission cost and accuracy. This paper proposes a method based on Bayesian inference to reduce the communication volume in a heterogeneous distributed environment while retaining prediction accuracy. Our experimental evaluation reveals that a significant reduction in communication can be achieved across a diverse range of dataset types.
Keywords: big data, Bayesian inference, distributed data stream mining, heterogeneous distributed data
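The paper's exact protocol is not reproduced in the abstract, but the trade-off it targets can be sketched: each site holds one vertical partition of the features and contributes a log-likelihood ratio to a naive-Bayes-style combination at the coordinator, transmitting an update only when its local evidence drifts beyond a threshold. The `Site` API, the threshold value, and the fusion rule below are illustrative assumptions.

```python
class Site:
    """One vertical partition of the feature space; transmits its local
    log-likelihood ratio only when it drifts from the last value sent."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.last_sent = 0.0   # coordinator's cached view of this site
        self.messages = 0      # communication actually incurred

    def update(self, local_llr):
        # Transmit only on significant drift; otherwise the coordinator
        # keeps using the cached value, saving one message.
        if abs(local_llr - self.last_sent) > self.threshold:
            self.last_sent = local_llr
            self.messages += 1
        return self.last_sent

def classify(sites, local_llrs, prior_log_odds=0.0):
    """Bayesian fusion: sum the (possibly cached) log-likelihood ratios."""
    total = prior_log_odds + sum(s.update(llr)
                                 for s, llr in zip(sites, local_llrs))
    return 1 if total > 0 else 0
```

Raising the threshold cuts messages further at the cost of stale evidence, which is exactly the accuracy-versus-transmission balance the paper studies.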
Procedia PDF Downloads 161

24950 Empirical Examination of High Performance Work System, Organizational Commitment and Organizational Citizen Behavior: A Mediation of Model of Vietnam Organizations
Authors: Giang Vu, Duong Nguyen, Yuan-Ling Chen
Abstract:
Vietnam is a fast-developing country with high economic growth, and Vietnamese organizations strive to utilize high performance work systems (HPWS) to reinforce employee in-role performance. HPWS, a bundle of human resource (HR) practices, is composed of eight sets of HR practices: selective staffing, extensive training, internal mobility, employment security, clear job descriptions, result-oriented appraisal, incentive rewards, and participation. However, whether HPWS stimulates employee extra-role behaviors remains understudied in a booming economic context. In this study, we aim to investigate organizational citizenship behavior (OCB) in the Vietnamese context and, as the central issue, disentangle how HPWS elicits OCB in employees. Recently, deliberation on the so-called 'black box' of HPWS has explored the role of employee commitment, suggesting that organizational commitment is a compelling source of employee OCB. We draw upon social exchange theory to predict that when employees perceive organizational investment, such as HPWS, in heightening their abilities, knowledge, and motivation, they are more likely to pay it back with commitment; consequently, they will take the initiative in OCB. Hence, we hypothesize an individual-level framework in which organizational commitment mediates the positive relationship between HPWS and OCB. We collected data on HPWS, organizational commitment, OCB, and demographic variables from line managers of Vietnamese firms in Hanoi and Ho Chi Minh City. We conclude with research findings, implications, and future research suggestions.
Keywords: high performance work system, organizational citizenship behavior, organizational commitment, Vietnam
Procedia PDF Downloads 310

24949 Data Privacy: Stakeholders’ Conflicts in Medical Internet of Things
Authors: Benny Sand, Yotam Lurie, Shlomo Mark
Abstract:
Medical Internet of Things (MIoT), AI, and data privacy are linked forever in a Gordian knot. This paper explores the conflicts of interest between stakeholders regarding data privacy in the MIoT arena. While patients are hospitalized at home, MIoT can play a significant role in improving the health of large parts of the population by providing medical teams with tools for collecting data, monitoring patients’ health parameters, and even enabling remote treatment. While the amount of data handled by MIoT devices grows exponentially, different stakeholders have conflicting understandings of and concerns regarding this data. The findings of the research indicate that medical teams are not concerned by the violation of the data privacy rights of patients in home healthcare, while patients are more troubled and, in many cases, are unaware that their data is being used without their consent. MIoT technology is in its early phases; hence, a mixed qualitative and quantitative research approach will be used, including case studies and questionnaires, in order to explore this issue and provide alternative solutions.
Keywords: MIoT, data privacy, stakeholders, home healthcare, information privacy, AI
Procedia PDF Downloads 102

24948 Aging-Related Changes in Calf Muscle Function: Implications for Venous Hemodynamic and the Role of External Mechanical Activation
Authors: Bhavatharani S., Boopathy V., Kavin S., Naveethkumar R.
Abstract:
Context: Resistance training with blood flow restriction (BFR) has become increasingly common in clinical rehabilitation due to the substantial benefits observed in augmenting muscle mass and strength using low loads. However, there is great variability in training pressures for clinical populations, as well as in the methods used to estimate them. The aim of this study was to estimate the percentage of maximal BFR that could result from applying different methodologies based on arbitrary or individual occlusion levels using a cuff width between 9 and 13 cm. Design: A secondary analysis was performed on the combined databases of 2 previous larger studies using BFR training. Methods: To estimate these percentages, the occlusion values needed to reach complete BFR (100% limb occlusion pressure [LOP]) were estimated by Doppler ultrasound. Seventy-five participants (age: 24.32 [4.86] y; weight: 78.51 [14.74] kg; height: 1.77 [0.09] m) were enrolled in the laboratory study for measuring LOP in the thigh, arm, or calf. Results: When arbitrary values of restriction are applied, a supra-occlusive pressure between 120% and 190% LOP may result. Furthermore, applying 130% of resting brachial systolic blood pressure creates an occlusive stimulus similar to 100% LOP. Conclusions: Methods using 100 mm Hg and the resting brachial systolic blood pressure could represent the safest application prescriptions, as they resulted in applied pressures between 60% and 80% LOP. One hundred thirty percent of the resting brachial systolic blood pressure could be used to indirectly estimate 100% LOP at cuff widths between 9 and 13 cm. Finally, methodologies that use standard values of 200 and 300 mm Hg far exceed LOP and may carry additional risk during BFR exercise.
Keywords: lower limb rehabilitation, ESP32, pneumatics for medical, programmed rehabilitation
Procedia PDF Downloads 83

24947 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core
Authors: Yashas Bedre Raghavendra, Pim Vullers
Abstract:
This study presents the design and implementation of an abstract cryptographic coprocessor, leveraging AMBA (Advanced Microcontroller Bus Architecture) protocols, APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus), to enable seamless integration with the main CPU (central processing unit) and enhance the coprocessor’s algorithm flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (elliptic-curve cryptography), RSA (Rivest-Shamir-Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (direct memory access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows for easy integration of new cryptographic algorithms in the future. As the security landscape continues to evolve, the coprocessor can adapt and incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores the addition of custom instructions to the RISC-V ISE (Instruction Set Extension) to enhance cryptographic operations.
By incorporating custom instructions specifically tailored for cryptographic algorithms, the coprocessor achieves higher efficiency and reduced cycles per instruction (CPI) compared to traditional instruction sets. The adoption of the RISC-V 128-bit architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons are made with 32-bit and 64-bit architectures, highlighting the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in terms of algorithm flexibility, security, and integration with the main CPU. By leveraging AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor is capable of securely executing a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.
Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction
Procedia PDF Downloads 69

24946 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations
Authors: Deepak Singh, Rail Kuliev
Abstract:
The abstract highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, “reskilling and upskilling” the employees, and establishing robust data management training programs play an essential and integral part in this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), and artificial intelligence (AI) and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. In the present study, by embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in the ever-evolving industry.
Keywords: master data management, IoT, AI & ML, cloud computing, data optimization
Procedia PDF Downloads 70

24945 Influence of Parameters of Modeling and Data Distribution for Optimal Condition on Locally Weighted Projection Regression Method
Authors: Farhad Asadi, Mohammad Javad Mollakazemi, Aref Ghafouri
Abstract:
Recent research in neural network science and neuroscience on modeling complex time series data and statistical learning has focused mostly on learning from high-dimensional input spaces and signals. Local linear models are a strong choice for modeling local nonlinearity in data series. Locally weighted projection regression (LWPR) is a flexible and powerful algorithm for nonlinear approximation in high-dimensional signal spaces. In this paper, different learning scenarios for one- and two-dimensional data series with different distributions are investigated in simulation; noise is then added to the data distribution to create differently disordered distributions in the time series data and to evaluate the algorithm's locality prediction of nonlinearity. The performance of the algorithm is then simulated, and its sensitivity to the data distribution, when the spread of the data is high or the number of data points is small, together with the influence of the important local-validity parameter under different data distributions, is explained.
Keywords: local nonlinear estimation, LWPR algorithm, online training method, locally weighted projection regression method
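LWPR itself combines many such local models along learned projection directions; the core locality idea can be conveyed with a one-dimensional locally weighted regression sketch. The Gaussian weighting, bandwidth, and example data below are illustrative assumptions, not the paper's configuration.

```python
import math

def lwr_predict(xs, ys, query, bandwidth=0.5):
    """Locally weighted linear regression evaluated at a single query point.

    Each training point is weighted by a Gaussian kernel centred on the
    query, so the fit is dominated by nearby data (local validity)."""
    w = [math.exp(-((query - x) ** 2) / (2 * bandwidth ** 2)) for x in xs]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw        # weighted mean of x
    my = sum(wi * y for wi, y in zip(w, ys)) / sw        # weighted mean of y
    cov = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    var = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    slope = cov / var if var > 1e-12 else 0.0
    return my + slope * (query - mx)
```

A smaller bandwidth makes the model more local, which is precisely where sensitivity to sparse or disordered data distributions, the subject of this paper, shows up.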
Procedia PDF Downloads 502

24944 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model
Authors: Bokkasam Sasidhar, Ibrahim Aljasser
Abstract:
The problem of finding optimal schedules for each piece of equipment in a production process is considered. The process consists of a single stage of manufacturing that can handle different types of products, where changing over from one type of product to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each of the products in a set-up. The changeover costs increase with the number of set-ups; hence, to minimize the costs associated with product changeover, planning should be such that similar types of products are processed successively, so that the total number of changeovers, and in turn the associated set-up costs, are minimized. The problem of cost minimization is equivalent to the problem of minimizing the number of set-ups or, equivalently, maximizing the capacity utilization between every set-up, i.e., maximizing the total capacity utilization. Further, production is usually planned against customers’ orders, and different customers’ orders are generally assigned one of two priorities: “normal” or “priority”. The problem of production planning in such a situation can be formulated as a Multiple Arc Network (MAN) model and solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that customer-wise delivery schedules are fulfilled, keeping customer priorities in view. Algorithms are presented for solving the MAN formulation of production planning with customer priorities. The application of the model is demonstrated through numerical examples.
Keywords: scheduling, maximal flow problem, multiple arc network model, optimization
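The priority-arc algorithms themselves are not given in the abstract, but the kernel they build on, maximizing flow through a capacitated network, can be sketched with a standard Edmonds-Karp implementation (the example network is illustrative, not one of the paper's numerical examples):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for the shortest augmenting path in the residual network.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:
            return total  # no augmenting path left: flow is maximal
        # Bottleneck residual capacity along the path found.
        bottleneck = float("inf")
        v = sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, capacity[u][v] - flow[u][v])
            v = u
        # Push the bottleneck flow along the path (reverse entries allow undo).
        v = sink
        while v != source:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck
```

In a MAN formulation, nodes would represent set-ups and orders, arc capacities the processable quantities; priority arcs are handled by a second, prioritized pass, which this sketch does not attempt.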
Procedia PDF Downloads 402

24943 Emotional, Behavioural and Social Development: Modality of Hierarchy of Needs in Supporting Parents with Special Needs
Authors: Fadzilah Abdul Rahman
Abstract:
Emotional development is built between parents and their child, as is behavioural development. Social development concerns how parents can help their special needs child adapt to society and face challenges. By promoting a lifelong learning mindset, enhancing skill sets, and building readiness to face challenges, parents can counterbalance these challenges during the caregiving process, better manage their expectations through understanding the hierarchy-of-needs modality, maintain a positive attitude, and, in turn, improve their quality of life and participation in society. This paper aims to demonstrate how the hierarchy of needs can be applied in various caregiving situations for parents with a special needs child.
Keywords: hierarchy of needs, parents, special needs, care-giving
Procedia PDF Downloads 389

24942 Big Data Strategy for Telco: Network Transformation
Abstract:
Big data has the potential to improve the quality of services; enable infrastructure that businesses depend on to adapt continually and efficiently; improve the performance of employees; help organizations better understand customers; and reduce liability risks. The analytics and marketing models of fixed and mobile operators are falling short in combating churn and declining revenue per user. Big data presents a new method to reverse this trend and improve profitability. The benefits of big data and next-generation networks, however, go well beyond improved customer relationship management. Next-generation networks are in a prime position to monetize rich supplies of customer information while being mindful of legal and privacy issues. As data assets are transformed into new revenue streams, they will become integral to high performance.
Keywords: big data, next generation networks, network transformation, strategy
Procedia PDF Downloads 360

24941 A Framework for Automated Nuclear Waste Classification
Authors: Seonaid Hume, Gordon Dobie, Graeme West
Abstract:
Detecting and localizing radioactive sources is a necessity for the safe and secure decommissioning of nuclear facilities. An important aspect of managing the sort-and-segregation process is establishing the spatial distributions and quantities of the waste radionuclides, their type, corresponding activity, and ultimately their classification for disposal. The data received from surveys directly informs decommissioning plans, on-site incident management strategies, and the approach needed for a new cell, as well as protecting the workforce and the public. Manual classification of nuclear waste from a nuclear cell is time-consuming, expensive, and requires significant expertise to make the classification judgment call. Also, in-cell decommissioning is still in its relative infancy, and few techniques are well-developed. As with any repetitive and routine task, there is the opportunity to improve the task of classifying nuclear waste using autonomous systems. Hence, this paper proposes a new framework for the automatic classification of nuclear waste. This framework consists of five main stages: 3D spatial mapping and object detection, object classification, radiological mapping, source localisation based on gathered evidence, and finally, waste classification. The first stage of the framework, 3D visual mapping, involves object detection from point cloud data. A review of related applications in other industries is provided, and recommendations for approaches to waste classification are made. Object detection focusses initially on cylindrical objects, since pipework is significant in nuclear cells and indeed on any industrial site. The approach can be extended to other commonly occurring primitives such as spheres and cubes. This is in preparation for stage two: characterizing the point cloud data and estimating the dimensions, material, degradation, and mass of the objects detected in order to feature-match them to an inventory of possible items found in that nuclear cell.
Many items in nuclear cells are one-offs, have limited or poor drawings available, or have been modified since installation, and have complex interiors, which often and inadvertently pose difficulties when accessing certain zones and identifying waste remotely. Hence, this stage may require expert input to feature-match objects. The third stage, radiological mapping, similarly facilitates the characterization of the nuclear cell in terms of radiation fields, including the type of radiation, activity, and location within the nuclear cell. The fourth stage of the framework takes the visual map from stage 1, the object characterization from stage 2, and the radiation map from stage 3 and fuses them together, providing a more detailed scene of the nuclear cell by identifying the location of radioactive materials in three dimensions. The last stage involves combining the evidence from the fused data sets to reveal the classification of the waste in Bq/kg, thus enabling better decision making and monitoring for in-cell decommissioning. The presentation of the framework is supported by representative case study data drawn from a decommissioning application at a UK nuclear facility. This framework utilises recent advancements in the detection and mapping of complex radiation fields in three dimensions to make the process of classifying nuclear waste faster, more reliable, more cost-effective, and safer.
Keywords: nuclear decommissioning, radiation detection, object detection, waste classification
Procedia PDF Downloads 200

24940 A Type-2 Fuzzy Model for Link Prediction in Social Network
Authors: Mansoureh Naderipour, Susan Bastani, Mohammad Fazel Zarandi
Abstract:
Predicting links that may occur in the future, as well as missing links, is an attractive problem in social network analysis. Granular computing can help us model the relationships between human-based systems and the social sciences in this field. In this paper, we present a model based on a granular computing approach and Type-2 fuzzy logic to predict links with regard to nodes’ activity and the relationship between two nodes. Our model is tested on collaboration networks. The accuracy of prediction is found to be significantly higher than that of the Type-1 fuzzy and crisp approaches.
Keywords: social network, link prediction, granular computing, type-2 fuzzy sets
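The type-2 fuzzy model itself is not detailed in the abstract; as a point of comparison, the crisp baseline that such models refine can be sketched as a common-neighbour link predictor (the graph and scoring rule are illustrative, not the paper's method):

```python
from collections import defaultdict

def common_neighbour_scores(edges):
    """Score every currently non-adjacent node pair by the number of
    shared neighbours; higher scores suggest more likely future links."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = sorted(adj)
    scores = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v not in adj[u]:
                scores[(u, v)] = len(adj[u] & adj[v])
    return scores
```

A fuzzy model replaces the crisp count with graded memberships (e.g. of node activity), which is where the Type-2 machinery enters.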
Procedia PDF Downloads 326

24939 Fuzzy Expert Systems Applied to Intelligent Design of Data Centers
Authors: Mario M. Figueroa de la Cruz, Claudia I. Solorzano, Raul Acosta, Ignacio Funes
Abstract:
This technological development project seeks to create a tool that allows companies that need to implement a data center to intelligently determine the factors for allocating cooling and power supply (UPS) resources at the design stage. The results should clearly show the speed, robustness, and reliability of a system designed for deployment in environments where large volumes of data must be managed and protected.
Keywords: telecommunications, data center, fuzzy logic, expert systems
Procedia PDF Downloads 345

24938 Hydrological Response of the Glacierised Catchment: Himalayan Perspective
Authors: Sonu Khanal, Mandira Shrestha
Abstract:
Snow and glaciers are the largest dependable reserved sources of water for the river systems originating in the Himalayas, so accurate estimates of the volume of water contained in the snowpack and of the rate of release of water from snow and glaciers are needed for efficient management of water resources. This research assesses the fusion of energy exchanges between the snowpack, the air above, and the soil below according to mass and energy balance, which makes it more apposite than models using a simple temperature index for snow and glacier melt computation. UEBGrid, a distributed energy-balance model, is used to calculate the melt, which is then routed by Geo-SFM. The model's robustness is maintained by incorporating albedo generated from Landsat-7 ETM images on a seasonal basis for the years 2002-2003 and a substrate map derived from TM. The substrate file includes predominantly four major thematic layers: snow, clean ice, glaciers, and barren land. This approach uses CPC RFE-2 and MERRA gridded data sets as the source of precipitation and climatic variables. The subsequent model run for the years 2002-2008 shows that a total annual melt of 17.15 meters is generated from the Marshyangdi Basin, of which 71% is contributed by glaciers and 18% by rain, with the rest coming from snowmelt. The albedo file is decisive in governing the melt dynamics, as a 30% increase in the generated surface albedo results in a 10% decrease in the simulated discharge. The melt routed with the land cover and soil variables using Geo-SFM shows a Nash-Sutcliffe efficiency of 0.60 against observed discharge for the study period.
Keywords: glacier, glacier melt, snowmelt, energy balance
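The reported Nash-Sutcliffe efficiency measures how much better the simulated discharge is than simply predicting the observed mean; a minimal sketch (the example series are illustrative, not the basin data):

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).

    1.0 is a perfect fit; 0.0 means the model is no better than
    predicting the observed mean; negative values are worse than that."""
    mean_obs = sum(observed) / len(observed)
    err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - err / var
```

On this scale, the study's 0.60 indicates the energy-balance melt routed through Geo-SFM explains a substantial share of the observed discharge variance.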
Procedia PDF Downloads 455

24937 Urban Livelihoods and Climate Change: Adaptation Strategies for Urban Poor in Douala, Cameroon
Authors: Agbortoko Manyigbe Ayuk Nkem, Eno Cynthia Osuh
Abstract:
This paper sets out to examine the relationship between climate change and urban livelihood through a vulnerability assessment of the urban poor in Douala. Urban development in Douala prioritises industrial and city-centre development, with little focus on the urban poor in terms of housing units and areas of sustenance. With the high rate of urbanisation and increased land prices, the urban poor are forced to occupy marginal lands, mainly wetlands, wastelands, and abandoned neighbourhoods prone to natural hazards. Due to climate change and its effects, these wetlands are constantly flooded, destroying homes, properties, and crops. Also, most of these urban dwellers have found solace in urban agriculture as a means of survival. However, since agriculture in tropical regions like Cameroon depends largely on seasonal rainfall, changes in the rainfall pattern have led to misplaced periods for crop planting and a huge wastage of resources, as rainfall becomes very unreliable with increased temperature levels. Data for the study were obtained from both primary and secondary sources. Secondary sources included published materials related to climate change and vulnerability. Primary data were obtained through focus-group discussions with some urban farmers, while a stratified sampling of residents within marginal lands was done. Each stratum was randomly sampled to obtain information on different stressors related to climate change and their effects on livelihood. Findings proved that the high rate of rural-urban migration into Douala has increased the prevalence of the urban poor and their vulnerability to climate change, as evident in their constant fight against floods from unexpected sea level rise and the irregular rainfall pattern affecting urban agriculture.
The study also proved that women were most vulnerable, as they depend solely on urban agriculture and related activities, such as retailing agricultural products in different urban markets, which serve as their main source of income for meeting the family's basic needs. Adaptation measures include the constant use of sandbags and raised makeshift structures, while cultivation along streams and planting only after evidence of consistent rainfall have become paramount for sustainability.
Keywords: adaptation, Douala, Cameroon, climate change, development, livelihood, vulnerability
Procedia PDF Downloads 292

24936 Genetic Testing and Research in South Africa: The Sharing of Data Across Borders
Authors: Amy Gooden, Meshandren Naidoo
Abstract:
Genetic research is not confined to a particular jurisdiction. Using direct-to-consumer genetic testing (DTC-GT) as an example, this research assesses the status of data sharing into and out of South Africa (SA). While SA laws cover the sending of genetic data out of SA, prohibiting such transfer unless a legal ground exists, the position where genetic data comes into the country depends on the laws of the country from where it is sent – making the legal position less clear.
Keywords: cross-border, data, genetic testing, law, regulation, research, sharing, South Africa
Procedia PDF Downloads 161

24935 Wind Power Mapping and NPV of Embedded Generation Systems in Nigeria
Authors: Oluseyi O. Ajayi, Ohiose D. Ohijeagbon, Mercy Ogbonnaya, Ameh Attabo
Abstract:
The study assessed the potential and economic viability of stand-alone wind systems for embedded generation, taking into account their benefits to small off-grid rural communities, at 40 meteorological sites in Nigeria. A specific electric load profile was developed to accommodate communities consisting of 200 homes, a school, and a community health centre. This load profile was incorporated within the distributed generation analysis, producing energy in the MW range while optimally meeting the daily load demand of the rural communities. Twenty-four years (1987 to 2010) of wind speed data at a height of 10 m, utilized for the study, were sourced from the Nigerian Meteorological Department, Oshodi. The HOMER® software optimization tool was employed for the feasibility study and design. Each site was assigned 3 MW wind turbines in sets of five; thus, 15 MW was designed for each site. This design configuration was adopted in order to easily compare the distributed generation systems among the sites and determine their relative economic viability in terms of life cycle cost, as well as the levelised cost of producing energy. A net present value was estimated, in terms of life cycle cost, for 25 of the 40 meteorological sites. The remaining sites yielded a net present cost, meaning the installations at these locations were not economically viable under the present tariff regime for embedded generation in Nigeria.
Keywords: wind speed, wind power, distributed generation, cost per kilowatt-hour, clean energy, Nigeria
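The net present value versus net present cost comparison rests on discounting cash flows over the project's life cycle; a minimal sketch (the discount rate and cash flows are illustrative assumptions, not the study's figures):

```python
def npv(rate, cashflows):
    """Discounted sum of cash flows.

    cashflows[0] is the year-0 outlay (negative); later entries are
    yearly net income. NPV > 0 indicates an economically viable
    installation; NPV < 0 is a net present cost."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))
```

For a wind installation evaluated over, say, a 25-year life cycle, the function would be called with 26 entries: the capital outlay at year 0 followed by 25 years of tariff revenue net of operating costs.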
Procedia PDF Downloads 397
24934 Strategic Planning in South African Higher Education
Authors: Noxolo Mafu
Abstract:
This study presents an overview of strategic planning in South African higher education institutions, tracing its trends and mystique in order to identify its impact. Over the democratic decades, strategic planning has become integral to institutional survival. It has been used as a potent tool by several institutions to catch up with and surpass counterparts. While planning has always been part of higher education, strategic planning should be considered different: it is primarily about the development and maintenance of a strategic fit between an institution and its dynamic opportunities. This presupposes the existence of sets of stages that institutions pursue, which can be used to assess the impact of strategic planning in an institution. Network theory guides the study in demystifying apparent organisational networks in strategic planning processes.
Keywords: network theory, strategy, planning, strategic planning, assessment, impact
Procedia PDF Downloads 562
24933 Omni-Modeler: Dynamic Learning for Pedestrian Redetection
Authors: Michael Karnes, Alper Yilmaz
Abstract:
This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges when applying deep neural networks (DNNs) due to the variation of pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearance or to changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, as well as learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. It adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which is directly updatable as new information becomes available. Query images are identified through nearest-neighbour comparison to the learned object definitions. The study presented in this paper evaluates performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for across-camera pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images per individual.
Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition
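The dynamic-dictionary idea above can be sketched in a few lines: each known pedestrian is defined by the mean of a handful of example feature vectors, the dictionary is updatable as people enter or leave the scene, and queries are matched by nearest-neighbour comparison. The 2-D embeddings and pedestrian ids below are made up for illustration; the feature extractor itself (a pre-trained DNN in the paper) is outside this sketch.

```python
import numpy as np

# Toy version of the dictionary-of-concept-definitions scheme described
# above. Embeddings and ids are invented; a real system would use
# DNN feature vectors.

class ConceptDictionary:
    def __init__(self):
        self.defs = {}  # pedestrian id -> mean feature vector

    def learn(self, pid, examples):
        """Few-shot: define (or overwrite) a concept from a few vectors."""
        self.defs[pid] = np.mean(examples, axis=0)

    def forget(self, pid):
        """Drop a concept when the person leaves the scene."""
        self.defs.pop(pid, None)

    def query(self, feat):
        """Return the id of the nearest stored definition."""
        return min(self.defs, key=lambda k: np.linalg.norm(self.defs[k] - feat))

d = ConceptDictionary()
d.learn("ped_A", np.array([[1.0, 0.0], [0.9, 0.1]]))
d.learn("ped_B", np.array([[0.0, 1.0], [0.1, 0.9]]))
print(d.query(np.array([0.8, 0.2])))  # -> ped_A
```

The `learn`/`forget` pair is what lets the knowledge domain track the current scene, which fixed-output-layer classifiers cannot do without retraining.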
Procedia PDF Downloads 76
24932 Development of a Force-Sensing Toothbrush for Gum Recession Measurement Using Programmable Automation Controller
Authors: Sorayya Kazemi, Hamed Kharrati, Mehdi Abedinpour Fallah
Abstract:
This paper presents the design and implementation of a novel electric pressure-sensitive toothbrush capable of measuring the forces applied to the head of the brush. The developed device is used for gum recession measurement; in particular, the percentage of gum recession is measured by a Programmable Automation Controller (PAC). The brushing forces are measured by a Force Sensing Resistor (FSR) sensor, and these forces are analog inputs to the PAC. According to the forces applied during the patient's brushing and the patient's percentage of gum recession, the dentist sets a standard force range. The instrument alarms when the patient applies a force over the set range.
Keywords: gum recession, force sensing resistor, controller, toothbrush
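The alarm logic above reduces to converting an analog FSR reading to force and comparing it against the dentist-set limit. A minimal sketch follows; the 10-bit analog scale, the full-scale force and the linearised FSR response are all assumptions for illustration, not specifications of the actual device.

```python
# Hedged sketch of the force-threshold alarm described above. The
# ADC resolution, full-scale force and linear FSR model are invented;
# real FSRs are only approximately linear and the paper's device uses
# a PAC, not this code.

def fsr_to_newtons(adc_value, full_scale_n=10.0, adc_max=1023):
    """Convert a raw ADC reading to an approximate force, assuming a
    linearised FSR response over a 10-bit analog input."""
    return full_scale_n * adc_value / adc_max

def should_alarm(force_n, max_allowed_n):
    """Alarm when brushing force exceeds the dentist-set limit."""
    return force_n > max_allowed_n

reading = 820                       # raw ADC sample
force = fsr_to_newtons(reading)     # about 8 N on the assumed scale
print(should_alarm(force, max_allowed_n=3.5))  # True: pressing too hard
```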
Procedia PDF Downloads 497
24931 Design of a Low Cost Motion Data Acquisition Setup for Mechatronic Systems
Authors: Baris Can Yalcin
Abstract:
Motion sensors have been commonly used as valuable components in mechatronic systems; however, many mechatronic designs and applications that need motion sensors cost an enormous amount of money, especially high-tech systems. Designing software for the communication protocol between the data acquisition card and the motion sensor is another issue that has to be solved. This study presents how to design a low-cost motion data acquisition setup consisting of an MPU 6050 motion sensor (gyroscope and accelerometer in 3 axes) and an Arduino Mega2560 microcontroller. The design parameters are calibration of the sensor, identification of and communication between the sensor and the data acquisition card, and interpretation of the data collected by the sensor.
Keywords: design, mechatronics, motion sensor, data acquisition
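Of the design parameters listed, sensor calibration is the most self-contained: a constant gyro/accelerometer bias can be estimated by averaging readings while the sensor sits still, then subtracted from later samples. The sketch below uses synthetic values; a real setup would read them from the MPU 6050 over I2C or serial, which is not shown here.

```python
import numpy as np

# Illustrative sketch of the bias-calibration step mentioned above.
# Sample values are synthetic, not real MPU 6050 output.

def estimate_bias(stationary_samples):
    """stationary_samples: (n, 3) array of per-axis readings at rest;
    the mean is the constant offset (bias) to remove."""
    return np.mean(stationary_samples, axis=0)

def calibrate(raw, bias):
    """Subtract the estimated bias from a raw reading."""
    return raw - bias

rest = np.array([[0.02, -0.01, 0.03]] * 100)  # gyro should read ~0 at rest
bias = estimate_bias(rest)
print(calibrate(np.array([0.52, -0.01, 0.03]), bias))  # ~[0.5, 0, 0]
```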
Procedia PDF Downloads 588
24930 Assessing the Seed Yield of Some Varieties of Sesame (Sesamum indicum) Under Disease Condition (Cercospora Leaf Spot) Caused by (Cercospora sesami, Zimm) and Identifying Disease Resistant Varieties
Authors: P. S. Akami, H. Nahunnaro, A. Zubainatu
Abstract:
Cercospora leaf spot (Cercospora sesami Zimm.) has been identified as one of the most prevalent diseases posing serious constraints to sesame production in producing areas. Two sets of experiments were carried out: the first and second experiments were conducted at the Modibbo Adama University of Technology, Yola, in the Crop Production and Horticulture and the Plant Science Departments, respectively. The field experiment used a Randomized Complete Block Design replicated three times on plots of 4 m x 5 m, with four sesame varieties and three Mancob-M fungicide levels (0 g, 2 g and 4 g), giving a total of twelve treatments. The laboratory experiment involved isolating the pathogen from diseased leaves with symptoms of Cercospora leaf spot, and it was identified as Cercospora sesami. Data collected were subjected to analysis of variance for a randomized complete block design using the SAS (1999) statistical package. Treatment means that were significantly different were separated using the Least Significant Difference at P = 0.05. The results revealed that 4 g Mancob-M recorded the lowest mean values for disease incidence and severity at 8 WAS, at 90.30% and 35.60% respectively, while the control (0 g) recorded the highest mean values for disease incidence and severity at 90.30% and 59.80% respectively. Ex-Sudan recorded the lowest yield of 720 kg/ha, while NCRIBEN 03 recorded the highest yield of 834 kg/ha. Among the concentrations, 2 g recorded the higher yield of 843 kg/ha, followed by 0 g, which recorded 765 kg/ha. In conclusion, Cercospora leaf spot of sesame was found to be prevalent; E8 has higher resistance to the disease, while NCRIBEN 03 tends to be more susceptible. It is therefore recommended that further trials be carried out using different varieties in different locations.
Keywords: disease, evaluation, prevalence, treatment, resistance
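The mean-separation step above uses the standard Least Significant Difference for an RCBD: LSD = t(α/2, df_error) · √(2·MSE/r), where MSE is the error mean square from the ANOVA and r the number of replications. The sketch below shows the calculation; the MSE and error degrees of freedom are illustrative numbers, not the study's actual ANOVA output.

```python
from math import sqrt
from scipy.stats import t

# Sketch of the LSD mean-separation criterion described above.
# MSE, replications and error df are illustrative assumptions.

def lsd(alpha, df_error, mse, reps):
    """Least Significant Difference: two treatment means whose
    difference exceeds this value differ significantly at level alpha."""
    return t.ppf(1 - alpha / 2, df_error) * sqrt(2 * mse / reps)

value = lsd(alpha=0.05, df_error=10, mse=4.0, reps=3)
print(round(value, 2))  # the threshold for declaring two means different
```

More replications shrink the LSD, which is why the three-replicate design sets the sensitivity of the comparisons reported in the abstract.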
Procedia PDF Downloads 93
24929 Foot-and-Mouth Virus Detection in Asymptomatic Dairy Cows without Foot-and-Mouth Disease Outbreak
Authors: Duanghathai Saipinta, Tanittian Panyamongkol, Witaya Suriyasathaporn
Abstract:
Animal management aims to provide a suitable environment that allows maximal productivity in animals, and prevention of disease is an important part of it. Foot-and-mouth disease (FMD) is a highly contagious viral disease of cattle and an economically important animal disease worldwide. Monitoring the FMD virus on farms is useful management for the prevention of FMD outbreaks. A recent publication indicated that samples collected by nasal swab can be used for monitoring FMD in symptomatic cows. Therefore, the objective of this study was to detect the FMD virus in asymptomatic dairy cattle using nasal swab samples during the absence of an FMD outbreak. The study was conducted from December 2020 to June 2021 using 185 dairy cattle with no signs of FMD in Chiang Mai Province, Thailand. Cows were selected at random, nasal mucosal swabs were collected from the selected cows, and the samples were evaluated for the presence of FMD virus using a real-time RT-PCR assay. In total, FMD virus was detected in 4.9% of the dairy cattle, from 2 dairy farms in Mae-on (8 samples; 9.6%) and 1 farm in the Chai-Prakan district (1 sample; 1.2%). Interestingly, both farms in Mae-on experienced FMD outbreaks 6 months after this detection. This indicates that FMD virus present in asymptomatic cattle might be related to a subsequent FMD outbreak, and an outbreak demonstrates the presence of the virus in the environment. In conclusion, FMD can be monitored by nasal swab collection. Further investigation is needed to show whether FMD virus present in asymptomatic cattle could be the cause of a subsequent FMD outbreak.
Keywords: cattle, foot-and-mouth disease, nasal swab, real-time RT-PCR assay
Procedia PDF Downloads 232
24928 Speed Characteristics of Mixed Traffic Flow on Urban Arterials
Authors: Ashish Dhamaniya, Satish Chandra
Abstract:
Speed and traffic volume data were collected on different sections of four-lane and six-lane roads in three metropolitan cities in India. The speed data were analyzed to fit statistical distributions to individual vehicle speeds and to the combined speeds of all vehicles. It is noted that the speed data of individual vehicles generally follow a normal distribution, but the combined speed data of all vehicles at a section of urban road may or may not follow a normal distribution, depending upon the composition of the traffic stream. A new term, the Speed Spread Ratio (SSR), is introduced in this paper: the ratio of the difference between the 85th and 50th percentile speeds to the difference between the 50th and 15th percentile speeds. If the SSR is unity, the speed data are truly normally distributed. It is noted that on six-lane urban roads, speed data follow a normal distribution only when the SSR is in the range 0.86 – 1.11. The range of SSR was also validated on four-lane roads.
Keywords: normal distribution, percentile speed, speed spread ratio, traffic volume
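The SSR defined above, (V85 − V50)/(V50 − V15), equals 1 for any symmetric distribution, since the two percentile gaps are then equal; skewed mixed-traffic streams push it away from unity. A small sketch with synthetic speed samples:

```python
import numpy as np

# Sketch of the Speed Spread Ratio: SSR = (V85 - V50) / (V50 - V15).
# Speed samples are synthetic, not the study's field data.

def speed_spread_ratio(speeds):
    v15, v50, v85 = np.percentile(speeds, [15, 50, 85])
    return (v85 - v50) / (v50 - v15)

rng = np.random.default_rng(0)
normal_speeds = rng.normal(50, 8, 5000)          # symmetric stream, SSR ~ 1
skewed_speeds = rng.exponential(10, 5000) + 30   # right-skewed stream, SSR > 1
print(round(speed_spread_ratio(normal_speeds), 2))
print(round(speed_spread_ratio(skewed_speeds), 2))
```

This is why the paper can use an SSR band around unity (0.86–1.11) as a practical normality check without fitting a full distribution.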
Procedia PDF Downloads 422
24927 An Exploratory Analysis of Brisbane's Commuter Travel Patterns Using Smart Card Data
Authors: Ming Wei
Abstract:
Over the past two decades, Location Based Service (LBS) data have been increasingly applied to urban and transportation studies due to their comprehensiveness and consistency. However, compared to other LBS data, including mobile phone data, GPS traces and social networking platforms, smart card data collected from public transport users have arguably yet to be fully exploited in urban systems analysis. Using five weekdays of passenger travel transaction data taken from go card – Southeast Queensland's transit smart card – this paper analyses the spatiotemporal distribution of passenger movement with regard to land use patterns in Brisbane. Work and residential places of public transport commuters were identified after extracting journey-to-work patterns. Our results show that the workplaces identified from the go card data and the residential suburbs are largely consistent with those marked in the land use map. However, the intensity of some residential locations, in terms of population or commuter density, does not match well between the map and the go card data. This indicates a misalignment between residential areas and workplaces to a certain extent, shedding light on how enhancements to service management and infrastructure expansion might be undertaken.
Keywords: big data, smart card data, travel pattern, land use
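A common way to extract journey-to-work patterns from tap-on/tap-off transactions, sketched below under assumptions of my own (the paper does not publish its exact rules): for each card, take the boarding stop of the first journey of each weekday as "home" and its alighting stop as "work", then keep the most frequent of each. Stop names and timestamps are invented.

```python
from collections import Counter
from datetime import datetime

# Illustrative journey-to-work extraction from smart card transactions.
# The first-trip-of-the-day heuristic and the stop names are assumptions,
# not the study's actual method or data.

def infer_home_work(transactions):
    """transactions: list of (card_id, timestamp, origin, destination)."""
    first_trips = {}  # (card, date) -> earliest transaction that day
    for card, ts, o, d in transactions:
        key = (card, ts.date())
        if key not in first_trips or ts < first_trips[key][0]:
            first_trips[key] = (ts, o, d)
    homes, works = {}, {}
    for (card, _), (_, o, d) in first_trips.items():
        homes.setdefault(card, Counter())[o] += 1
        works.setdefault(card, Counter())[d] += 1
    return {c: (homes[c].most_common(1)[0][0],
                works[c].most_common(1)[0][0]) for c in homes}

txns = [
    ("card1", datetime(2024, 5, 6, 7, 30), "Indooroopilly", "Roma St"),
    ("card1", datetime(2024, 5, 6, 17, 40), "Roma St", "Indooroopilly"),
    ("card1", datetime(2024, 5, 7, 7, 35), "Indooroopilly", "Roma St"),
]
print(infer_home_work(txns))  # card1: home Indooroopilly, work Roma St
```

Aggregating these per-card pairs by suburb yields the commuter-density surfaces that the paper compares against the land use map.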
Procedia PDF Downloads 285
24926 To Design an Architectural Model for On-Shore Oil Monitoring Using Wireless Sensor Network System
Authors: Saurabh Shukla, G. N. Pandey
Abstract:
In recent times, oil exploration and monitoring in on-shore areas have gained much importance, considering that in India oil imports account for 62 percent of total imports. Thus, an architectural model based on a wireless sensor network is being developed to monitor on-shore deep oil wells and obtain a better estimate of oil prospects. Very few unexplored oil-bearing areas are left today, and countries like India do not have large areas and resources for oil; this problem is shared by most countries, and the increase in oil prices has further aggravated it. The relative simplicity, small size and affordable cost of wireless sensor nodes permit heavy deployment in on-shore places for monitoring oil wells, and deployment of a wireless sensor network over large areas is very cost-effective. The objective of this system is to send real-time oil monitoring information to the regulatory and welfare authorities so that suitable action can be taken. The system architecture is composed of a sensor network, a processing/transmission unit and a server, and can remotely monitor the real-time conditions of oil exploration and monitoring in the identified areas. Wireless sensor network systems are wireless, have scarce power, operate in real time, use sensors and actuators as interfaces, and have dynamically changing sets of resources; their aggregate behaviour is important, and location is critical. In this system, communication takes place between the server and remotely placed sensors, and the server delivers the real-time oil exploration and monitoring conditions to the welfare authorities.
Keywords: sensor, wireless sensor network, oil, on-shore level
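The sensor-network → processing/transmission unit → server pipeline described above can be mimicked with a toy simulation: nodes push readings onto a shared channel and the server raises an alert when a reading is out of range. The node id, pressure threshold and readings are invented for illustration; a real deployment would use radio links and a PAC/server stack, not Python threads.

```python
import queue
import threading

# Toy simulation of the WSN architecture described above. All values
# (node id, threshold, readings) are illustrative assumptions.

PRESSURE_LIMIT = 80.0  # assumed safe well-head pressure, arbitrary units

def sensor_node(node_id, readings, channel):
    """A node transmits its samples, then signals completion."""
    for r in readings:
        channel.put((node_id, r))
    channel.put((node_id, None))

def server(channel, n_nodes):
    """Collect readings and flag out-of-range ones for the authorities."""
    alerts, done = [], 0
    while done < n_nodes:
        node_id, reading = channel.get()
        if reading is None:
            done += 1
        elif reading > PRESSURE_LIMIT:
            alerts.append((node_id, reading))
    return alerts

channel = queue.Queue()
t = threading.Thread(target=sensor_node,
                     args=("well-1", [72.0, 85.5, 79.9], channel))
t.start()
alerts = server(channel, n_nodes=1)
t.join()
print(alerts)  # the 85.5 reading exceeds the limit and is reported
```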
Procedia PDF Downloads 446
24925 Pattern Recognition Using Feature Based Die-Map Clustering in the Semiconductor Manufacturing Process
Authors: Seung Hwan Park, Cheng-Sool Park, Jun Seok Kim, Youngji Yoo, Daewoong An, Jun-Geol Baek
Abstract:
As big data analysis becomes more important, yield prediction using data from the semiconductor process is essential. In general, yield prediction and analysis of the causes of failure are closely related. The purpose of this study is to analyze the patterns that affect the final test results using die-map-based clustering. Many studies have been conducted using die data from the semiconductor test process; however, such analysis is limited because the test data are less directly related to the final test results. Therefore, this study proposes a framework for analysis through clustering using more detailed data than the existing die data. The study consists of three phases. In the first phase, a die map is created from fail-bit data in each sub-area of the die. In the second phase, clustering using the map data is performed. The third phase finds patterns that affect the final test result. Finally, the proposed three steps are applied to actual industrial data, and the experimental results showed potential for field application.
Keywords: die-map clustering, feature extraction, pattern recognition, semiconductor manufacturing process
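Phases 1 and 2 above can be sketched minimally: summarise each die as a small fail-bit map over its sub-areas, flatten the maps into feature vectors, and cluster dies with similar fail patterns. The 2x2 maps, fail counts and plain k-means below are illustrative assumptions; the actual framework extracts richer features and relates the clusters to final test results.

```python
import numpy as np

# Toy version of die-map clustering: synthetic 2x2 fail-bit maps,
# flattened and grouped with a minimal k-means. Data are invented.

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each die map to its nearest centroid
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        # move each centroid to the mean of its assigned maps
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic die maps: fails concentrated in different sub-areas
die_maps = np.array([
    [[9, 1], [0, 0]], [[8, 0], [1, 0]],   # one failure pattern
    [[0, 0], [1, 9]], [[0, 1], [0, 8]],   # a different failure pattern
], dtype=float)
labels = kmeans(die_maps.reshape(len(die_maps), -1), k=2)
print(labels)  # the first two dies share one cluster, the last two the other
```

In phase 3 such clusters would be cross-tabulated against final test outcomes to find which fail-bit patterns predict yield loss.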
Procedia PDF Downloads 402