Search results for: photogrammetric point cloud
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5551

5011 Non-Pharmacological Approach to the Improvement and Maintenance of the Convergence Parameter

Authors: Andreas Aceranti, Guido Bighiani, Francesca Crotto, Marco Colorato, Stefania Zaghi, Marino Zanetti, Simonetta Vernocchi

Abstract:

The management of eye parameters such as convergence, accommodation, and miosis is very complex; both the neurovegetative system and the complex oculocephalogyric system come into play. We have previously observed the effectiveness of the high-velocity, low-amplitude (HVLA) technique applied at C7-T1 (where the ciliospinal center of Budge is located) in improving the convergence parameter, as measured through the point of maximum convergence. With this research, we set out to investigate whether the improvement obtained through the HVLA maneuver lasts over time, carrying out a pre-manipulation measurement, one immediately after manipulation, and one a month after manipulation. We recruited a population of 30 subjects with both refractive and non-refractive problems. Of the 30 patients tested, 27 gave a positive result after the HVLA maneuver, showing an improvement in the point of maximum convergence. After a month, we retested all 27 subjects: some further improved the result, others maintained it, and three subjects slightly lost the gain obtained. None of the retested patients returned to their pre-manipulation point of maximum convergence. This result opens the door to a multidisciplinary approach between ophthalmologists and osteopaths aimed at addressing the oculomotricity and convergence deficits that increasingly afflict our society due to the massive use of screen devices and to lifestyles spent in closed, confined environments.

Keywords: point of maximum convergence, HVLA, improvement in PPC, convergence

Procedia PDF Downloads 77
5010 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine

Authors: D. Madhushanka, Y. Liu, H. C. Fernando

Abstract:

Fires are one of the foremost factors of land surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes, and atmospheric effects that affect people's lives and properties. Generally, the severity of a fire is calculated from the Normalized Burn Ratio (NBR) index. This is conventionally performed by comparing two images, one obtained before the fire and one afterward: the bitemporal difference of the preprocessed satellite images gives the dNBR. The burnt area is then classified as either unburnt (dNBR < 0.1) or burnt (dNBR >= 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using the classification levels proposed by USGS, which comprise seven classes. This procedure generates a burn severity report for an area chosen manually by the user. This study was carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing that offers several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI was chosen to support regular burnt-area severity mapping with a medium spatial resolution sensor (15 m). The tool uses machine learning classification techniques to identify burnt areas from the NBR and to classify their severity over the user-selected extent and period automatically. Cloud coverage is one of the biggest concerns when fire severity mapping is performed. In WWSAT, which is based on GEE, we present a fully automatic workflow to aggregate cloud-free Sentinel-2 images for both pre-fire and post-fire image compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool. The tool includes a Graphical User Interface (GUI) to make it user-friendly. Its advantage is the ability to obtain burn area severity over large extents and extended temporal periods. Two case studies were carried out to demonstrate its performance. The Blue Mountains National Park forest, affected by the Australian fire season between 2019 and 2020, is used to describe the workflow of the WWSAT. At this site, more than 7,809 km2 of burnt area was detected using Sentinel-2 data, with an error below 6.5% when compared with the area detected in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt out, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests (Colorado, USA), affected by the Cameron Peak fire in 2020, were chosen for the second case study. It was found that around 983 km2 had burnt out, of which high severity accounted for 2.73%, moderate-high severity 1.57%, moderate-low severity 1.18%, and low severity 5.45%. These spots can also be detected through visual inspection, made possible by the cloud-free images generated by WWSAT. The tool is cost-effective in calculating the burnt area since the satellite images are free and the cost of field surveys is avoided.
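
For readers unfamiliar with the dNBR step described above, the following is a minimal, hypothetical sketch (not the authors' WWSAT code) of how pre-fire and post-fire NBR composites and a dNBR burnt mask can be computed with the Earth Engine Python API; the dates, region, and cloud threshold are placeholder assumptions.

```python
import ee

ee.Initialize()

def nbr(image):
    # NBR = (NIR - SWIR2) / (NIR + SWIR2); bands B8 and B12 for Sentinel-2 MSI
    return image.normalizedDifference(['B8', 'B12']).rename('NBR')

def composite(start, end, region):
    # Median composite of low-cloud Sentinel-2 surface reflectance scenes
    col = (ee.ImageCollection('COPERNICUS/S2_SR')
           .filterBounds(region)
           .filterDate(start, end)
           .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20)))
    return col.median().clip(region)

region = ee.Geometry.Rectangle([150.0, -34.0, 150.8, -33.3])   # placeholder extent
pre_nbr = nbr(composite('2019-09-01', '2019-10-31', region))   # pre-fire window
post_nbr = nbr(composite('2020-02-01', '2020-03-31', region))  # post-fire window

dnbr = pre_nbr.subtract(post_nbr).rename('dNBR')
burnt_mask = dnbr.gte(0.1)   # dNBR >= 0.1 treated as burnt, as in the abstract
```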

Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2

Procedia PDF Downloads 235
5009 The Effects of Wood Ash on Ignition Point of Wood

Authors: K. A. Ibe, J. I. Mbonu, G. K. Umukoro

Abstract:

The effects of wood ash on the ignition point of five common tropical woods in Nigeria were investigated. The ash and moisture contents of wood sawdust from Mahogany (Khaya ivorensis), Opepe (Sarcocephalus latifolius), Abura (Hallea ledermannii Verdc.), Rubber (Hevea brasiliensis) and Poroporo (Sorghum bicolor) were determined using a furnace (Vecstar furnaces, model ECF2, serial no. f3077) and an oven (Genlab laboratory oven, model MINO/040), respectively. The metal contents of the five wood sawdust ash samples were determined using a Perkin Elmer Optima 3000 DV atomic absorption spectrometer, while the ignition points were determined using the Vecstar ECF2 furnace. Poroporo had the highest ash content, 2.263 g, while Rubber had the least, 0.710 g. The results for the moisture content ranged from 0.903 g to 2.971 g. Magnesium had the highest concentration of all the metals in all the wood ash samples, with Mahogany ash having the highest concentration, 9.196 ppm, while Rubber ash had the least concentration of magnesium, 2.196 ppm. The ignition point results showed that the wood ashes from Mahogany and Opepe increased the ignition points of the test wood samples when coated on them, while the ashes from Poroporo, Rubber and Abura decreased the ignition points of the test wood samples when coated on them. However, Opepe sawdust ash decreased the ignition point in one of the test wood samples, suggesting that the metal content of that test wood sample was higher than that of the Opepe sawdust ash. Therefore, Mahogany and Opepe sawdust ashes could be used in the surface treatment of wood to enhance its fire resistance or retardancy. The caution to be exercised in this application is that the metal content of the test wood samples should be evaluated as well.

Keywords: ash, fire, ignition point, retardant, wood saw dust

Procedia PDF Downloads 389
5008 Unmanned Aerial Vehicle Landing Based on Ultra-Wideband Localization System and Optimal Strategy for Searching Optimal Landing Point

Authors: Meng Wu

Abstract:

Unmanned aerial vehicle (UAV) landing is a common task required of flying robots. In this paper, a Crazyflie 2.0 is localized by an ultra-wideband (UWB) localization system that contains four UWB anchors. An additional UWB anchor is introduced and installed on a stationary platform. A cost function is designed to find the minimum distance between the Crazyflie 2.0 and the anchor installed on the stationary platform. The coordinates of this anchor are unknown in advance, and the goal of the cost function is to determine the location of the anchor, which can be considered the optimal landing point. When the cost function reaches its minimum value, the corresponding coordinates of the UWB anchor fixed on the stationary platform can be calculated and defined as the landing point. Simulations show the effectiveness of the proposed method.
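
A minimal sketch of the kind of cost-function minimization described above, assuming synthetic range measurements between UWB-localized UAV positions and the platform anchor (the variable names and optimizer choice are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true_anchor = np.array([1.2, -0.5, 0.0])                 # unknown in practice
uav_positions = rng.uniform(-2.0, 2.0, size=(50, 3))     # UWB-localized flight poses
ranges = np.linalg.norm(uav_positions - true_anchor, axis=1) + rng.normal(0, 0.05, 50)

def cost(anchor):
    # Sum of squared differences between measured and predicted anchor distances
    predicted = np.linalg.norm(uav_positions - anchor, axis=1)
    return float(np.sum((ranges - predicted) ** 2))

result = minimize(cost, x0=np.zeros(3), method='Nelder-Mead')
landing_point = result.x   # estimated anchor coordinates, used as the landing point
```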

Keywords: UAV landing, UWB localization system, UWB anchor, cost function, stationary platform

Procedia PDF Downloads 87
5007 Native Point Defects in ZnO

Authors: A. M. Gsiea, J. P. Goss, P. R. Briddon, Ramadan. M. Al-habashi, K. M. Etmimi, Khaled. A. S. Marghani

Abstract:

Using first-principles methods based on density functional theory and pseudopotentials, we have performed a detailed study of native defects in ZnO. Native point defects are unlikely to be the cause of the unintentional n-type conductivity. Oxygen vacancies, which have most often been invoked as shallow donors, have high formation energies in n-type ZnO and are, in addition, deep donors. Zinc interstitials are shallow donors with high formation energies in n-type ZnO, and thus unlikely to be responsible on their own for unintentional n-type conductivity under equilibrium conditions; the same holds for Zn antisites, which have even higher formation energies than zinc interstitials. Zinc vacancies are deep acceptors with low formation energies under n-type conditions, in which case they will not play a role in the p-type conductivity of ZnO. Oxygen interstitials are stable in the form of electrically inactive split interstitials, as well as deep acceptors at the octahedral interstitial site under n-type conditions. Our results may provide a guide to experimental studies of point defects in ZnO.

Keywords: DFT, native, n-type, ZnO

Procedia PDF Downloads 594
5006 Effects of Research-Based Blended Learning Model Using Adaptive Scaffolding to Enhance Graduate Students' Research Competency and Analytical Thinking Skills

Authors: Panita Wannapiroon, Prachyanun Nilsook

Abstract:

This paper reports the findings of a research and development (R&D) project aiming to develop a Research-Based Blended Learning Model Using Adaptive Scaffolding (RBBL-AS) to enhance graduate students' research competency and analytical thinking skills, and to study the results of using such a model. The sample consisted of 10 experts in the field during the model development stage, while 23 graduate students of KMUTNB participated in the RBBL-AS model tryout stage. The research procedure included 4 phases: 1) literature review, 2) model development, 3) model experiment, and 4) model revision and confirmation. The research results are divided into 3 parts according to these procedures, as described in the following. First, the data gathered from the literature review were reported as a draft model; the findings from the experts' interviews then indicated that the model should include 8 components to enhance graduate students' research competency and analytical thinking skills. The 8 components were 1) cloud learning environment, 2) Ubiquitous Cloud Learning Management System (UCLMS), 3) learning courseware, 4) learning resources, 5) adaptive scaffolding, 6) communication and collaboration tools, 7) learning assessment, and 8) research-based blended learning activity. Second, the findings from the experimental stage showed a statistically significant difference between the research competency and analytical thinking skills posttest scores and the pretest scores at the .05 level. The graduate students agreed that learning with the RBBL-AS model was satisfying at a high level. Third, according to the findings from the experimental stage and the comments from the experts, the developed model was revised and is proposed in this report for further implementation and reference.

Keywords: research based learning, blended learning, adaptive scaffolding, research competency, analytical thinking skills

Procedia PDF Downloads 418
5005 Solar Energy Potential Studies of Sindh Province, Pakistan for Power Generation

Authors: M. Akhlaque Ahmed, Sidra A. Shaikh, Maliha Afshan Siddiqui

Abstract:

Solar radiation over Sindh province has been studied to evaluate the solar energy potential of the area. Global and diffuse solar radiation on a horizontal surface over five cities of Sindh province, namely Karachi, Hyderabad, Nawabshah, Chore and Padidan, were estimated using sunshine hour data of the area to assess the feasibility of solar energy utilization. The results obtained show a large variation in the direct and diffuse components of solar radiation between winter and summer months. For Karachi and Hyderabad, roughly 50% direct and 50% diffuse solar radiation was observed, while for Chore the diffuse radiation in the summer months of July and August is about 33 to 39%. For the other areas of Sindh, such as Nawabshah and Padidan, the contribution of direct solar radiation is high throughout the year. The Kt values for Nawabshah and Padidan indicate a clear sky almost throughout the year; in the Nawabshah area, the percentage of diffuse radiation does not exceed 29%, and the appearance of cloud is rare even in the monsoon months of July and August, whereas Karachi, Hyderabad and Chore have low solar potential during the monsoon months. During the monsoon period, Karachi and Hyderabad can utilize a hybrid system with wind power, as the wind speed there is higher. From the point of view of power generation, the estimated values indicate that Karachi, Hyderabad and Chore have low solar potential in July and August, while Nawabshah and Padidan have high solar potential throughout the year.

Keywords: global and diffuse solar radiation, province of Sindh, solar energy potential, solar radiation studies for power generation

Procedia PDF Downloads 260
5004 Fracture Crack Monitoring Using Digital Image Correlation Technique

Authors: B. G. Patel, A. K. Desai, S. G. Shah

Abstract:

The main objective of this paper is to develop a new measurement technique that does not require touching the object. DIC is an advanced measurement technique used to measure the displacement of particles with very high accuracy. It is a powerful and innovative technique in which two image segments are correlated to determine the similarity between them. For this study, nine geometrically similar beam specimens of different sizes, with fibers (steel fibers and glass fibers) and without fibers, were tested under three-point bending in a closed-loop servo-controlled machine under crack mouth opening displacement control with an opening rate of 0.0005 mm/sec. Digital images were captured before loading (undeformed state) and at different instances of loading, and were analyzed using correlation techniques to compute the surface displacements, crack opening and sliding displacements, load-point displacement, crack length and crack tip location. It was seen that the CMOD and vertical load-point displacement computed using DIC analysis match well with those measured experimentally.
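
As an illustration of the correlation step, the toy sketch below (not the authors' code) locates a reference subset in the deformed image by maximizing the zero-normalized cross-correlation over integer pixel shifts; the subset size and search radius are arbitrary assumptions.

```python
import numpy as np

def zncc(a, b):
    # Zero-normalized cross-correlation between two equally sized image patches
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def match_subset(reference, deformed, top_left, size=21, search=10):
    r0, c0 = top_left
    subset = reference[r0:r0 + size, c0:c0 + size]
    best, best_shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = deformed[r0 + dr:r0 + dr + size, c0 + dc:c0 + dc + size]
            if cand.shape != subset.shape:
                continue
            score = zncc(subset, cand)
            if score > best:
                best, best_shift = score, (dr, dc)
    return best_shift   # integer-pixel displacement of the subset
```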

Keywords: Digital Image Correlation, fibres, self compacting concrete, size effect

Procedia PDF Downloads 389
5003 ICT-based Methodologies and Students’ Academic Performance and Retention in Physics: A Case with Newton's Laws of Motion

Authors: Gabriel Ocheleka Aniedi A. Udo, Patum Wasinda

Abstract:

The study was carried out to appraise the impact of ICT-based teaching methodologies (video-taped instructions and PowerPoint presentations) on the academic performance and retention of secondary school students in Physics, with particular interest in Newton's Laws of Motion. The study was conducted in Cross River State, Nigeria, using a quasi-experimental research design with non-randomised pre-test and post-test control groups. The sample for the study consisted of 176 SS2 students drawn from four intact classes in four secondary schools within the study area. A Physics Achievement Test (PAT), with a reliability coefficient of 0.85, was used for data collection. Means and analysis of covariance (ANCOVA) were used in the treatment of the obtained data. The results of the study showed that there was a significant difference in the academic performance and retention of students taught using video-taped instructions and those taught using PowerPoint presentations. Findings of the study showed that students taught using video-taped instructions had higher academic performance and retention than those taught using PowerPoint presentations. The study concludes that the use of blended ICT-based teaching methods can improve learners' academic performance and retention.

Keywords: video taped instruction (VTI), power point presentation (PPT), academic performance, retention, physics

Procedia PDF Downloads 92
5002 Analysis of DC/DC Converter of Photovoltaic System with MPPT Algorithms Comparison

Authors: Badr M. Alshammari, Mohamed A. Khlifi

Abstract:

This paper presents the analysis of a DC/DC converter, including a comparative study of control methods used to extract the maximum power and to track the maximum power point (MPP) of photovoltaic (PV) systems under changing environmental conditions. Two maximum power point tracking algorithms for photovoltaic systems are proposed, based on the one hand on perturb and observe (P&O) control and on the other hand on first-order incremental conductance (IC). The MPPT system ensures that the solar cells can deliver the maximum power possible to the load. Different algorithms can be used to design it; here we compare two of them and simulate the photovoltaic system with each. The algorithms are used to control the duty cycle of a DC-DC converter in order to boost the output voltage of the PV generator and guarantee operation of the solar panels at the maximum power point (MPP). Simulation and experimental results show that the proposed algorithms can effectively improve the efficiency of the photovoltaic array output.
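
A minimal perturb-and-observe sketch is shown below, assuming the PV voltage and current are sampled each control cycle and the duty cycle of the boost converter is nudged toward increasing power; the step size and duty limits are illustrative values.

```python
def perturb_and_observe(v, i, state, step=0.005):
    """state = (previous_power, duty, direction); returns the updated state."""
    prev_power, duty, direction = state
    power = v * i
    if power < prev_power:            # last perturbation moved away from the MPP
        direction = -direction
    duty = min(max(duty + direction * step, 0.05), 0.95)
    return (power, duty, direction)

# usage sketch: state = (0.0, 0.5, +1); every sampling period do
#   state = perturb_and_observe(v_meas, i_meas, state)
# and apply state[1] as the converter duty cycle.
```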

Keywords: solar cell, DC/DC boost converter, MPPT, photovoltaic system

Procedia PDF Downloads 202
5001 Evaluation of Minimization of Moment Ratio Method by Physical Modeling

Authors: Amin Eslami, Jafar Bolouri Bazaz

Abstract:

Under active stress conditions, a rigid cantilever retaining wall tends to rotate about a pivot point located within the embedded depth of the wall. For purely granular and cohesive soils, a methodology called minimization of the moment ratio was previously reported to determine the location of the pivot point of rotation. This methodology is used to estimate the rotational stability safety factor. Moreover, the degree of improvement required in a backfill to achieve a desired safety factor can be estimated through the concept of shear strength demand. In this article, the accuracy of this method for another type of cantilever wall, the contiguous bored pile (CBP) retaining wall, is evaluated using a physical modeling technique. Based on the observations, the results of the moment ratio minimization method are in good agreement with the results of the physical modeling carried out.

Keywords: cantilever retaining wall, physical modeling, minimization of moment ratio method, pivot point

Procedia PDF Downloads 332
5000 Intelligent Control of Agricultural Farms, Gardens, Greenhouses, Livestock

Authors: Vahid Bairami Rad

Abstract:

Making agricultural fields intelligent allows the temperature, humidity, and other variables affecting the growth of agricultural products to be monitored and controlled online, from a mobile phone or computer. Smart agricultural fields and gardens are one of the best ways to optimize agricultural equipment and have a 100 percent direct effect on the growth of plants, agricultural products, and farms. Smart farms, built on the Internet of Things and artificial intelligence, are the topic of this paper. Agriculture is becoming smarter every day: from large industrial operations to individuals growing organic produce locally, technology is at the forefront of reducing costs, improving results, and ensuring optimal delivery to market. A key element of smart agriculture is the use of useful data, and modern farmers have more tools to collect intelligent data than in previous years. Data on soil chemistry allows people to make informed decisions about fertilizing farmland. Moisture sensors and accurate irrigation controllers allow irrigation processes to be optimized while reducing the cost of water consumption. Drones can apply pesticides precisely at the desired point. Automated harvesting machines navigate crop fields based on position and capacity sensors. The list goes on: almost any process related to agriculture can use sensors that collect data to optimize existing processes and support informed decisions. The Internet of Things (IoT) is at the center of this great transformation. IoT hardware has grown and developed rapidly to provide low-cost sensors for people's needs. These sensors are embedded in battery-powered IoT devices that can operate for years and have access to low-power, cost-effective mobile networks. IoT device management platforms have also evolved rapidly and can now be used securely to manage existing devices at scale. IoT cloud services additionally provide a set of application enablement services that can be easily used by developers, allowing them to focus on building the business logic of their applications. These developments have created powerful new applications in the field of the Internet of Things, and these applications can be used in various industries, such as agriculture, to build smart farms. But the question is, what makes today's farms truly smart farms? Let us put this question another way: when will the technologies associated with smart farms reach the point where the intelligence they provide can exceed that of experienced and professional farmers?

Keywords: food security, IoT automation, wireless communication, hybrid lifestyle, arduino Uno

Procedia PDF Downloads 56
4999 A Study for Area-level Mosquito Abundance Prediction by Using Supervised Machine Learning Point-level Predictor

Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes

Abstract:

In the literature, data-driven approaches for mosquito abundance prediction rely on supervised machine learning models trained with historical in-situ measurements. The drawback of this approach is that once the model is trained on point-level (specific x, y coordinates) measurements, its predictions again refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early warning and mitigation applications need predictions at an area level, such as a municipality or village. In this study, we apply a data-driven predictive model, which relies on public, open satellite Earth Observation and geospatial data and is trained with historical point-level in-situ measurements of mosquito abundance. We then propose a methodology to carry information from a point-level predictive model over to a broader area-level prediction. Our methodology relies on randomly sampling the area of interest in space (similarly to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making the point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze it in order to understand which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level predictions and to provide qualitative insights regarding the expected performance of the area-level prediction. We applied our methodology to historical data (of Culex pipiens) for two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent: the mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on the performance, in contrast to the raw number of sampling points, which is not informative at all regarding the performance without the size of the area. Additionally, we saw that the distance between the sampling points and the real in-situ measurements used for training did not strongly affect the performance.
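
The aggregation idea can be summarized in the following schematic sketch; the helper functions for spatial sampling and feature extraction are hypothetical placeholders rather than components released with the study.

```python
import numpy as np

def area_level_abundance(area_polygon, sample_locations, extract_features,
                         point_model, n_samples=200):
    # sample_locations: draws quasi-random points inside the area of interest
    # extract_features: returns the EO/geomorphological feature vector of a point
    points = sample_locations(area_polygon, n_samples)
    preds = [point_model.predict(extract_features(p).reshape(1, -1))[0] for p in points]
    return float(np.mean(preds))   # estimated mean mosquito abundance of the area
```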

Keywords: mosquito abundance, supervised machine learning, culex pipiens, spatial sampling, west nile virus, earth observation data

Procedia PDF Downloads 148
4998 Internet of Things, Edge and Cloud Computing in Rock Mechanical Investigation for Underground Surveys

Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo

Abstract:

Rock mechanical investigation is one of the most crucial activities in underground operations, especially in surveys related to hydrocarbon exploration and production, geothermal reservoirs, energy storage, mining, and geotechnics. There is a wide range of traditional methods for deriving, collecting, and analyzing rock mechanics data. However, these approaches may not be suitable or work perfectly in some situations, such as fractured zones. Cutting-edge technologies have been introduced to solve and optimize the mentioned issues. The Internet of Things (IoT), Edge Computing, and Cloud Computing technologies (ECt and CCt, respectively) are among the most widely used new artificial intelligence methods employed for geomechanical studies. IoT devices act as sensors and cameras for real-time monitoring and mechanical-geological data collection of rocks, such as temperature, movement, pressure, or stress levels. Structural integrity assessment, especially for cap rocks within hydrocarbon systems, and rock mass behavior assessment, for further activities such as enhanced oil recovery (EOR) and underground gas storage (UGS), or to improve safety risk management (SRM) and potential hazards identification (PHI), are other benefits of IoT technologies. ECt can process, aggregate, and analyze data collected by IoT immediately, on a real-time scale, providing detailed insights into the behavior of rocks in various situations (e.g., stress, temperature, and pressure), establishing patterns quickly, and detecting trends. Therefore, this state-of-the-art and useful technology can support autonomous systems in rock mechanical surveys, such as drilling and production (in hydrocarbon wells) or excavation (in the mining and geotechnics industries). Besides, ECt allows all rock-related operations to be controlled remotely and enables operators to apply changes or make adjustments; this feature is very important for environmental goals. More often than not, rock mechanical studies consist of different kinds of data, such as laboratory tests, field operations, and indirect information like seismic or well-logging data. CCt provides a useful platform for storing and managing large volumes of heterogeneous information, which can be very useful in fractured zones. Additionally, CCt supplies powerful tools for predicting, modeling, and simulating rock mechanical information, especially in fractured zones within vast areas. It is also a suitable means for sharing extensive information on rock mechanics, such as the direction and size of fractures in a large oil field or mine. The comprehensive review findings demonstrate that digital transformation through integrated IoT, Edge, and Cloud solutions is revolutionizing traditional rock mechanical investigation. These advanced technologies have enabled real-time monitoring, predictive analysis, and data-driven decision-making, culminating in noteworthy enhancements in safety, efficiency, and sustainability. Therefore, by employing IoT, CCt, and ECt, underground operations have experienced a significant boost, allowing timely and informed actions based on real-time data insights. The successful implementation of IoT, CCt, and ECt has led to optimized and safer operations, streamlined processes, and environmentally conscious approaches in underground geological endeavors.

Keywords: rock mechanical studies, internet of things, edge computing, cloud computing, underground surveys, geological operations

Procedia PDF Downloads 63
4997 Iron(III)-Tosylate Doped PEDOT and PEG: A Nanoscale Conductivity Study of an Electrochemical System with Biosensing Applications

Authors: Giulio Rosati, Luciano Sappia, Rossana Madrid, Noemi Rozlòsnik

Abstract:

The addition of PEG of different molecular weights has important effects on the physical, electrical, and electrochemical properties of iron(III)-tosylate doped PEDOT. This particular polymer can be easily spin-coated over plastic discs, optimizing the thickness and uniformity of the PEDOT-PEG films. The conductivity and morphological analysis of the hybrid PEDOT-PEG polymer by 4-point probe (4PP), 12-point probe (12PP), and conductive AFM (C-AFM) show strong effects of the PEG doping. Moreover, the kinetics of the conductive films at the nanoscale, in response to different bias voltages, change radically depending on the PEG molecular weight. The hybrid conductive films also show interesting electrochemical properties, making PEG doping of PEDOT appealing for biosensing applications, both for EIS-based and amperometric affinity/catalytic biosensors.

Keywords: atomic force microscopy, biosensors, four-point probe, nano-films, PEDOT

Procedia PDF Downloads 345
4996 Addressing Supply Chain Data Risk with Data Security Assurance

Authors: Anna Fowler

Abstract:

When considering assets that may need protection, the mind turns to homes, cars, and investment funds. In most cases, the protection of those assets can be covered through security systems and insurance. Data is not the first thing that comes to mind as needing protection, even though data is at the core of most supply chain operations. It includes trade secrets, personally identifiable information (PII), and consumer data that can be used to enhance the overall experience. Data is considered a critical element of success for supply chains and should be one of the most critical areas to protect. In the supply chain industry, there are two major misconceptions about protecting data: (i) "We do not manage or store confidential/personally identifiable information (PII)." (ii) "We rely on third-party vendor security." These misconceptions can significantly derail organizational efforts to adequately protect data across environments. The first misconception, "We do not manage or store confidential/personally identifiable information (PII)," is dangerous, as it implies that the organization does not have proper data literacy: enterprise employees zero in on the aspect of PII while neglecting trade secret theft and the complete breakdown of information sharing. To sidestep the first misconception, the second forges an ideology that reliance on third-party vendor security will absolve the company from security risk. Instead, third-party risk has grown over the last two years and is one of the major causes of data security breaches. It is important to understand that a holistic approach should be taken when protecting data, and that this should not amount to purchasing a Data Loss Prevention (DLP) tool; a tool is not a solution. To protect supply chain data, start by providing data literacy training to all employees and by negotiating the security component of contracts with vendors to require data literacy training for the individuals and teams that may access company data. It is also important to understand the origin of the data and its movement, including risk identification, and to ensure that processes effectively incorporate data security principles. DLP solutions should be evaluated and selected to address specific concerns and use cases in conjunction with data visibility. These approaches are part of a broader solutions framework called Data Security Assurance (DSA). The DSA framework looks at all of the processes across the supply chain, including their corresponding architecture and workflows, employee data literacy, governance and controls, integration between third- and fourth-party vendors, DLP as a solution concept, and policies related to data residency. Within cloud environments, this framework is crucial for the supply chain industry in order to avoid regulatory implications and third/fourth-party risk.

Keywords: security by design, data security architecture, cybersecurity framework, data security assurance

Procedia PDF Downloads 89
4995 Maximum Power Point Tracking Based on Estimated Power for PV Energy Conversion System

Authors: Zainab Almukhtar, Adel Merabet

Abstract:

In this paper, a method for maximum power point tracking of a photovoltaic energy conversion system is presented. The method is based on using the difference between the power from the solar panel and an estimated power value to control the DC-DC converter of the photovoltaic system. The difference is continuously compared with a preset permitted error value. If the power difference is greater than this error, the estimated power is multiplied by a factor and the operation is repeated until the difference is less than or equal to the threshold error. The difference in power is used to drive a DC-DC boost converter in order to raise the voltage to the point where the maximum power is achieved. The proposed method was experimentally verified on a PV energy conversion system driven by an OPAL-RT real-time controller. The method was tested under varying radiation conditions and load requirements, and the photovoltaic panel was operated at its maximum power in different irradiation conditions.
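
Read literally, the loop described above could be sketched as follows; the variable names, scaling factor, and stopping tolerance are assumptions used only to make the idea concrete, not values from the paper.

```python
def track_with_estimated_power(measure_power, adjust_converter,
                               p_est=10.0, factor=1.1, error=0.5, max_iter=100):
    for _ in range(max_iter):
        p_meas = measure_power()            # instantaneous panel power
        diff = p_meas - p_est
        if abs(diff) <= error:              # estimate is within the permitted error
            break
        p_est *= factor                     # refine the estimated power value
        adjust_converter(diff)              # power difference drives the boost converter
    return p_est
```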

Keywords: control system, error, solar panel, MPPT tracking

Procedia PDF Downloads 283
4994 Effect of Jet Diameter on Surface Quenching at Different Spatial Locations

Authors: C. Agrawal, R. Kumar, A. Gupta, B. Chatterjee

Abstract:

An experimental investigation has been carried out to study the cooling of a hot horizontal stainless steel surface of 3 mm thickness with an initial temperature of 800±10 °C. A round water jet at a temperature of 22 ± 1 °C was injected over the hot surface through straight, tube-type nozzles of 2.5-4.8 mm diameter and 250 mm length. The experiments were performed for a jet exit to target surface spacing of 4 jet diameters and jet Reynolds numbers of 5000-24000. The effect of the change in jet Reynolds number on the surface quenching has been investigated from the stagnation point to a spatial location 16 mm away.

Keywords: hot-surface, jet impingement, quenching, stagnation point

Procedia PDF Downloads 610
4993 Monitoring the Fiscal Health of Taiwan’s Local Government: Application of the 10-Point Scale of Fiscal Distress

Authors: Yuan-Hong Ho, Chiung-Ju Huang

Abstract:

This article presents a monitoring indicator system that predicts whether a local government in Taiwan is heading for fiscal distress and identifies a suitable fiscal policy that would allow the local government to achieve fiscal balance in the long run. The system is relevant to stakeholders' interests, simple for national audit bodies to use, and provides an early warning of fiscal distress that allows preventative action to be taken.

Keywords: fiscal health, fiscal distress, monitoring signals, 10-point scale

Procedia PDF Downloads 459
4992 IT/IS Organisation Design in the Digital Age: A Literature Review

Authors: Dominik Krimpmann

Abstract:

Information technology and information systems are currently at a tipping point. The digital age fundamentally transforms the way a large number of industries work, and the lines between business and technology blur. Researchers have acknowledged that this is the time in which the IT/IS organisation needs to re-strategise itself. In this paper, the author provides a structured review of the IS and organisation design literature, addressing the question of how the digital age changes the design categories of an IT/IS organisation design. The findings show that most papers analyse only single aspects of either IT/IS-relevant information or generic organisation design elements, but miss a holistic 'big picture' view of IT/IS organisation design. This paper creates a holistic IT/IS organisation design framework, bringing together the IS research strand, the digital strand and the generic organisation design strand. The research identified four IT/IS organisation design categories (strategy, structure, processes and people) and discusses the importance of two additional categories (sourcing and governance). The author's findings point to a first anchor point from which further research needs to be conducted to develop a holistic IT/IS organisation design framework.

Keywords: IT/IS strategy, IT/IS organisation design, digital age, organisational effectiveness, literature review

Procedia PDF Downloads 409
4991 Experimental and Numerical Determination of the Freeze Point Depression of a Multi-Phase Flow in a Scraped Surface Heat Exchanger

Authors: Carlos A. Acosta, Amar Bhalla, Ruyan Guo

Abstract:

Scraped surface heat exchangers (SSHE) use a rotor shaft assembly with scraping blades to homogenize viscous fluids during the heat transfer process. Obtaining in-situ measurements is difficult because the rotor and scraping blades spin continuously inside the mixing chamber, obstructing the instrumentation pathway. Computational fluid dynamics simulations provide useful insight into the flow behavior around the scraper blades for a variety of fluids and blade geometries. However, numerical solutions often focus on the fluid dynamics and heat transfer phenomena of the rotating flow, ignoring the glass-transition temperature and freezing point depression. This research studies the multi-phase fluid dynamics and freezing point depression inside the SSHE under non-isothermal conditions in a time-dependent process, using an aqueous solution that contains 13.5 wt.% high fructose corn syrup and CO₂. The computational results were validated with in-situ pressure, temperature, and optical spectroscopy measurements. Results from the numerical model show good quantitative agreement with the experimental values.

Keywords: computational fluid dynamics, freezing point depression, phase-transition temperature, multi-phase flow

Procedia PDF Downloads 147
4990 Appropriate Depth of Needle Insertion during Rhomboid Major Trigger Point Block

Authors: Seongho Jang

Abstract:

Objective: To investigate an appropriate depth of needle insertion during trigger point injection into the rhomboid major muscle. Methods: Sixty-two patients who visited our department with shoulder or upper back pain participated in this study. The distance between the skin and the rhomboid major muscle (SM) and the distance between the skin and rib (SB) were measured using ultrasonography. The subjects were divided into 3 groups according to BMI: BMI less than 23 kg/m2 (underweight or normal group); 23 kg/m2 or more to less than 25 kg/m2 (overweight group); and 25 kg/m2 or more (obese group). The mean ±standard deviation (SD) of SM and SB of each group were calculated. A range between mean+1 SD of SM and the mean-1 SD of SB was defined as a safe margin. Results: The underweight or normal group’s SM, SB, and the safe margin were 1.2±0.2, 2.1±0.4, and 1.4 to 1.7 cm, respectively. The overweight group’s SM and SB were 1.4±0.2 and 2.4±0.9 cm, respectively. The safe margin could not be calculated for this group. The obese group’s SM, SB, and the safe margin were 1.8±0.3, 2.7±0.5, and 2.1 to 2.2 cm, respectively. Conclusion: This study will help us to set the standard depth of safe needle insertion into the rhomboid major muscle in an effective manner without causing any complications.

Keywords: pneumothorax, rhomboid major muscle, trigger point injection, ultrasound

Procedia PDF Downloads 290
4989 Global Navigation Satellite System and Precise Point Positioning as Remote Sensing Tools for Monitoring Tropospheric Water Vapor

Authors: Panupong Makvichian

Abstract:

Global Navigation Satellite Systems (GNSS) are nowadays a common technology that improves navigation functions in our lives. In addition, GNSS is now also being employed as an accurate atmospheric sensor. Meteorology is a practical application of GNSS that runs unnoticed in the background of people's lives. GNSS Precise Point Positioning (PPP) is a positioning method that requires data from a single dual-frequency receiver and precise information about satellite positions and satellite clocks; in addition, careful attention to mitigating various error sources is required. All of the above data are combined in a sophisticated mathematical algorithm. This research demonstrates how GNSS and the PPP method are capable of providing high-precision estimates, such as 3D positions or zenith tropospheric delays (ZTDs). ZTDs combined with pressure and temperature information allow us to estimate the water vapor in the atmosphere as precipitable water vapor (PWV). If the process is replicated for a network of GNSS sensors, we can create thematic maps that allow water content information to be extracted at any location within the network area. All of the above is possible thanks to advances in GNSS data processing. Therefore, we are able to use GNSS data for climatic trend analysis and to acquire further knowledge about atmospheric water content.
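
A back-of-the-envelope sketch of the ZTD-to-PWV conversion mentioned above is given below; the Saastamoinen hydrostatic model and the Bevis-type conversion factor used here are standard literature choices, not constants quoted by this abstract.

```python
import math

def pwv_from_ztd(ztd_m, pressure_hpa, tm_kelvin, lat_deg, height_m):
    # Saastamoinen zenith hydrostatic delay (metres)
    zhd = 0.0022768 * pressure_hpa / (
        1 - 0.00266 * math.cos(2 * math.radians(lat_deg)) - 2.8e-7 * height_m)
    zwd = ztd_m - zhd                       # zenith wet delay
    # Dimensionless factor Pi(Tm); k2' in K/hPa, k3 in K^2/hPa (typical values)
    k2p, k3 = 22.1, 3.739e5
    rho_w, r_v = 1000.0, 461.5              # water density kg/m^3, R_v J/(kg K)
    pi = 1e8 / (rho_w * r_v * (k3 / tm_kelvin + k2p))
    return pi * zwd                         # PWV in metres of liquid water

# e.g. pwv_from_ztd(2.40, 1013.0, 275.0, 45.0, 200.0) gives roughly 0.015 m (1.5 cm)
```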

Keywords: GNSS, precise point positioning, Zenith tropospheric delays, precipitable water vapor

Procedia PDF Downloads 198
4988 Maximum Power Point Tracking Using Fuzzy Logic Control for a Stand-Alone PV System with PI Controller for Battery Charging Based on Evolutionary Technique

Authors: Mohamed A. Moustafa Hassan, Omnia S .S. Hussian, Hany M. Elsaved

Abstract:

This paper introduces the application of Fuzzy Logic Controller (FLC) to extract the Maximum Power Point Tracking (MPPT) from the PV panel. In addition, the proportional integral (PI) controller is used to be the strategy for battery charge control according to acceptable performance criteria. The parameters of the PI controller have been tuned via Modified Adaptive Accelerated Coefficient Particle Swarm Optimization (MAACPSO) technique. The simulation results, using MATLAB/Simulink tools, show that the FLC technique has advantages for use in the MPPT problem, as it provides a fast response under changes in environmental conditions such as radiation and temperature. In addition, the use of PI controller based on MAACPSO results in a good performance in terms of controlling battery charging with constant voltage and current to execute rapid charging.

Keywords: battery charging, fuzzy logic control, maximum power point tracking, PV system, PI controller, evolutionary technique

Procedia PDF Downloads 166
4987 Empirical Study of Correlation between the Cost Performance Index Stability and the Project Cost Forecast Accuracy in Construction Projects

Authors: Amin AminiKhafri, James M. Dawson-Edwards, Ryan M. Simpson, Simaan M. AbouRizk

Abstract:

Earned value management (EVM) has been introduced as an integrated method to combine schedule, budget, and the work breakdown structure (WBS). EVM provides various indices to demonstrate project performance, including the cost performance index (CPI). The CPI is also used to forecast the final project cost at completion based on cost performance during project execution. Knowing the final project cost during execution allows corrective actions to be initiated, which can enhance project outcomes. The CPI, however, is not constant during the project, and calculating the final project cost using a variable index is an inaccurate and challenging task for practitioners. Since the CPI is based on cumulative progress values, and because of the learning curve effect, CPI variation dampens and stabilizes as the project progresses. Although various definitions of CPI stability have been proposed in the literature, many scholars have agreed upon the definition that considers a project stable if the CPI at 20% completion varies by less than 0.1 from the final CPI. While the 20% completion point is recognized as the stability point for military development projects, the stability of construction projects has not been studied. In the current study, an empirical study was first conducted using construction project data to determine the stability point for construction projects. Early findings demonstrated that a majority of construction projects stabilize towards completion (i.e., after the 70% completion point). To investigate the effect of CPI stability on cost forecast accuracy, the correlation between CPI stability and the accuracy of the project cost at completion forecast was also investigated. It was determined that as projects progress closer towards completion, the variation of the CPI decreases and the accuracy of the final project cost forecast increases. Most projects were found to have 90% accuracy in the final cost forecast at the 70% completion point, which is in line with the findings on CPI stability. It can be concluded that early stabilization of the project CPI results in more accurate cost-at-completion forecasts.
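
For reference, the two EVM quantities the study relies on are the cost performance index, CPI = EV / AC, and the CPI-based estimate at completion, EAC = BAC / CPI; the numbers in the short example below are purely illustrative.

```python
budget_at_completion = 10_000_000   # BAC, the total project budget
earned_value = 2_000_000            # EV at roughly 20% completion
actual_cost = 2_200_000             # AC spent to date

cpi = earned_value / actual_cost    # ~0.909: running over budget so far
eac = budget_at_completion / cpi    # ~11.0 million forecast final cost
print(round(cpi, 3), round(eac))
```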

Keywords: cost performance index, earned value management, empirical study, final project cost

Procedia PDF Downloads 156
4986 Distributed Perceptually Important Point Identification for Time Series Data Mining

Authors: Tak-Chung Fu, Ying-Kit Hung, Fu-Lai Chung

Abstract:

In the field of time series data mining, the concept of the Perceptually Important Point (PIP) identification process was first introduced in 2001. The process was originally developed for financial time series pattern matching and was then found suitable for time series dimensionality reduction and representation. Its strength lies in preserving the overall shape of the time series by identifying the salient points in it. With the rise of Big Data, time series data contributes a major proportion, especially data generated by sensors in Internet of Things (IoT) environments. Given the nature of PIP identification and its successful cases, it is worth further exploring the opportunity to apply PIP to time series 'Big Data'. However, the performance of PIP identification has always been considered the limitation when dealing with 'big' time series data. In this paper, two distributed versions of PIP identification based on the Specialized Binary (SB) Tree are proposed. The proposed approaches remove the bottleneck encountered when running the PIP identification process on a standalone computer, and the distributed versions obtain an improvement in terms of speed.
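
A compact, single-machine sketch of classical PIP identification (vertical-distance variant) is given below for orientation; the SB-tree indexing and the distributed scheduling that are the actual contribution of the paper are not reproduced here.

```python
import numpy as np

def pip_identify(series, n_pips):
    series = np.asarray(series, dtype=float)
    x = np.arange(len(series), dtype=float)
    pips = [0, len(series) - 1]                  # always keep the two endpoints
    while len(pips) < min(n_pips, len(series)):
        pips.sort()
        best_idx, best_dist = None, -1.0
        for a, b in zip(pips[:-1], pips[1:]):    # scan every current segment
            if b - a < 2:
                continue
            # vertical distance of each interior point to the chord joining a and b
            t = (x[a + 1:b] - x[a]) / (x[b] - x[a])
            chord = series[a] + t * (series[b] - series[a])
            dist = np.abs(series[a + 1:b] - chord)
            k = int(np.argmax(dist))
            if dist[k] > best_dist:
                best_dist, best_idx = dist[k], a + 1 + k
        pips.append(best_idx)                    # keep the most salient point found
    return sorted(pips)
```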

Keywords: distributed computing, performance analysis, Perceptually Important Point identification, time series data mining

Procedia PDF Downloads 435
4985 Study and Simulation of a Dynamic System Using Digital Twin

Authors: J.P. Henriques, E. R. Neto, G. Almeida, G. Ribeiro, J.V. Coutinho, A.B. Lugli

Abstract:

Industry 4.0, or the Fourth Industrial Revolution, is transforming the relationship between people and machines. In this scenario, technologies such as Cloud Computing, the Internet of Things, Augmented Reality, Artificial Intelligence, and Additive Manufacturing, among others, are making industries and devices increasingly intelligent. One of the most powerful technologies of this new revolution is the Digital Twin, which allows the virtualization of a real system or process. In this context, the present paper addresses the linear and nonlinear dynamic study of a didactic level plant using a Digital Twin. In the first part of the work, the level plant is identified at a fixed operating point using the method of least squares. The linearized model is embedded in a Digital Twin using Automation Studio® from Famic Technologies. Then, in order to validate the usage of the Digital Twin in the linearized study of the plant, the dynamic response of the real system is compared to that of the Digital Twin. Furthermore, in order to develop the nonlinear model on a Digital Twin, the didactic level plant is identified using the Hammerstein method. Different steps are applied to the plant, and from the Hammerstein algorithm, the nonlinear model is obtained for all operating ranges of the plant. As for the linear approach, the nonlinear model is embedded in the Digital Twin, and the dynamic response is compared to the real system at different operating points. Finally, and importantly, from the practical results obtained, one can conclude that the usage of Digital Twins to study dynamic systems is extremely useful in the industrial environment, taking into account that it is possible to develop and tune controllers using the virtual model of the real system.
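
As a small illustration of the fixed-operating-point identification step, the sketch below fits a first-order discrete-time model level[k+1] = a*level[k] + b*u[k] to logged plant data by ordinary least squares; the signal names are assumptions, and the authors' Hammerstein identification for the nonlinear range is not shown.

```python
import numpy as np

def identify_first_order(level, u):
    # Regressor matrix built from the measured tank level and the pump command
    level, u = np.asarray(level, float), np.asarray(u, float)
    phi = np.column_stack([level[:-1], u[:-1]])
    y = level[1:]
    theta, *_ = np.linalg.lstsq(phi, y, rcond=None)
    a, b = theta
    return a, b   # discrete-time parameters to embed in the linear Digital Twin
```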

Keywords: industry 4.0, digital twin, system identification, linear and nonlinear models

Procedia PDF Downloads 148
4984 Creation and Annihilation of Spacetime Elements

Authors: Dnyanesh P. Mathur, Gregory L. Slater

Abstract:

Gravitation and the expansion of the universe at a large scale are generally regarded as two completely distinct phenomena. Yet, in general relativity theory, they both manifest as 'curvature' of spacetime. We propose a hypothesis which treats these two 'curvature-producing' phenomena as aspects of an underlying process. This process treats spacetime itself as composed of discrete units (Plancktons) and is 'dynamic' in the sense that these elements of spacetime are continually being both created and annihilated. It is these two complementary processes of Planckton creation and Planckton annihilation which manifest themselves as 'cosmic expansion' on the one hand and as 'gravitational attraction' on the other. The Planckton hypothesis treats spacetime as a perfect fluid, in the same manner as the co-moving frame of reference of the Friedmann equations and the Gullstrand-Painleve metric; i.e., the Planckton hypothesis replaces 'curvature' of spacetime by the 'flow' of Plancktons (spacetime). Here we discuss how this perspective may allow a unified description of both cosmological and gravitational acceleration, as well as providing a mechanism for inducing an irreducible action at every point, associated with the creation and annihilation of Plancktons, which could be identified as the zero point energy.

Keywords: discrete spacetime, spacetime flow, zero point energy, planktons

Procedia PDF Downloads 114
4983 Modeling Average Paths Traveled by Ferry Vessels Using AIS Data

Authors: Devin Simmons

Abstract:

At the USDOT’s Bureau of Transportation Statistics, a biannual census of ferry operators in the U.S. is conducted, with results such as route mileage used to determine federal funding levels for operators. AIS data allows for the possibility of using GIS software and geographical methods to confirm operator-reported mileage for individual ferry routes. As part of the USDOT’s work on the ferry census, an algorithm was developed that uses AIS data for ferry vessels in conjunction with known ferry terminal locations to model the average route travelled for use as both a cartographic product and confirmation of operator-reported mileage. AIS data from each vessel is first analyzed to determine individual journeys based on the vessel’s velocity, and changes in velocity over time. These trips are then converted to geographic linestring objects. Using the terminal locations, the algorithm then determines whether the trip represented a known ferry route. Given a large enough dataset, routes will be represented by multiple trip linestrings, which are then filtered by DBSCAN spatial clustering to remove outliers. Finally, these remaining trips are ready to be averaged into one route. The algorithm interpolates the point on each trip linestring that represents the start point. From these start points, a centroid is calculated, and the first point of the average route is determined. Each trip is interpolated again to find the point that represents one percent of the journey’s completion, and the centroid of those points is used as the next point in the average route, and so on until 100 points have been calculated. Routes created using this algorithm have shown demonstrable improvement over previous methods, which included the implementation of a LOESS model. Additionally, the algorithm greatly reduces the amount of manual digitizing needed to visualize ferry activity.
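
The averaging step can be sketched roughly as follows using shapely; this is an illustrative reconstruction rather than the BTS production algorithm, and it assumes the trip linestrings have already been segmented, matched to a route, and filtered with DBSCAN.

```python
import numpy as np
from shapely.geometry import LineString

def average_route(trips, n_points=100):
    fractions = np.linspace(0.0, 1.0, n_points)
    route = []
    for f in fractions:
        # interpolate the point at fraction f along each trip, then take the centroid
        pts = np.array([t.interpolate(f, normalized=True).coords[0] for t in trips])
        route.append(tuple(pts.mean(axis=0)))
    return LineString(route)   # representative route, one point per percent of travel
```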

Keywords: ferry vessels, transportation, modeling, AIS data

Procedia PDF Downloads 176
4982 Effect of Intrinsic Point Defects on the Structural and Optical Properties of SnO₂ Thin Films Grown by Ultrasonic Spray Pyrolysis Method

Authors: Fatiha Besahraoui, M'hamed Guezzoul, Kheira Chebbah, M'hamed Bouslama

Abstract:

An SnO₂ thin film was characterized by atomic force microscopy (AFM) and photoluminescence (PL) spectroscopy. AFM images show a dense surface of columnar grains with a roughness of 78.69 nm. The PL measurements at 7 K reveal the presence of PL peaks centered in the IR and visible regions. They are attributed to radiative transitions via oxygen vacancies, Sn interstitials, and dangling bonds. A band diagram model is presented with the approximate positions of the intrinsic point defect levels in SnO₂ thin films. The integrated PL measurements demonstrate the good thermal stability of our sample, which makes it very useful in optoelectronic devices operating at room temperature. The unusual behavior of the evolution of the PL peaks and their full width at half maximum as a function of temperature indicates the thermal sensitivity of the point defects present in the band gap. The shallower energy levels due to dangling bonds and/or oxygen vacancies are more sensitive to temperature. However, volume defects like Sn interstitials are thermally stable and constitute deep, stable energy levels for excited electrons. A small redshift of the PL peaks is observed with increasing temperature; this behavior is attributed to the reduction of oxygen vacancies.

Keywords: transparent conducting oxide, photoluminescence, intrinsic point defects, semiconductors, oxygen vacancies

Procedia PDF Downloads 85