Search results for: analyst forecast accuracy
622 Detection of Safety Goggles on Humans in Industrial Environment Using Faster-Region Based on Convolutional Neural Network with Rotated Bounding Box
Authors: Ankit Kamboj, Shikha Talwar, Nilesh Powar
Abstract:
To successfully deliver our products in the market, the employees need to be in a safe environment, especially in an industrial and manufacturing environment. The consequences of delinquency in wearing safety glasses while working in industrial plants could be high risk to employees, hence the need to develop a real-time automatic detection system which detects the persons (violators) not wearing safety glasses. In this study a convolutional neural network (CNN) algorithm called faster region based CNN (Faster RCNN) with rotated bounding box has been used for detecting safety glasses on persons; the algorithm has an advantage of detecting safety glasses with different orientation angles on the persons. The proposed method of rotational bounding boxes with a convolutional neural network first detects a person from the images, and then the method detects whether the person is wearing safety glasses or not. The video data is captured at the entrance of restricted zones of the industrial environment (manufacturing plant), which is further converted into images at 2 frames per second. In the first step, the CNN with pre-trained weights on COCO dataset is used for person detection where the detections are cropped as images. Then the safety goggles are labelled on the cropped images using the image labelling tool called roLabelImg, which is used to annotate the ground truth values of rotated objects more accurately, and the annotations obtained are further modified to depict four coordinates of the rectangular bounding box. Next, the faster RCNN with rotated bounding box is used to detect safety goggles, which is then compared with traditional bounding box faster RCNN in terms of detection accuracy (average precision), which shows the effectiveness of the proposed method for detection of rotatory objects. The deep learning benchmarking is done on a Dell workstation with a 16GB Nvidia GPU.Keywords: CNN, deep learning, faster RCNN, roLabelImg rotated bounding box, safety goggle detection
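As a rough illustration of the annotation step described above — converting a rotated box (centre, size, angle) into the four corner coordinates fed to the detector — here is a minimal Python sketch; the variable names and sample values are assumptions, not the study's pipeline.

```python
# Minimal sketch (assumed roLabelImg-style annotation): convert a rotated box
# (cx, cy, w, h, angle in radians) into its four corner coordinates.
import numpy as np

def rotated_box_to_corners(cx, cy, w, h, angle):
    # Corners of an axis-aligned box centred at the origin
    corners = np.array([[-w / 2, -h / 2],
                        [ w / 2, -h / 2],
                        [ w / 2,  h / 2],
                        [-w / 2,  h / 2]])
    # Rotate by the annotated angle, then translate to the box centre
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    return corners @ rot.T + np.array([cx, cy])

print(rotated_box_to_corners(120.0, 80.0, 60.0, 25.0, np.deg2rad(15)))
```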
Procedia PDF Downloads 127
621 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data
Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple
Abstract:
In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers then the outputs were combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from United States Geological Survey, European Space Agency and OpenStreetMap and a river network derived from a Shuttle Rader Topography Mission Digital Elevation Model. From both raster-based and vector-based accuracy assessments, it was found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied for modelling dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up to date, detailed river network data for the many countries where such data are deficient.Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network
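A minimal sketch of the thinning step only: the binary persistent-water mask is assumed to come from the unsupervised classification of the multi-temporal Sentinel 1 stack, and the toy mask below stands in for it.

```python
# Illustrative sketch of the centre-line extraction step: skeletonize reduces a
# binary water mask (1 = water) to 1-pixel-wide centre lines.
import numpy as np
from skimage.morphology import skeletonize

water_mask = np.zeros((200, 200), dtype=bool)
water_mask[90:110, :] = True          # toy "river" 20 pixels wide

centreline = skeletonize(water_mask)  # 1-pixel-wide centre line
print(int(centreline.sum()), "centre-line pixels")
```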
Procedia PDF Downloads 137
620 Radar Track-based Classification of Birds and UAVs
Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo
Abstract:
In recent years, the number of Unmanned Aerial Vehicles (UAVs) has increased significantly. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a serious threat to civil and military installations: detection, classification and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios with a high number of tracks related to flying birds make the drone detection task especially challenging: the operator's PPI is cluttered with a large number of potential threats, and reaction time can be severely affected. Compared with UAVs, flying birds show similar velocity, radar cross-section and, in general, similar characteristics. Since no single feature is able to distinguish UAVs from birds, this paper uses a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds and UAVs. Radar tracks acquired in the field for different UAVs and birds performing various trajectories were used to extract specifically designed target movement-related features based on velocity, trajectory and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression, etc.), both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with suitable classification accuracy (higher than 95%). Keywords: birds, classification, machine learning, UAVs
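The following compact sketch illustrates the general idea of genetic-algorithm feature-subset selection wrapped around a cross-validated classifier; the synthetic data, classifier choice and GA parameters are illustrative assumptions, not the paper's implementation.

```python
# Compact GA feature selection: binary chromosomes encode feature subsets,
# fitness is cross-validated accuracy of a classifier on the selected columns.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=5).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))          # initial population
for _ in range(15):                                       # generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]               # keep the best half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05              # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "accuracy:", round(fitness(best), 3))
```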
Procedia PDF Downloads 218
619 River Habitat Modeling for the Entire Macroinvertebrate Community
Authors: Pinna Beatrice, Laini Alex, Negro Giovanni, Burgazzi Gemma, Viaroli Pierluigi, Vezza Paolo
Abstract:
Habitat models rarely consider macroinvertebrates as ecological targets in rivers. Available approaches mainly focus on single macroinvertebrate species, not addressing the ecological needs and functionality of the entire community. This research aimed to provide an approach to model the habitat of the macroinvertebrate community. The approach is based on the recently developed Flow-T index, together with a Random Forest (RF) regression, which is employed to apply the Flow-T index at the meso-habitat scale. Using different datasets gathered from both field data collection and 2D hydrodynamic simulations, the model has been calibrated in the Trebbia river (2019 campaign), and then validated in the Trebbia, Taro, and Enza rivers (2020 campaign). The three rivers are characterized by a braiding morphology, gravel riverbeds, and summer low flows. The RF model selected 12 mesohabitat descriptors as important for the macroinvertebrate community. These descriptors belong to different frequency classes of water depth, flow velocity, substrate grain size, and connectivity to the main river channel. The cross-validation R² coefficient (R²𝒸ᵥ) of the training dataset is 0.71 for the Trebbia River (2019), whereas the R² coefficient for the validation datasets (Trebbia, Taro, and Enza Rivers 2020) is 0.63. The agreement between the simulated results and the experimental data shows sufficient accuracy and reliability. The outcomes of the study reveal that the model can identify the ecological response of the macroinvertebrate community to possible flow regime alterations and to possible river morphological modifications. Lastly, the proposed approach allows extending the MesoHABSIM methodology, widely used for the fish habitat assessment, to a different ecological target community. Further applications of the approach can be related to flow design in both perennial and non-perennial rivers, including river reaches in which fish fauna is absent.Keywords: ecological flows, macroinvertebrate community, mesohabitat, river habitat modeling
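A short sketch of the regression step named in the abstract — a Random Forest scored by cross-validated R² over mesohabitat descriptors — using synthetic stand-in data.

```python
# Sketch of the regression step: a Random Forest predicts the habitat response from
# 12 mesohabitat descriptors; synthetic descriptors stand in for the field data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((150, 12))                                        # 12 descriptors
y = X[:, 0] * 0.6 + X[:, 3] * 0.3 + rng.normal(0, 0.05, 150)     # toy response

rf = RandomForestRegressor(n_estimators=300, random_state=1)
r2_cv = cross_val_score(rf, X, y, cv=5, scoring="r2").mean()
print("cross-validated R2:", round(r2_cv, 2))

rf.fit(X, y)
print("descriptor importances:", np.round(rf.feature_importances_, 2))
```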
Procedia PDF Downloads 93
618 Analysis of Autonomous Orbit Determination for Lagrangian Navigation Constellation with Different Dynamical Models
Authors: Gao Youtao, Zhao Tanran, Jin Bingyu, Xu Bo
Abstract:
A global navigation satellite system (GNSS) can deliver navigation information for spacecraft orbiting on low-Earth orbits and medium-Earth orbits. However, GNSS cannot effectively navigate spacecraft on high-Earth orbits or deep space probes. With deep space exploration becoming a hot spot of aerospace, the demand for a deep space satellite navigation system is becoming increasingly prominent. Many researchers have discussed the feasibility and performance of a satellite navigation system on periodic orbits around the Earth-Moon libration points, which can be called a Lagrangian point satellite navigation system. Autonomous orbit determination (AOD) is an important capability for the Lagrangian point satellite navigation system. With this ability, the Lagrangian point satellite navigation system can reduce its dependency on ground stations. AOD can also greatly reduce total system cost and assure mission continuity. As the elliptical restricted three-body problem describes the Earth-Moon system more accurately than the circular restricted three-body problem, we study the autonomous orbit determination of a Lagrangian navigation constellation using only crosslink range, based on the elliptical restricted three-body problem. An extended Kalman filter is used in the autonomous orbit determination. In order to compare the autonomous orbit determination results based on the elliptical restricted three-body problem with those based on the circular restricted three-body problem, we present the autonomous orbit determination position errors of a navigation constellation comprising four satellites based on the circular restricted three-body problem. The simulation results show that the Lagrangian navigation constellation can achieve long-term precise autonomous orbit determination using only crosslink range. In addition, the type of libration point orbit influences the autonomous orbit determination accuracy. Keywords: extended Kalman filter, autonomous orbit determination, quasi-periodic orbit, navigation constellation
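The sketch below shows only the structure of an extended Kalman filter with a crosslink-range measurement; the placeholder constant-velocity dynamics is an assumption for brevity, whereas the study propagates the elliptical restricted three-body dynamics.

```python
# Structural sketch of an EKF with a crosslink-range measurement. The placeholder
# linear dynamics stands in for the true three-body propagation.
import numpy as np

dt = 60.0
F = np.eye(8)
F[0:4, 4:8] = dt * np.eye(4)          # state: [x1, y1, x2, y2, vx1, vy1, vx2, vy2]
Q = 1e-6 * np.eye(8)
R = np.array([[1e-4]])                # assumed crosslink range noise (km^2)

def h(x):                             # inter-satellite range
    d = x[0:2] - x[2:4]
    return np.array([np.linalg.norm(d)])

def H_jac(x):                         # Jacobian of the range w.r.t. the state
    d = x[0:2] - x[2:4]
    r = np.linalg.norm(d)
    H = np.zeros((1, 8))
    H[0, 0:2] = d / r
    H[0, 2:4] = -d / r
    return H

x = np.array([7000.0, 0.0, 0.0, 7000.0, 0.0, 1.0, -1.0, 0.0])   # toy state (km, km/s)
P = np.eye(8)
z = np.array([9900.2])                # one simulated range measurement (km)

x = F @ x                             # predict
P = F @ P @ F.T + Q
H = H_jac(x)                          # update with the range measurement
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - h(x))
P = (np.eye(8) - K @ H) @ P
print("updated positions:", np.round(x[:4], 2))
```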
Procedia PDF Downloads 280
617 Robust Numerical Solution for Flow Problems
Authors: Gregor Kosec
Abstract:
A simple and robust numerical approach for solving flow problems is presented, in which the physical fields involved are represented through local approximation functions, i.e., the considered field is approximated over a local support domain. The approximation functions are then used to evaluate the partial differential operators. The type of approximation, the size of the support domain, and the type and number of basis functions can be general. The solution procedure is formulated entirely through local computational operations. In addition to the spatial discretization, the pressure-velocity coupling is also treated locally while retaining the correct temporal transient. The complete locality of the introduced numerical scheme has several beneficial effects. One of the most attractive is its simplicity, since it can be understood as a generalized Finite Difference Method, yet much more powerful. The presented methodology offers many possibilities for treating challenging cases, e.g., nodal adaptivity to address regions with sharp discontinuities or p-adaptivity to treat obscure anomalies in the physical field. The balance between stability, computational complexity, and accuracy can be regulated by changing the number of support nodes, etc. All these features can be controlled on the fly during the simulation. The presented methodology is relatively simple to understand and implement, which makes it a potentially powerful tool for engineering simulations. Besides simplicity and straightforward implementation, there are many opportunities to fully exploit modern computer architectures through different parallel computing strategies. The performance of the method is demonstrated on the lid-driven cavity problem, the backward-facing step problem, and the de Vahl Davis natural convection test, extended also to a low-Prandtl-number fluid and Darcy porous flow. Results are presented in terms of velocity profiles, convergence plots, and stability analyses. Results for all cases are also compared against published data. Keywords: fluid flow, meshless, low Pr problem, natural convection
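A minimal sketch of the local-approximation idea, in a generalised finite-difference spirit: fit a quadratic over a node's support by least squares and read the Laplacian from the fitted coefficients. The node layout and support size are illustrative assumptions, not the paper's scheme.

```python
# Local least-squares approximation of a field over a support domain, then the
# Laplacian evaluated from the quadratic coefficients (meshless/GFD flavour).
import numpy as np

rng = np.random.default_rng(2)
nodes = rng.random((400, 2))                                     # scattered nodes
u = np.sin(np.pi * nodes[:, 0]) * np.sin(np.pi * nodes[:, 1])    # sample field

def laplacian_at(i, n_support=15):
    d = np.linalg.norm(nodes - nodes[i], axis=1)
    idx = np.argsort(d)[:n_support]              # local support domain
    dx, dy = (nodes[idx] - nodes[i]).T
    # monomial basis 1, x, y, x^2, x*y, y^2 centred on the node
    A = np.column_stack([np.ones_like(dx), dx, dy, dx**2, dx * dy, dy**2])
    coef, *_ = np.linalg.lstsq(A, u[idx], rcond=None)
    return 2.0 * (coef[3] + coef[5])             # u_xx + u_yy from quadratic terms

i = 200
exact = -2.0 * np.pi**2 * u[i]
print("local Laplacian:", round(laplacian_at(i), 3), "exact:", round(exact, 3))
```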
Procedia PDF Downloads 231
616 Experimental Assessment of the Effectiveness of Judicial Instructions and of Expert Testimony in Improving Jurors’ Evaluation of Eyewitness Evidence
Authors: Alena Skalon, Jennifer L. Beaudry
Abstract:
Eyewitness misidentifications can sometimes lead to wrongful convictions of innocent people. This occurs in part because jurors tend to believe confident eyewitnesses even when the identification took place under suggestive conditions. Empirical research demonstrated that jurors are often unaware of the factors that can influence the reliability of eyewitness identification. Most common legal safeguards that are designed to educate jurors about eyewitness evidence are judicial instructions and expert testimony. To date, very few studies assessed the effectiveness of judicial instructions and most of them found that judicial instructions make jurors more skeptical of eyewitness evidence or do not have any effect on jurors’ judgments. Similar results were obtained for expert testimony. However, none of the previous studies focused on the ability of legal safeguards to improve jurors’ assessment of evidence obtained from suggestive identification procedures—this is one of the gaps addressed by this paper. Furthermore, only three studies investigated whether legal safeguards improve the ultimate accuracy of jurors’ judgments—that is, whether after listening to judicial instructions or expert testimony jurors can differentiate between accurate and inaccurate eyewitnesses. This presentation includes two studies. Both studies used genuine eyewitnesses (i.e., eyewitnesses who watched the crime) and manipulated the suggestiveness of identification procedures. The first study manipulated the presence of judicial instructions; the second study manipulated the presence of one of two types of expert testimony: a traditional, verbal expert testimony or expert testimony accompanied by visual aids. All participant watched a video-recording of an identification procedure and of an eyewitness testimony. The results indicated that neither judicial instructions nor expert testimony affected jurors’ judgments. However, consistent with the previous findings, when the identification procedure was non-suggestive, jurors believed accurate eyewitnesses more often than inaccurate eyewitnesses. When the procedure was suggestive, jurors believed accurate and inaccurate eyewitnesses at the same rate. The paper will discuss the implications of these studies and directions for future research.Keywords: expert testimony, eyewitness evidence, judicial instructions, jurors’ decision making, legal safeguards
Procedia PDF Downloads 175
615 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach
Authors: James Ladzekpo
Abstract:
Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We have implemented a recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17% and outstanding metrics such as area under the receiver operating characteristics curve (AUC) with a median of 68%, alongside median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend future investigations to incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.Keywords: diabetes, machine learning, prediction, biomarkers
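A brief sketch of the described selection-plus-classification pipeline — recursive feature elimination with cross-validation using a linear-kernel SVM, followed by a Gradient Boosting classifier — on synthetic stand-in data.

```python
# RFECV with a linear SVM for feature selection, then Gradient Boosting evaluated
# on the retained features. Synthetic data stands in for the genetic profiles.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=100, n_informative=10, random_state=0)

selector = RFECV(SVC(kernel="linear"), step=5, cv=5)   # RFE needs a linear SVM (coef_)
X_sel = selector.fit_transform(X, y)
print("features kept:", selector.n_features_)

gbc = GradientBoostingClassifier(random_state=0)
print("recall (5-fold):", cross_val_score(gbc, X_sel, y, cv=5, scoring="recall").mean().round(3))
```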
Procedia PDF Downloads 53
614 Comparison Of Virtual Non-Contrast To True Non-Contrast Images Using Dual Layer Spectral Computed Tomography
Authors: O’Day Luke
Abstract:
Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative for the acquisition of a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using Dual-Layer Spectral CT, from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, generated from both portal venous and arterial phase imaging were evaluated. For every patient in both true and virtual non-contrast datasets, a region-of-interest (ROI) was defined in aorta, liver, fluid (i.e. gallbladder, urinary bladder), kidney, muscle, fat and spongious bone, resulting in 693 ROIs. Differences in attenuation for VNC and TNV images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phase was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU and > 90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phase showed a good correlation in most tissue types. The aortic attenuation was somewhat dependent however on which dataset was used for reconstruction. Bone evaluation with VNC datasets continues to be a problem, as spectral CT algorithms are currently poor in differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies. This study shows the utility of virtual non-contrast scans as an alternative for true non-contrast studies during multiphase CT, with potential for dose reduction, without loss of diagnostic information.Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison
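A small sketch of the reported comparison: paired TNC and VNC ROI values, the mean difference, and the share of pairs within 15 HU. The numbers are illustrative, not the study's measurements.

```python
# Paired TNC/VNC ROI attenuation comparison (illustrative values, in HU).
import numpy as np

tnc = np.array([45.0, 52.0, 8.0, 30.0, 55.0, -95.0, 180.0])   # true non-contrast ROIs
vnc = np.array([41.0, 48.0, 12.0, 27.0, 49.0, -90.0, 150.0])  # virtual non-contrast ROIs

diff = tnc - vnc
print("mean difference: %.1f ± %.1f HU" % (diff.mean(), diff.std(ddof=1)))
print("within 15 HU: %.0f%%" % (100 * np.mean(np.abs(diff) <= 15)))
```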
Procedia PDF Downloads 140
613 Design of Traffic Counting Android Application with Database Management System and Its Comparative Analysis with Traditional Counting Methods
Authors: Muhammad Nouman, Fahad Tiwana, Muhammad Irfan, Mohsin Tiwana
Abstract:
Traffic congestion has been increasing significantly in major metropolitan areas as a result of increased motorization, urbanization, population growth and changes in the urban density. Traffic congestion compromises efficiency of transport infrastructure and causes multiple traffic concerns; including but not limited to increase of travel time, safety hazards, air pollution, and fuel consumption. Traffic management has become a serious challenge for federal and provincial governments, as well as exasperated commuters. Effective, flexible, efficient and user-friendly traffic information/database management systems characterize traffic conditions by making use of traffic counts for storage, processing, and visualization. While, the emerging data collection technologies continue to proliferate, its accuracy can be guaranteed through the comparison of observed data with the manual handheld counters. This paper presents the design of tablet based manual traffic counting application and framework for development of traffic database management system for Pakistan. The database management system comprises of three components including traffic counting android application; establishing online database and its visualization using Google maps. Oracle relational database was chosen to develop the data structure whereas structured query language (SQL) was adopted to program the system architecture. The GIS application links the data from the database and projects it onto a dynamic map for traffic conditions visualization. The traffic counting device and example of a database application in the real-world problem provided a creative outlet to visualize the uses and advantages of a database management system in real time. Also, traffic data counts by means of handheld tablet/ mobile application can be used for transportation planning and forecasting.Keywords: manual count, emerging data sources, traffic information quality, traffic surveillance, traffic counting device, android; data visualization, traffic management
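A minimal stand-in for the counting database: the study uses an Oracle relational database queried with SQL, but the same table layout and summary query can be sketched with sqlite3; the column names are assumptions for illustration.

```python
# Toy traffic-count table and summary query; sqlite3 stands in for the Oracle backend.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE traffic_counts (
                 station_id TEXT, counted_at TEXT,
                 vehicle_class TEXT, direction TEXT, count INTEGER)""")
rows = [("ST-01", "2018-05-01 08:00", "car",   "NB", 112),
        ("ST-01", "2018-05-01 08:00", "truck", "NB", 9),
        ("ST-01", "2018-05-01 08:15", "car",   "NB", 131)]
con.executemany("INSERT INTO traffic_counts VALUES (?, ?, ?, ?, ?)", rows)

for row in con.execute("""SELECT station_id, vehicle_class, SUM(count)
                          FROM traffic_counts GROUP BY station_id, vehicle_class"""):
    print(row)
```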
Procedia PDF Downloads 191
612 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour
Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling
Abstract:
Digital Twin (DT) technology is a new technology that appeared in the early 21st century. The DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although there are lots of studies conducted on the DT concept, there is still limited information about the ability of the DT models for monitoring and detecting unexpected changes in structural behaviour in real time. This is due to the large computational efforts required for the analysis and an excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to be able to detect the abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking into consideration model uncertainties. finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of low-dimensional space to speed the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it has appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds the FE analysis during the online stage. The constructed RB model showed higher accuracy for predicting the damage severity, while deep learning algorithms were found to be useful for estimating the location of damage with small severity.Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model
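A compact sketch of the offline/online reduced-basis idea: build a POD basis from offline snapshots with the SVD, then solve only the small projected system online. The toy linear system stands in for the FE model of the truss; sizes and the number of retained modes are assumptions.

```python
# Offline: snapshots of full solutions -> POD basis via SVD.
# Online: solve the small projected (reduced-basis) system and lift back.
import numpy as np

rng = np.random.default_rng(3)
n = 500
A = np.diag(np.linspace(1.0, 10.0, n))                 # stand-in stiffness matrix
f_shapes = rng.random((n, 3))                          # parameterised load shapes

snapshots = np.column_stack(
    [np.linalg.solve(A, f_shapes @ rng.random(3)) for _ in range(20)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :5]                                           # reduced basis (5 modes kept)

f_new = f_shapes @ np.array([0.2, 0.5, 0.3])           # a new online load case
u_rb = V @ np.linalg.solve(V.T @ A @ V, V.T @ f_new)
u_full = np.linalg.solve(A, f_new)
print("relative RB error:", np.linalg.norm(u_rb - u_full) / np.linalg.norm(u_full))
```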
Procedia PDF Downloads 97
611 A Cooperative Signaling Scheme for Global Navigation Satellite Systems
Authors: Keunhong Chae, Seokho Yoon
Abstract:
Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for the location service, thus calling for a more efficient signaling scheme among the satellites used in the overall GNSS network. Since it improves the network throughput, spatial diversity can be one such efficient signaling scheme; however, it requires multiple antennas, which could cause a significant increase in the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, where virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna in the transmit satellite of interest and modeling the neighboring satellites as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate in an asynchronous way, and thus the overall performance of the GNSS network could degrade severely. To tackle the problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement due to the signal decoding required at the relay nodes. Although the implementation at the relay nodes could be made somewhat simpler by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient if the operations of the relay nodes could be implemented at the source node, which has more resources than the relay nodes. In this paper, we therefore propose a novel cooperative signaling scheme, where the data signals are combined in a unique way at the source node, thus obviating the need for complex operations such as signal decoding, time-reversal, and conjugation at the relay nodes. The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while significantly reducing the complexity at the relay nodes. Acknowledgment: This work was supported by the National GNSS Research Center program of Defense Acquisition Program Administration and Agency for Defense Development. Keywords: global navigation satellite network, cooperative signaling, data combining, nodes
Procedia PDF Downloads 279
610 Study of Land Use Changes around an Archaeological Site Using Satellite Imagery Analysis: A Case Study of Hathnora, Madhya Pradesh, India
Authors: Pranita Shivankar, Arun Suryawanshi, Prabodhachandra Deshmukh, S. V. C. Kameswara Rao
Abstract:
Many undesirable significant changes in landscapes and the regions in the vicinity of historically important structures occur as impacts due to anthropogenic activities over a period of time. A better understanding of such influences using recently developed satellite remote sensing techniques helps in planning the strategies for minimizing the negative impacts on the existing environment. In 1982, a fossilized hominid skull cap was discovered at a site located along the northern bank of the east-west flowing river Narmada in the village Hathnora. Close to the same site, the presence of Late Acheulian and Middle Palaeolithic tools have been discovered in the immediately overlying pebbly gravel, suggesting that the ‘Narmada skull’ may be from the Middle Pleistocene age. The reviews of recently carried out research studies relevant to hominid remains all over the world from Late Acheulian and Middle Palaeolithic sites suggest succession and contemporaneity of cultures there, enhancing the importance of Hathnora as a rare precious site. In this context, the maximum likelihood classification using digital interpretation techniques was carried out for this study area using the satellite imagery from Landsat ETM+ for the year 2006 and Landsat TM (OLI and TIRS) for the year 2016. The overall accuracy of Land Use Land Cover (LULC) classification of 2016 imagery was around 77.27% based on ground truth data. The significant reduction in the main river course and agricultural activities and increase in the built-up area observed in remote sensing data analysis are undoubtedly the outcome of human encroachments in the vicinity of the eminent heritage site.Keywords: cultural succession, digital interpretation, Hathnora, Homo Sapiens, Late Acheulian, Middle Palaeolithic
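A short sketch of the accuracy-assessment step: overall accuracy of a land-use/land-cover classification against ground-truth points via a confusion matrix, with illustrative class labels rather than the study's reference data.

```python
# Overall accuracy of a LULC classification against ground-truth samples.
from sklearn.metrics import accuracy_score, confusion_matrix

truth     = ["water", "built", "crop", "crop", "forest", "built", "water", "crop", "forest"]
predicted = ["water", "built", "crop", "built", "forest", "built", "crop",  "crop", "forest"]

print(confusion_matrix(truth, predicted, labels=["water", "built", "crop", "forest"]))
print("overall accuracy:", round(accuracy_score(truth, predicted), 4))
```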
Procedia PDF Downloads 170
609 Fully Coupled Porous Media Model
Authors: Nia Mair Fry, Matthew Profit, Chenfeng Li
Abstract:
This work focuses on the development and implementation of a fully implicit-implicit, coupled mechanical deformation and porous flow, finite element software tool. The fully implicit software accurately predicts classical fundamental analytical solutions such as the Terzaghi consolidation problem. Furthermore, it can capture other analytical solutions less well known in the literature, such as Gibson’s sedimentation rate problem and Coussy’s problems investigating wellbore stability for poroelastic rocks. The mechanical volume strains are transferred to the porous flow governing equation in an implicit framework. This will overcome some of the many current industrial issues, which use explicit solvers for the mechanical governing equations and only implicit solvers on the porous flow side. This can potentially lead to instability and non-convergence issues in the coupled system, plus giving results with an accountable degree of error. The specification of a fully monolithic implicit-implicit coupled porous media code sees the solution of both seepage-mechanical equations in one matrix system, under a unified time-stepping scheme, which makes the problem definition much easier. When using an explicit solver, additional input such as the damping coefficient and mass scaling factor is required, which are circumvented with a fully implicit solution. Further, improved accuracy is achieved as the solution is not dependent on predictor-corrector methods for the pore fluid pressure solution, but at the potential cost of reduced stability. In testing of this fully monolithic porous media code, there is the comparison of the fully implicit coupled scheme against an existing staggered explicit-implicit coupled scheme solution across a range of geotechnical problems. These cases include 1) Biot coefficient calculation, 2) consolidation theory with Terzaghi analytical solution, 3) sedimentation theory with Gibson analytical solution, and 4) Coussy well-bore poroelastic analytical solutions.Keywords: coupled, implicit, monolithic, porous media
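The Terzaghi benchmark cited above has a classical series solution; the sketch below evaluates the average degree of consolidation U(Tv) that a coupled deformation/flow code is typically verified against.

```python
# Terzaghi 1D consolidation: average degree of consolidation as a function of the
# time factor Tv, U(Tv) = 1 - sum 2/M^2 exp(-M^2 Tv), M = pi/2 (2m+1).
import numpy as np

def degree_of_consolidation(Tv, n_terms=100):
    m = np.arange(n_terms)
    M = np.pi / 2.0 * (2.0 * m + 1.0)
    return 1.0 - np.sum(2.0 / M**2 * np.exp(-(M**2) * Tv))

for Tv in (0.05, 0.197, 0.5, 1.0):
    print("Tv = %.3f  ->  U = %.3f" % (Tv, degree_of_consolidation(Tv)))
```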
Procedia PDF Downloads 137
608 Aerial Photogrammetry-Based Techniques to Rebuild the 30-Years Landform Changes of a Landslide-Dominated Watershed in Taiwan
Authors: Yichin Chen
Abstract:
Taiwan is an island characterized by an active tectonics and high erosion rates. Monitoring the dynamic landscape of Taiwan is an important issue for disaster mitigation, geomorphological research, and watershed management. Long-term and high spatiotemporal landform data is essential for quantifying and simulating the geomorphological processes and developing warning systems. Recently, the advances in unmanned aerial vehicle (UAV) and computational photogrammetry technology have provided an effective way to rebuild and monitor the topography changes in high spatio-temporal resolutions. This study rebuilds the 30-years landform change in the Aiyuzi watershed in 1986-2017 by using the aerial photogrammetry-based techniques. The Aiyuzi watershed, located in central Taiwan and has an area of 3.99 Km², is famous for its frequent landslide and debris flow disasters. This study took the aerial photos by using UAV and collected multi-temporal historical, stereo photographs, taken by the Aerial Survey Office of Taiwan’s Forestry Bureau. To rebuild the orthoimages and digital surface models (DSMs), Pix4DMapper, a photogrammetry software, was used. Furthermore, to control model accuracy, a set of ground control points was surveyed by using eGPS. The results show that the generated DSMs have the ground sampling distance (GSD) of ~10 cm and ~0.3 cm from the UAV’s and historical photographs, respectively, and vertical error of ~1 m. By comparing the DSMs, there are many deep-seated landslides (with depth over 20 m) occurred on the upstream in the Aiyuzi watershed. Even though a large amount of sediment is delivered from the landslides, the steep main channel has sufficient capacity to transport sediment from the channel and to erode the river bed to ~20 m in depth. Most sediments are transported to the outlet of watershed and deposits on the downstream channel. This case study shows that UAV and photogrammetry technology are useful for topography change monitoring effectively.Keywords: aerial photogrammetry, landslide, landform change, Taiwan
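A minimal sketch of the change-detection step: difference two co-registered DSMs and summarise erosion and deposition volumes. The arrays, cell size and values are illustrative stand-ins, not the Aiyuzi surveys.

```python
# DSM differencing and erosion/deposition volume summary (illustrative arrays).
import numpy as np

dsm_old = np.full((100, 100), 500.0)            # elevation (m), earlier survey
dsm_new = dsm_old.copy()
dsm_new[40:60, 40:60] -= 20.0                   # toy channel incision of ~20 m

cell_area = 10.0 * 10.0                         # assumed 10 m ground sampling (m^2)
dz = dsm_new - dsm_old
print("max incision (m):", -dz.min())
print("eroded volume (m^3):", -dz[dz < 0].sum() * cell_area)
print("deposited volume (m^3):", dz[dz > 0].sum() * cell_area)
```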
Procedia PDF Downloads 155
607 Analysis of Speaking Skills in Turkish Language Acquisition as a Foreign Language
Authors: Lokman Gozcu, Sule Deniz Gozcu
Abstract:
This study aims to analyze speaking skills in the acquisition of Turkish as a foreign language. One of the most important goals for an individual learning a foreign language is to succeed in oral communication (speaking) skills and to interact in an understandable way. Speaking skill requires much more time and effort than other language skills. Accordingly, it is necessary to analyze these oral communication skills, which are important in the acquisition of Turkish as a foreign language, and to draw up a road map based on the results. The aim of this study is to determine the speaking competence and attitudes of individuals who learn Turkish as a foreign language with respect to the elements of speaking skill, including grammar, emphasis, intonation, body language, speed, ranking, accuracy, fluency, and pronunciation, and to present results and suggestions based on these determinations. A mixed method was chosen for data collection and analysis. A Likert scale (for competence and attitude) was administered to 190 individuals, and face-to-face semi-structured interviews (for speech skills) were conducted with 22 randomly selected participants. In addition, an observation form for the 22 interviewed participants was completed by the researcher during the interviews, and after all the voice recordings had been collected, the recordings were analyzed with the speaking skills evaluation scale. The results of the research revealed that the speaking skills of individuals who learn Turkish as a foreign language vary across several dimensions. According to the results, the most inadequate aspects of the participants' ability to speak Turkish include vocabulary, using humorous elements while speaking Turkish, incorporating items such as idioms and proverbs, and fluency in Turkish, respectively. In addition, the participants were found not to feel comfortable while speaking Turkish, to feel ridiculous, and to be nervous when speaking in formal settings. Conclusions and suggestions are provided for the situations that emerged from these analyses. Keywords: learning Turkish as a foreign language, proficiency criteria, phonetic (modalities), speaking skills
Procedia PDF Downloads 240
606 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks
Authors: Ahmed Abdullah Ahmed
Abstract:
The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although a great effort has been made by previous studies to come out with various methods, their performances, especially in terms of accuracy, are fallen short, and room for improvements is still wide open. The proposed technique employs optimal codebook based writer characterization where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most of the classical codebook based approaches which segment the writing into graphemes, this study is based on fragmenting a particular area of writing which are beginning and ending strokes. The proposed method starting with contour detection to extract significant information from the handwriting and the curve fragmentation is then employed to categorize the handwriting into Beginning and Ending zones into small fragments. The similar fragments of beginning strokes are grouped together to create Beginning cluster, and similarly, the ending strokes are grouped to create the ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every similar fragments group. Writings under study are then represented by computing the probability of occurrence of codebook patterns. The probability distribution is used to characterize each writer. Two writings are then compared by computing distances between their respective probability distribution. The evaluations carried out on ICFHR standard dataset of 206 writers using Beginning and Ending codebooks separately. Finally, the Ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on ICFHR dataset.Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments
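A sketch of the codebook idea: cluster fragment descriptors to form a codebook, represent each writing sample by the probability of occurrence of codebook patterns, and compare writers with a chi-square distance. Random vectors stand in for the beginning/ending fragment descriptors, and the codebook size is an assumption.

```python
# Codebook from clustered fragment descriptors, per-sample occurrence histogram,
# and a chi-square distance between two writer signatures.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
fragments = rng.random((1000, 16))                 # descriptors of training fragments
codebook = KMeans(n_clusters=50, n_init=10, random_state=4).fit(fragments)

def writer_signature(sample_fragments):
    counts = np.bincount(codebook.predict(sample_fragments), minlength=50)
    return counts / counts.sum()                   # probability of each codebook pattern

def chi_square(p, q, eps=1e-12):
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

sig_a = writer_signature(rng.random((120, 16)))
sig_b = writer_signature(rng.random((150, 16)))
print("chi-square distance:", round(chi_square(sig_a, sig_b), 4))
```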
Procedia PDF Downloads 510
605 A Bayesian Approach for Analyzing Academic Article Structure
Authors: Jia-Lien Hsu, Chiung-Wen Chang
Abstract:
Research articles may follow a simple and succinct structure of organizational patterns, called move. For example, considering extended abstracts, we observe that an extended abstract usually consists of five moves, including Background, Aim, Method, Results, and Conclusion. As another example, when publishing articles in PubMed, authors are encouraged to provide a structured abstract, which is an abstract with distinct and labeled sections (e.g., Introduction, Methods, Results, Discussions) for rapid comprehension. This paper introduces a method for computational analysis of move structures (i.e., Background-Purpose-Method-Result-Conclusion) in abstracts and introductions of research documents, instead of manually time-consuming and labor-intensive analysis process. In our approach, sentences in a given abstract and introduction are automatically analyzed and labeled with a specific move (i.e., B-P-M-R-C in this paper) to reveal various rhetorical status. As a result, it is expected that the automatic analytical tool for move structures will facilitate non-native speakers or novice writers to be aware of appropriate move structures and internalize relevant knowledge to improve their writing. In this paper, we propose a Bayesian approach to determine move tags for research articles. The approach consists of two phases, training phase and testing phase. In the training phase, we build a Bayesian model based on a couple of given initial patterns and the corpus, a subset of CiteSeerX. In the beginning, the priori probability of Bayesian model solely relies on initial patterns. Subsequently, with respect to the corpus, we process each document one by one: extract features, determine tags, and update the Bayesian model iteratively. In the testing phase, we compare our results with tags which are manually assigned by the experts. In our experiments, the promising accuracy of the proposed approach reaches 56%.Keywords: academic English writing, assisted writing, move tag analysis, Bayesian approach
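A minimal sketch of a Bayesian move tagger in the same spirit: a bag-of-words Naive Bayes model assigning each sentence one of the B-P-M-R-C moves. The tiny training set below is illustrative, not the CiteSeerX corpus used in the paper.

```python
# Bag-of-words Naive Bayes move tagger (B-P-M-R-C) on a toy training set.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "Little is known about this phenomenon",          # Background
    "This study aims to investigate the effect",      # Purpose
    "Data were collected using a questionnaire",      # Method
    "The proposed approach achieved 56% accuracy",    # Result
    "These findings suggest practical implications",  # Conclusion
]
moves = ["B", "P", "M", "R", "C"]

tagger = make_pipeline(CountVectorizer(), MultinomialNB())
tagger.fit(sentences, moves)
print(tagger.predict(["We aim to develop an automatic analytical tool"]))
```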
Procedia PDF Downloads 330
604 Identifying a Drug Addict Person Using Artificial Neural Networks
Authors: Mustafa Al Sukar, Azzam Sleit, Abdullatif Abu-Dalhoum, Bassam Al-Kasasbeh
Abstract:
Use and abuse of drugs by teens is very common and can have dangerous consequences. Drugs contribute to physical and sexual aggression such as assault or rape. Some teenagers regularly use drugs to compensate for depression, anxiety, or a lack of positive social skills. Teenagers' resort to smoking should not be minimized, because tobacco can act as a "gateway drug" to other drugs (marijuana, cocaine, hallucinogens, inhalants, and heroin). The combination of teenagers' curiosity, risk-taking behavior, and social pressure makes it very difficult to say no. This leads most teenagers to the question: "Will it hurt to try once?" Nowadays, technological advances are changing our lives very rapidly and providing many technologies that can help track the risk of drug abuse, such as smartphones, Wireless Sensor Networks (WSNs), the Internet of Things (IoT), etc. These technologies may enable early discovery of drug abuse and thus prevent an aggravation of the influence of drugs on the abuser. In this paper, we have developed a Decision Support System (DSS) for detecting drug abuse using an Artificial Neural Network (ANN); we used a Multilayer Perceptron (MLP) feed-forward neural network in developing the system. The input layer includes 50 variables, while the output layer contains one neuron that indicates whether the person is a drug addict. An iterative process is used to determine the number of hidden layers and the number of neurons in each one. We used multiple experimental models completed with a log-sigmoid transfer function. In particular, 10-fold cross-validation schemes are used to assess the generalization of the proposed system. The experimental results obtained 98.42% classification accuracy for correct diagnosis in our system. The data were taken from 184 cases in Jordan according to a set of questions compiled from specialists, and the data were obtained through the families of drug abusers. Keywords: drug addiction, artificial neural networks, multilayer perceptron (MLP), decision support system
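A short sketch of the described classifier: a feed-forward MLP with a log-sigmoid (logistic) activation assessed by 10-fold cross-validation. The synthetic records and the hidden-layer sizes are assumptions; the study's data come from 184 questionnaire cases.

```python
# MLP with logistic (log-sigmoid) activation, evaluated with 10-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=184, n_features=50, n_informative=12, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(20, 10), activation="logistic",
                    max_iter=2000, random_state=0)
scores = cross_val_score(mlp, X, y, cv=10)
print("10-fold accuracy: %.2f%%" % (100 * scores.mean()))
```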
Procedia PDF Downloads 299
603 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data
Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali
Abstract:
The research paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve spatial and temporal resolution in ET predictions. In the model development, key input features are measured and computed using mathematical equations such as Penman-Monteith (FAO56) and soil water balance (SWB), which include soil-environmental parameters such as; solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into combinations of three proportions i.e. train, test, and validation sets. While kernel functions with tuning hyperparameters have been used to train and improve the accuracy of the prediction model with multiple iterations. This paper also outlines the existing methods and the machine learning techniques to determine Evapotranspiration, data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictability of the developed model on the basis of performance evaluation metrics (R2, RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provide insights into its potential applications for water resource management and hydrological ecosystem.Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors
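A sketch of the SVM regression step with kernel and hyperparameter tuning by grid search and the reported metrics (R², RMSE, MAE); the Penman-Monteith/soil-water-balance features are assumed to be precomputed, and the data below are synthetic stand-ins.

```python
# SVR with grid-searched kernel/C, scored with R2, RMSE and MAE on a held-out split.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(5)
X = rng.random((365, 9))                                     # Rs, T, P, RH, u2, R, DP, ST, dSM
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0, 0.1, 365)  # toy daily ET (mm/day)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=5)
grid = GridSearchCV(make_pipeline(StandardScaler(), SVR()),
                    {"svr__kernel": ["rbf", "linear"], "svr__C": [1, 10, 100]}, cv=5)
grid.fit(X_tr, y_tr)
pred = grid.predict(X_te)
print("R2:  ", round(r2_score(y_te, pred), 3))
print("RMSE:", round(np.sqrt(mean_squared_error(y_te, pred)), 3))
print("MAE: ", round(mean_absolute_error(y_te, pred), 3))
```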
Procedia PDF Downloads 68
602 Multi-Stage Optimization of Local Environmental Quality by Comprehensive Computer Simulated Person as Sensor for Air Conditioning Control
Authors: Sung-Jun Yoo, Kazuhide Ito
Abstract:
In this study, a comprehensive computer simulated person (CSP) that integrates computational human model (virtual manikin) and respiratory tract model (virtual airway), was applied for estimation of indoor environmental quality. Moreover, an inclusive prediction method was established by integrating computational fluid dynamics (CFD) analysis with advanced CSP which is combined with physiologically-based pharmacokinetic (PBPK) model, unsteady thermoregulation model for analysis targeting micro-climate around human body and respiratory area with high accuracy. This comprehensive method can estimate not only the contaminant inhalation but also constant interaction in the contaminant transfer between indoor spaces, i.e., a target area for indoor air quality (IAQ) assessment, and respiratory zone for health risk assessment. This study focused on the usage of the CSP as an air/thermal quality sensor in indoors, which means the application of comprehensive model for assessment of IAQ and thermal environmental quality. Demonstrative analysis was performed in order to examine the applicability of the comprehensive model to the heating, ventilation, air conditioning (HVAC) control scheme. CSP was located at the center of the simple model room which has dimension of 3m×3m×3m. Formaldehyde which is generated from floor material was assumed as a target contaminant, and flow field, sensible/latent heat and contaminant transfer analysis in indoor space were conducted by using CFD simulation coupled with CSP. In this analysis, thermal comfort was evaluated by thermoregulatory analysis, and respiratory exposure risks represented by adsorption flux/concentration at airway wall surface were estimated by PBPK-CFD hybrid analysis. These Analysis results concerning IAQ and thermal comfort will be fed back to the HVAC control and could be used to find a suitable ventilation rate and energy requirement for air conditioning system.Keywords: CFD simulation, computer simulated person, HVAC control, indoor environmental quality
Procedia PDF Downloads 359
601 Fuzzy Logic-Based Approach to Predict Fault in Transformer Oil Based on Health Index Using Dissolved Gas Analysis
Authors: Kharisma Utomo Mulyodinoto, Suwarno, Ahmed Abu-Siada
Abstract:
Transformer insulating oil is a key component that can be utilized to detect incipient faults within operating transformers without taking them out of service. Dissolved gas-in-oil analysis has been widely accepted as a powerful technique to detect such incipient faults. While the measurement of dissolved gases within transformer oil samples has been standardized over the past two decades, analysis of the results is not always straightforward as it depends on personnel expertise more than mathematical formulas. In analyzing such data, the generation rate of each dissolved gas is of more concern than the absolute value of the gas. As such, history of dissolved gases within a particular transformer should be archived for future comparison. Lack of such history may lead to misinterpretation of the obtained results. IEEE C57.104-2008 standards have classified the health condition of the transformer based on the absolute value of individual dissolved gases along with the total dissolved combustible gas (TDCG) within transformer oil into 4 conditions. While the technique is easy to implement, it is considered as a very conservative technique and is not widely accepted as a reliable interpretation tool. Moreover, measured gases for the same oil sample can be within various conditions limits and hence, misinterpretation of the data is expected. To overcome this limitation, this paper introduces a fuzzy logic approach to predict the health condition of the transformer oil based on IEEE C57.104-2008 standards along with Roger ratio and IEC ratio-based methods. DGA results of 31 chosen oil samples from 469 transformer oil samples of normal transformers and pre-known fault-type transformers that were collected from Indonesia Electrical Utility Company, PT. PLN (Persero), from different voltage rating: 500/150 kV, 150/20 kV, and 70/20 kV; different capacity: 500 MVA, 60 MVA, 50 MVA, 30 MVA, 20 MVA, 15 MVA, and 10 MVA; and different lifespan, are used to test and establish the fuzzy logic model. Results show that the proposed approach is of good accuracy and can be considered as a platform toward the standardization of the dissolved gas interpretation process.Keywords: dissolved gas analysis, fuzzy logic, health index, IEEE C57.104-2008, IEC ratio method, Roger ratio method
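The crisp sketch below only illustrates the classic gas ratios that the Roger/IEC methods and the proposed fuzzy system build on; the thresholds are illustrative, not the standards' tables, and a real implementation would replace these crisp rules with fuzzy membership functions.

```python
# Simplified, illustrative DGA ratio screening (NOT the IEC/Rogers tables):
# compute the three classic gas ratios and map them to a coarse condition label.
def dga_screen(h2, ch4, c2h2, c2h4, c2h6):
    r1 = c2h2 / c2h4 if c2h4 else 0.0     # C2H2/C2H4
    r2 = ch4 / h2 if h2 else 0.0          # CH4/H2
    r3 = c2h4 / c2h6 if c2h6 else 0.0     # C2H4/C2H6
    if r1 > 1.0:
        label = "possible arcing / high-energy discharge"
    elif r2 < 0.1:
        label = "possible partial discharge"
    elif r3 > 3.0:
        label = "possible high-temperature thermal fault"
    else:
        label = "no single dominant fault indication"
    return (r1, r2, r3), label

ratios, label = dga_screen(h2=120, ch4=80, c2h2=2, c2h4=60, c2h6=40)
print("ratios:", [round(r, 2) for r in ratios], "->", label)
```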
Procedia PDF Downloads 155
600 Applying Semi-Automatic Digital Aerial Survey Technology and Canopy Characters Classification for Surface Vegetation Interpretation of Archaeological Sites
Authors: Yung-Chung Chuang
Abstract:
The cultural layers of archaeological sites are mainly affected by surface land use, land cover, and root system of surface vegetation. For this reason, continuous monitoring of land use and land cover change is important for archaeological sites protection and management. However, in actual operation, on-site investigation and orthogonal photograph interpretation require a lot of time and manpower. For this reason, it is necessary to perform a good alternative for surface vegetation survey in an automated or semi-automated manner. In this study, we applied semi-automatic digital aerial survey technology and canopy characters classification with very high-resolution aerial photographs for surface vegetation interpretation of archaeological sites. The main idea is based on different landscape or forest type can easily be distinguished with canopy characters (e.g., specific texture distribution, shadow effects and gap characters) extracted by semi-automatic image classification. A novel methodology to classify the shape of canopy characters using landscape indices and multivariate statistics was also proposed. Non-hierarchical cluster analysis was used to assess the optimal number of canopy character clusters and canonical discriminant analysis was used to generate the discriminant functions for canopy character classification (seven categories). Therefore, people could easily predict the forest type and vegetation land cover by corresponding to the specific canopy character category. The results showed that the semi-automatic classification could effectively extract the canopy characters of forest and vegetation land cover. As for forest type and vegetation type prediction, the average prediction accuracy reached 80.3%~91.7% with different sizes of test frame. It represented this technology is useful for archaeological site survey, and can improve the classification efficiency and data update rate.Keywords: digital aerial survey, canopy characters classification, archaeological sites, multivariate statistics
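A sketch of the two statistical steps named above — non-hierarchical (k-means) clustering of canopy-character descriptors followed by canonical discriminant analysis (here LDA) — with random descriptors standing in for the landscape indices and seven clusters as in the study.

```python
# K-means clustering of canopy-character descriptors, then LDA discriminant
# functions to assign new samples to one of the seven categories.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
X = rng.random((500, 8))                          # landscape-index descriptors per sample

clusters = KMeans(n_clusters=7, n_init=10, random_state=6).fit_predict(X)

lda = LinearDiscriminantAnalysis()
lda.fit(X, clusters)                              # discriminant functions for 7 categories
print("training prediction accuracy:", round(lda.score(X, clusters), 3))
print("new sample -> category", int(lda.predict(rng.random((1, 8)))[0]))
```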
Procedia PDF Downloads 140
599 Teaching Audiovisual Translation (AVT): Linguistic and Technical Aspects of Different Modes of AVT
Authors: Juan-Pedro Rica-Peromingo
Abstract:
Teachers constantly need to innovate and redefine materials for their lectures, especially in areas such as Language for Specific Purposes (LSP) and Translation Studies (TS). It is therefore essential for lecturers to be technically skilled in order to handle the never-ending evolution in software and technology, which are necessary elements especially in certain courses at university level. This need becomes even more evident in Audiovisual Translation (AVT) modules and courses. AVT has undergone considerable growth in the area of teaching and learning of languages for academic purposes. We have witnessed the development of a considerable number of master's and postgraduate courses where AVT becomes a tool for L2 learning. The teaching and learning of different AVT modes are components of undergraduate and postgraduate courses. Universities in which AVT is offered as part of their teaching programme or training make use of professional or free software programs. This paper presents an approach to AVT within a specific university context, in which technology is used by means of professional and nonprofessional software. Students take an AVT subject as part of their English Linguistics Master's Degree at the Complutense University (UCM), in which they use professional (Spot) and nonprofessional (Subtitle Workshop, Aegisub, Windows Movie Maker) software packages. The students are encouraged to develop their tasks and projects simulating authentic professional experiences and contexts in the different AVT modes: subtitling for the hearing and the deaf and hard of hearing population, audio description, and dubbing. Selected scenes from TV series such as X-Files, Gossip girl, and IT Crowd; extracts from the movies Finding Nemo, Good Will Hunting, School of Rock, Harry Potter, and Up; and short movies (Vincent) were used. Hence, the complexity of the audiovisual materials used in class, as well as the activities for the projects, was graded. The assessment of the diverse tasks carried out by all the students is expected to provide some insights into the best way to improve their linguistic accuracy and oral and written production with the use of different AVT modes in a very specific ESP university context. Keywords: ESP, audiovisual translation, technology, university teaching, teaching
Procedia PDF Downloads 517
598 Lexical-Semantic Deficits in Sinhala Speaking Persons with Post Stroke Aphasia: Evidence from Single Word Auditory Comprehension Task
Authors: D. W. M. S. Samarathunga, Isuru Dharmarathne
Abstract:
In aphasia, various levels of symbolic language processing (semantics) are affected. It is shown that Persons with Aphasia (PWA) often experience more problems comprehending some categories of words than others. The study aimed to determine lexical semantic deficits seen in Auditory Comprehension (AC) and to describe lexical-semantic deficits across six selected word categories. Thirteen (n =13) persons diagnosed with post-stroke aphasia (PSA) were recruited to perform an AC task. Foods, objects, clothes, vehicles, body parts and animals were selected as the six categories. As the test stimuli, black and white line drawings were adapted from a picture set developed for semantic studies by Snodgrass and Vanderwart. A pilot study was conducted with five (n=5) healthy nonbrain damaged Sinhala speaking adults to decide familiarity and applicability of the test material. In the main study, participants were scored based on the accuracy and number of errors shown. The results indicate similar trends of lexical semantic deficits identified in the literature confirming ‘animals’ to be the easiest category to comprehend. Mann-Whitney U test was performed to determine the association between the selected variables and the participants’ performance on AC task. No statistical significance was found between the errors and the type of aphasia reflecting similar patterns described in aphasia literature in other languages. The current study indicates the presence of selectivity of lexical semantic deficits in AC and a hierarchy was developed based on the complexity of the categories to comprehend by Sinhala speaking PWA, which might be clinically beneficial when improving language skills of Sinhala speaking persons with post-stroke aphasia. However, further studies on aphasia should be conducted with larger samples for a longer period to study deficits in Sinhala and other Sri Lankan languages (Tamil and Malay).Keywords: aphasia, auditory comprehension, selective lexical-semantic deficits, semantic categories
Procedia PDF Downloads 252
597 Pupil Size: A Measure of Identification Memory in Target Present Lineups
Authors: Camilla Elphick, Graham Hole, Samuel Hutton, Graham Pike
Abstract:
Pupil size has been found to change irrespective of luminosity, suggesting that it can be used to make inferences about cognitive processes, such as cognitive load. To see whether identifying a target requires a different cognitive load to rejecting distractors, the effect of viewing a target (compared with viewing distractors) on pupil size was investigated using a sequential video lineup procedure with two lineup sessions. Forty one participants were chosen randomly via the university. Pupil sizes were recorded when viewing pre target distractors and post target distractors and compared to pupil size when viewing the target. Overall, pupil size was significantly larger when viewing the target compared with viewing distractors. In the first session, pupil size changes were significantly different between participants who identified the target (Hits) and those who did not. Specifically, the pupil size of Hits reduced significantly after viewing the target (by 26%), suggesting that cognitive load reduced following identification. The pupil sizes of Misses (who made no identification) and False Alarms (who misidentified a distractor) did not reduce, suggesting that the cognitive load remained high in participants who failed to make the correct identification. In the second session, pupil sizes were smaller overall, suggesting that cognitive load was smaller in this session, and there was no significant difference between Hits, Misses and False Alarms. Furthermore, while the frequency of Hits increased, so did False Alarms. These two findings suggest that the benefits of including a second session remain uncertain, as the second session neither provided greater accuracy nor a reliable way to measure it. It is concluded that pupil size is a measure of face recognition strength in the first session of a target present lineup procedure. However, it is still not known whether cognitive load is an adequate explanation for this, or whether cognitive engagement might describe the effect more appropriately. If cognitive load and cognitive engagement can be teased apart with further investigation, this would have positive implications for understanding eyewitness identification. Nevertheless, this research has the potential to provide a tool for improving the reliability of lineup procedures.Keywords: cognitive load, eyewitness identification, face recognition, pupillometry
Procedia PDF Downloads 403596 Good Practices for Model Structure Development and Managing Structural Uncertainty in Decision Making
Authors: Hossein Afzali
Abstract:
Increasingly, decision-analytic models are used to inform decisions about whether or not to publicly fund new health technologies. It is well noted that the accuracy of model predictions is strongly influenced by the appropriateness of model structuring. However, there is relatively little methodological guidance on this issue in the guidelines developed by national funding bodies such as the Australian Pharmaceutical Benefits Advisory Committee (PBAC) and the National Institute for Health and Care Excellence (NICE) in the UK. This presentation aims to discuss issues around model structuring within decision making, with a focus on (1) the need for a transparent and evidence-based model structuring process to inform the most appropriate set of structural aspects for the base case analysis, and (2) the need to characterise structural uncertainty appropriately wherever alternative plausible structural assumptions (or judgements) exist. The presentation will provide an opportunity to share ideas and experiences on how the guidelines developed by national funding bodies address these issues and to identify areas for further improvement. First, a review and analysis of the literature and the guidelines developed by PBAC and NICE will be provided. It will then be discussed how issues around model structuring (including structural uncertainty) are often not handled and justified in a systematic way within the decision-making process, the potential impact of this on the quality of public funding decisions, and how model structuring should be presented in submissions to national funding bodies. This presentation represents a contribution to good modelling practice within the decision-making process. Although the presentation focuses on the PBAC and NICE guidelines, the discussion can be applied more widely to many other national funding bodies that use economic evaluation to inform funding decisions but do not transparently address model structuring issues, e.g. the Medical Services Advisory Committee (MSAC) in Australia or the Canadian Agency for Drugs and Technologies in Health.Keywords: decision-making process, economic evaluation, good modelling practice, structural uncertainty
Procedia PDF Downloads 181595 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution
Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone
Abstract:
The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically by minimizing a reconstruction error such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation; we considered various image sizes, constructing separate models for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI architectures in mind. To mitigate this, we proposed a method that replaces image-specific dimensions with a structure independent of both image dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks crafted against models such as ResNet50 and ViT_L_16 from the torchvision library. The features extracted by the autoencoder were used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder
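To make the detection principle concrete, a minimal sketch of a reconstruction-error detector is shown below; the convolutional architecture and threshold are illustrative assumptions and do not reproduce the authors' multi-modal spectral autoencoder.

# Minimal sketch of autoencoder-based adversarial detection via reconstruction MSE.
# Architecture and threshold are illustrative, not the paper's actual model.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 256 -> 128
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 128 -> 64
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 64 -> 128
            nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid(),  # 128 -> 256
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def is_adversarial(model, image, threshold=0.014):
    # Flag the image if its reconstruction error exceeds a threshold
    # calibrated on benign data (the value here is only a placeholder).
    model.eval()
    with torch.no_grad():
        mse = nn.functional.mse_loss(model(image), image).item()
    return mse > threshold, mse

detector = ConvAutoencoder()  # would be trained on benign images only
flag, err = is_adversarial(detector, torch.rand(1, 3, 256, 256))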
Procedia PDF Downloads 111594 Application of Hydrological Engineering Centre – River Analysis System (HEC-RAS) to Estuarine Hydraulics
Authors: Julia Zimmerman, Gaurav Savant
Abstract:
This study aims to evaluate the efficacy of applying the U.S. Army Corps of Engineers’ River Analysis System (HEC-RAS) to modeling the hydraulics of estuaries. HEC-RAS has been broadly used for a variety of riverine applications; however, it has not been widely applied to the study of circulation in estuaries. This report details the development and validation of a combined 1D/2D unsteady-flow hydraulic model in HEC-RAS for estuaries and their associated tidally influenced rivers. Two estuaries, Galveston Bay and Delaware Bay, were used as case studies. Galveston Bay, a bar-built, vertically mixed estuary, was modeled for the 2005 calendar year. Delaware Bay, a drowned river valley estuary, was modeled from October 22, 2019, to November 5, 2019. Water surface elevation was used to validate both models by comparing simulation results to NOAA’s Center for Operational Oceanographic Products and Services (CO-OPS) gauge data. Simulations were run using the Diffusion Wave Equations (DW), the Shallow Water Equations with the Eulerian-Lagrangian Method (SWE-ELM), and the Shallow Water Equations with the Eulerian Method (SWE-EM), and compared for both accuracy and the computational resources required. In general, the Diffusion Wave Equations results were comparable to those of the two Shallow Water Equations sets while requiring less computational power. The combined 1D/2D approach was valid for study areas within the 2D flow area, with the 1D flow serving mainly as an inflow boundary condition. Within the Delaware Bay estuary, the HEC-RAS DW model ran in 22 minutes and had an average R² value of 0.94 within the 2D mesh. The Galveston Bay HEC-RAS DW model ran in 6 hours and 47 minutes and had an average R² value of 0.83 within the 2D mesh. The longer run time and lower R² for Galveston Bay can be attributed to the longer time frame modeled and the greater complexity of the estuarine system. The models did not accurately capture tidal effects within the 1D flow area.Keywords: Delaware Bay, estuarine hydraulics, Galveston Bay, HEC-RAS, one-dimensional modeling, two-dimensional modeling
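The validation statistic quoted above can be computed as sketched below; the water level series are placeholders rather than the study's CO-OPS gauge records.

# Coefficient of determination (R^2) between simulated and observed water surface
# elevations at a gauge; the series below are placeholders, not CO-OPS records.
import numpy as np

def r_squared(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    ss_res = np.sum((observed - simulated) ** 2)          # residual sum of squares
    ss_tot = np.sum((observed - observed.mean()) ** 2)    # total sum of squares
    return 1.0 - ss_res / ss_tot

obs = [0.42, 0.55, 0.61, 0.58, 0.40, 0.21, 0.05, 0.12]  # hypothetical hourly elevations (m)
sim = [0.45, 0.52, 0.63, 0.55, 0.38, 0.24, 0.08, 0.10]
print(round(r_squared(obs, sim), 3))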
Procedia PDF Downloads 198593 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company
Authors: Rahma Saleh Hussein Al Balushi
Abstract:
Oman Electricity Transmission Company’s (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indicators of achieving this vision is obtaining Asset Management ISO55001 certification, which requires setting out documented Standard Operating Procedures (SOPs). Hence, a documented SOP for the Geographical Information System (GIS) data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed by the Asset Information & GIS department. This paper describes in detail the current GIS data submission process and the journey of developing it. The methodology used to develop the process is based on three main pillars: system and end-user requirements, risk evaluation, and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, owing to this process, GIS has been, and is ready to be, integrated with other systems and serves as the source of data for all OETC users. Some decisions, such as issuing No Objection Certificates (NOCs) for excavation permits and scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS), are consequently made based on GIS data availability. On the other hand, defining agreed and documented procedures for data collection, data system updates, data release/reporting, and data alterations has also contributed to reducing missing attributes and enhancing the data quality index of GIS transmission data. A considerable difference in Geodatabase (GDB) completeness percentage was observed between 2017 and 2022. Overall, it is concluded that, through governance, the Asset Information & GIS department can control the GIS data process and can collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with or related to GIS systems.Keywords: asset management ISO55001, standard procedures process, governance, CMMS
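One way to express a geodatabase completeness percentage of the kind reported above is sketched below; the required fields and sample records are hypothetical and do not reflect OETC's actual schema or quality index.

# Share of required attribute fields that are populated across asset records.
# Field names and records are hypothetical illustrations only.
REQUIRED_FIELDS = ["asset_id", "voltage_kv", "commission_date", "geometry"]

def completeness_pct(records):
    filled = sum(
        1 for rec in records for field in REQUIRED_FIELDS
        if rec.get(field) not in (None, "", "N/A")
    )
    total = len(records) * len(REQUIRED_FIELDS)
    return 100.0 * filled / total if total else 0.0

sample = [
    {"asset_id": "TL-001", "voltage_kv": 132, "commission_date": "2017-03-01", "geometry": "LINESTRING(...)"},
    {"asset_id": "TL-002", "voltage_kv": None, "commission_date": "", "geometry": "LINESTRING(...)"},
]
print(round(completeness_pct(sample), 1))  # 75.0 for this sample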
Procedia PDF Downloads 123