Search results for: point clouds features

6462 On Performance of Cache Replacement Schemes in NDN-IoT

Authors: Rasool Sadeghi, Sayed Mahdi Faghih Imani, Negar Najafi

Abstract:

The inherent features of Named Data Networking (NDN) provide a robust solution for the Internet of Things (IoT). Therefore, NDN-IoT has emerged as a combined architecture which exploits the benefits of NDN for interconnecting the heterogeneous objects in IoT. In NDN-IoT, caching schemes play a key role in improving the network performance. In this paper, we consider the effectiveness of cache replacement schemes in NDN-IoT scenarios. We investigate the impact of replacement schemes on average delay, average hop count, and average interest retransmission when the replacement scheme is Least Frequently Used (LFU), Least Recently Used (LRU), First-In-First-Out (FIFO), or Random. The simulation results demonstrate that LFU and LRU present a stable performance when the cache size changes. Moreover, the network performance improves when the number of consumers increases.
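
For reference, the following is a minimal Python sketch of how FIFO, LRU, and LFU eviction differ for a fixed-size content store. It is an illustration only: the class and method names are hypothetical, and none of the NDN-specific machinery (ndnSIM scenarios, Interest/Data exchange) from the paper is reproduced.

```python
from collections import OrderedDict, Counter

class ContentStore:
    """Minimal fixed-size cache with a pluggable replacement policy (illustrative only)."""

    def __init__(self, capacity, policy="LRU"):
        self.capacity = capacity
        self.policy = policy          # "FIFO", "LRU", or "LFU"
        self.store = OrderedDict()    # name -> data, kept in insertion/recency order
        self.freq = Counter()         # access counts, used by LFU

    def get(self, name):
        if name not in self.store:
            return None               # cache miss: the Interest would be forwarded upstream
        self.freq[name] += 1
        if self.policy == "LRU":
            self.store.move_to_end(name)   # refresh recency on a hit
        return self.store[name]

    def put(self, name, data):
        if name in self.store:
            self.store[name] = data
            return
        if len(self.store) >= self.capacity:
            self._evict()
        self.store[name] = data
        self.freq[name] += 1

    def _evict(self):
        if self.policy in ("FIFO", "LRU"):
            victim, _ = self.store.popitem(last=False)   # oldest inserted / least recently used
        else:                                            # LFU: lowest access count
            victim = min(self.store, key=lambda n: self.freq[n])
            del self.store[victim]
        del self.freq[victim]

# Example: a 2-item cache under LFU
cs = ContentStore(capacity=2, policy="LFU")
cs.put("/sensor/temp", 21.5)
cs.put("/sensor/humid", 60)
cs.get("/sensor/temp")          # temp now has the higher access frequency
cs.put("/sensor/co2", 400)      # evicts /sensor/humid (least frequently used)
print(list(cs.store))           # ['/sensor/temp', '/sensor/co2']
```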

Keywords: NDN-IoT, cache replacement, performance, ndnSIM

Procedia PDF Downloads 348
6461 Towards the Design of Gripper Independent of Substrate Surface Structures

Authors: Annika Schmidt, Ausama Hadi Ahmed, Carlo Menon

Abstract:

End effectors for robotic systems are becoming more and more advanced, resulting in a growing variety of gripping tasks. However, most grippers are application-specific. This paper presents a gripper that interacts with an object's surface rather than depending on a defined shape or size. For this purpose, ingressive and astrictive features are combined to achieve the desired gripping capabilities. The developed prototype is tested on a variety of surfaces with different hardness and roughness properties. The results show that the gripping mechanism works on all of the tested surfaces. The influence of the material properties on the supported load is also studied, and the efficiency is discussed.

Keywords: claw, dry adhesion, insects, material properties

Procedia PDF Downloads 345
6460 Clustering the Wheat Seeds Using SOM Artificial Neural Networks

Authors: Salah Ghamari

Abstract:

In this study, the ability of self-organizing map (SOM) artificial neural networks to cluster wheat seed varieties according to their morphological properties was considered. The SOM is a type of unsupervised competitive learning. Experimentally, five morphological features of 300 seeds (covering three varieties: gaskozhen, Md and sardari) were obtained using an image processing technique. The results show that the artificial neural network has a good performance (90.33% accuracy) in classification of the wheat varieties despite the high similarity among them. The highest classification accuracy (100%) was achieved for sardari.
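
To illustrate the clustering step, the following is a compact NumPy sketch of SOM training; the feature values are random placeholders standing in for the five morphological measurements, since the study's seed data are not available here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: 300 seeds x 5 morphological features (the real measurements
# came from image processing of gaskozhen, Md and sardari seeds).
X = rng.normal(size=(300, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize features

# A small rectangular SOM grid
rows, cols, dim = 3, 3, X.shape[1]
weights = rng.normal(size=(rows, cols, dim))
grid = np.array([[i, j] for i in range(rows) for j in range(cols)]).reshape(rows, cols, 2)

def bmu(x):
    """Index of the best-matching unit (closest weight vector) for sample x."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

n_iter, lr0, sigma0 = 2000, 0.5, max(rows, cols) / 2
for t in range(n_iter):
    lr = lr0 * np.exp(-t / n_iter)            # decaying learning rate
    sigma = sigma0 * np.exp(-t / n_iter)      # shrinking neighborhood radius
    x = X[rng.integers(len(X))]
    b = np.array(bmu(x))
    dist2 = ((grid - b) ** 2).sum(axis=2)     # grid distance to the BMU
    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]   # Gaussian neighborhood
    weights += lr * h * (x - weights)         # pull the neighborhood toward the sample

# Each seed is assigned to the node of its best-matching unit (its cluster)
labels = [bmu(x) for x in X]
print(labels[:5])
```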

Keywords: artificial neural networks, clustering, self organizing map, wheat variety

Procedia PDF Downloads 631
6459 Development of a Microfluidic Device for Low-Volume Sample Lysis

Authors: Abbas Ali Husseini, Ali Mohammad Yazdani, Fatemeh Ghadiri, Alper Şişman

Abstract:

We developed a microchip device that uses surface acoustic waves for rapid lysis of low-volume cell samples. The device incorporates sharp-edge glass microparticles for improved performance. We optimized the lysis conditions for high efficiency and evaluated the device's feasibility for point-of-care applications. The microchip contains a 13-finger-pair interdigital transducer with a 30-degree focused angle. It generates high-intensity acoustic beams that converge 6 mm away. The microchip operates at a frequency of 16 MHz, exciting Rayleigh waves with a 250 µm wavelength on the LiNbO3 substrate. Cell lysis occurs when Candida albicans cells and glass particles are placed within the focal area. The high-intensity surface acoustic waves induce centrifugal forces on the cells and glass particles, resulting in cell lysis through lateral forces from the sharp-edge glass particles. We conducted 42 pilot cell lysis experiments to optimize the surface acoustic wave-induced streaming. We varied electrical power, droplet volume, glass particle size, concentration, and lysis time. A regression machine-learning model determined the impact of each parameter on lysis efficiency. Based on these findings, we predicted optimal conditions: electrical power of 2.5 W, sample volume of 20 µl, glass particle size below 10 µm, concentration of 0.2 µg, and a 5-minute lysis period. Downstream analysis successfully amplified a DNA target fragment directly from the lysate. The study presents an efficient microchip-based cell lysis method employing acoustic streaming and microparticle collisions within microdroplets. Integration of a surface acoustic wave-based lysis chip with an isothermal amplification method enables swift point-of-care applications.
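
The regression step can be pictured with a short sketch such as the one below, which fits a generic random-forest regressor to placeholder measurements of the five varied parameters and ranks their impact; the data, model choice, and values are illustrative assumptions, not the study's actual model or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Placeholder design matrix for the 42 pilot runs: power (W), droplet volume (µl),
# glass particle size (µm), particle concentration (µg), lysis time (min).
params = ["power_W", "volume_ul", "particle_um", "conc_ug", "time_min"]
X = np.column_stack([
    rng.uniform(0.5, 3.0, 42),
    rng.uniform(5, 30, 42),
    rng.uniform(2, 40, 42),
    rng.uniform(0.05, 0.5, 42),
    rng.uniform(1, 10, 42),
])
# Placeholder lysis efficiency (the real values came from the experiments)
y = 0.2 * X[:, 0] - 0.01 * X[:, 2] + 0.05 * X[:, 4] + rng.normal(0, 0.05, 42)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Rank the parameters by their estimated impact on lysis efficiency
for name, imp in sorted(zip(params, model.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:12s} importance = {imp:.3f}")

# The fitted model can then be queried for a candidate operating point,
# e.g. 2.5 W, 20 µl, 10 µm, 0.2 µg, 5 min as reported in the abstract.
print(model.predict([[2.5, 20, 10, 0.2, 5]]))
```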

Keywords: cell lysis, surface acoustic wave, micro-glass particle, droplet

Procedia PDF Downloads 65
6458 Study on Clarification of the Core Technology in a Monozukuri Company

Authors: Nishiyama Toshiaki, Tadayuki Kyountani, Nguyen Huu Phuc, Shigeyuki Haruyama, Oke Oktavianty

Abstract:

It is important to clarify a company's core technology in the product development process in order to strengthen its power in providing technology that meets customer requirements. The QFD method is adopted to clarify the core technology by identifying the element technologies that are highly related to the voice of the customer and offer the most delightful features for the customer. AHP is used to determine the importance of the evaluation factors. A case study was conducted using this approach in a Japanese monozukuri company (so-called manufacturing company) to clarify its core technology based on customer requirements.
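
To make the AHP step concrete, the sketch below derives factor weights from a hypothetical 3×3 pairwise comparison matrix using the principal-eigenvector method with a consistency check; the matrix values are invented for illustration and are not taken from the case study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three evaluation factors
# (Saaty scale: A[i, j] = importance of factor i relative to factor j).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# The principal eigenvector of A gives the priority weights
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()
print("weights:", np.round(w, 3))

# Consistency check: CI = (lambda_max - n) / (n - 1), compared against a random index
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = 0.58                      # random index for n = 3 (Saaty's table)
print("consistency ratio:", round(CI / RI, 3))
```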

Keywords: core technology, QFD, voices of customer, analysis procedure

Procedia PDF Downloads 369
6457 Two Taxa of Paradiacheopsis Genera Recordings of the Myxomycetes from Turkey

Authors: Dursun Yağız, Ahmet Afyon

Abstract:

The study materials were collected from Isparta province in 2008 and transferred to the laboratory, where the moist chamber culture technique was applied to them. The materials were examined with a stereo microscope. As a result of the investigations carried out on the sporophore samples which developed in the laboratory, the species Paradiacheopsis erythropodia (Ing) Nann.-Bremek. and Paradiacheopsis longipes Hooff & Nann.-Bremek. were identified. As a result of the literature research, it was determined that these taxa are new records for Turkey, and they have been added to the myxomycota of Turkey. The microscopic features, photographs, localities, and substrate information of these two taxa are given.

Keywords: myxomycete, paradiacheopsis, Turkey, slime mould

Procedia PDF Downloads 271
6456 Morphological and Chemical Characterization of the Surface of Orthopedic Implant Materials

Authors: Bertalan Jillek, Péter Szabó, Judit Kopniczky, István Szabó, Balázs Patczai, Kinga Turzó

Abstract:

Hip and knee prostheses are among the most frequently used medical implants and can significantly improve patients' quality of life. The long-term success and biointegration of these prostheses depend on several factors, such as the bulk and surface characteristics, construction, and biocompatibility of the material. The applied surgical technique and the general health condition and quality of life of the patient are also determinant factors. Medical devices used in orthopedic surgeries have different surfaces depending on their function inside the human body. The surface roughness of these implants determines the interaction with the surrounding tissues. Numerous modifications have been applied in recent decades to improve a specific property of an implant. Our goal was to compare the surface characteristics of typical implant materials used in orthopedic surgery and traumatology. The morphological and chemical structure of discs of the following materials (Sanatmetal Ltd, Hungary) were examined: Vortex plate anodized titanium; cemented THR (total hip replacement) stem high-nitrogen REX steel (SS); uncemented THR stem and cup titanium (Ti) alloy with titanium plasma spray coating (TPS); cemented cup and uncemented acetabular liner HXL and UHMWPE; and TKR (total knee replacement) femoral component CoCrMo alloy. Visualization and elemental analysis were made by scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS). Surface roughness was determined by atomic force microscopy (AFM) and profilometry. SEM and AFM revealed the morphological and roughness features of the examined materials. TPS Ti presented the highest Ra value (25 ± 2 μm), followed by the CoCrMo alloy (535 ± 19 nm), Ti (227 ± 15 nm) and stainless steel (170 ± 11 nm). The roughness of the HXL and UHMWPE surfaces was in the same range, 147 ± 13 nm and 144 ± 15 nm, respectively. EDS confirmed the typical elements on the investigated prosthesis materials: Vortex plate Ti (Ti, O, P); TPS Ti (Ti, O, Al); SS (Fe, Cr, Ni, C); CoCrMo (Co, Cr, Mo); HXL (C, Al, Ni); and UHMWPE (C, Al). The results indicate that the surfaces of prosthesis materials have significantly different features and that the applied investigation methods are suitable for their characterization. Contact angle measurements and in vitro cell culture testing are further planned to test their surface energy characteristics and biocompatibility.
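
For reference, the arithmetic mean roughness Ra reported from AFM and profilometry is the mean absolute deviation of the height profile from its mean line; the short sketch below computes Ra (and the root-mean-square roughness Rq) for a synthetic profile, since the measured line scans are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 1D height profile in nanometres (placeholder for an AFM line scan)
z = 50 * np.sin(np.linspace(0, 8 * np.pi, 1000)) + rng.normal(0, 20, 1000)

# Arithmetic mean roughness: Ra = (1/n) * sum(|z_i - z_mean|)
Ra = np.mean(np.abs(z - z.mean()))

# Root-mean-square roughness is often reported alongside Ra
Rq = np.sqrt(np.mean((z - z.mean()) ** 2))

print(f"Ra = {Ra:.1f} nm, Rq = {Rq:.1f} nm")
```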

Keywords: morphology, PE, roughness, titanium

Procedia PDF Downloads 112
6455 Energy Efficiency Approach to Reduce Costs of Ownership of Air Jet Weaving

Authors: Corrado Grassi, Achim Schröter, Yves Gloy, Thomas Gries

Abstract:

Air jet weaving is the most productive, but also the most energy-consuming, weaving method. Increasing energy costs and environmental impact are a constant challenge for the manufacturers of weaving machines. Current technological developments are concerned with low energy costs, low environmental impact, high productivity, and constant product quality. The high energy consumption of the method can be ascribed to the high demand for compressed air. An energy efficiency method is applied to the air jet weaving technology. This method identifies and classifies the main relevant energy consumers and processes from the exergy point of view, and it leads to the identification of energy efficiency potentials during the weft insertion process. Starting from the design phase, energy efficiency is considered the central requirement to be satisfied. The initial phase of the method consists of an analysis of the state of the art of the main weft insertion components in order to prioritize the most energy-demanding components and processes. The identified major components are investigated to reduce the high energy demand of the weft insertion process. During the interaction of the flow field coming from the relay nozzles within the profiled reed, only a minor part of the stream actually accelerates the weft yarn, resulting in large energy inefficiency. Different tools such as FEM analysis, CFD simulation models, and experimental analysis are used in order to achieve a more energy-efficient design of the components involved in the filling insertion. A different concept for the metal strip of the profiled reed is developed. The developed metal strip allows a reduction of the machine energy consumption. Based on a parametric and aerodynamic study, the designed reed transmits higher values of the flow power to the filling yarn. The innovative reed fulfills both the requirement of raising energy efficiency and compliance with the weaving constraints.

Keywords: air jet weaving, aerodynamic simulation, energy efficiency, experimental validation, weft insertion

Procedia PDF Downloads 183
6454 Sustainable Development Approach for Coastal Erosion Problem in Thailand: Using Bamboo Sticks to Rehabilitate Coastal Erosion

Authors: Sutida Maneeanakekul, Dusit Wechakit, Somsak Piriyayota

Abstract:

Coastal erosion is a major problem in Thailand, on both the Gulf of Thailand and the Andaman Sea coasts. According to the Department of Marine and Coastal Resources, land erosion occurred along the 200 km coastline at an average rate of 5 meters/year. Coastal erosion affects public and government properties, as well as the socio-economy of the country, including emigration from coastal communities, loss of habitats, and decline in fishery production. To combat the problem, projects utilizing bamboo sticks for coastal defense against erosion were carried out by the Department of Marine and Coastal Resources in five areas beginning in November 2010: Pak Klong Munharn, Samut Songkhram Province; Ban Khun Samutmaneerat, Pak Klong Pramong and Chao Matchu Shrine, Samut Sakhon Province; and Pak Klong Hongthong, Chachoengsao Province. In 2012, an evaluation of the effectiveness of solving the coastal erosion problem with bamboo sticks was carried out, with a focus on three aspects. Firstly, the change in physical and biological features after using the bamboo stick technique was assessed. Secondly, the participation of people in the community in managing the coastal erosion problem was evaluated. The last aspect evaluated was the satisfaction of the community with this technique. The results of the evaluation showed that the amount of sediment has changed dramatically behind the bamboo stick lines. The increase in sediment was found to be about 23.50-56.20 centimeters (during 2012-2013). In terms of the biological aspect, there has been an increase in mangrove forest areas, especially at Bang Ya Prak, Samut Sakhon Province. Average tree density was found to be about 4,167 trees per square meter. Additionally, an increase in fishery production was observed. Presently, the evaluated physical features tend to improve in every aspect, as does the satisfaction of people in the community with the process of solving the erosion problem. People in the community are involved in the preparatory, operation, monitoring, and evaluation processes to resolve the problem at a medium level.

Keywords: bamboo sticks, coastal erosion, rehabilitate, Thailand, sustainable development approach

Procedia PDF Downloads 223
6453 Amrita Bose-Einstein Condensate Solution Formed by Gold Nanoparticles Laser Fusion and Atmospheric Water Generation

Authors: Montree Bunruanses, Preecha Yupapin

Abstract:

In this work, the quantum material called Amrita (elixir) is made by processing gold top-down into nanometer particles, fusing 99% gold with a laser and mixing it with drinking water produced by an atmospheric water generation (AWG) system, which makes water from air. The high laser power breaks the four natural force bindings (gravitational, weak, electromagnetic, and strong coupling forces), finally yielding purified Bose-Einstein condensate (BEC) states. With this method, gold atoms in the form of spherical single crystals with a diameter of 30-50 nanometers are obtained and used. They are modulated (activated) with a frequency generator into various matrix structures mixed with AWG water to be used in the upstream conversion (quantum reversible) process, which can be applied to humans both internally and externally, by drinking or by applying it to the treated surfaces. Working on both space (body) and time (mind) goes back to the origin and starts again from the coupling of space-time on both sides of time, with fusion (strong coupling force) and pushing out (Big Bang) at the equilibrium point (singularity), occurring as strings and DNA with neutrinos as the coupling energy. There is no distortion (purification), which is the point where time and space have not yet been determined and there is infinite energy; therefore, the upstream conversion is performed, reforming DNA to purify it. The use of Amrita is a method intended for people who cannot meditate (quantum meditation). It was applied in various cases, and the results show that Amrita can make the body and the mind return to their pure origins and begin the downstream process with the Big Bang movement, quantum communication in all dimensions, DNA reformation, frequency filtering, crystal body forming, broadband quantum communication networks, black hole forming, quantum consciousness, body and mind healing, etc.

Keywords: quantum materials, quantum meditation, quantum reversible, Bose-Einstein condensate

Procedia PDF Downloads 55
6452 Embodied Spirituality in Gestalt Therapy

Authors: Silvia Alaimo

Abstract:

This lecture brings to our attention the theme of spirituality within Gestalt therapy's theoretical and clinical perspectives, which is closely connected to the experiences of fertile emptiness and creative indifference. First of all, a necessary premise is the overcoming of traditional Western culture's philosophical and religious misunderstandings, such as the dichotomy between spirituality and practical/material daily life, as well as the widespread secular perspective of classic psychology. Even fullness and emptiness have traditionally been associated with the concepts of being and not being. "There is only one way through which we can contact the deepest layers of our existence, rejuvenate our thinking and reach intuition (the harmony of thought and being): inner silence" (Perls). Therefore, "fertile void" does not mean empty in itself, but rather a useful condition of every creative and responsible act, making room for a deeper dimension close to spirituality. Spirituality concerns questions about the meaning of existence, which lies beyond the concrete and literal dimension, looking for the essence of things and at the value of personal experience. Looking at the fundamentals of Gestalt epistemology, phenomenology, aesthetics, and the relationship, we can reach the heart of a therapeutic work that takes on spiritual contours and is based on an embodied (incarnate) dimension, through relational aesthetic knowledge (Spagnuolo Lobb), the deep contact with each other, and the role of compassion and responsibility as criteria for the recognition of the patient (Orange, 2013), rooted in the body. The aesthetic dimension, like the spiritual dimension with which it is often associated, is a subtle dimension: it is the dimension of the essence of things, of their "soul." In clinical practice, it implies that the relationship between therapist and patient is "in the absence of judgment," also called the "zero point of creative indifference," expressed by the 'therapeutic mentality'. It consists in following with interest and authentic curiosity where the patient wants to go and supporting him in his intentionality of contact. It is a condition of pure and simple awareness, of the full acceptance of "what is," a moment of detachment from one's own life in which one does not take oneself too seriously, a starting point for finding a center of balance and integration that leads to the creative act, to growth, and, as Perls would say, to the excitement and adventure of living.

Keywords: spirituality, bodily, embodied aesthetics, phenomenology, relationship

Procedia PDF Downloads 129
6451 Comparative Performance Analysis of Nonlinearity Cancellation Techniques for MOS-C Realization in Integrator Circuits

Authors: Hasan Çiçekli, Ahmet Gökçen, Uğur Çam

Abstract:

In this paper, a comparative performance analysis of the four most widely used nonlinearity cancellation techniques for realizing passive resistors with MOS transistors is presented. The comparison is done using an integrator circuit which sequentially employs an Op-amp, an OTRA, and an ICCII as the active element. All of the circuits are implemented with MOS-C realization and simulated with the PSPICE program using 0.35 µm TSMC MOSIS process model parameters. With MOS-C realization, the circuits become electronically tunable and fully integrable, which is very important in IC design. The output waveforms, frequency responses, THD analysis results, and features of the nonlinearity cancellation techniques are also given.

Keywords: integrator circuits, MOS-C realization, nonlinearity cancellation, tuneable resistors

Procedia PDF Downloads 516
6450 Use of Machine Learning Algorithms to Pediatric MR Images for Tumor Classification

Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos

Abstract:

Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancers in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist's visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance. Using only conventional MRI to provide a definite diagnosis could potentially lead to inaccurate results, and so histopathological examination of biopsy samples is currently considered to be the gold standard for obtaining definite diagnoses. Machine learning is defined as the study of computational algorithms that can use mathematical relationships and patterns, complex or not, from empirical and scientific data to make reliable decisions. Given the above, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis of medical images. Machine learning applications in radiology are or could potentially be useful in practice for medical image segmentation and registration, computer-aided detection and diagnosis systems for CT, MR or radiography images, and functional MR (fMRI) images for brain activity analysis and neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool, which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients by determining the glioma type and grade and differentiating between different brain tissue types. Moreover, a future purpose is to present an alternative way of quick and accurate diagnosis in order to save time and resources in the daily medical workflow. Materials and Methods: A cohort of 80 pediatric patients with a diagnosis of posterior fossa tumor was used: 20 ependymomas, 20 astrocytomas, 20 medulloblastomas and 20 healthy children. The MR sequences used, for every single patient, were the following: axial T1-weighted (T1), axial T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), axial diffusion-weighted images (DWI), and axial contrast-enhanced T1-weighted (T1ce). From every sequence, only a principal slice was used, which was manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed following a number of steps including noise reduction, bias-field correction, thresholding, coregistration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of features for investigation were chosen, which included age, tumor shape characteristics, image intensity characteristics, and texture features. After selecting the features that achieve the highest accuracy using the least number of variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support-Vector Machines, C4.5 Decision Tree and Convolutional Neural Network. The machine learning schemes and the image analysis are implemented in the WEKA platform and the MATLAB platform, respectively. Results-Conclusions: The results and the accuracy of image classification for each type of glioma by the four different algorithms are still in process.
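
As an illustrative analogue of the classification stage (the study itself uses WEKA and MATLAB), the sketch below runs k-NN, an RBF SVM, and a decision tree (a stand-in for C4.5) with simple feature selection on placeholder feature vectors of the same shape as described above; the data are synthetic and the pipeline settings are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# Placeholder dataset: 80 patients x 30 features (age, shape, intensity, texture),
# 4 classes: ependymoma, astrocytoma, medulloblastoma, healthy.
X = rng.normal(size=(80, 30))
y = np.repeat([0, 1, 2, 3], 20)

classifiers = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf", C=1.0),
    "DecisionTree": DecisionTreeClassifier(random_state=0),   # stand-in for C4.5
}

for name, clf in classifiers.items():
    pipe = make_pipeline(
        StandardScaler(),                 # normalize features
        SelectKBest(f_classif, k=10),     # keep the most discriminative features
        clf,
    )
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name:12s} 5-fold accuracy = {scores.mean():.2f} +/- {scores.std():.2f}")
```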

Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology

Procedia PDF Downloads 135
6449 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced a continuous improvement of its many features, to the point of becoming the pilot's primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concept of distance and time has been completely revolutionized, providing the crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surfaces rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft. According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was next improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model prediction with the actual measurements. Based on these comparisons, a correction was applied to the actual APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM will be continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the FCOM prediction error regarding the engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
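
A minimal sketch of the adaptive lookup-table idea is shown below: a small table of fuel flow versus flight condition is initialised from handbook-style values and each in-flight sample nudges the nearest cell toward the measurement. The axes, values, and gain are illustrative placeholders, not Cessna Citation X or FCOM data, and a real implementation would interpolate between neighbouring cells rather than updating only the nearest one.

```python
import numpy as np

# Table axes: altitude (ft) and Mach number; cell values: fuel flow (lb/h).
alts = np.array([30000.0, 35000.0, 40000.0])
machs = np.array([0.70, 0.75, 0.80])
fuel_flow = np.array([          # initial values, e.g. derived from handbook data
    [1600.0, 1700.0, 1850.0],
    [1500.0, 1600.0, 1750.0],
    [1400.0, 1500.0, 1650.0],
])

def nearest_cell(alt, mach):
    return np.argmin(np.abs(alts - alt)), np.argmin(np.abs(machs - mach))

def predict(alt, mach):
    i, j = nearest_cell(alt, mach)
    return fuel_flow[i, j]

def update(alt, mach, measured, gain=0.2):
    """Move the nearest cell a fraction of the way toward the sensed value."""
    i, j = nearest_cell(alt, mach)
    error = measured - fuel_flow[i, j]
    fuel_flow[i, j] += gain * error
    return error

# Simulated cruise samples at 35,000 ft / M0.75 where the true aircraft burns 1680 lb/h
for _ in range(20):
    update(35000, 0.75, measured=1680.0)

print(round(predict(35000, 0.75), 1))   # converges toward 1680 as samples accumulate
```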

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 252
6448 The Impact of Urbanisation on Sediment Concentration of Ginzo River in Katsina City, Katsina State, Nigeria

Authors: Ahmed A. Lugard, Mohammed A. Aliyu

Abstract:

This paper studied the influence of urban development and its accompanying land surface transformation on the sediment concentration of the Ginzo River, which flows naturally across the city of Katsina. A twin river known as the Tille River, which is less urbanized, was used as a reference against which to compare the sediment concentration of the Ginzo River, in order to ascertain the effect of the urban area on sediment concentration. An instrument called the USP 61 point-integrating cableway sampler, described by Gregory and Walling (1973), was used to collect the suspended sediment samples in the wet season months of June, July, August and September. The results show that only the sample collected at the peripheral site of the city, which consists mostly of farmland areas, resembles the results from the four sites of the Tille River, the reference stream in the study; the two were found to be only about 10% different from one another. At the other three sites of the Ginzo, which are highly urbanized, the disparity ranges from 35-45% less than what is obtained at the four sites of the Tille River. In the generalized assessment, the t-distribution test applied to the two sets of data shows that there is a significant difference between the sediment concentration of the urbanized Ginzo River and that of the less urbanized Tille River. The study further found that the lower sediment concentration in the urbanized Ginzo River is attributed to the concretization of surfaces, tarred roads, concretized channeling of segments of the river including the river bed, and reserved open grassland areas, all within the catchment. The study therefore concludes that urbanization affects not only the hydrology of an urbanized river basin but also the sediment concentration, which is a significant aspect of its geomorphology. This would certainly affect the flood plain of the basin at a certain point, which might be suitable land for cultivation. It is recommended here that further studies on the impact of urbanization on river basins should focus on all elements of geomorphology as they have on hydrology. This would make the work more complete, as the two disciplines are inseparable from each other. The authorities concerned should also put in place more proper environmental and land use management policies to arrest the menace of land degradation and related episodic events.

Keywords: environment, infiltration, river, urbanization

Procedia PDF Downloads 301
6447 Application of Seismic Refraction Method in Geotechnical Study

Authors: Abdalla Mohamed M. Musbahi

Abstract:

The study area lies in the Al-Falah area on Airport-Tripoli, in Zone 16, where the establishment of a multi-floor residential and commercial complex is planned; this part was divided into seven subzones. In each subzone, orthogonal profiles were collected using the seismic refraction method. The overall aim of this project is to investigate the applicability of the seismic refraction method, a commonly used traditional geophysical technique, to determine depth to bedrock, competence of bedrock, depth to the water table, or depth to other seismic velocity boundaries. The purpose of the work is to make engineers and decision makers recognize the importance of planning and executing a pre-investigation program including geophysics and, in particular, the seismic refraction method. The overall aim of this thesis is achieved by evaluating the seismic refraction method at different scales, determining the depth and velocity of the base layer (bedrock), and calculating the elastic properties of each layer in the region using the seismic refraction method. Orthogonal profiles were carried out in every subzone of Zone 16. In the seismic refraction layout, the geophones are placed along an imaginary straight line with 5 m spacing, and three shot points (at the beginning, middle, and end of the layout) are used in order to generate the P and S waves. The first and last shot points are placed about 5 meters from the geophones, and the middle shot point is placed between the 12th and 13th geophones; from the time-distance curves, the P- and S-wave velocities were calculated and the thicknesses were estimated for up to three layers. Any change in the physical properties of the medium (shear modulus, bulk modulus, density) changes the velocity of the waves passing through it, because a change in the properties of the rocks changes the parameters of the medium: density (ρ), bulk modulus (κ), and shear modulus (μ). Therefore, the velocity of waves traveling in rocks has a close relationship with these parameters, and we can estimate these parameters by knowing the primary and secondary velocities (P-wave, S-wave).
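
The relationships referred to above are the standard ones linking the P- and S-wave velocities to the elastic moduli; the short sketch below applies them to example values only, not to measurements from the Al-Falah site.

```python
# Elastic moduli from seismic velocities (standard relations):
#   mu  = rho * Vs**2                                (shear modulus)
#   K   = rho * Vp**2 - (4/3) * mu                   (bulk modulus, kappa)
#   nu  = (Vp**2 - 2*Vs**2) / (2*(Vp**2 - Vs**2))    (Poisson's ratio)
def elastic_moduli(vp, vs, rho):
    mu = rho * vs ** 2
    k = rho * vp ** 2 - (4.0 / 3.0) * mu
    nu = (vp ** 2 - 2 * vs ** 2) / (2 * (vp ** 2 - vs ** 2))
    e = 2 * mu * (1 + nu)               # Young's modulus
    return mu, k, nu, e

# Example values only (Vp, Vs in m/s, density in kg/m3) -- not site measurements.
mu, k, nu, e = elastic_moduli(vp=1800.0, vs=900.0, rho=1900.0)
print(f"shear modulus = {mu/1e9:.2f} GPa")
print(f"bulk modulus  = {k/1e9:.2f} GPa")
print(f"Poisson ratio = {nu:.2f}")
print(f"Young modulus = {e/1e9:.2f} GPa")
```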

Keywords: application of seismic, geotechnical study, physical properties, seismic refraction

Procedia PDF Downloads 480
6446 Unsteady Reactive Hydromagnetic Fluid Flow of a Two-Step Exothermic Chemical Reaction through a Channel

Authors: J. A. Gbadeyan, R. A. Kareem

Abstract:

In this paper, we investigated the effects of unsteady internal heat generation on a two-step exothermic reactive hydromagnetic fluid flow through a channel with isothermal wall temperature under different chemical kinetics, namely Sensitized, Arrhenius, and Bimolecular kinetics. The resulting modeled nonlinear partial differential equations were simplified and solved using a combined Laplace-Differential Transform Method (LDTM). The solutions obtained are discussed and presented graphically to show the salient features of the fluid flow and heat transfer characteristics.

Keywords: unsteady, reactive, hydromagnetic, Couette flow, exothermic reaction

Procedia PDF Downloads 430
6445 Design of DNA Origami Structures Using LAMP Products as a Combined System for the Detection of Extended Spectrum B-Lactamases

Authors: Kalaumari Mayoral-Peña, Ana I. Montejano-Montelongo, Josué Reyes-Muñoz, Gonzalo A. Ortiz-Mancilla, Mayrin Rodríguez-Cruz, Víctor Hernández-Villalobos, Jesús A. Guzmán-López, Santiago García-Jacobo, Iván Licona-Vázquez, Grisel Fierros-Romero, Rosario Flores-Vallejo

Abstract:

The group of β-lactam antibiotics includes some of the most frequently used small drug molecules against bacterial infections. Nevertheless, an alarming decrease in their efficacy has been reported due to the emergence of antibiotic-resistant bacteria. Infections caused by bacteria expressing extended-spectrum β-lactamases (ESBLs) are difficult to treat and account for higher morbidity and mortality rates, delayed recovery, and a high economic burden. According to the Global Report on Antimicrobial Resistance Surveillance, it is estimated that mortality due to resistant bacteria will rise to 10 million cases per year worldwide. These facts highlight the importance of developing low-cost and readily accessible detection methods for drug-resistant ESBL bacteria to prevent their spread and promote accurate and fast diagnosis. Bacterial detection is commonly done using molecular diagnostic techniques, where PCR stands out for its high performance. However, this technique requires specialized equipment not available everywhere, is time-consuming, and has a high cost. Loop-Mediated Isothermal Amplification (LAMP) is an alternative technique that works at a constant temperature, significantly decreasing the equipment cost. It yields double-stranded DNA of several lengths with repetitions of the target DNA sequence as a product. Although positive and negative results from LAMP can be discriminated by colorimetry, fluorescence, and turbidity, there is still large room for improvement in point-of-care implementation. DNA origami is a technique that allows the formation of 3D nanometric structures by folding a large single-stranded DNA (scaffold) into a determined shape with the help of short DNA sequences (staples), which hybridize with the scaffold. This research aimed to generate DNA origami structures using LAMP products as scaffolds to improve the sensitivity of ESBL detection in point-of-care diagnosis. For this study, the coding sequence of the CTX-M-15 ESBL of E. coli was used to generate the LAMP products. The set of LAMP primers was designed using PrimerExplorerV5. As a result, a target sequence of 200 nucleotides from the CTX-M-15 ESBL was obtained. Afterward, eight different DNA origami structures were designed using the target sequence in SDCadnano and analyzed with CanDo to evaluate the stability of the 3D structures. The designs were constructed minimizing the total number of staples to reduce costs and complexity for point-of-care applications. After analyzing the DNA origami designs, two structures were selected. The first one was a zig-zag flat structure, while the second one was a wall-like shape. Given the sequence repetitions in the scaffold sequence, each could be assembled with only six different staples, ranging between 18 and 80 nucleotides. Simulations of both structures were performed using scaffolds of different sizes, yielding stable structures in all cases. The generation of the LAMP products was tested by colorimetry and electrophoresis. The formation of the DNA structures was analyzed using electrophoresis and colorimetry. The modeling of novel detection methods through bioinformatics tools allows reliable control and prediction of results. To our knowledge, this is the first study that uses LAMP products and DNA origami in combination to detect ESBL-producing bacterial strains, which represents a promising methodology for point-of-care diagnosis.

Keywords: beta-lactamases, antibiotic resistance, DNA origami, isothermal amplification, LAMP technique, molecular diagnosis

Procedia PDF Downloads 200
6444 The Follower Robots Tested in Different Lighting Condition and Improved Capabilities

Authors: Sultan Muhammed Fatih Apaydin

Abstract:

In this study, two types of robots, a pioneer robot and a follower robot, were examined in order to improve the capabilities of tracking robots. The robots continuously track each other, and measuring the following distance between them is very important for the improvements to be applied. The follower robot was made to follow the pioneer robot in line with the intended goals. The robots were tested on various grounds and in various environments in terms of performance, and the necessary improvements were implemented based on the measured results of these tests.

Keywords: mobile robot, remote and autonomous control, infra-red sensors, arduino

Procedia PDF Downloads 556
6443 The Heritagisation of the Titanic Culture for Urban Regeneration Use: A Case Study of the Titanic Belfast

Authors: Yu Liang

Abstract:

The study of heritage in different contexts has been discussed during the past decades, and its relationship with other fields such as tourism, museums, and urban regeneration has also interested scholars. Governmental and policy attention has likewise been drawn to the use of heritage, a 'heritagisation' process, to achieve certain goals, because with suitable planning advantages appear in both economic development and social inclusion. In the case of Belfast, the city has been through tough times due to its complicated ideological issues in the past; however, the transformation is evident in the way Belfast heritage is represented in tourism. Planners are willing to use this method to attract cultural tourists, investors, and also residents, to revive the city and retrieve its confidence. One of the target topics is the establishment of Titanic Belfast, which explores the culture of the Titanic and the history of the shipbuilding industry in Belfast. Even though the cultural flagship brought economic and social benefits, not all of the people agreed with the vision of relaunching a sunken ship or felt proud of it. The aim of this research is to clarify the concept of 'heritagisation', which, if well planned, could achieve certain goals in consolidating areas, increasing local pride and self-identity, and promoting tourism activities. Moreover, it discusses the preferences and the pros and cons of its practice with the Titanic culture in Belfast's regeneration process, especially the Titanic Belfast flagship project. From the methodological point of view, a mixed approach incorporating qualitative interviews, observation, and secondary sources with different perspectives and approaches is adopted in this case study. The expected result would show that a great majority of outsiders and the planners were pleased with the concept of Titanic Belfast's establishment and agreed that it attracts travel to Belfast. Nevertheless, a number of locals still disagreed that the Titanic culture and the flagship are representative of the city or would bring other advantages to them. In other words, some residents doubted or were less likely to support the project since they had been left out of the planning process. Hence, opinions are divided among the 38 residents, various outsiders, and stakeholders, and their perspectives present an interesting task for sustainable research in the future.

Keywords: Belfast, heritagisation, Titanic, Titanic Belfast, urban regeneration

Procedia PDF Downloads 304
6442 An Analytical Study of the Quality of Educational Administration and Management At Secondary School Level in Punjab, Pakistan

Authors: Shamim Akhtar

Abstract:

The purpose of the present research was to analyse the performance level of district administrators and school head teachers at the secondary school level. The sample of the study consisted of head teachers and teachers of secondary schools. In the survey, three scales were used; two scales were for the head teachers: a five-point scale for analysing the working efficiency of educational administrators and a seven-point scale for analysing their own performance; a similar seven-point rating scale was used by the teachers for analysing the working performance of their head teachers. The head teachers' responses revealed that the performance of their District Educational Administrators was average. For the performance efficiency of the head teachers, the researcher constructed the rating scales on seven parameters of management, namely academic management, personnel management, financial management, infrastructure management, linkage and interface, student services, and managerial excellence. Results of percentages, means, and graphical presentation on the different parameters of management showed that there was an obvious difference between head teachers' and teachers' responses, and head teachers were probably overestimating their efficiency, whereas teachers evaluated them as performing averagely on the majority of statements. Results of the t-test showed that there was no significant difference in the responses of rural and urban teachers, but a significant difference in male and female teachers' responses showed that female head teachers were performing their responsibilities better than male head teachers in public sector schools. When the efficiency of the head teachers on the different parameters of management was analysed, it was concluded that their efficiency in academic and personnel management was average, and in financial management and managerial excellence it was highly above average, while in the other parameters, such as infrastructure management, linkage and interface, and student services, it was above average on most statements and highly above average on some statements. Hence, there is a need to improve working efficiency in academic management and personnel management.

Keywords: educational administration, educational management, parameters of management, education

Procedia PDF Downloads 320
6441 Levels of Students’ Understandings of Electric Field Due to a Continuous Charged Distribution: A Case Study of a Uniformly Charged Insulating Rod

Authors: Thanida Sujarittham, Narumon Emarat, Jintawat Tanamatayarat, Kwan Arayathanitkul, Suchai Nopparatjamjomras

Abstract:

The electric field is an important fundamental concept in electrostatics. In high school, Thai students have generally already learned the definition of the electric field, the electric field due to a point charge, and the superposition of electric fields due to multiple point charges. This is the prerequisite basic knowledge students hold before entering university. At the first-year university level, students quickly revise this basic knowledge and are then introduced to a more complicated topic: the electric field due to continuous charge distributions. We initially found that our freshman students, who were from the Faculty of Science and enrolled in the introductory physics course (SCPY 158), often seriously struggled with the basic physics concepts of superposition of electric fields and the inverse square law, and with the mathematics relevant to this topic. This also affected students' understanding of advanced topics within the course, such as Gauss's law, electric potential difference, and capacitance. Therefore, it is very important to determine students' understanding of the electric field due to continuous charge distributions. An open-ended question asking students to sketch the net electric field vectors of a uniformly charged insulating rod was administered to 260 freshman science students as pre- and post-tests. All of their responses were analyzed and classified into five levels of understanding. To gain a deep understanding of each level, 30 students were interviewed about their individual responses. The pre-test result was that about 90% of students had an incorrect understanding. Even after completing the lectures, only 26.5% of them could provide correct responses. Up to 50% had confusions and irrelevant ideas. The result implies that teaching methods in Thai high schools may be problematic. In addition, the students' alternative conceptions identified here could be used as a guideline for developing the instructional method currently used in the course, especially for teaching electrostatics.
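
For reference, the kind of result the sketching task builds toward can be written out explicitly: for a rod of length L and uniform linear charge density λ = Q/L, the field at perpendicular distance r from the rod's midpoint follows from superposing point-charge contributions, with only the perpendicular component surviving by symmetry. This worked form is standard textbook material and is added here purely as an illustration, not as part of the study.

```latex
E(r) \;=\; \frac{1}{4\pi\varepsilon_0}\int_{-L/2}^{L/2}\frac{\lambda\, r\,\mathrm{d}z}{\left(r^{2}+z^{2}\right)^{3/2}}
      \;=\; \frac{\lambda L}{4\pi\varepsilon_0\, r\,\sqrt{r^{2}+L^{2}/4}},
\qquad
E \to \frac{\lambda}{2\pi\varepsilon_0 r}\ \ (L \gg r),
\qquad
E \to \frac{Q}{4\pi\varepsilon_0 r^{2}}\ \ (r \gg L).
```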

Keywords: alternative conceptions, electric field of continuous charged distributions, inverse square law, levels of student understandings, superposition principle

Procedia PDF Downloads 281
6440 Prediction of Mental Health: Heuristic Subjective Well-Being Model on Perceived Stress Scale

Authors: Ahmet Karakuş, Akif Can Kilic, Emre Alptekin

Abstract:

A growing number of studies have been conducted to determine how well-being may be predicted using well-designed models. It is necessary to investigate the background of the features in order to construct a viable Subjective Well-Being (SWB) model. We picked suitable variables from the literature on SWB that are applicable to real-world data. The goal of this work is to evaluate the model by feeding it with SWB characteristics and then categorizing the stress levels using machine learning methods to see how well it performs on a real dataset. Although it is a multiclass classification problem, we achieved significant metric scores, which may be taken into account for this specific task.

Keywords: machine learning, multiclassification problem, subjective well-being, perceived stress scale

Procedia PDF Downloads 111
6439 Map UI Design of IoT Application Based on Passenger Evacuation Behaviors in Underground Station

Authors: Meng-Cong Zheng

Abstract:

When a public space faces an emergency, quickly establishing spatial cognition and reaching emergency shelter in a closed underground space is an urgent task. This study takes Taipei Station as the research base and aims to apply an Internet of Things (IoT) application to underground evacuation mobility design. The first experiment identified passengers' evacuation behaviors and spatial cognition in underground spaces through wayfinding tasks and thinking aloud, then defined the design conditions of the User Interface (UI) and proposed the UI design. The second experiment evaluated the UI design based on passengers' evacuation behaviors through wayfinding tasks and thinking aloud, the same as in the first experiment. The first experiment found that the design condition the subjects were most concerned about was the "map": they hoped to learn their position relative to other landmarks from the map and to view the overall route. The "position" needs to be accurately labeled to determine the location in the underground space. Each step of the escape instructions should be presented clearly in the "navigation bar." The "message bar" should indicate the next or final target exit. In the second experiment with the UI design, we found that the "spatial map" distinguishing between walking and non-walking areas with shades of color is useful. The addition of 2.5D maps to the UI design increased the users' perception of space. Amending the color of the corner diagram in the "escape route" also reduces confusion between the symbol and other diagrams. The larger volumes of toilets and elevators can help users judge their relative location among the "hardware facilities." The fire extinguisher icon should be highlighted. "Fire point tips" indicating a fire with a graphical fireball can convey precise information to the evacuee. However, "compass and return to present location" is less used in underground spaces.

Keywords: evacuation behaviors, IoT application, map UI design, underground station

Procedia PDF Downloads 186
6438 Computation of Radiotherapy Treatment Plans Based on CT to ED Conversion Curves

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Radiotherapy treatment planning computers use CT data of the patient. For the computation of a treatment plan, the treatment planning system must have information on the electron densities of the tissues scanned by CT. This information is given by the conversion curve from CT number to electron density (ED), or simply the calibration curve. Every treatment planning system (TPS) has built-in default CT to ED conversion curves for the CT scanners of different manufacturers. However, it is always recommended to verify the CT to ED conversion curve before actual clinical use. The objective of this study was to check how well the default curve provided matches the curve actually measured on a specific CT scanner, and how much this influences the calculation of the treatment planning computer. The examined CT scanners were from the same manufacturer, but were four different scanners from three generations. The measurements of all calibration curves were done with the dedicated phantom CIRS 062M Electron Density Phantom. The phantom was scanned, and according to the real HU values read at the CT console computer, CT to ED conversion curves were generated for different materials at the same tube voltage of 140 kV. Another phantom, the CIRS Thorax 002 LFC, which represents an average human torso in proportion, density and two-dimensional structure, was used for verification. The treatment planning was done on CT slices of the scanned CIRS LFC 002 phantom for selected cases. Interest points were set in the lungs and in the spinal cord, and the doses were recorded in the TPS. The overall calculated treatment times for the four scanners and the default scanner did not differ by more than 0.8%. The overall interest point dose in bone differed by at most 0.6%, while for single fields the maximum was 2.7% (lateral field). The overall interest point dose in the lungs differed by at most 1.1%, while for single fields the maximum was 2.6% (lateral field). It is known that the user should verify the CT to ED conversion curve, but developing countries often face a lack of QA equipment and use the default data provided. We have concluded that the CT to ED curves obtained differ at certain points of the curve, generally in the region of higher densities. The influence on the treatment planning result is not significant, but it definitely does make a difference in the calculated dose.
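
A minimal sketch of how such a calibration curve is applied is given below: measured HU values for phantom inserts are paired with known relative electron densities and converted by piecewise-linear interpolation. The sample points and the offset in the bone region are generic illustrative values, not the data measured in this study.

```python
import numpy as np

# Illustrative (HU, relative electron density) calibration points for an
# electron-density phantom -- NOT the values measured in this study.
hu = np.array([-1000, -800, -500, -100, 0, 60, 250, 800, 1200])
red = np.array([0.00, 0.19, 0.49, 0.93, 1.00, 1.05, 1.12, 1.46, 1.70])

def hu_to_red(hu_values):
    """Piecewise-linear CT number -> relative electron density conversion."""
    return np.interp(hu_values, hu, red)

# Convert a few CT numbers from a scanned slice
print(hu_to_red([-700, 40, 900]))

# Compare a default (vendor-supplied) curve with the measured one at high density,
# where the study found the largest deviations.
red_default = red * 1.0
red_default[-2:] += 0.03                      # hypothetical offset in the bone region
diff = np.interp(900, hu, red_default) - hu_to_red(900)
print(f"difference at HU=900: {diff:.3f} relative electron density")
```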

Keywords: computation of treatment plan, conversion curve, radiotherapy, electron density

Procedia PDF Downloads 462
6437 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability of the mean signals extracted from ICA components corresponding to 15 well-known networks to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to McDonald and Polman, and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRIs were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out of the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with the number of components set to 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset composed of 37 rows (subjects) and 15 features (mean signal in the network) with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (rfe) for the SVM, to obtain a rank of the most predictive variables. Thus, we built two new classifiers only on the most important features, and we evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and rfe-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest value of lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
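
The analysis in the study was run in R; the sketch below is an illustrative Python/scikit-learn analogue on synthetic data of the same shape (37 subjects × 15 network signals), showing RF's intrinsic importance ranking and SVM with recursive feature elimination. The data, the injected discriminative network, and the linear-kernel RFE step are assumptions made for the example.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)

# Synthetic stand-in: 37 subjects x 15 network mean signals, labels 0=control, 1=early MS.
X = rng.normal(size=(37, 15))
y = np.array([0] * 19 + [1] * 18)
X[y == 1, 0] += 1.0          # pretend network 0 (e.g. sensori-motor I) is discriminative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

# Random forest: intrinsic (Gini-based) feature importance
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("RF test accuracy:", rf.score(X_te, y_te))
print("most important network (RF):", np.argmax(rf.feature_importances_))

# SVM: recursive feature elimination down to a single network.
# RFE needs a linear kernel to expose coefficients; the study used an RBF SVM for
# classification with rfe for ranking, so this is only an approximation.
rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
best = np.argmax(rfe.ranking_ == 1)
print("most important network (SVM-RFE):", best)

# Retrain an RBF SVM on the single selected network only
svm_best = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr[:, [best]], y_tr)
print("RBF-SVM accuracy with one network:", svm_best.score(X_te[:, [best]], y_te))
```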

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 227
6436 Crafting of Paper Cutting Techniques for Embellishment of Fashion Textiles

Authors: A. Vaidya-Soocheta, K. M. Wong-Hon-Lang

Abstract:

Craft and fashion have always been interlinked, and the combination of the two often gives stunning results. The present study introduces paper cutting craft techniques, such as the Japanese Kirigami, the Mexican Papel Picado, the German Scherenschnitte, and the Polish Wycinanki, to textiles to develop innovative and novel design structures as embellishments and ornamentation. The project studies various ways of using these paper cutting techniques to obtain interesting features and delicate design patterns on fabrics. While paper has its advantages and related uses, it is fragile and rigid and thus not appropriate for clothing. Fabric is sturdy, flexible, dimensionally stable, and washable. In the present study, the cut-out techniques are used to develop creative design motifs and patterns to give an inventive and unique appeal to the fabrics. The beauty and fascination of lace in garments have always given them a nostalgic charm. Laces, with their intricate and delicate complexity, in combination with other materials add a feminine touch to a garment and give it a romantic, mysterious appeal. Various textured and decorative effects through fabric manipulation were experimented with, along with the use of paper cutting craft skills, as an innovative substitute for developing lace or "Broderie Anglaise" effects on textiles. A number of assorted fabric types with varied textures were selected for the study. Techniques to avoid fraying and unraveling of the design-cut fabrics were introduced. Fabrics were further manipulated by the use of interesting prints with embossed effects on cut-outs. Fabric layering in combination with assorted techniques such as cutting of folded fabric, printing, appliqué, embroidery, crochet, braiding, and weaving added a novel exclusivity to the fabrics. The fabrics developed by these innovative methods were then tailored into garments. The study thus tested the feasibility and practicability of using these fabrics by designing a collection of evening wear garments based on the theme 'Nostalgia'. The prototypes developed were complemented by designing fashion accessories with the crafted fabrics; the prototypes of accessories add interesting features to the study. The adaptation and application of this novel technique of paper cutting craft on textiles can be an innovative start for a new trend in the textile and fashion industry. The study anticipates that this technique will open new avenues in the world of fashion to incorporate its use commercially.

Keywords: collection, fabric cutouts, nostalgia, prototypes

Procedia PDF Downloads 339
6435 Detecting Natural Fractures and Modeling Them to Optimize Field Development Plan in Libyan Deep Sandstone Reservoir (Case Study)

Authors: Tarek Duzan

Abstract:

Fractures are a fundamental property of most reservoirs. Despite their abundance, they remain difficult to detect and quantify. The most effective characterization of fractured reservoirs is accomplished by integrating geological, geophysical, and engineering data. Detecting fractures and defining their relative contribution is crucial in the early stages of exploration and later in the production of any field, because fractures could completely change our thoughts, efforts, and planning for producing a specific field properly. From the structural point of view, all reservoirs are fractured to some extent. The North Gialo field is thought to be a naturally fractured reservoir to some extent. Historically, naturally fractured reservoirs are more complicated in terms of their exploration and production efforts, and most geologists tend to deny the presence of fractures as an effective variable. Our aim in this paper is to determine the degree of fracturing so that our evaluation and planning can be done properly and efficiently from day one. The challenging part in this field is that there are not enough data or straightforward well tests to make us completely comfortable with the idea of fracturing; however, we cannot ignore the fractures completely. Logging images, the available well tests, and limited core studies are our tools at this stage to evaluate, model, and predict possible fracture effects in this reservoir. The aims of this study are both fundamental and practical: to improve the prediction and diagnosis of natural-fracture attributes in the N. Gialo hydrocarbon reservoirs and to accurately simulate their influence on production. Moreover, production from this field follows a two-phase plan: self-depletion of oil, followed by a gas injection period for pressure maintenance and an increased ultimate recovery factor. Therefore, a good understanding of the fracture network is essential before proceeding with the targeted plan. New analytical methods will lead to a more realistic characterization of fractured and faulted reservoir rocks. These methods will produce data that can enhance well test and seismic interpretations and that can readily be used in reservoir simulators.

Keywords: natural fracture, sandstone reservoir, geological, geophysical, and engineering data

Procedia PDF Downloads 84
6434 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos

Authors: Nassima Noufail, Sara Bouhali

Abstract:

In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the K-means algorithm; our goal is to find groups based on similarity in the video. Applying K-means clustering to all the frames is time-consuming; therefore, we started with the identification of transition frames, where the scene in the video changes significantly, and then applied K-means clustering to these transition frames. We used two image filters, the Gaussian filter and the Laplacian of Gaussian. Each filter extracts a set of features from the frames: the Gaussian filter blurs the image and omits the higher frequencies, while the Laplacian of Gaussian detects regions of rapid intensity change. We then used this vector of filter responses as input to our K-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map in which similar pixels are grouped. We then computed a cluster score indicating how near the clusters are to each other and plotted a signal representing frame number vs. clustering score. Our hypothesis was that the evolution of the signal would not change if semantically related events were happening in the scene. We marked the breakpoints at which the root-mean-square level of the signal changes significantly, and each breakpoint indicates the beginning of a new video segment. In the second part, for each segment from part one, we randomly selected a 16-frame clip and extracted spatiotemporal features from it with the convolutional 3D network (C3D), using a pre-trained model. The final C3D output is a 512-dimensional feature vector; hence, we used principal component analysis (PCA) for dimensionality reduction. The final part is the classification. The C3D feature vectors are used as input to a multi-class linear support vector machine (SVM) to train the model and detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and achieved an accuracy that outperforms the state of the art by 1.2%.
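As an illustration of the segmentation stage described above, the Python sketch below (assuming SciPy and scikit-learn) clusters per-pixel Gaussian and Laplacian-of-Gaussian filter responses with K-means and derives a simple cluster-score signal. The filter sigma, the number of clusters, the score definition, and the breakpoint rule are assumptions for illustration, not the authors' exact settings.

import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans

def filter_responses(frame, sigma=2.0):
    """Stack per-pixel responses of a Gaussian and a Laplacian-of-Gaussian filter."""
    g = gaussian_filter(frame.astype(float), sigma)      # blurred (low-pass) response
    log = gaussian_laplace(frame.astype(float), sigma)   # rapid-intensity-change response
    return np.stack([g.ravel(), log.ravel()], axis=1)    # (n_pixels, 2) feature vectors

def cluster_frame(frame, k=5):
    feats = filter_responses(frame)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
    # Map each pixel to its nearest cluster center -> the "visual map" of the frame.
    visual_map = km.labels_.reshape(frame.shape)
    # One possible cluster score: mean pairwise distance between cluster centers.
    centers = km.cluster_centers_
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    score = dists[np.triu_indices(k, 1)].mean()
    return visual_map, score

# Scores over the transition frames form a 1-D signal; segment boundaries are then
# marked where the signal level changes markedly (a simplified stand-in for the
# RMS-level change detection described in the abstract).
frames = np.random.rand(12, 64, 64)   # placeholder grayscale transition frames
scores = np.array([cluster_frame(f)[1] for f in frames])
boundaries = np.where(np.abs(np.diff(scores)) > scores.std())[0] + 1
print("candidate segment starts at transition frames:", boundaries)

In the full pipeline, each detected segment would then supply a 16-frame clip to a pre-trained C3D model, followed by PCA and a linear SVM; those stages are not reproduced in this sketch.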

Keywords: video segmentation, action detection, classification, K-means, C3D

Procedia PDF Downloads 61
6433 Loss Quantification of Archaeological Sites in a Watershed Due to the Use and Occupation of Land

Authors: Elissandro Voigt Beier, Cristiano Poleto

Abstract:

The main objective of this research is to assess loss through the quantification of material culture (archaeological fragments) in rural areas, at sites exploited economically by mechanized cultivation of seasonal and also permanent crops, in a hydrographic subsystem of the Camaquã River in the state of Rio Grande do Sul, Brazil. The study area consists of different micro-basins that differ in size, ranging in area from 1,000 m² to 10,000 m², all with a large number of occurrences and outcrop locations of archaeological material and a high density of finds in an intensively farmed environment. The first stage of the research aimed to identify the dispersion of archaeological material through a field survey, plotting points with the Global Positioning System (GPS) within each river basin; a concise bibliography on the topic in the region was used to support a theoretical understanding of the ancient landscape and the occupation preferences of past peoples, relating the settlements to the practices observed in the field. The mapping was followed by cartographic work in the region, through the development of cartographic products of the land elevation; the resulting products contribute to understanding the distribution of the materials, the definition and extent of the dispersed material, and the turnover of in situ material by mechanization as a result of human activities. It was also necessary to prepare density maps of the materials found, linking natural environments conducive to ancient occupation with current human occupation. The third stage of the project consists of the systematic collection of archaeological material without alteration of or interference with the subsurface of the indigenous settlements; the material was then prepared and treated in the laboratory to remove excess soil, followed by cleaning according to a previously communicated methodology, measurement, and quantification. Approximately 15,000 archaeological fragments belonging to different periods of the region's ancient history were identified, all collected outside their environmental and historical context and considerably changed and modified. The material was identified and cataloged considering features such as object weight, size, and type of material (lithic, ceramic, bone, historical porcelain, and their true association with ancient history), while characteristics such as the individual lithology of the object and its functionality were disregarded. As preliminary results, we can point out the alteration of materials by heavy mechanization and the consequent soil disturbance processes, which cause the transport of archaeological materials. Therefore, as a next step, an estimate of potential losses will be sought through a mathematical model. Through this process, it is expected to reach a reliable, high-accuracy model that can be applied to an archaeological site of lower density without encountering a significant error.

Keywords: degradation of heritage, quantification in archaeology, watershed, use and occupation of land

Procedia PDF Downloads 264