Search results for: shape error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4078


868 Radar Track-based Classification of Birds and UAVs

Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo

Abstract:

In recent years, the number of Unmanned Aerial Vehicles (UAVs) has significantly increased. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a serious threat to civil and military installations: detection, classification and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by a high number of tracks related to flying birds make the drone detection task especially challenging: the operator's PPI is cluttered with a huge number of potential threats, and the reaction time can be severely affected. Compared to UAVs, flying birds show similar velocity, radar cross-section and, in general, similar characteristics. Since no single feature is able to distinguish UAVs from birds, this paper uses a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds from UAVs. Radar tracks acquired in the field for different UAVs and birds performing various trajectories were used to extract specifically designed, target-movement-related features based on velocity, trajectory and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression, etc.) both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with a suitable classification accuracy (higher than 95%).
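The genetic-algorithm feature selection described above can be sketched as a search over binary feature masks scored by a classifier's validation error plus a subset-size penalty. The sketch below uses synthetic stand-in data, a simple nearest-centroid classifier, and illustrative GA settings; none of these are the paper's actual features, classifier, or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for track features (velocity, trajectory and
# signal-strength statistics); data and labels are illustrative only.
n, d = 200, 8
X = rng.normal(size=(n, d))
y = (X[:, 0] + 0.8 * X[:, 2] > 0).astype(int)  # only features 0 and 2 matter

def cv_error(mask):
    """Hold-out error of a nearest-centroid classifier on the selected features."""
    if not mask.any():
        return 1.0
    Xs, half = X[:, mask], n // 2
    mu0 = Xs[:half][y[:half] == 0].mean(axis=0)
    mu1 = Xs[:half][y[:half] == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs[half:] - mu1, axis=1)
            < np.linalg.norm(Xs[half:] - mu0, axis=1)).astype(int)
    return float(np.mean(pred != y[half:]))

def fitness(mask, alpha=0.01):
    # Penalize both misclassification error and subset size, as in the paper.
    return cv_error(mask) + alpha * mask.sum()

# Plain generational GA over binary feature masks.
pop = rng.integers(0, 2, size=(30, d)).astype(bool)
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[:10]]          # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, d)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        children.append(child ^ (rng.random(d) < 0.05))  # bit-flip mutation
    pop = np.array(children)

best = min(pop, key=fitness)
print("selected features:", np.flatnonzero(best), "hold-out error:", cv_error(best))
```

In practice the inner scorer would be each of the candidate classifiers (neural network, SVM, logistic regression), and the GA would trade off their misclassification error against the number of selected features.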

Keywords: birds, classification, machine learning, UAVs

Procedia PDF Downloads 222
867 Coils and Antennas Fabricated with Sewing Litz Wire for Wireless Power Transfer

Authors: Hikari Ryu, Yuki Fukuda, Kento Oishi, Chiharu Igarashi, Shogo Kiryu

Abstract:

Recently, wireless power transfer has been developed in various fields. Magnetic coupling is popular for feeding power over relatively short distances at lower frequencies, while electromagnetic wave coupling at high frequencies is used for long-distance power transfer. Wireless power transfer has also attracted attention in the e-textile field. At present, rigid batteries are required for many body-worn electric systems; this technology enables such batteries to be removed from the systems. Flexible coils have been studied for such applications. Coils with a high Q factor are required for magnetic-coupling power transfer, and antennas with low return loss are needed for electromagnetic coupling. Litz wire is flexible enough to fabricate coils and antennas sewn on fabric and has low resistivity. In this study, the electrical characteristics of coils and antennas fabricated from Litz wire using two sewing techniques are investigated. As examples, a coil and an antenna are described; both were fabricated with 330/0.04 mm Litz wire. The coil was a planar coil with a square shape: the outer side was 150 mm, the number of turns was 15, and the pitch between turns was 5 mm. The Litz wire of the coil was overstitched with a sewing machine. The coil was fabricated as a receiver coil for magnetically coupled wireless power transfer. The Q factor was 200 at a frequency of 800 kHz. A wireless power system was constructed by using the coil, with a power oscillator driving the system. The resonant frequency of the circuit was set to 123 kHz, where the switching loss of the power FETs was small. The power efficiencies were 0.44-0.99, depending on the distance between the transmitter and receiver coils. As an example of an antenna made with a sewing technique, a fractal-pattern antenna was stitched on a 500 mm x 500 mm fabric by using a needle punch method. The pattern was the second-order Vicsek fractal. The return loss of the antenna was -28 dB at a frequency of 144 MHz.
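As a rough plausibility check, an operating point like the 123 kHz above follows from a series LC resonance, and the coil Q from its reactance-to-resistance ratio. The inductance, capacitance and coil resistance below are illustrative assumptions chosen to land near the reported values, not measurements from the paper.

```python
import math

# Assumed component values for a magnetically coupled link (not measured).
L = 120e-6   # coil inductance [H]
C = 14e-9    # tuning capacitance [F]
R = 0.5      # series AC resistance of the Litz-wire coil [ohm]

f0 = 1 / (2 * math.pi * math.sqrt(L * C))   # series-resonant frequency [Hz]
Q = 2 * math.pi * f0 * L / R                # unloaded Q factor of the coil

print(f"resonant frequency: {f0 / 1e3:.1f} kHz, Q factor: {Q:.0f}")
```

With these assumed values the resonance falls near 123 kHz and the Q near 185, the same order as the coil characteristics reported in the abstract.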

Keywords: e-textile, flexible coils and antennas, Litz wire, wireless power transfer

Procedia PDF Downloads 133
866 Mathematical Modelling of Drying Kinetics of Cantaloupe in a Solar Assisted Dryer

Authors: Melike Sultan Karasu Asnaz, Ayse Ozdogan Dolcek

Abstract:

Crop drying, which aims to reduce the moisture content to a certain level, is a method used to extend shelf life and prevent spoilage. One of the oldest food preservation techniques is open sun or shade drying. Even though this technique is the most affordable of all drying methods, it has drawbacks such as contamination by insects, environmental pollution, windborne dust, and direct exposure to weather conditions such as wind, rain, and hail. However, solar dryers that provide a hygienic and controllable environment to preserve food and extend its shelf life have been developed and used to dry agricultural products. Thus, foods can be dried quickly without being affected by weather variables, and quality products can be obtained. This research is mainly devoted to investigating the modelling of the drying kinetics of cantaloupe in a forced-convection solar dryer. Mathematical models of the drying process should be defined to simulate the drying behavior of the foodstuff, which will greatly contribute to the development of solar dryer designs. Thus, drying experiments were conducted and replicated five times, and data such as temperature, relative humidity, solar irradiation, drying air speed, and weight were continuously monitored and recorded. The moisture content of sliced and pretreated cantaloupe was converted into moisture ratio and then fitted against drying time to construct drying curves. Then, 10 quasi-theoretical and empirical drying models were applied to find the best drying curve equation using the Levenberg-Marquardt nonlinear optimization method. The best-fitted mathematical drying model was selected according to the highest coefficient of determination (R²), the mean square of the deviations (χ²), and the root mean square error (RMSE) criteria. The best-fitted model was then used to simulate thin-layer solar drying of cantaloupe, and the simulation results were compared with the experimental data for validation purposes.
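The curve-fitting step can be illustrated with one common thin-layer drying model, the Page model MR = exp(-k tⁿ). The sketch below log-linearizes it and solves by ordinary least squares (a simpler stand-in for the Levenberg-Marquardt optimization the paper uses) on synthetic moisture-ratio data; the data and the resulting k, n are illustrative, not the paper's results.

```python
import numpy as np

# Synthetic moisture-ratio data vs. drying time [h]; illustrative stand-ins,
# not the paper's measurements.
t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
MR = np.array([0.85, 0.71, 0.49, 0.34, 0.23, 0.11, 0.05])

# Page model: MR = exp(-k * t**n). Taking logs twice gives a linear form,
# ln(-ln MR) = ln k + n ln t, solvable by ordinary least squares.
A = np.column_stack([np.ones_like(t), np.log(t)])
(lnk, n), *_ = np.linalg.lstsq(A, np.log(-np.log(MR)), rcond=None)
k = np.exp(lnk)

# Goodness-of-fit criteria of the kind used in the paper: R² and RMSE.
MR_fit = np.exp(-k * t**n)
rmse = np.sqrt(np.mean((MR - MR_fit) ** 2))
r2 = 1.0 - np.sum((MR - MR_fit) ** 2) / np.sum((MR - MR.mean()) ** 2)
print(f"k = {k:.3f}, n = {n:.3f}, RMSE = {rmse:.4f}, R2 = {r2:.4f}")
```

The same R²/χ²/RMSE comparison would then be repeated across all 10 candidate models to pick the best-fitted equation.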

Keywords: solar dryer, mathematical modelling, drying kinetics, cantaloupe drying

Procedia PDF Downloads 127
865 Molecular Topology and TLC Retention Behaviour of s-Triazines: QSRR Study

Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević

Abstract:

Quantitative structure-retention relationship (QSRR) analysis was used to predict the chromatographic behavior of s-triazine derivatives from theoretical descriptors computed from the chemical structure. The fundamental aim of the reported investigation is to relate molecular topological descriptors to the chromatographic behavior of s-triazine derivatives obtained by reversed-phase (RP) thin layer chromatography (TLC) on silica gel impregnated with paraffin oil, using ethanol-water mobile phases (φ = 0.5-0.8; v/v). The retention parameter (RM0) of the 14 investigated s-triazine derivatives was used as the dependent variable, while simple connectivity indices of different orders were used as independent variables. The best QSRR model for predicting the RM0 value was obtained with the simple third-order connectivity index (³χ) in a second-degree polynomial equation. The numerical values of the correlation coefficient (r = 0.915), Fisher's value (F = 28.34) and root mean square error (RMSE = 0.36) indicate that the model is statistically significant. In order to test the predictive power of the QSRR model, the leave-one-out cross-validation technique was applied. The parameters of the internal cross-validation analysis (r²CV = 0.79, r²adj = 0.81, PRESS = 1.89) reflect the high predictive ability of the generated model and confirm that it can be used to predict the RM0 value. A multivariate classification technique, hierarchical cluster analysis (HCA), was applied in order to group the molecules according to their molecular connectivity indices. HCA is a descriptive statistical method and is among those most frequently used for classification, an important area of data processing. The HCA performed on the simple molecular connectivity indices obtained from the 2D structures of the investigated s-triazine compounds resulted in two main clusters, in which the molecules were grouped according to the number of atoms in the molecule. This is in agreement with the fact that these descriptors were calculated on the basis of the number of atoms in the molecule of the investigated s-triazine derivatives.
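The model-building and internal validation steps above can be sketched as a second-degree polynomial fit of RM0 against ³χ, validated by leave-one-out cross-validation. The ³χ and RM0 values below are illustrative stand-ins, not the paper's 14-compound data set.

```python
import numpy as np

# Illustrative (chi3, RM0) pairs for 14 hypothetical compounds.
chi3 = np.array([1.2, 1.5, 1.8, 2.0, 2.3, 2.6, 2.9,
                 3.1, 3.4, 3.7, 4.0, 4.2, 4.5, 4.8])
RM0 = np.array([1.47, 1.66, 1.88, 1.97, 2.16, 2.29, 2.45,
                2.51, 2.64, 2.74, 2.81, 2.88, 2.92, 2.99])

# Second-degree polynomial QSRR model: RM0 = a*chi3**2 + b*chi3 + c.
coef = np.polyfit(chi3, RM0, 2)
pred = np.polyval(coef, chi3)
r = np.corrcoef(RM0, pred)[0, 1]          # correlation coefficient

# Leave-one-out cross-validation, as in the paper's internal validation.
press = 0.0
for i in range(len(chi3)):
    keep = np.arange(len(chi3)) != i
    c_i = np.polyfit(chi3[keep], RM0[keep], 2)
    press += (RM0[i] - np.polyval(c_i, chi3[i])) ** 2
q2 = 1.0 - press / np.sum((RM0 - RM0.mean()) ** 2)
print(f"r = {r:.3f}, PRESS = {press:.3f}, Q2 = {q2:.3f}")
```

The PRESS statistic accumulated over the left-out predictions is exactly the quantity reported in the abstract's internal cross-validation analysis.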

Keywords: s-triazines, QSRR, chemometrics, chromatography, molecular descriptors

Procedia PDF Downloads 393
864 Morphological Studies of the Gills of the Red Swamp Freshwater Crayfish Procambarus clarkii (Crustacea: Decapoda: Cambarids) (Girard 1852) from the River Nile and Its Branches in Egypt

Authors: Mohamed M. A. Abumandour

Abstract:

The red swamp freshwater crayfish breathes through three types of feather-like trichobranchiate gills: podobranchiae, arthrobranchiae and pleurobranchiae. All gills have the same general structure and appearance: plume-like, with a single broad setiferous base and a single axis. Each gill consists of an axis with numerous finger-like filaments of three morphological types: round, pointed and somewhat hooked. The direction of the filaments varies with their position: in the middle region they are nearly perpendicular to the gill axis, while at the apex they are nearly parallel to it. There is a characteristic system of gill spines on the central axis (two types, distinguishable by the presence of a socket), the basal plate, the setobranch (long non-branched and short multidenticulate) and the bilobed epipodal plate. There are four shapes of the spinated distal region of the setobranch setae: two pointed processes (longitudinally arranged and irregularly arranged) and two broad processes (transverse triangular and multidenticulate). The bilobed epipodal plate is devoid of any filaments; it extends from the outer side of the podobranchiae as a triangular basal part, continues between the gills as a cord-like middle part, and then passes under the gill to lie against the thoracic body wall. Under SEM, the apical part of the bilobed epipodal plate has a serrated free border and a corrugated surface, while the middle part has a non-serrated free border. There are two gill-cleaning mechanisms in crayfish: passive and active. The passive method is effected by the setae of the setobranch, the branchiostegite, the bilobed epipodal plate, the setiferous arthrodial lamellae, and by reversing the respiratory water through the narrow-spaced branchial chamber.

Keywords: crayfish, gill spines, setobranch, gill setae, cleaning mechanisms

Procedia PDF Downloads 410
863 Structural Design of a Relief Valve Considering Strength

Authors: Nam-Hee Kim, Jang-Hoon Ko, Kwon-Hee Lee

Abstract:

A relief valve is a mechanical element that maintains safety by controlling high pressure. Usually, the high pressure is relieved by using the spring force and letting the fluid flow out of the system through another path. When normal pressure is reached, the relief valve returns to its initial state. The relief valve in this study has been applied to pressure vessels, evaporators, piping lines, etc. The relief valve should be designed for smooth operation and should satisfy the structural safety requirements under operating conditions. In general, the structural analysis is performed following a fluid flow analysis. In this process, FSI (Fluid-Structure Interaction) is required to input the force obtained from the output of the flow analysis. Firstly, this study predicts the velocity profile and the pressure distribution in the given system. The assumptions for the flow analysis are as follows: • The flow is steady-state and three-dimensional. • The fluid is Newtonian and incompressible. • The walls of the pipe and valve are smooth. The flow characteristics in this relief valve do not induce any problem. The commercial software ANSYS/CFX is utilized for the flow analysis. On the contrary, very high pressure may cause structural problems due to severe stress. The relief valve consists of a body, bonnet, guide, piston and nozzle, and its material is stainless steel. To investigate its structural safety, the worst-case loading is considered as a pressure of 700 bar. The load is applied to the inside of the valve and is greater than the load obtained from the FSI. The maximum stress is calculated as 378 MPa by the finite element analysis; however, this value is greater than the allowable value. Thus, an alternative design is suggested to improve the structural performance through a case study. We found that the design variable most sensitive to the strength is the shape of the nozzle, so the case study varies the size of the nozzle. Finally, it can be seen that the suggested design satisfies the structural design requirement. The FE analysis is performed by using the commercial software ANSYS/Workbench.

Keywords: relief valve, structural analysis, structural design, strength, safety factor

Procedia PDF Downloads 303
862 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver

Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto

Abstract:

The Earth's ionosphere is located at altitudes from about 70 km to several hundred kilometers above the ground, and it is composed of ions and electrons called plasma. This plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. Sounding observations from the top and bottom of the ionosphere have long been used to investigate such ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are basically built for land survey, has been conducted in several countries. However, these stations install multi-frequency receivers to estimate the plasma delay from its frequency dependence, and the cost of multi-frequency receivers is much higher than that of single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variation such as the vertical TEC distribution. In the measurements, a single-frequency u-blox GPS receiver was used to probe the ionospheric TEC, with the observation site at Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of the functions are determined by least-squares fitting on pseudorange data obtained at a known location under the assumption of a thin-layer ionosphere. The validity of the method was evaluated against measurements obtained by the Japanese GNSS observation network GEONET. The performance of the single-frequency GPS measurements was compared with the results of dual-frequency measurements.
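The polynomial-fit idea can be sketched as follows: model vertical TEC over the region as a low-order polynomial in latitude and longitude and recover its coefficients by least squares. The coverage area, coefficients and TEC values below are illustrative assumptions; real input would be slant TEC derived from single-frequency pseudorange data, mapped to vertical under the thin-shell assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
lat = rng.uniform(16.0, 26.0, 50)    # deg N, roughly spanning Myanmar (assumed)
lon = rng.uniform(92.0, 101.0, 50)   # deg E (assumed)

def design(lat, lon):
    # Second-order polynomial basis in (latitude, longitude).
    return np.column_stack([np.ones_like(lat), lat, lon,
                            lat**2, lon**2, lat * lon])

# Assumed "true" TEC surface, in TEC units (TECU); purely illustrative.
true_c = np.array([30.0, 0.5, -0.1, -0.01, 0.001, 0.002])
vtec_samples = design(lat, lon) @ true_c

# Least-squares estimate of the polynomial coefficients from the samples.
c_hat, *_ = np.linalg.lstsq(design(lat, lon), vtec_samples, rcond=None)

# Evaluate the fitted surface near Mandalay (approx. 21.97 N, 96.09 E).
vtec = design(np.array([21.97]), np.array([96.09])) @ c_hat
print(f"estimated vertical TEC near Mandalay: {vtec[0]:.2f} TECU")
```

In the actual method the left-hand side comes from pseudorange residuals at a known receiver position, so the same least-squares machinery simultaneously absorbs the receiver clock and delay terms.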

Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC

Procedia PDF Downloads 137
861 Artificial Neural Network Approach for Modeling and Optimization of Conidiospore Production of Trichoderma harzianum

Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Alejandro Tellez-Jurado, Juan C. Seck-Tuoh-Mora, Eva S. Hernandez-Gress, Norberto Hernandez-Romero, Iaina P. Medina-Serna

Abstract:

Trichoderma harzianum is a fungus that has been utilized as a low-cost fungicide for the biological control of pests, and it is important to determine the optimal conditions for producing the highest amount of its conidiospores. In this work, the conidiospore production of Trichoderma harzianum is modeled and optimized by using Artificial Neural Networks (ANNs). In order to gather data on this process, 30 experiments were carried out taking into account the culture time (10 values distributed from 48 to 136 hours) and the culture humidity (70, 75 and 80 percent), with the number of conidiospores per gram of dry mass as the response. The experimental results were used in an iterative algorithm to create 1,110 ANNs with different configurations, from one to three hidden layers, each hidden layer with 1 to 10 neurons. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which learns the relationship between input and output values. The ANN with the best performance was chosen in order to simulate the process and maximize conidiospore production. The best-performing ANN has 2 inputs and 1 output, and three hidden layers with 3, 10 and 10 neurons, respectively. Its performance shows an R² value of 0.9900, and the root mean squared error is 1.2020. This ANN predicted a maximum of 644,175,467 conidiospores per gram of dry mass, obtained at 117 hours of culture and 77% culture humidity. In summary, the ANN approach is suitable for representing the conidiospore production of Trichoderma harzianum because the R² value denotes a good fit to the experimental results, and the obtained ANN model was used to find the parameters that produce the largest amount of conidiospores per gram of dry mass.
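The architecture search above (one to three hidden layers, each with 1-10 neurons) can be enumerated directly, which reproduces the 1,110 candidate configurations. Training each candidate with Levenberg-Marquardt backpropagation and ranking by R²/RMSE would additionally need a neural-network library and the experimental data, so only the enumeration is sketched here.

```python
from itertools import product

# Search space: depth 1-3 hidden layers, width 1-10 neurons per layer.
configs = []
for depth in (1, 2, 3):
    configs.extend(product(range(1, 11), repeat=depth))

print(len(configs))            # 10 + 10**2 + 10**3 = 1110
print((3, 10, 10) in configs)  # the best-performing layout reported above
```

Each tuple would then be instantiated as a 2-input, 1-output network, trained on the 30 experiments, and scored; the reported winner corresponds to the tuple (3, 10, 10).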

Keywords: Trichoderma harzianum, modeling, optimization, artificial neural network

Procedia PDF Downloads 160
860 Improving Efficiencies of Planting Configurations on Draft Environment of Town Square: The Case Study of Taichung City Hall in Taichung, Taiwan

Authors: Yu-Wen Huang, Yi-Cheng Chiang

Abstract:

With urban development, many buildings are built around the city, and they affect the urban wind environment. Wind acceleration caused by buildings often makes pedestrians uncomfortable and can even cause accidents and danger. Factors influencing pedestrian-level wind include the atmospheric boundary layer, wind direction, wind velocity, planting, building volume, the geometric shape of the buildings, and adjacent interference effects. Planting has many functions, including mitigating the urban heat island effect, creating a good visual landscape, increasing urban green area, and improving pedestrian-level wind. On the other hand, the urban square is an important space element, supporting the entrances to buildings, city landmarks, activity gatherings, etc. The appropriateness of the urban square environment usually determines its success. This research focuses on the effect of tree planting on the wind environment of an urban square, taking the square belt of Taichung City Hall as a case study. Taichung City Hall is a cuboid building with a large mass opening. The square belt connects the front square, the central opening and the back square, and wind draft often occurs along it; this phenomenon discourages activities on the squares. This research applies tree planting to improve the wind environment and evaluates the effects of two types of planting configuration. Computational Fluid Dynamics (CFD) simulation analysis and extensive field measurements are applied to explore the improvement efficiency of each planting configuration. This research compares the efficiencies of the different planting configurations, a clustered array configuration and a dispersed configuration, and evaluates them by the SET*.

Keywords: micro-climate, wind environment, planting configuration, comfortableness, computational fluid dynamics (CFD)

Procedia PDF Downloads 310
859 Strategic Policy Formulation to Ensure the Atlantic Forest Regeneration

Authors: Ramon F. B. da Silva, Mateus Batistella, Emilio Moran

Abstract:

Although there are two recognized Forest Transition (FT) pathways, economic development and forest scarcity, many contexts shape the model of FT observed in each particular region. This means that local conditions, such as relief, soil quality, historic land use/cover, public policies, the engagement of society in compliance with legal regulations, and the action of enforcement agencies, represent dimensions which, combined, create contexts that enable forest regeneration. From this perspective we can understand the regeneration of native vegetation cover in the Paraíba Valley (Atlantic Forest biome), ongoing since the 1960s. This research analyzed public information, land use/cover maps and environmental public policies, and interviewed 17 stakeholders from federal and state agencies, municipal environmental and agricultural departments, civil society and farms, aiming to comprehend the contexts behind the forest regeneration in the Paraíba Valley, Sao Paulo State, Brazil. The first policy to protect forest vegetation was the Forest Code No. 4771 of 1965, but this legislation did not promote the increase of forest, only the control of deforestation, which was not enough for the Atlantic Forest biome, which reached its highest peak of degradation in 1985 (8% of Atlantic Forest remnants). We concluded that Brazilian environmental legislation acted in a strategic way to promote the increase of forest cover (102% regeneration between 1985 and 2011) from 1993, when Federal Decree No. 750 declared the initial and advanced stages of secondary succession protected against any kind of exploitation or degradation, ensuring the forest regeneration process. Strategic policy formulation was also observed in Sao Paulo State Law No. 6171 of 1988, which prohibited the use of fire to manage the agricultural landscape, triggering forest regeneration in former pasture areas.

Keywords: forest transition, land abandonment, law enforcement, rural economic crisis

Procedia PDF Downloads 553
858 Internal Power Recovery in Cryogenic Cooling Plants, Part II: Compressor Development

Authors: Ambra Giovannelli, Erika Maria Archilei

Abstract:

The electrical power consumption related to refrigeration systems is estimated to be on the order of 15% of total electricity consumption worldwide. For this reason, in recent years several energy-saving techniques have been suggested to reduce the power demand of refrigeration and air-conditioning plants. This research work deals with the development of an innovative internal power recovery system for industrial cryogenic cooling plants, based on a Compressor-Expander Group (CEG). Both the expander and the compressor have been designed starting from automotive turbocharging components, strongly modified to take refrigerant fluid properties and specific system requirements into consideration. A preliminary choice of the machines (radial compressors and expanders) among existing components available on the market was made according to the rules of similarity theory. Once the expander was selected, it was strongly modified and its performance verified by means of steady-state 3D CFD simulations. This paper focuses on the development of the second main CEG component: the compressor. After the preliminary selection, the compressor geometry was modified to take the new boundary conditions into account. In particular, the impeller was machined to achieve the required total enthalpy increase, evaluated by means of a simplified 1D model. Moreover, a vaneless diffuser was added, modifying the shape of the casing rear and front disks. To verify the performance of the modified compressor geometry and suggest improvements, a numerical fluid dynamic model was set up, and the commercial Ansys-CFX software was used to perform steady-state 3D simulations. In this work, all the numerical results are shown, highlighting critical aspects and suggesting further developments to increase compressor performance and flexibility.

Keywords: vapour compression systems, energy saving, refrigeration plant, organic fluids, centrifugal compressor

Procedia PDF Downloads 218
857 Preparedness for Microbial Forensics Evidence Collection on Best Practice

Authors: Victor Ananth Paramananth, Rashid Muniginin, Mahaya Abd Rahman, Siti Afifah Ismail

Abstract:

Safety issues, scene protection, and appropriate evidence collection must be handled at any biocrime scene. In any bio-incident or biocrime event there will be a scene, or multiple scenes, to be cordoned off for investigation. Evidence collection is critical in determining the type of microbe or toxin, its lethality, and its source. Consequently, a proper sampling method is required from the start of the investigation. The most significant challenges for the crime scene officer are deciding where to obtain samples, the best sampling method, and the sample sizes needed. Since evidence at a crime scene may be in liquid, viscous, or powder form, crime scene officers have difficulty determining which tools to use for sampling. To maximize sample collection, tools appropriate to the sampling method are necessary. This study aims to assist the crime scene officer in collecting liquid, viscous, and powder biological samples in sufficient quantity while preserving sample quality. In this research, observational tests on the collection of liquid, viscous, and powder samples were performed using UV light to assess quantity and quality. The density of the light emission varies with the collection method and sample type, so the best tools for collecting sufficient amounts of liquid, viscous, and powdered samples can be identified by observing UV light. Instead of active microorganisms, an invisible powder is used to assess sample collection with the various collection tools during a crime scene investigation. The liquid, powdered and viscous samples collected using different tools were analyzed by Fourier transform infrared spectroscopy with attenuated total reflection (FTIR-ATR). FTIR spectroscopy is commonly used for rapid discrimination, classification, and identification of intact microbial cells. The liquid, viscous and powdered samples collected using various tools were successfully observed using UV light. Furthermore, FTIR-ATR analysis showed that the collected samples are sufficient in quantity while preserving their quality.

Keywords: biological sample, crime scene, collection tool, UV light, forensic

Procedia PDF Downloads 195
856 A Cooperative Signaling Scheme for Global Navigation Satellite Systems

Authors: Keunhong Chae, Seokho Yoon

Abstract:

Recently, global navigation satellite systems (GNSS) such as Galileo and GPS have been employing more satellites to provide a higher degree of accuracy for the location service, calling for a more efficient signaling scheme among the satellites in the overall GNSS network. Spatial diversity can be an efficient signaling scheme in that it improves the network throughput; however, it requires multiple antennas, which could cause a significant increase in the complexity of the GNSS. Thus, a diversity scheme called cooperative signaling was proposed, in which virtual multiple-input multiple-output (MIMO) signaling is realized using only a single antenna at the transmit satellite of interest, with the neighboring satellites modeled as relay nodes. The main drawback of cooperative signaling is that the relay nodes receive the transmitted signal at different time instants, i.e., they operate in an asynchronous way, and thus the overall performance of the GNSS network can degrade severely. To tackle this problem, several modified cooperative signaling schemes were proposed; however, all of them are difficult to implement because they require signal decoding at the relay nodes. Although the implementation at the relay nodes could be made somewhat simpler by employing time-reversal and conjugation operations instead of signal decoding, it would be more efficient to implement the relay-node operations at the source node, which has more resources than the relay nodes. So, in this paper, we propose a novel cooperative signaling scheme in which the data signals are combined in a unique way at the source node, obviating the need for complex operations such as signal decoding, time reversal and conjugation at the relay nodes. The numerical results confirm that the proposed scheme provides the same cooperative diversity and bit error rate (BER) performance as the conventional scheme, while significantly reducing the complexity at the relay nodes. Acknowledgment: This work was supported by the National GNSS Research Center program of the Defense Acquisition Program Administration and the Agency for Defense Development.

Keywords: global navigation satellite network, cooperative signaling, data combining, nodes

Procedia PDF Downloads 280
855 Prediction of Seismic Damage Using Scalar Intensity Measures Based on Integration of Spectral Values

Authors: Konstantinos G. Kostinakis, Asimina M. Athanatopoulou

Abstract:

A key issue in seismic risk analysis within the context of Performance-Based Earthquake Engineering is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. The assessment of seismic performance strongly depends on the choice of the seismic Intensity Measure (IM), which quantifies the characteristics of a ground motion that are important to the nonlinear structural response. Several conventional ground motion IMs have been used to estimate their damage potential to structures, yet none of them has proved able to predict the seismic damage adequately. Therefore, alternative scalar intensity measures, which take into account not only ground motion characteristics but also structural information, have been proposed. Some of these IMs are based on integration of spectral values over a range of periods, in an attempt to account for the information provided by the shape of the acceleration, velocity or displacement spectrum. The adequacy of a number of these IMs in predicting the structural damage of 3D R/C buildings is investigated in the present paper. The investigated IMs, some structure-specific and some non-structure-specific, are defined via integration of spectral values. To this end, three plan-symmetric R/C buildings are studied. The buildings are subjected to 59 bidirectional earthquake ground motions, with the two horizontal accelerograms of each ground motion applied along the structural axes. The response is determined by nonlinear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as the overall structural damage index, and the values of these seismic damage measures are correlated with seven scalar ground motion IMs. The comparative assessment of the results revealed that the structure-specific IMs present higher correlation with the seismic damage of the three buildings. However, the adequacy of the IMs for estimating structural damage depends on the response parameter adopted. Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.
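A minimal sketch of one structure-specific IM of this family averages the spectral acceleration over a period band around the fundamental period T1. The band 0.2·T1 to 1.5·T1 and the synthetic spectrum below are illustrative assumptions, not the paper's IM definitions or ground-motion records.

```python
import numpy as np

# Synthetic response spectrum Sa(T) [g] on a period grid [s]; a real
# application would use the spectrum computed from a recorded accelerogram.
T = np.linspace(0.05, 4.0, 400)
Sa = 1.2 * np.exp(-(np.log(T) - np.log(0.5)) ** 2 / 0.8)

T1 = 0.8                      # fundamental period of the building [s] (assumed)
lo, hi = 0.2 * T1, 1.5 * T1   # integration band around T1 (assumed)
m = (T >= lo) & (T <= hi)

# Trapezoidal integration of Sa over the band, normalized by the band width,
# giving a band-averaged spectral acceleration as the scalar IM.
integral = np.sum(0.5 * (Sa[m][1:] + Sa[m][:-1]) * np.diff(T[m]))
im = integral / (hi - lo)
print(f"band-averaged Sa over [{lo:.2f}, {hi:.2f}] s: {im:.3f} g")
```

By spanning periods longer than T1, such an IM captures period elongation as the structure yields, which is one reason the structure-specific IMs correlate better with the computed damage than single-ordinate measures.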

Keywords: damage measures, bidirectional excitation, spectral based IMs, R/C buildings

Procedia PDF Downloads 328
854 Entrepreneurship Education: A Panacea for Entrepreneurial Intention of University Undergraduates in Ogun State, Nigeria

Authors: Adedayo Racheal Agbonna

Abstract:

The rising level of graduate unemployment in Nigeria has brought about the introduction of entrepreneurship education as a career option for self-reliance and self-employment. Sequel to this, it is important to understand the determining factors of entrepreneurial intention. Therefore, this research empirically investigated the influence of entrepreneurship education on the entrepreneurial intention of undergraduate students of selected universities in Ogun State, Nigeria. The study is significant to researchers, university policy makers, and the government. A survey research design was adopted. The population consisted of 17,659 final-year undergraduate students of universities in Ogun State. The study adopted a stratified random sampling technique. A table of sample size determination was used to determine the sample size at a 95% confidence level and 5% margin of error, arriving at a sample size of 1,877 respondents. The elements of the population were 400-level students of the selected universities. A structured questionnaire titled 'Entrepreneurship Education and Students' Entrepreneurial Intention' was administered. The reliability test yielded values of 0.716, 0.907 and 0.949 for infrastructure, perceived university support, and entrepreneurial intention respectively. In the same vein, the construct validity test yielded values of 0.711, 0.663 and 0.759 for infrastructure, perceived university support and entrepreneurial intention respectively. Findings revealed that each of the entrepreneurship education variables significantly affected intention: university infrastructure (B = -1.200, R² = 0.679, F(1, 1875) = 3958.345, p < 0.05) and perceived university support (B = -1.027, R² = 0.502, F(1, 1875) = 1924.612, p < 0.05).
The perceptions of respondents in public and private universities on entrepreneurship education showed a statistically significant difference (F(1, 1875) = 134.614, p < 0.05; F(1, 1875) = 363.439). The study concluded that entrepreneurship education positively influenced the entrepreneurial intention of undergraduate students in Ogun State, Nigeria, while university infrastructure and perceived university support had negative and significant effects on entrepreneurial intention. The study recommended that, to promote the entrepreneurial intention of university undergraduate students, infrastructure and university support that can arouse students' entrepreneurial intention should be put in place.

Keywords: entrepreneurship education, entrepreneurial intention, perceived university support, university infrastructure

Procedia PDF Downloads 235
853 Fully Coupled Porous Media Model

Authors: Nia Mair Fry, Matthew Profit, Chenfeng Li

Abstract:

This work focuses on the development and implementation of a fully implicit-implicit, coupled mechanical deformation and porous flow, finite element software tool. The fully implicit software accurately predicts classical fundamental analytical solutions such as the Terzaghi consolidation problem. Furthermore, it can capture other analytical solutions less well known in the literature, such as Gibson's sedimentation rate problem and Coussy's problems investigating wellbore stability for poroelastic rocks. The mechanical volume strains are transferred to the porous flow governing equation in an implicit framework. This overcomes issues common in current industrial codes, which use explicit solvers for the mechanical governing equations and implicit solvers only on the porous flow side; this can potentially lead to instability and non-convergence in the coupled system, as well as results with a notable degree of error. The specification of a fully monolithic implicit-implicit coupled porous media code sees the solution of both seepage and mechanical equations in one matrix system, under a unified time-stepping scheme, which makes the problem definition much easier. When using an explicit solver, additional inputs such as the damping coefficient and mass scaling factor are required; these are circumvented with a fully implicit solution. Further, improved accuracy is achieved as the solution is not dependent on predictor-corrector methods for the pore fluid pressure solution, but at the potential cost of reduced stability. In testing of this fully monolithic porous media code, the fully implicit coupled scheme is compared against an existing staggered explicit-implicit coupled scheme across a range of geotechnical problems.
These cases include 1) Biot coefficient calculation, 2) consolidation theory with Terzaghi analytical solution, 3) sedimentation theory with Gibson analytical solution, and 4) Coussy well-bore poroelastic analytical solutions.
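The monolithic idea, assembling both field equations into one matrix and solving for displacements and pressures together, can be sketched on a toy two-degree-of-freedom poroelastic system. The block matrices below are hypothetical placeholders, not the paper's discretization.

```python
import numpy as np

# Toy coupled system: mechanics  K u + C p = f,  flow  C^T u + S p = q.
K = np.array([[4.0, -1.0], [-1.0, 3.0]])   # stiffness block (hypothetical)
S = np.array([[2.0, 0.0], [0.0, 2.0]])     # flow/storage block (hypothetical)
C = np.array([[0.5, 0.0], [0.0, 0.5]])     # volume-strain / pressure coupling
f = np.array([1.0, 0.0])                   # mechanical load vector
q = np.array([0.0, 1.0])                   # fluid source vector

# Monolithic assembly: one matrix, one solve, both fields at once.
A = np.block([[K, C], [C.T, S]])
b = np.concatenate([f, q])
x = np.linalg.solve(A, b)
u, p = x[:2], x[2:]                        # displacements and pore pressures
```

A staggered scheme would instead alternate solves of `K u = f - C p` and `S p = q - C.T u`, iterating until (or whether) the two fields agree, which is where the stability issues mentioned above arise.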

Keywords: coupled, implicit, monolithic, porous media

Procedia PDF Downloads 138
852 Aerial Photogrammetry-Based Techniques to Rebuild the 30-Years Landform Changes of a Landslide-Dominated Watershed in Taiwan

Authors: Yichin Chen

Abstract:

Taiwan is an island characterized by active tectonics and high erosion rates. Monitoring Taiwan's dynamic landscape is an important issue for disaster mitigation, geomorphological research, and watershed management. Long-term, high-spatiotemporal-resolution landform data is essential for quantifying and simulating geomorphological processes and developing warning systems. Recently, advances in unmanned aerial vehicle (UAV) and computational photogrammetry technology have provided an effective way to rebuild and monitor topography changes at high spatiotemporal resolution. This study rebuilds the 30-year landform change in the Aiyuzi watershed over 1986-2017 using aerial photogrammetry-based techniques. The Aiyuzi watershed, located in central Taiwan with an area of 3.99 km², is known for its frequent landslide and debris flow disasters. This study took aerial photos using a UAV and collected multi-temporal historical stereo photographs taken by the Aerial Survey Office of Taiwan's Forestry Bureau. To rebuild the orthoimages and digital surface models (DSMs), Pix4DMapper, a photogrammetry software package, was used. Furthermore, to control model accuracy, a set of ground control points was surveyed using e-GPS. The results show that the generated DSMs have ground sampling distances (GSD) of ~10 cm and ~0.3 cm from the UAV and historical photographs, respectively, and a vertical error of ~1 m. Comparison of the DSMs shows that many deep-seated landslides (with depths over 20 m) occurred upstream in the Aiyuzi watershed. Even though a large amount of sediment is delivered from the landslides, the steep main channel has sufficient capacity to transport sediment out of the channel and to erode the river bed to ~20 m in depth. Most sediment is transported to the outlet of the watershed and deposited in the downstream channel. This case study shows that UAV and photogrammetry technology are effective tools for monitoring topographic change.
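Once two co-registered DSMs exist, landform change is obtained by differencing them, with changes smaller than the reported ~1 m vertical error discarded as noise. A minimal sketch with made-up elevations:

```python
import numpy as np

def dsm_change(dsm_new, dsm_old, level_of_detection=1.0):
    """DEM of Difference between two co-registered DSMs (metres).

    Cells whose |change| is below the level of detection (set here to the
    ~1 m vertical error reported for the reconstructed models) are zeroed
    as noise.
    """
    dod = dsm_new - dsm_old
    dod[np.abs(dod) < level_of_detection] = 0.0
    return dod

old = np.array([[100.0, 102.0], [98.0, 99.0]])   # hypothetical 1986 DSM (m)
new = np.array([[ 80.0, 101.5], [98.5, 120.0]])  # hypothetical 2017 DSM (m)
dod = dsm_change(new, old)  # negative = erosion/landslide scar, positive = deposit
```

Summing the positive and negative cells of the difference grid (times cell area) gives the deposition and erosion volumes used in sediment budgeting.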

Keywords: aerial photogrammetry, landslide, landform change, Taiwan

Procedia PDF Downloads 157
851 Development of an Optimised, Automated Multidimensional Model for Supply Chains

Authors: Safaa H. Sindi, Michael Roe

Abstract:

This project divides supply chain (SC) models into seven Eras, according to the evolution of the market's needs over time. The five earliest Eras describe the emergence of supply chains, while the last two Eras are to be created. Research objectives: the aim is to generate the two latest Eras, with their respective models, focusing on consumable goods. Era Six contains the Optimal Multidimensional Matrix (OMM), which incorporates most characteristics of the SC and allocates them into four quarters (Agile, Lean, Leagile, and Basic SC). This will help companies, especially SMEs, plan their optimal SC route. Era Seven creates an Automated Multidimensional Model (AMM), which upgrades the matrix of Era Six by accounting for all supply chain factors (i.e., offshoring, sourcing, risk) in an interactive system with heuristic learning that helps larger companies and industries select the best SC model for their market. Methodologies: data collection is based on a Fuzzy-Delphi study that analyses statements using fuzzy logic. The first round of the Delphi study contains statements (fuzzy rules) about the matrix of Era Six; the second round contains the feedback from the first round, and so on. Preliminary findings: both models are applicable. The matrix of Era Six reduces the complexity of choosing the best SC model for SMEs by helping them identify the strategy, among Basic SC, Lean, Agile and Leagile SC, that is tailored to their needs. The interactive heuristic learning in the AMM of Era Seven will help mitigate error and aid large companies in identifying and re-strategizing the best SC model and distribution system for their market and commodity, hence increasing efficiency. Potential contributions to the literature: the problematic issue facing many companies is deciding which SC model or strategy to adopt, given the many models and definitions developed over the years.
This research simplifies the decision by putting most definitions in a template and most models in the matrix of Era Six. This research is original: the division of SCs into Eras, the matrix of Era Six (OMM) with Fuzzy-Delphi, and heuristic learning in the AMM of Era Seven provide a synergy of tools that have not been combined before in the area of SC. Additionally, the OMM of Era Six is unique in combining most characteristics of the SC, which is an original concept in itself.
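The Fuzzy-Delphi rounds described above can be sketched as follows: each expert rates a statement with a triangular fuzzy number, the ratings are averaged component-wise, and the centroid of the aggregate decides acceptance. The 0.7 consensus cut-off below is a common convention in Fuzzy-Delphi work, not a value taken from this study.

```python
def defuzzify_triangular(l, m, u):
    """Centroid of a triangular fuzzy number (l, m, u)."""
    return (l + m + u) / 3.0

def fuzzy_delphi_accept(ratings, threshold=0.7):
    """Aggregate expert ratings (triangular fuzzy numbers on [0, 1]) by
    component-wise mean, defuzzify, and accept the statement if the
    crisp score reaches the consensus threshold (0.7 is a common
    convention, assumed here)."""
    n = len(ratings)
    l = sum(r[0] for r in ratings) / n
    m = sum(r[1] for r in ratings) / n
    u = sum(r[2] for r in ratings) / n
    score = defuzzify_triangular(l, m, u)
    return score, score >= threshold

# Three hypothetical expert ratings of one fuzzy rule about the matrix:
ratings = [(0.5, 0.7, 0.9), (0.7, 0.9, 1.0), (0.3, 0.5, 0.7)]
score, accepted = fuzzy_delphi_accept(ratings)
```

Rejected statements would be fed back to the panel in the next Delphi round, exactly as the round-by-round feedback loop above describes.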

Keywords: Leagile, automation, heuristic learning, supply chain models

Procedia PDF Downloads 389
850 Revolutionizing Project Management: A Comprehensive Review of Artificial Intelligence and Machine Learning Applications for Smarter Project Execution

Authors: Wenzheng Fu, Yue Fu, Zhijiang Dong, Yujian Fu

Abstract:

The integration of artificial intelligence (AI) and machine learning (ML) into project management is transforming how engineering projects are executed, monitored, and controlled. This paper provides a comprehensive survey of AI and ML applications in project management, systematically categorizing their use in key areas such as project data analytics, monitoring, tracking, scheduling, and reporting. As project management becomes increasingly data-driven, AI and ML offer powerful tools for improving decision-making, optimizing resource allocation, and predicting risks, leading to enhanced project outcomes. The review highlights recent research that demonstrates the ability of AI and ML to automate routine tasks, provide predictive insights, and support dynamic decision-making, which in turn increases project efficiency and reduces the likelihood of costly delays. This paper also examines the emerging trends and future opportunities in AI-driven project management, such as the growing emphasis on transparency, ethical governance, and data privacy concerns. The research suggests that AI and ML will continue to shape the future of project management by driving further automation and offering intelligent solutions for real-time project control. Additionally, the review underscores the need for ongoing innovation and the development of governance frameworks to ensure responsible AI deployment in project management. The significance of this review lies in its comprehensive analysis of AI and ML’s current contributions to project management, providing valuable insights for both researchers and practitioners. By offering a structured overview of AI applications across various project phases, this paper serves as a guide for the adoption of intelligent systems, helping organizations achieve greater efficiency, adaptability, and resilience in an increasingly complex project management landscape.

Keywords: artificial intelligence, decision support systems, machine learning, project management, resource optimization, risk prediction

Procedia PDF Downloads 22
849 Comparative Numerical Simulations of Reaction-Coupled Annular and Free-Bubbling Fluidized Beds Performance

Authors: Adefarati Oloruntoba, Yongmin Zhang, Hongliang Xiao

Abstract:

An annular fluidized bed (AFB) is gaining extensive application in the process industry due to its efficient gas-solids contacting, but a direct evaluation of its reaction performance is still lacking. In this paper, comparative 3D Euler-Lagrange multiphase particle-in-cell (MP-PIC) computations are performed to assess the reaction performance of an AFB relative to a bubbling fluidized bed (BFB) in an FCC regeneration process. By using the energy-minimization multi-scale (EMMS) drag model with a suitable heterogeneity index, the MP-PIC simulation predicts the typical fountain region in the AFB and the solids holdup of the BFB, consistent with experiments. Coke combustion rate, flue gas and temperature profiles are utilized as performance indicators, while the related bed hydrodynamics are explored to account for the different performance under varying superficial gas velocities (0.5 m/s, 0.6 m/s, and 0.7 m/s). Simulation results indicate that the burning rates of coke and its species are much the same in both beds, albeit with a marginal increase in the BFB. Similarly, the shapes and evolution times of the flue gas (CO, CO₂, H₂O and O₂) curves are indistinguishable and match the coke combustion rates. However, the AFB has a higher proclivity for temperature gradients, as higher gas and solids temperatures are predicted in the freeboard. Moreover, for both beds, the effect of superficial gas velocity is conspicuous only on the temperature and negligible on combustion efficiency and effluent gas emissions, due to the constant gas volumetric flow rate and bed loading criteria. Cross-flow of solids from the annulus to the spout region, as well as the high primary gas flow in the AFB, are the underlying mechanisms for its unique gas-solids hydrodynamics (pressure, solids holdup, velocity, mass flux) and local spatial homogeneity, which in turn influence the reactor performance. Overall, the study portrays the AFB as a cheap alternative to the BFB for catalyst regeneration.

Keywords: annular fluidized bed, bubbling fluidized bed, coke combustion, flue gas, fountaining, CFD, MP-PIC, hydrodynamics, FCC regeneration

Procedia PDF Downloads 163
848 Pavement Management for a Metropolitan Area: A Case Study of Montreal

Authors: Luis Amador Jimenez, Md. Shohel Amin

Abstract:

Pavement performance models are based on projections of observed traffic loads, which makes it uncertain to study funding strategies in the long run if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic developments could change traffic flows. This study addresses both issues through a case study for the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A back-propagation neural network (BPN) with a generalized delta rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming for lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to maintain good condition for arterial and local roads in Montreal. Montreal drivers prefer public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, and ESALs are expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state seems to be reached.
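The generalized delta rule named above is ordinary back-propagation with a momentum term added to the weight update. A minimal one-weight sketch is given below; the learning rate and momentum values are illustrative, not the values investigated in the study.

```python
def gdr_update(w, grad, velocity, lr=0.1, momentum=0.9):
    """One generalized-delta-rule step: gradient descent with momentum,
    the update used to train back-propagation networks. The lr and
    momentum values are illustrative assumptions."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Minimise the toy loss (w - 3)^2 for a single weight; gradient is 2*(w - 3).
w, v = 0.0, 0.0
for _ in range(300):
    w, v = gdr_update(w, 2.0 * (w - 3.0), v)
# w converges toward the minimiser 3.0
```

Tuning `lr` and `momentum` trades convergence speed against oscillation, which is precisely the calibration question the study raises for deterioration-rate estimation.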

Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization

Procedia PDF Downloads 460
847 Water Harvest and Recycling with Principles of Permaculture in Rural Buildings in Southeastern Anatolia Region, Turkey

Authors: Muhammed Gündoğan

Abstract:

Permaculture is an important source of science and experience that can ensure the integration of sustainable architecture with nature. Many applications have been carried out in rural areas for generations on the principle of benefiting from the self-renewal potential of nature. This culture, transferred from generation to generation together with architectural disciplines, has the potential to significantly improve the sustainability of rural areas and is an important guide with its nature-based solution proposals. Şanlıurfa has arid and semi-arid climate characteristics. Although it has substantial agricultural potential, water is limited, especially in rural areas. In the region, rainwater harvesting practices such as artificial water canals and cisterns have been used for a long time. However, these solutions remained mostly at the urban scale, and their reflections at the building scale were restricted and inadequate. Impermeable surfaces are required for water harvesting, but harvesting is not possible where rural buildings are surrounded by cultivated land; therefore, existing structures are important in terms of applicability. In this context, considering the typology of traditional Şanlıurfa houses, the aim of the project was to create a proposal for the limited potable and utility water that is a serious problem, especially for rural buildings in Şanlıurfa. The project proposal provides roof systems that work integrated with the structural form of traditional Şanlıurfa houses, rainwater collection systems in the inner courtyard, and greywater recycling. While the average precipitation was 453.7 kg/m² between 1929 and 2012, this value was measured as 622.7 kg/m² in 2012. Greywater was used to produce natural fertilizer and compost for small-scale fruit and vegetable gardens, combined with the principles of permaculture to make it a lifestyle.
As a result, it is estimated that a total of 976.4 m³ of water can be saved annually within the scope of the project, comprising an average of 158.8 m³ from rainwater recycling and 817.6 m³ from greywater recycling.
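The reported total is the sum of the two recycling streams; as a quick arithmetic check:

```python
# Annual savings reported in the abstract (m³ per year).
rainwater_m3 = 158.8   # average rainwater recycling
greywater_m3 = 817.6   # greywater recycling
total_m3 = rainwater_m3 + greywater_m3   # matches the reported 976.4 m³
```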

Keywords: rural, traditional residential building, permaculture, rainwater harvesting, greywater recycling

Procedia PDF Downloads 131
846 Synthesis and Characterization of Graphene Composites with Application for Sustainable Energy

Authors: Daniel F. Sava, Anton Ficai, Bogdan S. Vasile, Georgeta Voicu, Ecaterina Andronescu

Abstract:

The energy crisis and environmental contamination are very serious problems; therefore, searching for better and sustainable renewable energy is a must. It is predicted that global energy demand will double by 2050. Solar water splitting and photocatalysis are considered among the solutions to these issues. The use of oxide semiconductors for solar water splitting and photocatalysis started in 1972 with the experiments of Fujishima and Honda on TiO₂ electrodes. Since then, the evolution of nanoscience and characterization methods has led to better control of the size, shape and properties of materials. Although the past decade's advancements are astonishing, for these applications the properties have to be controlled at a much finer level, allowing control of charge-carrier lifetimes, energy level positions, charge trapping centers, etc. Graphene has attracted a lot of attention since its discovery in 2004 due to the excellent electrical, optical, mechanical and thermal properties it possesses. These properties make it an ideal support for photocatalysts; thus graphene composites with oxide semiconductors are of great interest. We present in this work the synthesis and characterization of graphene-related materials, oxide semiconductors and their composites. These materials can be used in constructing devices for different applications (batteries, water splitting devices, solar cells, etc.), showing their application flexibility. The synthesized materials are different morphologies and sizes of TiO₂, ZnO and Fe₂O₃ obtained through hydrothermal and sol-gel methods, and graphene oxide synthesized through a modified Hummers method and reduced with different agents. Graphene oxide and its reduced form could also be used as single materials for transparent conductive films.
The obtained single materials and composites were characterized through several methods: XRD, SEM, TEM, IR spectroscopy, Raman, XPS and BET adsorption/desorption isotherms. The results show how the properties vary with the synthesis parameters and with the size and morphology of the particles.

Keywords: composites, graphene, hydrothermal, renewable energy

Procedia PDF Downloads 498
845 A Survey to Determine the Incidence of Piglets' Mortality in Outdoor Farms in New Zealand

Authors: Patrick C. H. Morel, Ian W. Barugh, Kirsty L. Chidgey

Abstract:

The aim of this study was to quantify the level of piglet deaths in outdoor farrowing systems in New Zealand. A total of 14 farms were visited, the farmers interviewed, and data collected. A total of 10,154 sows were kept on those farms, representing an estimated 33% of the NZ sow herd, or 80% of the outdoor sow herd, in 2016. Data from 25,911 litters were available for the different analyses. The characteristics and reproductive performance for the years 2015-2016 of the 14 farms surveyed were analysed, and the following results were obtained. The average percentage of stillbirths was 7.1%, ranging between 3.5% and 10.7%, and the average pre-weaning live-born mortality was 16.7%, ranging between 3.7% and 23.6%. The majority of piglet deaths (89%) occurred during the first week after birth, with 81% of deaths occurring up to day three. The average number of piglets born alive was 12.3 (range 8.0 to 14.0), and the average number of piglets weaned per sow per year was 22.4 (range 10.5-27.3). The average stocking rate per ha (number of sows and mated gilts) was 15.3, ranging from 2.8 to 28.6. The sow-to-boar ratio averaged 20.9:1 and ranged from 7.1:1 to 63:1. The sow replacement rate ranged between 37% and 78%. There was large variation in piglet live-born mortality, both between months within a farm and between farms within a given month. The monthly recorded piglet mortality ranged between 7.7% and 31.5%, and there was no statistically significant difference between months in the number of piglets born, born alive or weaned, or in pre-weaning piglet mortality. Twelve different types of hut/farrowing systems were used on the 14 farms. No difference in piglet mortality was observed between A-frame, modified A-frame and box-shaped huts. There was a positive relationship between the average number of piglets born per litter and the number born alive (r = 0.975) or the number weaned per litter (r = 0.845).
Moreover, as the average number of piglets born alive increases, both the pre-weaning live-born mortality rate and the number of piglets weaned increase. An increase of one piglet in the number born alive corresponds to an increase of 2.9% in live-born mortality and an increase of 0.56 piglets weaned. Farmers reported that staff are the key to success, the key attributes being good, reliable workers with attention to detail and skill with the stock.
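The reported slopes give a simple linear projection of litter outcomes. The sketch below applies them to an illustrative litter; the baseline weaned-per-litter value is an assumption derived from the survey averages (12.3 born alive at 16.7% mortality), not a figure quoted in the abstract.

```python
def project_litter_outcomes(extra_born_alive,
                            base_mortality_pct=16.7, base_weaned=10.2):
    """Linear projection using the survey's reported slopes: each extra
    piglet born alive adds 2.9 percentage points of live-born mortality
    and 0.56 weaned piglets. base_weaned is an assumed baseline."""
    mortality_pct = base_mortality_pct + 2.9 * extra_born_alive
    weaned = base_weaned + 0.56 * extra_born_alive
    return mortality_pct, weaned

# Two piglets above the survey-average litter size:
mortality_pct, weaned = project_litter_outcomes(2)
```

The projection shows the trade-off the survey highlights: larger litters wean more piglets overall despite the rising mortality rate.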

Keywords: mortality, piglets, outdoor, pig farm

Procedia PDF Downloads 115
844 DNA Methylation Score Development for In utero Exposure to Paternal Smoking Using a Supervised Machine Learning Approach

Authors: Cristy Stagnar, Nina Hubig, Diana Ivankovic

Abstract:

The epigenome is a compelling candidate for mediating long-term responses to environmental effects that modify disease risk. The main goal of this research is to develop a machine learning-based DNA methylation score, which will be valuable in delineating the unique contribution of paternal epigenetic modifications to the germline impacting childhood health outcomes. It will also be a useful tool for validating self-reports of non-smoking and for adjusting epigenome-wide DNA methylation association studies for this early-life exposure. Using secondary data from two population-based methylation profiling studies, our DNA methylation score is based on CpG DNA methylation measurements from cord blood gathered from children whose fathers smoked pre- and peri-conceptually. Each child's mother and father fell into one of three class labels in the accompanying questionnaires: never smoker, former smoker, or current smoker. By applying different machine learning algorithms to the Accessible Resource for Integrated Epigenomic Studies (ARIES) sub-study of the Avon Longitudinal Study of Parents and Children (ALSPAC) data set, which we used for training and testing of our model, the best-performing algorithm for classifying the father-smoker, mother-never-smoker group was selected based on Cohen's κ. Error in the model was identified and optimized. The final DNA methylation score was further tested and validated in an independent data set. The result is a linear combination of the methylation values of selected probes, passed through a logistic link function, that accurately classified each group; the selected probes contributed the most towards classification. This yields a unique, robust DNA methylation score that combines information on DNA methylation and early-life exposure of offspring to paternal smoking during pregnancy, and that may be used to examine the paternal contribution to offspring health outcomes.
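The final score, a linear combination of probe methylation values through a logistic link, can be sketched as follows. The probe beta values, weights and intercept below are hypothetical stand-ins for the fitted model coefficients, which the abstract does not report.

```python
import math

def methylation_score(betas, weights, intercept):
    """Linear combination of CpG probe methylation (beta) values passed
    through a logistic link, giving the probability of the exposed
    (father smoked) class. Coefficients here are hypothetical."""
    z = intercept + sum(w * b for w, b in zip(weights, betas))
    return 1.0 / (1.0 + math.exp(-z))

betas = [0.82, 0.15, 0.60]        # hypothetical beta values in [0, 1]
weights = [2.0, -1.5, 0.8]        # hypothetical fitted weights
p = methylation_score(betas, weights, intercept=-0.9)
predicted = "father smoker" if p >= 0.5 else "never smoker"
```

In practice the weights would come from the model trained on the ARIES data and the 0.5 cut-off could be tuned against Cohen's κ.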

Keywords: epigenome, health outcomes, paternal preconception environmental exposures, supervised machine learning

Procedia PDF Downloads 185
843 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulation is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach utilizes autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of the training data and to reconstruct inputs from these representations, typically minimizing a reconstruction error such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibits high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation; we considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we propose a method that replaces image-specific dimensions with a structure independent of both the dimensions and the neural network model, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder-extracted features were used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
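The detection principle, flagging inputs whose reconstruction error exceeds a threshold calibrated on benign data, can be shown with a toy stand-in for the trained autoencoder. Here the "autoencoder" simply projects every input onto the benign mean, an assumption that mimics a model fitted only to benign examples; the threshold is likewise illustrative.

```python
import numpy as np

def flag_adversarial(x, reconstruct, threshold):
    """Flag an input as adversarial when the reconstruction MSE exceeds
    a threshold calibrated on benign data. `reconstruct` stands in for
    the trained autoencoder's forward pass."""
    mse = float(np.mean((x - reconstruct(x)) ** 2))
    return mse, mse > threshold

# Toy "autoencoder": projects any input onto the benign mean image (0.5).
reconstruct = lambda x: np.full_like(x, 0.5)

benign = np.full((8, 8), 0.5)         # benign image near the learned manifold
perturbed = benign + 0.2              # crude stand-in for an adversarial shift
mse_b, flag_b = flag_adversarial(benign, reconstruct, threshold=0.01)
mse_a, flag_a = flag_adversarial(perturbed, reconstruct, threshold=0.01)
```

A real deployment would calibrate the threshold on the benign-set MSE distribution (the reported 0.014 RGB MSE suggests the scale involved).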

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 113
842 Levels of CTX1 in Premenopausal Osteoporotic Women Study Conducted in Khyberpuktoonkhwa Province, Pakistan

Authors: Mehwish Durrani, Rubina Nazli, Muhammad Abubakr, Muhammad Shafiq

Abstract:

Objectives: To evaluate whether high socio-economic status, urbanization, and decreased ambulation can lead to early osteoporosis in women reporting from the Peshawar region. Study Design: A descriptive cross-sectional study was done. The sample size was 100 subjects, using a 30% proportion of osteoporosis, 95% confidence level, and 9% margin of error under the WHO software for sample size determination. Place and Duration of Study: This study was carried out over a six-month period in the tertiary referral health care facilities of Peshawar, viz. PGMI Hayatabad Medical Complex, Peshawar, Khyber Pakhtunkhwa Province, Pakistan. Ethical approval for the study was obtained from the Institutional Ethical Research Board (IERD) at Post Graduate Medical Institute, Hayatabad Medical Complex, Peshawar. Patients and Methods: Levels of CTX1, as a marker of bone degradation, were determined in radiographically assessed perimenopausal women. These females were randomly selected and screened for osteoporosis. Measurements comprised hemoglobin (g/dl), ESR by the Westergren method (mm in 1 hour), serum calcium (mg/dl), serum alkaline phosphatase (IU/l), radiographic grade of osteoporosis according to the Singh index (1-6), and CTX1 level (pg/ml). Results: High levels of CTX1 were observed in perimenopausal women radiographically diagnosed as osteoporotic. High socio-economic class also predisposed to osteoporosis. Decreased ambulation, another risk factor, showed a significant association with increased levels of CTX1. Conclusion: The results of this study suggest that minimal ambulation and high socio-economic class both had a significant association with increased levels of serum CTX1, which in turn can lead to osteoporosis and its complications.

Keywords: osteoporosis, CTX1, perimenopausal women, Hayatabad Medical Complex, Khyberpuktoonkhwa

Procedia PDF Downloads 332
841 Seashore Debris Detection System Using Deep Learning and Histogram of Gradients-Extractor Based Instance Segmentation Model

Authors: Anshika Kankane, Dongshik Kang

Abstract:

Marine debris has a significant influence on coastal environments, damaging biodiversity and causing loss and damage to the marine and ocean sectors. A functional, cost-effective and automatic approach has been adopted to address this problem. Computer vision combined with a deep learning-based model is proposed to identify and categorize marine debris of seven kinds at different beach locations in Japan. This research compares state-of-the-art deep learning models with a suggested model architecture that is utilized as a feature extractor for debris categorization. The model is proposed to detect seven categories of litter using a manually constructed debris dataset, with the help of Mask R-CNN for instance segmentation and a shape matching network called HOGShape, so that litter can be cleaned up in time by clean-up organizations using the system's warning notifications. The manually constructed dataset for this system was created by annotating images taken by a fixed KaKaXi camera using the CVAT annotation tool with seven category labels. A HOG feature extractor pre-trained on LIBSVM is used, along with multiple template matching between the HOG maps of images and the HOG maps of templates, to improve the predicted masked images obtained via Mask R-CNN training. This system is intended to alert clean-up organizations in a timely manner with warning notifications based on live recorded beach debris data. The suggested network improves the misclassified debris masks of debris objects with different illuminations, shapes and viewpoints, and of occluded litter with vague visibility.
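The HOG maps used for template matching are built from per-cell histograms of gradient orientations weighted by gradient magnitude. The sketch below computes one such cell histogram from first principles; it is a minimal illustration of the descriptor, not the paper's full block-normalised HOG pipeline.

```python
import numpy as np

def hog_cell(cell, n_bins=9):
    """Histogram of oriented gradients for a single cell: gradient
    orientations (unsigned, 0-180°) binned and weighted by gradient
    magnitude. A minimal sketch without block normalisation."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0), weights=mag)
    return hist

# A vertical edge: gradients point horizontally, so all energy lands in
# the first (0°) orientation bin.
cell = np.tile([0, 0, 1, 1], (4, 1)) * 255.0
hist = hog_cell(cell)
```

Template matching then compares such histograms between a debris template and candidate regions, which is where shape cues help disambiguate Mask R-CNN's masks.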

Keywords: computer vision, debris, deep learning, fixed live camera images, histogram of gradients feature extractor, instance segmentation, manually annotated dataset, multiple template matching

840 Fault Tolerant and Testable Designs of Reversible Sequential Building Blocks

Authors: Vishal Pareek, Shubham Gupta, Sushil Chandra Jain

Abstract:

With increasing demand for high-speed computation, power consumption, heat dissipation, and chip size are posing challenges for logic design with conventional technologies. Recovery from bit loss and bit errors is another issue, requiring reversibility and fault tolerance in the computation. Reversible computing is emerging as an alternative to conventional technologies to overcome these problems and is helpful in diverse areas such as low-power design, nanotechnology, and quantum computing. The bit-loss issue can be solved through a unique input-output mapping, which requires reversibility, while the bit-error issue requires fault tolerance in the design. To incorporate reversibility, a number of combinational reversible-logic circuits have been developed; however, very few sequential reversible circuits have been reported in the literature. To make circuits fault tolerant, a number of fault models and test approaches have been proposed for reversible logic. In this paper, we incorporate fault tolerance into sequential reversible building blocks such as the D flip-flop, T flip-flop, JK flip-flop, R-S flip-flop, master-slave D flip-flop, and double-edge-triggered D flip-flop by making them parity preserving. The importance of this work lies in the fact that it provides designs of reversible sequential circuits that are completely testable for any stuck-at fault and single-bit fault. In our opinion, our designs of reversible building blocks are superior to existing designs in terms of quantum cost, hardware complexity, constant inputs, garbage outputs, and number of gates, and an online-testable D flip-flop is proposed here for the first time. We hope this work can be extended to build complex reversible sequential circuits.
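The two properties the paper builds on, reversibility (a bijective input-output mapping) and parity preservation (input parity equals output parity), can be checked exhaustively for small gates. As a minimal illustration, not the authors' flip-flop designs, here is such a check on the standard Fredkin (controlled-swap) gate, a well-known parity-preserving reversible gate:

```python
from itertools import product

def fredkin(c: int, a: int, b: int) -> tuple:
    """Fredkin (controlled-swap) gate: swaps a and b when control c is 1."""
    return (c, b, a) if c else (c, a, b)

# Exhaustively verify both properties over all 8 input patterns.
outputs = [fredkin(*bits) for bits in product((0, 1), repeat=3)]
assert len(set(outputs)) == 8          # bijective mapping => reversible
for bits in product((0, 1), repeat=3):
    # parity of inputs equals parity of outputs => parity preserving
    assert sum(bits) % 2 == sum(fredkin(*bits)) % 2
print("Fredkin gate is reversible and parity preserving")
```

The same exhaustive style of check scales to verifying parity preservation of small sequential building blocks over their state and input bits.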

Keywords: parity preserving gate, quantum computing, fault tolerance, flip-flop, sequential reversible logic

839 Objective Assessment of the Evolution of Microplastic Contamination in Sediments from a Vast Coastal Area

Authors: Vanessa Morgado, Ricardo Bettencourt da Silva, Carla Palma

Abstract:

The environmental pollution by microplastics is well recognized. Microplastics have already been detected in various matrices from distinct environmental compartments worldwide, some from remote areas. Various methodologies and techniques have been used to determine microplastics in such matrices, for instance, sediment samples from the ocean bottom. To determine microplastics in a sediment matrix, the sample is typically sieved through a 5 mm mesh, digested to remove the organic matter, and density separated to isolate microplastics from the denser part of the sediment. The physical analysis of microplastics consists of visual analysis under a stereomicroscope to determine particle size, colour, and shape. The chemical analysis is performed with an infrared spectrometer coupled to a microscope (micro-FTIR), allowing identification of the chemical composition of a microplastic, i.e., the type of polymer. Creating legislation and policies to control and manage (micro)plastic pollution is essential to protect the environment, namely coastal areas. Regulation is defined from the known relevance and trends of the pollution type. This work discusses the assessment of contamination trends in a 700 km² oceanic area affected by contamination heterogeneity, sampling representativeness, and the uncertainty of the analysis of the collected samples. The methodology developed consists of objectively identifying meaningful variations in microplastic contamination through Monte Carlo simulation of all uncertainty sources. This work allowed us to conclude unequivocally that the contamination level of the studied area did not vary significantly between two consecutive years (2018 and 2019) and that PET microplastics are the predominant polymer type. The comparison of contamination levels was performed at a 99% confidence level. The developed know-how is crucial for the objective and binding determination of microplastic contamination in relevant environmental compartments.
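The abstract does not give the actual uncertainty budget, but the general shape of the comparison, propagating sampling and analytical uncertainty by Monte Carlo and checking whether the 99% interval for the between-year difference includes zero, can be sketched as below. All per-station values and uncertainty magnitudes here are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_area_mean(sample_means, sampling_sd, analysis_rel_sd, n=100_000):
    """Propagate sampling and analytical uncertainty for one year's mean
    contamination (items/kg of sediment) by Monte Carlo simulation."""
    # Bootstrap over stations captures spatial heterogeneity.
    draws = rng.choice(sample_means, size=(n, len(sample_means)), replace=True)
    draws = draws + rng.normal(0.0, sampling_sd, draws.shape)      # sampling uncertainty
    draws = draws * rng.normal(1.0, analysis_rel_sd, draws.shape)  # analytical uncertainty
    return draws.mean(axis=1)

# Hypothetical per-station means for two consecutive years (items/kg).
year_a = simulate_area_mean(np.array([12.0, 15.0, 9.0, 14.0, 11.0]), 2.0, 0.10)
year_b = simulate_area_mean(np.array([13.0, 14.0, 10.0, 16.0, 12.0]), 2.0, 0.10)

diff = year_b - year_a
low, high = np.percentile(diff, [0.5, 99.5])  # 99% Monte Carlo interval
significant = not (low <= 0.0 <= high)
print(f"99% interval for the difference: [{low:.2f}, {high:.2f}]; significant: {significant}")
```

With these placeholder inputs the interval straddles zero, i.e., no significant change, mirroring the kind of conclusion the abstract reports for 2018 versus 2019.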

Keywords: measurement uncertainty, micro-ATR-FTIR, microplastics, ocean contamination, sampling uncertainty
