Search results for: SQL injection attack classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3548

518 Gender Estimation by Means of Quantitative Measurements of Foramen Magnum: An Analysis of CT Head Images

Authors: Thilini Hathurusinghe, Uthpalie Siriwardhana, W. M. Ediri Arachchi, Ranga Thudugala, Indeewari Herath, Gayani Senanayake

Abstract:

The foramen magnum is better protected than other skeletal structures during high-impact and severely disruptive injuries. Therefore, it is worthwhile to explore whether its measurements can be used to determine human gender, which is vital in forensic and anthropological studies. The aim was to find out whether quantitative measurements of the foramen magnum can serve as an anatomical indicator for human gender estimation and to evaluate the gender-dependent variations of the foramen magnum using quantitative measurements. One hundred and thirteen randomly selected subjects who underwent CT head scans at Sri Jayawardhanapura General Hospital of Sri Lanka within a period of six months were included in the study. The sample contained 58 males (48.76 ± 14.7 years old) and 55 females (47.04 ± 15.9 years old). The maximum length of the foramen magnum (LFM), maximum width of the foramen magnum (WFM), minimum distance between the occipital condyles (MnD), and maximum interior distance between the occipital condyles (MxID) were measured, and AreaT and AreaR were calculated. Gender was estimated using binomial logistic regression. The mean values of all explanatory variables (LFM, WFM, MnD, MxID, AreaT, and AreaR) were greater among males than females. All explanatory variables except MnD (p = 0.669) were statistically significant (p < 0.05). AreaT and AreaR demonstrated significant bivariate correlations with the explanatory variables. The results evidenced that WFM and MxID were the best measurements for predicting gender according to binomial logistic regression. The estimated model was log(p/(1-p)) = 10.391 - 0.136×MxID - 0.231×WFM, where p is the probability of being female. The classification accuracy given by this model was 65.5%. The quantitative measurements of the foramen magnum can be used as a reliable anatomical marker for human gender estimation in the Sri Lankan context.
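
For readers who want to apply the reported model, the sketch below simply evaluates the published binomial logistic regression in Python. The coefficients are taken directly from the abstract; the measurement values and the 0.5 decision threshold are illustrative assumptions, not study data.

```python
import math

def prob_female(mxid: float, wfm: float) -> float:
    """Probability of being female from the model reported above:
    log(p / (1 - p)) = 10.391 - 0.136*MxID - 0.231*WFM."""
    logit = 10.391 - 0.136 * mxid - 0.231 * wfm
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical foramen magnum measurements (illustrative only, not from the study).
p = prob_female(mxid=28.0, wfm=30.5)
print(f"P(female) = {p:.3f} ->", "female" if p >= 0.5 else "male")
```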

Keywords: foramen magnum, forensic and anthropological studies, gender estimation, logistic regression

Procedia PDF Downloads 133
517 Biotechnological Methods for the Grouting of the Tunneling Space

Authors: V. Ivanov, J. Chu, V. Stabnikov

Abstract:

Different biotechnological methods for the production of construction materials and for the performance of construction processes in situ are being developed within the new scientific discipline of Construction Biotechnology. The aim of this research was to develop and test new biotechnologies and biotechnological grouts for minimizing the hydraulic conductivity of fractured rocks and porous soil. This is essential to minimize the flow rate of groundwater into construction sites, into the tunneling space before and after excavation, and inside levees, as well as to stop water seepage from aquaculture ponds, agricultural channels, radioactive waste or toxic chemical storage sites, landfills, and polluted soils. Conventional fine or ultrafine cement grouts or chemical grouts have such restrictions as high cost, high viscosity, and sometimes toxicity, but biogrouts, which are based on microbial or enzymatic activities and some inexpensive inorganic reagents, could be more suitable in many cases because of their lower cost and low or zero toxicity. Due to these advantages, the development of biotechnologies for biogrouting is growing exponentially. However, the biogrout most popular at present, which is based on the activity of urease-producing bacteria initiating crystallization of calcium carbonate from a calcium salt, has such disadvantages as the production of toxic ammonium/ammonia and the development of high pH. Therefore, the aim of our studies was the development and testing of new biogrouts that are environmentally friendly and inexpensive enough for large-scale geotechnical, construction, and environmental applications. New microbial biotechnologies were studied and tested in sand columns, fissured rock samples, a 1 m³ tank with sand, and a pack of stone sheets serving as models of porous soil and fractured rocks. Several biotechnological methods showed positive results: 1) biogrouting using sequential desaturation of sand by injection of denitrifying bacteria and medium, followed by biocementation using urease-producing bacteria, urea, and a calcium salt, decreased the hydraulic conductivity of sand to 2×10⁻⁷ m s⁻¹ after 17 days of treatment and consumed almost three times fewer reagents than conventional calcium- and urea-based biogrouting; 2) biogrouting using slime-producing bacteria decreased the hydraulic conductivity of sand to 1×10⁻⁶ m s⁻¹ after 15 days of treatment; 3) biogrouting of rocks with a fissure width of 65×10⁻⁶ m using a calcium bicarbonate solution, produced from CaCO₃ and CO₂ under 30 bar pressure, decreased the hydraulic conductivity of the fissured rocks to 2×10⁻⁷ m s⁻¹ after 5 days of treatment. These bioclogging technologies could have many advantages over conventional construction materials and processes and can be used in geotechnical engineering, agriculture and aquaculture, and environmental protection.

Keywords: biocementation, bioclogging, biogrouting, fractured rocks, porous soil, tunneling space

Procedia PDF Downloads 191
516 Assessment of Knowledge, Awareness about Hemorrhoids Causes and Stages among the General Public of Saudi Arabia

Authors: Asaiel Mubark Al Hadi

Abstract:

Background: Hemorrhoids, sometimes known as piles, are a frequent anorectal condition characterized by a weakening of the anal cushion and its supporting tissue as well as spasms of the internal sphincter. Hemorrhoids are most frequently identified by painless bright red bleeding, prolapse of annoying grape-like tissue, itching, or a combination of symptoms. Digital rectal examination (DRE) and anoscopy are used for diagnosis. Constipation, a low-fiber diet, a high body mass index (BMI), pregnancy, and reduced physical activity are among the factors typically thought to increase the risk of hemorrhoids. Goligher's classification is the most commonly used hemorrhoid grading scheme; it comprises four degrees that reflect the severity of the condition. The purpose of this study is to assess the level of knowledge and awareness of the causes and stages of hemorrhoids among the public of Saudi Arabia. Method: This cross-sectional study was conducted in Saudi Arabia between October 2022 and December 2022, with a target sample of at least 384 participants aged above 18 years. Data were collected using a pre-tested questionnaire and analyzed using SPSS. Results: The study included 1410 participants, of whom 69.9% were female and 30.1% male; 53.7% were aged 20-30 years. Seventeen percent of participants had hemorrhoids and 42% had a relative who had hemorrhoids. 42.8% of participants could identify stage 1 of hemorrhoids correctly, 44.7% identified stage 2 correctly, 46.7% identified stage 3 correctly, and 58.1% identified stage 4 correctly. Only 28.9% of participants had a high level of knowledge about hemorrhoids, 62.7% had moderate knowledge, and 8.4% had low knowledge. Conclusion: The Saudi general population has poor knowledge of hemorrhoids, their causes, and their management. There was a significant association between hemorrhoid knowledge scores and age, gender, residence area, and employment.

Keywords: hemorrhoids, external hemorrhoid, internal hemorrhoid, anal fissure, hemorrhoid stages, prolapse, rectal bleeding

Procedia PDF Downloads 69
515 Stress and Rhythm in the Educated Nigerian Accent of English

Authors: Nkereke M. Essien

Abstract:

The intention of this paper is to examine stress in the Educated Nigerian Accent of English (ENAE) with the aim of analyzing the stress and rhythmic patterns of Nigerian English. Our aim is also to isolate differences and similarities in the stress patterns studied, to identify what forms the accent of Educated Nigerian English (ENE) speakers and marks them off from other groups or Englishes of the world, to ascertain and characterize it, and to provide documented evidence for its existence. Nigerian stress and rhythmic patterns are significantly different from British English stress and rhythmic patterns; consequently, Educated Nigerian English features more stressed syllables than the native speakers' varieties. The excessive stressing of syllables causes contiguous stressed syllables ('Ss') in the rhythmic flow of ENE, and this brings about a 'jerky rhythm' which distorts communication. To ascertain this claim, ten (10) Nigerian speakers educated in the English language were selected by a stratified random sampling technique from two Federal Universities in Nigeria. This group belongs to the educated class or standard variety. Their performance was compared to that of a Briton (control). The metrical system of analysis was used. The respondents were made to read words and utterances, which were recorded and analyzed perceptually, statistically, and acoustically using one-way Analysis of Variance (ANOVA). The Tukey-Kramer post hoc test, the Wilcoxon Matched-Pairs Signed-Ranks test, and the Praat analysis software were used in the analysis. Our findings revealed that the Educated Nigerian English speakers feature more stressed syllables in their productions, spending more time pronouncing stressed syllables and sometimes less time pronouncing unstressed syllables; their overall tempo was faster. The ENE speakers used tone to mark prominence, while the native speaker, typified by the control, used stress to mark prominence. We concluded that the stress pattern of the ENE speakers was significantly different from the native speaker's variety represented by the control's performance.
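
As a companion to the statistical workflow named above (one-way ANOVA, a post hoc test, and the Wilcoxon matched-pairs test), the following is a minimal SciPy sketch; the syllable-duration values are invented for illustration and are not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical mean stressed-syllable durations (ms) for speakers from the two
# universities and the native-speaker control (illustrative values only).
ene_university_a = np.array([212.0, 230.0, 198.0, 225.0, 240.0])
ene_university_b = np.array([205.0, 215.0, 228.0, 236.0, 220.0])
native_control   = np.array([180.0, 172.0, 190.0, 185.0, 178.0])

# One-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(ene_university_a, ene_university_b, native_control)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Wilcoxon matched-pairs signed-ranks test on the same word list read by an
# ENE speaker and by the control.
ene_words     = np.array([210.0, 195.0, 250.0, 230.0, 205.0, 242.0, 220.0, 233.0])
control_words = np.array([178.0, 182.0, 205.0, 190.0, 176.0, 198.0, 184.0, 192.0])
w_stat, p_wilcoxon = stats.wilcoxon(ene_words, control_words)
print(f"Wilcoxon: W = {w_stat:.1f}, p = {p_wilcoxon:.4f}")
```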

Keywords: accent, Nigerian English, rhythm, stress

Procedia PDF Downloads 222
514 Electron Bernstein Wave Heating in the Toroidally Magnetized System

Authors: Johan Buermans, Kristel Crombé, Niek Desmet, Laura Dittrich, Andrei Goriaev, Yurii Kovtun, Daniel López-Rodriguez, Sören Möller, Per Petersson, Maja Verstraeten

Abstract:

The International Thermonuclear Experimental Reactor (ITER) will rely on three sources of external heating to produce and sustain a plasma: Neutral Beam Injection (NBI), Ion Cyclotron Resonance Heating (ICRH), and Electron Cyclotron Resonance Heating (ECRH). ECRH is a way to heat the electrons in a plasma by resonant absorption of electromagnetic waves; the energy of the electrons is transferred indirectly to the ions by collisions. The electron cyclotron heating system can be directed to deposit heat in particular regions of the plasma (https://www.iter.org/mach/Heating). ECRH at the fundamental resonance in X-mode is limited by a low cut-off density. Electromagnetic waves cannot propagate in the region between this cut-off and the Upper Hybrid Resonance (UHR) and cannot reach the Electron Cyclotron Resonance (ECR) position. Higher harmonic heating is hence preferred in present-day heating scenarios to overcome this problem. Additional power deposition mechanisms can occur above this threshold to increase the plasma density. These include collisional losses in the evanescent region, resonant power coupling at the UHR, tunneling of the X-wave with resonant coupling at the ECR, and conversion to the Electron Bernstein Wave (EBW) with resonant coupling at the ECR. A more profound knowledge of these deposition mechanisms can help determine the optimal plasma production scenarios. Several ECRH experiments were performed on the TOroidally MAgnetized System (TOMAS) to identify the conditions for Electron Bernstein Wave (EBW) heating. Density and temperature profiles are measured with movable triple Langmuir probes in the horizontal and vertical directions. Measurements of the forward and reflected power allow evaluation of the coupling efficiency. Optical emission spectroscopy and camera images also contribute to plasma characterization. The influence of the injected power, magnetic field, gas pressure, and wave polarization on the different deposition mechanisms is studied, and the contribution of the Electron Bernstein Wave is evaluated. The TOMATOR 1D hydrogen-helium plasma simulator numerically describes the evolution of currentless magnetized radio-frequency plasmas in a tokamak based on Braginskii's continuity and heat balance equations. The code was initially benchmarked with experimental data from TCV to determine the transport coefficients. It is used here to model the plasma parameters and the power deposition profiles, and the modeling is compared with the experimental data.
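
For orientation, a generic zero-dimensional form of the particle continuity and electron heat balance equations that underlie such plasma simulators is sketched below; this is a textbook-style illustration under simplifying assumptions, not the actual equation set implemented in TOMATOR 1D.

```latex
\begin{align}
  \frac{\partial n_e}{\partial t} &= S_{\mathrm{ion}} - S_{\mathrm{rec}} - \frac{n_e}{\tau_p},\\
  \frac{\partial}{\partial t}\!\left(\tfrac{3}{2}\, n_e k_B T_e\right) &=
      P_{\mathrm{abs}} - P_{\mathrm{rad}} - P_{\mathrm{coll}} - \frac{3}{2}\,\frac{n_e k_B T_e}{\tau_E},
\end{align}
% S_ion, S_rec: ionization and recombination sources; tau_p, tau_E: particle and
% energy confinement times; P_abs: absorbed wave power (ECRH/EBW here);
% P_rad, P_coll: radiative and collisional losses.
```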

Keywords: electron Bernstein wave, Langmuir probe, plasma characterization, TOMAS

Procedia PDF Downloads 75
513 Additive Manufacturing with Ceramic Filler

Authors: Irsa Wolfram, Boruch Lorenz

Abstract:

Innovative solutions in additive manufacturing applying material extrusion for functional parts necessitate innovative filaments with consistent quality. Uniform homogeneity and a consistent dispersion of particles embedded in filaments generally require multiple cycles of extrusion or well-prepared primal matter produced by injection molding, kneader machines, or mixing equipment. These technologies demand dedicated equipment that is rarely at the disposal of production laboratories unfamiliar with research in polymer materials. This stands in contrast to laboratories that investigate complex material topics and technology science to leverage the potential of 3-D printing. Consequently, scientific studies in labs are often constrained to the compositions and concentrations of fillers offered on the market. Therefore, we introduce a prototypal laboratory methodology, scalable to tailored primal matter, for extruding ceramic composite filaments with fused filament fabrication (FFF) technology. A desktop single-screw extruder serves as the core device for the experiments. Custom-made filaments encapsulate the ceramic fillers in polylactide (PLA), a thermoplastic polyester, which serves as primal matter and is processed in the melting area of the extruder, preserving the defined concentration of the fillers. Validated results demonstrate that this approach enables continuously produced and uniform composite filaments with consistent homogeneity. The filament is 3-D printable with controllable dimensions, which is a prerequisite for any scalable application. Additionally, digital microscopy confirms the steady dispersion of the ceramic particles in the composite filament and permits a 2D reconstruction of the planar distribution of the embedded ceramic particles in the PLA matrices. The innovation of the introduced method lies in the smart simplicity of preparing the composite primal matter. It circumvents the inconvenience of numerous extrusion operations and expensive laboratory equipment, yet it delivers consistent filaments of controlled, predictable, and reproducible filler concentration, which is the prerequisite for any industrial application. The introduced prototypal laboratory methodology appears applicable to other polymer matrices and suitable for further functional particle types beyond ceramic fillers. This opens a roadmap for further laboratory development of specialized composite filaments, providing value for industries and societies. This low-threshold entry to the sophisticated preparation of composite filaments, enabling businesses to create their own dedicated filaments, will support the mutual efforts to establish 3D printing for new functional devices.

Keywords: additive manufacturing, ceramic composites, complex filament, industrial application

Procedia PDF Downloads 91
512 Shift in the Rhizosphere Soil Fungal Community Associated with Root Rot Infection of Plukenetia Volubilis Linneo Caused by Fusarium and Rhizopus Species

Authors: Constantine Uwaremwe, Wenjie Bao, Bachir Goudia Daoura, Sandhya Mishra, Xianxian Zhang, Lingjie Shen, Shangwen Xia, Xiaodong Yang

Abstract:

Background: Plukenetia volubilis Linneo is an oleaginous plant belonging to the family Euphorbiaceae. Because its seeds contain a high content of edible oil and are rich in vitamins, P. volubilis is cultivated as an economic plant worldwide. However, the cultivation and growth of P. volubilis are challenged by phytopathogen invasion leading to production loss. Methods: In the current study, we tested the pathogenicity of fungal pathogens isolated from root-rot-infected P. volubilis plant tissues by inoculating them into healthy P. volubilis seedlings. Metagenomic sequencing was used to assess the shift in the fungal community of P. volubilis rhizosphere soil after root rot infection. Results: Four Fusarium isolates and two Rhizopus isolates were found to be root rot causative agents of P. volubilis, as they induced typical root rot symptoms in healthy seedlings. The metagenomic sequencing data showed that root rot infection altered the rhizosphere fungal community. In root-rot-infected soil, the richness and diversity indices increased or decreased depending on the pathogen. The four most abundant phyla across all samples were Ascomycota, Glomeromycota, Basidiomycota, and Mortierellomycota. In infected soil, the relative abundance of each phylum increased or decreased depending on the pathogen and functional taxonomic classification. Conclusions: Based on our results, we concluded that Fusarium and Rhizopus species cause root rot infection of P. volubilis. In root-rot-infected P. volubilis, the shift in the rhizosphere fungal community was pathogen-dependent. These findings may serve as a key point for a future study on the biocontrol of root rot of P. volubilis.
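
The richness and diversity comparison mentioned above can be illustrated with a minimal sketch that computes observed richness and the Shannon index from a fungal OTU count table; the counts below are invented for illustration and are not the study's sequencing data.

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over taxa with non-zero counts."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

# Hypothetical fungal OTU counts for a healthy and a root-rot-infected rhizosphere sample.
samples = {
    "healthy":  [120, 45, 30, 8, 2, 1, 0, 0],
    "infected": [300, 10, 5, 2, 1, 0, 0, 0],
}

for name, counts in samples.items():
    richness = sum(1 for c in counts if c > 0)
    print(f"{name}: richness = {richness}, Shannon H' = {shannon_index(counts):.3f}")
```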

Keywords: fusarium spp., plukenetia volubilis l., rhizopus spp., rhizosphere fungal community, root rot

Procedia PDF Downloads 5
511 A Cloud-Based Federated Identity Management in Europe

Authors: Jesus Carretero, Mario Vasile, Guillermo Izquierdo, Javier Garcia-Blas

Abstract:

Currently, there is a so-called 'identity crisis' in cybersecurity caused by the substantial security, privacy, and usability shortcomings encountered in existing systems for identity management. Federated Identity Management (FIM) could be a solution to this crisis, as it is a method that facilitates the management of identity processes and policies among collaborating entities without enforcing a global consistency, which is difficult to achieve when legacy ID systems exist. To cope with this problem, the Connecting Europe Facility (CEF) initiative proposed in 2014 a federated solution in anticipation of the adoption of Regulation (EU) N°910/2014, the so-called eIDAS Regulation. At present, a network of eIDAS Nodes is being deployed at the European level to allow every citizen recognized by a member state to be recognized within the trust network at the European level, enabling the consumption of services in other member states that until now were not allowed or whose granting was tedious. This is a very ambitious approach, since it aims to enable cross-border authentication of member state citizens without the need to unify the authentication method (eID Scheme) of the member state in question. However, this federation is currently managed by member states, and it is initially applied only to citizens and public organizations. The goal of this paper is to present the results of a European project, named eID@Cloud, that focuses on the integration of eID in 5 cloud platforms belonging to authentication service providers of different EU member states, which act as Service Providers (SP) for private entities. We propose an initiative based on a private eID Scheme for both natural and legal persons. The methodology followed in the eID@Cloud project is that each Identity Provider (IdP) is subscribed to an eIDAS Node Connector, which requests authentication, and the Connector in turn is subscribed to an eIDAS Node Proxy Service, which issues authentication assertions. To cope with high loads, load balancing is supported in the eIDAS Node. The eID@Cloud project is still ongoing, but we already have some important outcomes. First, we have deployed the federated identity nodes and tested them from the security and performance points of view. The pilot prototype has shown the feasibility of deploying this kind of system, ensuring good performance due to the replication of the eIDAS nodes and the load-balancing mechanism. Second, our solution avoids the propagation of identity data out of the native domain of the user or entity being identified, which avoids problems well known in cybersecurity due to network interception, man-in-the-middle attacks, etc. Last, but not least, this system allows any country or collectivity to be connected easily, providing incremental development of the network and avoiding difficult political negotiations to agree on a single authentication format (which would be a major stopper).
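
The authentication chain described above (Service Provider -> eIDAS Node Connector -> eIDAS Node Proxy Service -> national IdP, with an assertion returned along the same path) can be pictured with the toy sketch below; the class names, fields, and identifier format are invented for illustration and do not correspond to the eIDAS or eID@Cloud interfaces.

```python
from dataclasses import dataclass

@dataclass
class AuthnRequest:
    service_provider: str
    citizen_country: str         # member state whose eID scheme will authenticate
    requested_attributes: tuple  # e.g. ("PersonIdentifier", "FamilyName")

@dataclass
class AuthnAssertion:
    subject: str
    issuer: str
    attributes: dict

class NodeProxyService:
    """Stands in for the citizen's member-state node that delegates to the national IdP."""
    def authenticate(self, request: AuthnRequest) -> AuthnAssertion:
        # In reality this step invokes the national eID scheme of the citizen's country.
        return AuthnAssertion(subject="ES/BE/12345",
                              issuer=f"proxy-{request.citizen_country}",
                              attributes={"FamilyName": "Example"})

class NodeConnector:
    """Stands in for the service provider's member-state node; may be replicated/load-balanced."""
    def __init__(self, proxies: dict):
        self.proxies = proxies
    def request_authentication(self, request: AuthnRequest) -> AuthnAssertion:
        proxy = self.proxies[request.citizen_country]  # route to the right member state
        return proxy.authenticate(request)

connector = NodeConnector({"ES": NodeProxyService()})
assertion = connector.request_authentication(
    AuthnRequest("cloud-sp.example", "ES", ("PersonIdentifier", "FamilyName")))
print(assertion.issuer, assertion.subject)
```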

Keywords: cybersecurity, identity federation, trust, user authentication

Procedia PDF Downloads 148
510 Characterization of Atmospheric Aerosols by Developing a Cascade Impactor

Authors: Sapan Bhatnagar

Abstract:

Micron-size particles emitted from different sources and produced by combustion have serious negative effects on human health and the environment; they can penetrate deep into our lungs through the respiratory system. Determination of the amount of particulates present per cubic meter of atmosphere is necessary to monitor, regulate, and model atmospheric particulate levels. A cascade impactor is used to collect atmospheric particulates, and by gravimetric analysis their concentration in the atmosphere can be determined for different size ranges. Cascade impactors have long been used for the classification of particles by aerodynamic size. They operate on the principle of inertial impaction. An impactor consists of a number of stages, each having an impaction plate and a nozzle, with collection plates connected in series with smaller and smaller cut-off diameters. The air stream passes through the nozzle and over the plates; particles in the stream with large enough inertia impact upon the plate, and smaller particles pass on to the next stage. By designing each successive stage with a higher air stream velocity in the nozzle, smaller-diameter particles are collected at each stage. Particles too small to be impacted on the last collection plate are collected on a backup filter. The impactor described here consists of four stages, each made of steel, with cut-off diameters below 10 microns. Each stage has collection plates soaked with oil to prevent particle bounce, which allows the impactor to function at high mass concentrations: even after the plate is coated with particles, an incoming particle still encounters a wet surface, which significantly reduces bounce. The particles that are too small to be impacted on the last collection plate are then collected on a backup filter (a microglass fiber filter); the fibers provide a larger surface area to which particles may adhere, and the voids in the filter media aid in reducing particle re-entrainment.
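
To make the stage-design principle concrete, the sketch below estimates the 50% cut-off diameter of a single round-nozzle stage from the standard Stokes-number relation; the nozzle sizes, jet velocities, and particle density are illustrative assumptions, not the design parameters of the impactor described here, and the slip correction is simply set to 1.

```python
import math

def cutoff_diameter(eta, nozzle_d, jet_velocity, rho_p, stk50=0.24, slip_cc=1.0):
    """50% cut-off diameter (m) of a round-jet impactor stage, from
    Stk50 = rho_p * d50**2 * U * Cc / (9 * eta * W) solved for d50."""
    return math.sqrt(9.0 * eta * nozzle_d * stk50 / (rho_p * jet_velocity * slip_cc))

eta = 1.81e-5    # dynamic viscosity of air near 20 degC, Pa*s
rho_p = 1000.0   # assumed particle density, kg/m^3

# Illustrative stages: nozzle diameter (m) and jet velocity (m/s) increasing stage by stage.
for nozzle_d, u in [(5e-3, 5.0), (2e-3, 15.0), (1e-3, 40.0), (0.5e-3, 90.0)]:
    d50 = cutoff_diameter(eta, nozzle_d, u, rho_p)
    print(f"nozzle {nozzle_d * 1e3:.1f} mm, jet {u:5.1f} m/s -> d50 ~ {d50 * 1e6:.2f} um")
```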

Keywords: aerodynamic diameter, cascade, environment, particulates, re-entrainment

Procedia PDF Downloads 307
509 Quality Analysis of Vegetables Through Image Processing

Authors: Abdul Khalique Baloch, Ali Okatan

Abstract:

The quality analysis of food and vegetables from images is a hot topic nowadays, with researchers improving on previous findings through different techniques and methods. In this research, we review the literature, identify the gaps in it, suggest an improved approach, design the algorithm, and develop software to measure quality from images; the accuracy achieved on images shows better results, which are compared with previous work. The application uses an open-source image dataset and the Python language with the TensorFlow Lite framework, and it defines a framework for the whole system. The focus of this research is sorting food and vegetables from images: the application sorts and grades the produce after processing the images, and it can produce fewer errors than human-based sorting by manual grading. Digital picture datasets were created, and the collected images were arranged by class. The classification accuracy of the system was about 94%. As fruits and vegetables play a main role in day-to-day life, their quality is important in evaluating agricultural produce, and customers always want to buy good-quality fruits and vegetables. This work is about quality detection of fruits and vegetables using images. Many customers suffer from unhealthy fruits and vegetables supplied to them, and no proper quality measurement level is followed by hotel managements. We have developed software to measure the quality of fruits and vegetables from images; it indicates whether the produce is fresh or rotten. The algorithms reviewed in this work include digital image processing, ResNet, VGG16, CNN, and transfer learning for grading feature extraction.
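
As a companion to the transfer-learning approach mentioned above, here is a minimal Keras sketch that fine-tunes a pre-trained VGG16 backbone for fresh/rotten classification; the dataset directory, image size, class count, and training settings are assumptions for illustration and do not reproduce the authors' exact pipeline or its reported 94% accuracy.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)
NUM_CLASSES = 2  # e.g. fresh vs. rotten (assumed)

# Assumed directory layout: dataset/fresh/*.jpg, dataset/rotten/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32)

# Frozen VGG16 feature extractor plus a small trainable classification head.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)

# The trained model can then be converted to TensorFlow Lite for deployment.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
```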

Keywords: deep learning, computer vision, image processing, rotten fruit detection, fruits quality criteria, vegetables quality criteria

Procedia PDF Downloads 52
508 Role of Endotherapy vs Surgery in the Management of Traumatic Pancreatic Injury: A Tertiary Center Experience

Authors: Thinakar Mani Balusamy, Ratnakar S. Kini, Bharat Narasimhan, Venkateswaran A. R, Pugazhendi Thangavelu, Mohammed Ali, Prem Kumar K., Kani Sheikh M., Sibi Thooran Karmegam, Radhakrishnan N., Mohammed Noufal

Abstract:

Introduction: Pancreatic injury remains a complicated condition requiring an individualized, case-by-case approach to management. In this study, we aim to analyze the varied presentations and treatment outcomes of traumatic pancreatic injury in a tertiary care center. Methods: All consecutive patients hospitalized at our center with traumatic pancreatic injury between 2013 and 2017 were included. The American Association for the Surgery of Trauma (AAST) classification was used to stratify patients into five grades of severity. Outcome parameters were then analyzed based on the treatment modality employed. Results: Of the 35 patients analyzed, 26 had underlying blunt trauma, with the remaining nine presenting due to penetrating injury. Overall in-hospital mortality was 28%. Nineteen of these patients underwent exploratory laparotomy, with the remaining 16 managed nonoperatively. Nine patients had a severe injury (> grade 3), of whom four underwent endotherapy: three had stents placed and one underwent endoscopic pseudocyst drainage. Among those managed nonoperatively, three underwent a radiological drainage procedure. Conclusion: Mortality rates were clearly higher in patients managed operatively. This is likely a result of significantly higher degrees of major associated non-pancreatic injuries and not just a reflection of surgical morbidity. Despite this, surgical management remains the mainstay of therapy, especially in higher grades of pancreatic injury. However, we would like to emphasize that endoscopic intervention remains the preferred treatment modality when the clinical setting permits. This is especially applicable in cases of main pancreatic duct injury with ascites as well as pseudocysts.

Keywords: endotherapy, non-operative management, surgery, traumatic pancreatic injury

Procedia PDF Downloads 192
507 Land Use Dynamics of Ikere Forest Reserve, Nigeria Using Geographic Information System

Authors: Akintunde Alo

Abstract:

The incessant encroachment into the forest ecosystem by farmers and local contractors constitutes a major threat to the conservation of genetic resources and biodiversity in Nigeria. To propose a viable monitoring system, this study employed Geographic Information System (GIS) technology to assess the changes that occurred over a period of five years (between 2011 and 2016) in Ikere forest reserve. Landsat imagery of the forest reserve was obtained, and ground-truth coordinates of benchmark places within the reserve were used to geo-reference the acquired satellite imagery. Supervised classification, image processing, vectorization, and map production were carried out using ArcGIS. The various land use systems within the forest ecosystem were digitized into polygons of different types and colours for 2011 and 2016, while roads were represented with lines of different thickness and colours. Of the six land-use classes delineated, grassland increased from 26.50% of the total land area in 2011 to 45.53% in 2016, a percentage change of 71.81%. Plantations of Gmelina arborea and Tectona grandis, on the other hand, reduced from 62.16% in 2011 to 27.41% in 2016. Farmland and degraded land recorded percentage changes of about 176.80% and 8.70%, respectively, from 2011 to 2016. Overall, the rate of deforestation in the study area is increasing and becoming severe. About 72.59% of the total land area has been converted to non-forestry uses, while the remaining 27.41% is occupied by plantations of Gmelina arborea and Tectona grandis. Interestingly, over 55% of the 2011 plantation area had changed to grassland or been converted to farmland and degraded land by 2016. The rate of change over time was about 9.79% annually. Based on these results, rapid actions to prevail on the encroachers to stop deforestation and to encourage re-afforestation in the study area are recommended.
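
The percentage-change figures reported above follow directly from the class shares in the two classified maps; the sketch below reproduces that calculation for the two classes whose 2011 and 2016 shares are quoted in the abstract (the simple annualization shown here is one possible convention and may differ from how the 9.79% yearly rate was computed).

```python
# Share of total land area (%) per class, from the 2011 and 2016 classifications.
area_share = {
    "grassland":  {"2011": 26.50, "2016": 45.53},
    "plantation": {"2011": 62.16, "2016": 27.41},  # Gmelina arborea / Tectona grandis
}

for land_use, share in area_share.items():
    a0, a1 = share["2011"], share["2016"]
    change = (a1 - a0) / a0 * 100.0   # relative percentage change over the period
    annual = change / 5.0             # simple (non-compounded) annual rate over 5 years
    print(f"{land_use:10s}: {a0:6.2f}% -> {a1:6.2f}%  change {change:+7.2f}%  (~{annual:+.2f}%/yr)")
```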

Keywords: land use change, forest reserve, satellite imagery, geographical information system

Procedia PDF Downloads 340
506 Chebyshev Collocation Method for Solving Heat Transfer Analysis for Squeezing Flow of Nanofluid in Parallel Disks

Authors: Mustapha Rilwan Adewale, Salau Ayobami Muhammed

Abstract:

This study focuses on the heat transfer analysis of magneto-hydrodynamic (MHD) squeezing flow between parallel disks, considering a viscous incompressible fluid. The upper disk exhibits both upward and downward motion, while the lower disk remains stationary but permeable. By employing similarity transformations, a system of nonlinear ordinary differential equations is derived to describe the flow behavior. To solve this system, a numerical approach, namely the Chebyshev collocation method, is utilized. The study investigates the influence of the flow parameters and compares the obtained results with the existing literature. The significance of this research lies in understanding the heat transfer characteristics of MHD squeezing flow, which has practical implications in various engineering and industrial applications. The similarity transformations simplify the complex governing equations into a system of nonlinear ordinary differential equations, facilitating the analysis of the flow behavior. To obtain numerical solutions for the system, the Chebyshev collocation method is implemented; this approach provides accurate approximations for the nonlinear equations, enabling efficient computation of the heat transfer properties. The obtained results are compared with the existing literature, establishing the validity and consistency of the numerical approach. The study's major findings shed light on the influence of the flow parameters on the heat transfer characteristics of the squeezing flow. The analysis reveals the impact of parameters such as magnetic field strength, disk motion amplitude, and fluid viscosity on the heat transfer rate between the disks, expressed through the squeeze number (S), suction/injection parameter (A), Hartmann number (M), Prandtl number (Pr), modified Eckert number (Ec), and the dimensionless length (δ). These findings contribute to a comprehensive understanding of the system's behavior and provide insights for optimizing heat transfer processes in similar configurations. In conclusion, this study presents a thorough heat transfer analysis of magneto-hydrodynamic squeezing flow between parallel disks. The numerical solutions obtained through the Chebyshev collocation method demonstrate the feasibility and accuracy of the approach. The investigation of the flow parameters highlights their influence on heat transfer, contributing to the existing knowledge in this field. The agreement of the results with previous literature further strengthens the reliability of the findings. These outcomes have practical implications for engineering applications and pave the way for further research in related areas.
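
The core ingredient of the Chebyshev collocation method is the differentiation matrix on Chebyshev-Gauss-Lobatto points, which turns the nonlinear ODE system into an algebraic system at the collocation nodes. The sketch below builds that matrix using the standard Trefethen-style construction; it illustrates the method in general and is not the authors' specific solver for the squeezing-flow equations.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x on [-1, 1]
    (standard construction; see Trefethen, Spectral Methods in MATLAB)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)                  # collocation points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))           # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                               # diagonal via row-sum identity
    return D, x

# Quick check: differentiate f(x) = x**3 exactly (a polynomial of degree <= n).
D, x = cheb(8)
err = np.max(np.abs(D @ x**3 - 3 * x**2))
print(f"max error differentiating x^3: {err:.2e}")
```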

Keywords: squeezing flow, magneto-hydro-dynamics (MHD), chebyshev collocation method (CCA), parallel manifolds, finite difference method (FDM)

Procedia PDF Downloads 55
505 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling

Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé

Abstract:

Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing of the large blocks starts with ingot casting, followed by open-die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging: the temperature inside the material before loading is not uniform, yet a constant temperature is used in the simulation because it is assumed that the temperature homogenizes after some holding time. Therefore, to be close to the experiment, the real distribution of the temperature through the specimen is needed before the mechanical loading. We present here a robust algorithm that allows the calculation of the temperature gradient within the specimen, thus representing a real temperature distribution within the specimen before deformation. Indeed, most numerical simulations consider a uniform temperature field, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and the type of deformation, such as upsetting or cogging. Indeed, upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena, such as recrystallization, can be observed there, requiring in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product. Thus, the identification of the conditions for the initiation of dynamic recrystallization is still relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, so developing a technique to predict the initiation of this recrystallization remains challenging. In this perspective, we propose here, in addition to the algorithm providing the temperature distribution before the loading stage, an analytical model to determine the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines for comparison with a simulation where an isothermal temperature is imposed. The Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared to literature models.
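
The pre-loading temperature field discussed above can be approximated, in its simplest one-dimensional form, with an explicit finite-difference solution of the transient heat-conduction equation; the block dimensions, material properties, boundary temperatures, and holding time below are illustrative assumptions and do not reproduce the authors' UAMP implementation.

```python
import numpy as np

# Illustrative 1-D transient conduction through the half-thickness of a forged block.
L = 0.5                  # half-thickness, m (assumed)
nx = 51
alpha = 6.0e-6           # thermal diffusivity of a medium-carbon steel, m^2/s (approximate)
T_core, T_surface = 1200.0, 900.0  # initial core and imposed surface temperature, degC (assumed)

dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha           # respects the explicit stability limit dt <= dx^2 / (2*alpha)
T = np.full(nx, T_core)
T[-1] = T_surface                  # surface node; x = 0 is the core (symmetry plane)

t_hold = 600.0                     # holding time before loading, s (assumed)
for _ in range(int(t_hold / dt)):
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T_new[0] = T_new[1]            # zero-flux (symmetry) condition at the core
    T_new[-1] = T_surface          # fixed surface temperature
    T = T_new

print(f"after {t_hold:.0f} s: core {T[0]:.1f} degC, mid {T[nx // 2]:.1f} degC, surface {T[-1]:.1f} degC")
```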

Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation

Procedia PDF Downloads 66
504 Comparative Correlation Investigation of Polynuclear Aromatic Hydrocarbons (PAHs) in Soils of Different Land Uses: Sources Evaluation Perspective

Authors: O. Onoriode Emoyan, E. Eyitemi Akporhonor, Charles Otobrise

Abstract:

Polycyclic Aromatic Hydrocarbons (PAHs) are formed mainly as a result of incomplete combustion of organic materials during industrial and domestic activities or natural occurrences. Their toxicity and their contamination of terrestrial and aquatic ecosystems have been established. Though with a limited validity index, previous research has focused on PAH isomer pair ratios of variable physicochemical properties for source identification. The objective of this investigation was to determine the empirical validity of the Pearson correlation coefficient (PCC) and cluster analysis (CA) in PAH source identification across soil samples of different land uses. Therefore, 16 PAHs grouped as endocrine-disrupting substances (EDSs) were determined seasonally in topsoils and subsoils at 10 sampling stations. The PAHs were determined using a Varian 300 gas chromatograph interfaced with a flame ionization detector. The instruments and reagents used were of standard and chromatographic grades, respectively. The PCC and CA results showed that classifying PAHs into kinetically and thermodynamically favoured groups and those derived directly from plant products through biologically mediated processes, as used in source signatures, depends largely on which PAHs are likely to predominate. The studied stations, however, contain only trace quantities of the vast majority of the sixteen unsubstituted PAHs, which may ultimately inhibit an unambiguous source signature authentication. The type and extent of bacterial metabolism, transformation products/substrates, and environmental factors such as salinity, pH, oxygen concentration, nutrients, light intensity, temperature, co-substrates, and the environmental medium are hereby recommended as factors to be considered when evaluating possible sources of PAHs.
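
A minimal sketch of the two chemometric steps named above, Pearson correlation between individual PAHs and hierarchical cluster analysis of the sampling stations, is given below; the small concentration table is fabricated purely to show the workflow and is not the study's data.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical PAH concentrations (ng/g); rows = sampling stations, columns = PAHs.
data = pd.DataFrame(
    {"naphthalene":    [12.0, 3.5, 8.1, 2.2],
     "phenanthrene":   [30.2, 9.8, 22.5, 6.1],
     "pyrene":         [18.7, 4.2, 15.3, 3.0],
     "benzo[a]pyrene": [5.5, 1.1, 4.8, 0.9]},
    index=["station_1", "station_2", "station_3", "station_4"])

# Pearson correlation coefficients between individual PAHs across stations.
print(data.corr(method="pearson").round(2))

# Agglomerative (Ward) clustering of the stations on standardized concentrations.
z = (data - data.mean()) / data.std()
clusters = fcluster(linkage(z.values, method="ward"), t=2, criterion="maxclust")
print(dict(zip(data.index, clusters)))
```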

Keywords: comparative correlation, kinetically and thermodynamically-favored PAHs, pearson correlation coefficient, cluster analysis, sources evaluation

Procedia PDF Downloads 402
503 Comparative Analysis of Costs and Well Drilling Techniques for Water, Geothermal Energy, Oil and Gas Production

Authors: Thales Maluf, Nazem Nascimento

Abstract:

The development of society relies heavily on the total amount of energy obtained and its consumption. Over the years, there has been an advancement in energy attainment, which is directly related to certain natural resources and developing systems. Some of these resources should be highlighted for their remarkable presence in the world's energy grid, such as water, petroleum, and gas, while others deserve attention for representing an alternative to diversify the energy grid, like geothermal sources. Because all these resources are extracted from the underground, drilling wells is a mandatory exploration activity; it involves a prior geological study and adequate preparation, as well as a cleaning process and an extraction process that can be executed by different procedures. For that reason, this research aims at the enhancement of exploration processes through a comparative analysis of drilling costs and the techniques used. The analysis itself is a bibliographical review based on books, scientific papers, and academic works, and it mainly explores drilling methods and technologies, the equipment used, well measurements, extraction methods, and production costs. Besides the techniques and costs of the drilling processes, some properties and general characteristics of these resources are also compared. Preliminary studies show that there are major differences in the exploration processes, mostly because these resources are naturally distinct. Water wells, for instance, are hundreds of meters long because water is stored close to the surface, while oil, gas, and geothermal production wells can reach thousands of meters, which makes them more expensive to drill. The drilling methods present some general similarities, especially regarding the main mechanism of perforation, but since water is stored closer to the surface than the other resources, there is a wider variety of methods: water wells can be drilled by rotary mechanisms, percussion mechanisms, rotary-percussion mechanisms, and some other simpler methods. Oil and gas production wells, on the other hand, require rotary or rotary-percussion drilling with a proper structure called a drill rig and resistant materials for the drill bits and the other components, mostly because the reservoirs lie in sedimentary basins that can be located thousands of meters below the ground. Geothermal production wells also require rotary or rotary-percussion drilling, as well as the existence of an injection well and an extraction well; exploration efficiency also depends on the permeability of the soil, which is why Enhanced Geothermal Systems (EGS) have been developed. Throughout this review study, it can be verified that the analysis of the extraction processes of energy resources is essential, since these resources are responsible for the development of society. Furthermore, the comparative analysis of costs and well drilling techniques for water, geothermal energy, oil, and gas production, which is the main goal of this research, can enable the growth of the energy generation field through the emergence of ideas that improve the efficiency of energy generation processes.

Keywords: drilling, water, oil, gas, geothermal energy

Procedia PDF Downloads 127
502 Spatial Data Mining: Unsupervised Classification of Geographic Data

Authors: Chahrazed Zouaoui

Abstract:

In recent years, the volume of geospatial information has been increasing due to the evolution of communication and information technologies. This information is often presented by geographic information systems (GIS) and stored in spatial databases. Classical data mining has revealed a weakness in knowledge extraction from these enormous amounts of data due to the particularity of spatial entities, which are characterized by their interdependence (the first law of geography). This gave rise to spatial data mining. Spatial data mining is a process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data; among the methods of this process, we distinguish the monothematic and the thematic. Geo-clustering is one of the main tasks of spatial data mining and belongs to the monothematic methods. It groups similar geospatial entities into the same class and assigns more dissimilar entities to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking into account the particularity of geospatial data. Two approaches to geo-clustering exist: dynamic processing of the data, which involves applying algorithms designed for the direct treatment of spatial data, and an approach based on spatial data pre-processing, which consists of applying classical clustering algorithms to pre-processed data (by integration of the spatial relationships). The pre-processing approach is quite complex in several cases, so the search for approximate solutions involves the use of approximation algorithms; the algorithms we are interested in are dedicated approaches (partitioning-based and density-based clustering methods) and the bees approach (a biomimetic approach). Our study proposes a significant design for this problem, using different algorithms for automatically detecting geospatial neighborhoods in order to implement geo-clustering by pre-processing, and applying the bees algorithm to this problem for the first time in the geospatial field.
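
As one concrete instance of the density-based geo-clustering methods discussed above, the sketch below groups point entities by geographic neighborhood with DBSCAN and the haversine distance; the coordinates and the roughly 2 km neighborhood radius are illustrative assumptions, and this is the classical density-based step only, not the bees algorithm proposed in the study.

```python
import numpy as np
from sklearn.cluster import DBSCAN

EARTH_RADIUS_KM = 6371.0

# Hypothetical geographic entities (latitude, longitude in degrees).
points_deg = np.array([
    [36.75, 3.06], [36.76, 3.05], [36.74, 3.07],   # a dense group
    [35.70, -0.63], [35.71, -0.62],                # another group
    [31.61, -2.22],                                # an isolated point -> noise
])

# With the haversine metric, DBSCAN expects coordinates in radians and eps as an
# angular distance (here roughly 2 km on the Earth's surface).
points_rad = np.radians(points_deg)
db = DBSCAN(eps=2.0 / EARTH_RADIUS_KM, min_samples=2,
            metric="haversine", algorithm="ball_tree").fit(points_rad)

for point, label in zip(points_deg, db.labels_):
    print(point, "cluster" if label >= 0 else "noise", label)
```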

Keywords: mining, GIS, geo-clustering, neighborhood

Procedia PDF Downloads 363
501 Gender Bias in Natural Language Processing: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Machines have been trained on millions of human books, only to find that over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society goes hand in hand not only with its language but with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.
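
One simple way to surface the descriptive asymmetry discussed above is to count how often chosen adjectives co-occur with gendered words in a corpus; the toy sentences and word lists below are illustrative stand-ins, not the datasets or neural models analyzed in the paper.

```python
from collections import Counter
import re

FEMALE_WORDS = {"she", "her", "woman", "women", "girl"}
MALE_WORDS = {"he", "his", "him", "man", "men", "boy"}
ADJECTIVES = {"hysterical", "shrill", "brilliant", "strong", "emotional"}

# Toy corpus standing in for a large literary corpus.
corpus = [
    "She was hysterical and emotional, they said.",
    "He was brilliant and strong, they said.",
    "The woman seemed shrill to the committee.",
    "The man seemed brilliant to the committee.",
]

counts = {"female": Counter(), "male": Counter()}
for sentence in corpus:
    tokens = set(re.findall(r"[a-z']+", sentence.lower()))
    gender = ("female" if tokens & FEMALE_WORDS
              else "male" if tokens & MALE_WORDS else None)
    if gender:
        counts[gender].update(tokens & ADJECTIVES)

for gender, counter in counts.items():
    print(gender, dict(counter))
```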

Keywords: gendered grammar, misogynistic language, natural language processing, neural networks

Procedia PDF Downloads 99
500 A Comparative Human Rights Analysis of Expulsion as a Counterterrorism Instrument: An Evaluation of Belgium

Authors: Louise Reyntjens

Abstract:

Where criminal law used to be the traditional response to the terrorist threat, European governments are increasingly relying on administrative paths. The reliance on immigration law fits into this trend: terrorism is seen as a civilizational menace emanating from abroad. In this context, the expulsion of dangerous aliens, immigration law's core task, is put forward as a key security tool. Governments all over Europe are focusing on removing dangerous individuals from their territory rather than bringing them to justice. This research reflects on the consequences for the expelled individuals' fundamental rights. For this, the author selected four European countries for a comparative study: Belgium, France, the United Kingdom, and Sweden. All these countries face similar social and security issues, prompting the recourse to immigration law as a counterterrorism tool. Yet they adopt very different approaches: the United Kingdom positions itself on the repressive side of the spectrum; Sweden, on the other hand, also 'securitized' its immigration policy after the recent terrorist attack in Stockholm but remains on the tolerant side of the spectrum; Belgium and France are situated in between. This paper addresses the situation in Belgium. In 2017, the Belgian parliament introduced several legislative changes by which it considerably expanded and facilitated the possibility to expel unwanted aliens. First, the expulsion measure was subjected to new and questionable definitions: a serious attack on the nation's safety used to be required to expel certain categories of aliens, whereas presently mere suspicions suffice to fulfil the new definition of a 'serious threat to national security'. This definition fails to satisfy the principle of legality; neither the law nor the preparatory works clarify what is meant by 'a threat to national security'. This creates the risk of submitting the interpretation of this concept almost entirely to the discretion of the immigration authorities. Secondly, in the name of intervening more quickly and efficiently, the automatic suspensive appeal against expulsions was abolished. The European Court of Human Rights nonetheless requires such an automatic suspensive appeal under Articles 13 and 3 of the Convention. Whether this procedural reform will stand to endure is thus questionable. This contribution also raises questions regarding expulsion's efficacy as a key security tool. In a globalized and mobile world, particularly in a European Union with no internal boundaries, questions can be raised about the usefulness of this measure. Even more so, by simply expelling a dangerous individual, states avoid their responsibility and shift the risk to another state. Criminal law might in these instances be more capable of providing a conclusive and long-term response. This contribution explores the human rights consequences of expulsion as a security tool in Belgium and offers a critical view on its efficacy for protecting national security.

Keywords: Belgium, counter-terrorism and human rights, expulsion, immigration law

Procedia PDF Downloads 108
499 Study on Co-Relation of Prostate Specific Antigen with Metastatic Bone Disease in Prostate Cancer on Skeletal Scintigraphy

Authors: Muhammad Waleed Asfandyar, Akhtar Ahmed, Syed Adib-ul-Hasan Rizvi

Abstract:

Objective: To evaluate the ability of the serum concentration of prostate specific antigen (PSA), at two cut-off points, to predict skeletal metastasis on bone scintigraphy in men with prostate cancer. Settings: This study was carried out in the Department of Nuclear Medicine at the Sindh Institute of Urology and Transplantation (SIUT), Karachi, Pakistan. Materials and Method: From August 2013 to November 2013, forty-two (42) consecutive patients with prostate cancer who underwent technetium-99m methylene diphosphonate (Tc-99m MDP) whole-body bone scintigraphy were prospectively analyzed. The information was collected from the scintigraphic database of the Nuclear Medicine Department, Sindh Institute of Urology and Transplantation, Karachi, Pakistan. Patients who did not have a serum PSA concentration available within 1 month before or after the Tc-99m MDP whole-body bone scintigraphy were excluded from this study. Whole-body bone scintigraphy (from the toes to the top of the head) was performed using a moving whole-body gamma camera technique (anterior and posterior views) 2-4 hours after intravenous injection of 20 mCi of Tc-99m MDP. In addition, all patients necessarily had a pathology report available. Bony metastases were determined from the bone scan studies, and no further correlation with histopathology or other imaging modalities was performed. To preserve patient confidentiality, direct patient identifiers were not collected. In all patients, PSA values and skeletal scintigraphy were evaluated. Results: The mean age, mean PSA, and incidence of bone metastasis on bone scintigraphy were 68.35 years, 370.51 ng/mL, and 19/42 (45.23%), respectively. According to PSA level, patients were divided into six groups: <10 ng/mL, 10-20 ng/mL, 20-50 ng/mL, 50-100 ng/mL, 100-500 ng/mL, and more than 500 ng/mL, with 10/42, 5/42, 2/42, 3/42, 3/42, and 0/42 patients, respectively, presenting a negative bone scan. The incidence of a positive bone scan for bone metastasis in each group was 1 patient (5.26%), 0 patients (0%), 3 patients (15.78%), 1 patient (5.26%), 4 patients (21.05%), and 10 patients (52.63%), respectively. Of the 42 patients, 19 (45.23%) presented a scintigraphic examination positive for bone metastasis. One patient with bone metastasis on bone scintigraphy had a PSA level of less than 10 ng/mL, and only 1 patient (5.26%) with bone metastasis had a PSA concentration of less than 20 ng/mL. Therefore, when the cut-off adopted for the PSA serum concentration was 10 ng/mL, the negative predictive value for bone metastasis was 95% with a sensitivity of 94.74%, while the positive predictive value and specificity of the method were 56.53% and 43.48%, respectively. When the cut-off for the PSA serum concentration was 20 ng/mL, the observed positive predictive value and specificity were 78.27% and 65.22%, respectively, whereas the negative predictive value and sensitivity stood at 100% and 95%, respectively. Conclusion: The results of our study allow us to conclude that a serum PSA concentration higher than 20 ng/mL was a more accurate cut-off point than a serum PSA concentration higher than 10 ng/mL to predict metastasis on radionuclide bone scintigraphy. In this way, unnecessary costs can be avoided, since a considerable proportion of prostate adenocarcinomas present low serum PSA levels of less than 20 ng/mL, and for these cases radionuclide bone scintigraphy could be unnecessary.
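
The sensitivity, specificity, PPV, and NPV quoted above all derive from the 2x2 table of bone-scan outcome versus PSA cut-off; the helper below shows that calculation using the counts implied by the group breakdown reported for the 10 ng/mL cut-off (small differences from the quoted percentages may reflect rounding in the original).

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard screening-test metrics from a 2x2 table:
    rows = PSA above/below the cut-off, columns = bone scan positive/negative."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# For the 10 ng/mL cut-off, the abstract implies 18 scan-positive and 13 scan-negative
# patients above the cut-off, and 1 scan-positive and 10 scan-negative patients below it.
metrics = diagnostic_metrics(tp=18, fp=13, fn=1, tn=10)
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")
```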

Keywords: bone scan, cut-off value, prostate specific antigen value, scintigraphy

Procedia PDF Downloads 294
498 Visualization Tool for EEG Signal Segmentation

Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh

Abstract:

This work is about developing a tool for the visualization and segmentation of electroencephalograph (EEG) signals based on frequency domain features. Changes in the frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent the change in mental states using the different frequency band powers in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses mainly on a better presentation of the signal, which is why it could be a useful tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet-transform-based de-noising method. Frequency domain features are used for segmentation, considering the fact that the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation; one provides the time scale and the other assigns the segmentation rule. The segmented data are displayed second by second, successively, with different color codes. The segment length can be selected as per the needs of the objective. The proposed algorithm has been tested on the EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of desired length representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, so it can be extended for real-time visualization with the desired epoch length.
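
The core segmentation feature described above, band power per sliding window, can be sketched as follows with SciPy's Welch estimator; the sampling rate, window length, band edges, and the synthetic test signal are common defaults assumed for illustration, not necessarily those of the authors' tool.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate, Hz (assumed)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(segment, fs=FS):
    """Absolute power in each EEG band for one window, via Welch's PSD estimate."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 2 * fs))
    return {band: np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                           freqs[(freqs >= lo) & (freqs < hi)])
            for band, (lo, hi) in BANDS.items()}

# Synthetic 10 s single-channel signal: alpha-dominant first half, beta-dominant second half.
t = np.arange(0, 10, 1 / FS)
eeg = np.where(t < 5, np.sin(2 * np.pi * 10 * t), 0.6 * np.sin(2 * np.pi * 20 * t))
eeg += 0.1 * np.random.randn(t.size)

# One-second sliding window (the "time scale" window); a second rule would then label each segment.
for start in range(0, 10, 2):
    segment = eeg[start * FS:(start + 1) * FS]
    bp = band_powers(segment)
    dominant = max(bp, key=bp.get)
    print(f"{start:2d}-{start + 1:2d} s: dominant band = {dominant}")
```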

Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation

Procedia PDF Downloads 379
497 Getting to Know ICU Nurses and Their Duties

Authors: Masih Nikgou

Abstract:

ICU nurses, or intensive care nurses, are highly specialized and trained healthcare personnel. These nurses provide nursing care for patients with life-threatening illnesses or conditions. They provide the experience, knowledge, and specialized skills that patients need to survive and recover. Intensive care (ICU) nurses are trained to make split-second decisions and act quickly when the patient's condition changes. Their primary work environment is in the hospital, in intensive care units. Typically, ICU patients require a high level of care. ICU nurses work in challenging and complex fields of the nursing profession. They have the primary duty of caring for and saving patients who are fighting for their lives. Intensive care (ICU) nurses are highly trained to provide exceptional care to patients who depend on 24/7 nursing care. A patient in the ICU is often on a ventilator, intubated, and connected to several life-support machines and medical equipment. ICU nurses have full expertise in all aspects of bringing their patients back to health. Some of the specific responsibilities of ICU nurses include: (a) assessing and monitoring the patient's progress and identifying any sudden changes in the patient's medical condition; (b) administering drugs intravenously by injection or through gastric tubes; (c) providing regular updates on patient progress to physicians, patients, and their families; (d) performing the approved diagnostic or treatment procedures according to the clinical condition of the patient; (e) informing the relevant doctors in case of a health emergency; (f) evaluating laboratory data and the vital signs of patients to determine the need for emergency interventions; (g) caring for patient needs during recovery in the ICU; (h) often providing emotional support to patients and their families; (i) setting up and monitoring medical equipment and devices such as ventilators, oxygen delivery devices, transducers, and pressure lines; (j) assessing patients' pain level and sedation needs; (k) maintaining patient reports and records. As the name suggests, critical care nurses work primarily in ICU healthcare units. ICUs are kept sanitary and properly lit, with strict adherence to the health and safety standards of medical centers. ICU nurses usually move between the intensive care unit, the emergency department, the operating room, and other special departments of the hospital. ICU nurses usually follow a standard shift schedule that includes morning, afternoon, and night shifts; other rotation schemes exist depending on the hospital and region. Nurses who are passionate about data and about managing a patient's condition and outcomes typically do well as ICU nurses. An inquisitive mind and attention to processes are equally important. ICU nurses are deeply compassionate and are not afraid to advocate for their patients and for family members who are distressed.

Keywords: nursing, intensive care unit, pediatric intensive care unit, mobile intensive care unit, surgical intensive care unit

Procedia PDF Downloads 55
496 White Wine Discrimination Based on Deconvoluted Surface Enhanced Raman Spectroscopy Signals

Authors: Dana Alina Magdas, Nicoleta Simona Vedeanu, Ioana Feher, Rares Stiufiuc

Abstract:

The authentication of food and beverages using rapid and inexpensive analytical tools is an important present-day challenge. In this regard, the potential of vibrational techniques in food authentication has gained increased attention in recent years. For wine discrimination, Raman spectroscopy appears more feasible than infrared (IR) spectroscopy because of the relatively weak water bending mode in the vibrational fingerprint range. Despite this, the use of the Raman technique for wine discrimination is still at an early stage. With this in mind, the present work reports the wine-discrimination potential of the surface-enhanced Raman scattering (SERS) technique. The novelty of this study, compared with previously reported applications of vibrational techniques to wine discrimination, is that the wines are differentiated on the basis of the individual signals obtained from deconvoluted spectra. To achieve wine classification with respect to variety, geographical origin, and vintage, the peak intensities obtained after spectral deconvolution were compared using supervised chemometric methods such as Linear Discriminant Analysis (LDA). For this purpose, a set of 20 white Romanian wines of four varieties, from different Romanian viticultural regions, was considered. Chemometric methods applied directly to the raw SERS spectra proved efficient, but the identification of discrimination markers was very difficult because of overlapping signals and band shifts. The present approach gives a better overall view of the compositional differences among the wines.
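A hedged sketch of the supervised classification step, assuming Python with scikit-learn; the feature matrix of deconvoluted peak intensities, the variety labels, and the sample counts are placeholders rather than the authors' data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Rows: wine samples; columns: peak intensities extracted after spectral
# deconvolution of the SERS spectra (illustrative random data).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 12))   # 20 wines, 12 deconvoluted peak intensities
y = np.repeat(["variety_A", "variety_B", "variety_C", "variety_D"], 5)  # hypothetical labels

# Standardize the peak intensities, then apply LDA as the supervised classifier.
model = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated variety classification accuracy: {scores.mean():.2f}")
```

The same pipeline could be refit with geographical-origin or vintage labels to reproduce the other discrimination tasks mentioned above.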

Keywords: chemometrics, SERS, variety, wine discrimination

Procedia PDF Downloads 140
495 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters

Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev

Abstract:

Humanity is confronted more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain the earliest possible signals about events that have occurred or may occur, and to prepare the corresponding scenarios to be applied. Our research focuses on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some of the consequences of disasters. To solve the first problem, methods for selecting and processing texts from the Internet were developed; information in Romanian is of special interest to us. Obtaining these tools requires several steps, divided into a preparatory stage and a processing stage. During the first stage, we manually collected over 724 news articles, comprising more than 150 thousand words, and classified them into 10 categories of social disasters. Using this information, a controlled vocabulary of more than 300 keywords was compiled to support the classification and identification of texts related to the field of social disasters. To solve the second problem, the Petri net formalism was used; we address the problem of evacuating inhabitants in useful time. Analysis methods such as the reachability or coverability tree and the invariant technique are used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended with time, the PIPE analysis modules Generalized Stochastic Petri Net (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis were used. These modules helped us obtain the average number of persons in the rooms and other quantitative properties and characteristics of the system's dynamics.
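An illustrative sketch of how a controlled vocabulary could drive the classification of news texts, in Python; the keywords and categories below are invented English examples, not the actual Romanian vocabulary built by the authors:

```python
from collections import Counter

# Hypothetical fragment of a controlled vocabulary: keyword -> disaster category.
CONTROLLED_VOCABULARY = {
    "explosion": "industrial accident",
    "evacuation": "industrial accident",
    "strike": "social unrest",
    "protest": "social unrest",
    "epidemic": "public health",
    "outbreak": "public health",
}

def classify_text(text):
    """Assign the category whose vocabulary keywords occur most often in the text."""
    tokens = text.lower().split()
    votes = Counter(CONTROLLED_VOCABULARY[t] for t in tokens if t in CONTROLLED_VOCABULARY)
    return votes.most_common(1)[0][0] if votes else None

print(classify_text("residents report an explosion and evacuation near the plant"))
# -> industrial accident
```

A text with no vocabulary hits returns None and would be left for manual review or for a richer classifier.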

Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters

Procedia PDF Downloads 187
494 A Machine Learning Approach for Detecting and Locating Hardware Trojans

Authors: Kaiwen Zheng, Wanting Zhou, Nan Tang, Lei Li, Yuanhang He

Abstract:

The integrated circuit industry has become a cornerstone of the information society, finding widespread application in areas such as industry, communication, medicine, and aerospace. However, with the increasing complexity of integrated circuits, Hardware Trojans (HTs) implanted by attackers have become a significant threat to their security. In this paper, we propose a hardware Trojan detection method for large-scale circuits. Because HTs are additional redundant circuits that introduce changes in physical characteristics such as structure, area, and power consumption, we propose a machine-learning-based detection method built on the physical characteristics of gate-level netlists. This method transforms hardware Trojan detection into a machine-learning binary classification problem over physical characteristics, greatly improving detection speed. To address the problem of imbalanced data, where the number of pure circuit samples is far smaller than the number of HT circuit samples, we used the SMOTETomek algorithm to expand the dataset and further improve the performance of the classifier. We used three machine learning algorithms, K-Nearest Neighbors, Random Forest, and Support Vector Machine, to train and validate on benchmark circuits from Trust-Hub, and all achieved good results. In case studies based on AES encryption circuits provided by Trust-Hub, the test results showed the effectiveness of the proposed method. To further validate the method's effectiveness for detecting variant HTs, we designed variant HTs using open-source HTs. The proposed method provides robust detection accuracy with millisecond-level detection times for IC and FPGA design flows and has good detection performance for library-variant HTs.
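A minimal sketch of the rebalancing and binary-classification step, assuming Python with scikit-learn and imbalanced-learn; the netlist feature matrix and label proportions are synthetic placeholders for the gate-level physical characteristics described above:

```python
import numpy as np
from imblearn.combine import SMOTETomek
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Placeholder features per netlist node: e.g. fan-in, fan-out, switching
# activity, and area/power estimates extracted from the gate-level netlist.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 6))
y = (rng.random(2000) < 0.05).astype(int)   # heavily imbalanced binary labels (illustrative)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Rebalance the training set with SMOTETomek (SMOTE oversampling + Tomek-link cleaning).
X_res, y_res = SMOTETomek(random_state=0).fit_resample(X_train, y_train)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_res, y_res)
print(classification_report(y_test, clf.predict(X_test)))
```

The Random Forest here stands in for any of the three classifiers named in the abstract; KNN or SVM could be substituted with the same rebalanced training data.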

Keywords: hardware trojans, physical properties, machine learning, hardware security

Procedia PDF Downloads 126
493 Investigation of Different Machine Learning Algorithms in Large-Scale Land Cover Mapping within the Google Earth Engine

Authors: Amin Naboureh, Ainong Li, Jinhu Bian, Guangbin Lei, Hamid Ebrahimy

Abstract:

Large-scale land cover mapping has become a new challenge in the land change and remote sensing fields because it involves a large volume of data. Moreover, selecting the right classification method is difficult, especially when the study area contains different types of landscapes. This paper compares the performance of different machine learning (ML) algorithms for generating a land cover map of the China-Central Asia-West Asia Corridor, considered one of the main parts of the Belt and Road Initiative (BRI). The cloud-based Google Earth Engine (GEE) platform was used to generate a land cover map of the study area from Landsat-8 images (2017) by applying three frequently used ML algorithms: random forest (RF), support vector machine (SVM), and artificial neural network (ANN). The selected ML algorithms were trained and tested using reference data obtained from the MODIS yearly land cover product and very high-resolution satellite images. The findings illustrate that, among the three algorithms, RF, with 91% overall accuracy, produced the best land cover map for the China-Central Asia-West Asia Corridor, whereas ANN showed the worst result, with 85% overall accuracy. The strong performance of GEE in applying different ML algorithms and handling a huge volume of remotely sensed data in the present study shows that it could also help researchers generate reliable long-term land cover change maps. The findings of this research are of great importance to decision-makers and BRI authorities for strategic land use planning.
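A hedged sketch of the GEE classification workflow, using the Python earthengine-api; the image collection ID, band names, label property, and the reference-point asset are illustrative assumptions rather than the authors' exact inputs:

```python
import ee
ee.Initialize()

# Median Landsat-8 surface-reflectance composite for 2017 (illustrative collection/bands).
bands = ["SR_B2", "SR_B3", "SR_B4", "SR_B5", "SR_B6", "SR_B7"]
image = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
         .filterDate("2017-01-01", "2017-12-31")
         .median()
         .select(bands))

# Assumed reference points with a 'landcover' class property (e.g. derived from MODIS).
points = ee.FeatureCollection("users/example/reference_points")  # hypothetical asset
samples = image.sampleRegions(collection=points, properties=["landcover"], scale=30)

split = samples.randomColumn()
training = split.filter(ee.Filter.lt("random", 0.7))
testing = split.filter(ee.Filter.gte("random", 0.7))

# Train one of the compared classifiers; smileCart or libsvm could be swapped in similarly.
rf = (ee.Classifier.smileRandomForest(numberOfTrees=100)
      .train(features=training, classProperty="landcover", inputProperties=bands))

classified = image.classify(rf)
matrix = testing.classify(rf).errorMatrix("landcover", "classification")
print("Overall accuracy:", matrix.accuracy().getInfo())
```

Running the same script with the SVM and ANN-style classifiers available in GEE would reproduce the three-way comparison described above.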

Keywords: land cover, google earth engine, machine learning, remote sensing

Procedia PDF Downloads 103
492 The Predictors of Head and Neck Cancer-Related Lymphedema in Patients with Resected Advanced Head and Neck Cancer

Authors: Shu-Ching Chen, Li-Yun Lee

Abstract:

The purpose of the study was to identify the factors associated with head and neck cancer-related lymphedema (HNCRL)-related symptoms, body image, and HNCRL-related functional outcomes among patients with resected advanced head and neck cancer. A cross-sectional correlational design was used to examine the predictors of HNCRL-related functional outcomes in these patients. Eligible patients were recruited from a single medical center in northern Taiwan; consecutive patients were approached and recruited from the Radiation Head and Neck Outpatient Department of this medical center. Eligible subjects were assessed with the Symptom Distress Scale-Modified for Head and Neck Cancer (SDS-mhnc), the Brief International Classification of Functioning, Disability and Health (ICF) Core Set for Head and Neck Cancer (BCSQ-H&N), the Body Image Scale-Modified (BIS-m), the MD Anderson Head and Neck Lymphedema Rating Scale (MDAHNLRS), Foldi's Stages of Lymphedema (Foldi's Scale), Patterson's Scale, the UCLA Shoulder Rating Scale (UCLA SRS), and the Karnofsky Performance Status Index (KPS). The results showed that the worst problems were with body HNCRL functional outcomes. Patients' HNCRL symptom distress and performance status were robust predictors of overall HNCRL functional outcomes, of problems with body HNCRL functional outcomes, and of activity and social functioning HNCRL functional outcomes. Based on the results of this research program, we will develop a Cancer Rehabilitation and Lymphedema Care Program (CRLCP) for use in the care of patients with resected advanced head and neck cancer.
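A hedged illustration of how such a predictor analysis could be set up, in Python with statsmodels; the variable names and the simulated data are placeholders standing in for the study's scale scores, not the actual measurements:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-ins for the study variables: symptom distress (SDS-mhnc),
# performance status (KPS), and an overall HNCRL functional-outcome score.
rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "symptom_distress": rng.normal(30, 8, n),
    "performance_status": rng.choice([60, 70, 80, 90, 100], n),
})
df["functional_outcome"] = (0.6 * df["symptom_distress"]
                            - 0.3 * df["performance_status"]
                            + rng.normal(0, 5, n))

# Multiple regression of the functional outcome on the candidate predictors.
X = sm.add_constant(df[["symptom_distress", "performance_status"]])
model = sm.OLS(df["functional_outcome"], X).fit()
print(model.summary())   # coefficient estimates and p-values indicate robust predictors
```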

Keywords: head and neck cancer, resected, lymphedema, symptom, body image, functional outcome

Procedia PDF Downloads 236
491 A Critical Study on Unprecedented Employment Discrimination and Growth of Contractual Labour Engaged by Rail Industry in India

Authors: Munmunlisa Mohanty, K. D. Raju

Abstract:

The rail industry, one of the model employers in India, has separate national legislation (the Railways Act, 1989) to regulate its vast employment structure functioning across the country. Indian Railways is not only the premier transport industry of the country; it is Asia's most extensive rail network organisation and the world's second-largest industry functioning under one management. With the globalization of industrial products, the scope of anti-employment-discrimination protection is no longer confined to gender alone; it has extended to the unregularized classification of the labour force in the various industrial establishments in India. The Indian rail industry has inadvertently reinforced such discriminatory employment trends by engaging contractual labour in an unprecedented manner. The engagement of contractual labour by the rail industry has eroded the core employer-employee relationship between rail management and contractual labour employed through contractors. This employment trend reduces the cost of production and supervision, discourages contractual labour from forming unions, and reduces their collective bargaining capacity. The primary intention of this paper is therefore to highlight the increasingly discriminatory employment conditions of contractual labour engaged by Indian Railways. The paper critically analyses the diminishing employment opportunities afforded by Indian Railways to contractual labour and calls for urgent attention to the probable scope of employment discrimination against contractual labour engaged by Indian Railways. The researchers used a doctrinal methodology in which primary materials (the Railways Act, the Contract Labour Act, and the Occupational Safety, Health and Working Conditions Code, 2020) and secondary data (the CAG Report 2018, Railways Employment Regulation Rules, ILO reports, etc.) are used.

Keywords: anti-employment, CAG Report, contractual labour, discrimination, Indian Railway, principal employer

Procedia PDF Downloads 143
490 Impact of Marine Hydrodynamics and Coastal Morphology on Changes in Mangrove Forests (Case Study: West of Strait of Hormuz, Iran)

Authors: Fatemeh Parhizkar, Mojtaba Yamani, Abdolla Behboodi, Masoomeh Hashemi

Abstract:

Mangrove forests are valuable natural assets found in some parts of the world, including Iran. Given the threats these forests face and their declining area worldwide, as well as in Iran, it is essential to manage and monitor them. The current study investigates the changes in mangrove forests and the relationship between these changes and marine hydrodynamics and coastal morphology in the area between Qeshm Island and the west coast of Hormozgan province (i.e., the coastline between the Mehran River and Bandar-e Pol port) over a 49-year period. After preprocessing and classifying satellite images using the SVM, MLC, and ANN classifiers and evaluating the accuracy of the resulting maps, the SVM approach, which had the highest accuracy (a kappa coefficient of 0.97 and an overall accuracy of 98%), was selected for preparing the classification map of all images. The results indicate that from 1972 to 1987 the area of these forests declined, and in the following years their expansion began. These forests include the mangrove forests of the Khurkhuran wetland, the Muriz Deraz Estuary, the Haft Baram Estuary, the mangrove forest south of Laft Port, and the mangrove forests between the Tabl Pier, Maleki Village, and Gevarzin Village. The marine hydrodynamic and geomorphological characteristics of the region, such as the average intertidal zone, sediment characteristics, the freshwater inlet of the Mehran River, wave stability and calmness, and topography and slope, together with mangrove conservation projects, make the further expansion of mangrove forests in this area possible. By providing significant and up-to-date information on the development and decline of mangrove forests in different parts of the coast, this study can contribute substantially to measures for the conservation and restoration of mangrove forests.
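A minimal sketch of the pixel-classification and accuracy-assessment step, assuming Python with scikit-learn; the spectral feature matrix and class labels are synthetic placeholders for the satellite-image training samples, and the class scheme is illustrative:

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in: band reflectances per pixel and labels
# (0 = water, 1 = mangrove, 2 = bare land), purely illustrative.
rng = np.random.default_rng(7)
X = rng.normal(size=(3000, 6))
y = rng.integers(0, 3, size=3000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# RBF-kernel SVM classifier; MLC or ANN classifiers would be compared the same way.
svm = SVC(kernel="rbf", C=10, gamma="scale").fit(X_train, y_train)
pred = svm.predict(X_test)

print("Overall accuracy:", accuracy_score(y_test, pred))
print("Kappa coefficient:", cohen_kappa_score(y_test, pred))
```

Applying the fitted classifier to every image date and differencing the resulting mangrove masks would yield the change maps discussed above.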

Keywords: mangrove forests, marine hydrodynamics, coastal morphology, west of strait of Hormuz, Iran

Procedia PDF Downloads 78
489 Case Presentation: Ectopic Cushing's Syndrome Secondary to a Thymic Neuroendocrine Tumor Secreting ACTH

Authors: Hasan Frookh Jamal

Abstract:

This is the case of a 36-year-old Bahraini gentleman diagnosed with Cushing's syndrome and a large anterior mediastinal mass. He was sent abroad to the Specialty Hospital in Jordan, where he underwent diagnostic video-assisted thoracoscopy, partial thymectomy, and pericardial fat excision. Histopathology of the mass was reported as an atypical carcinoid tumor with a low Ki-67 proliferation index of 5%, mitotic activity of 4 MF/10 HPF, and pathological stage classification (pTNM) pT1aN1. MRI of the pituitary gland showed an ill-defined, non-enhancing focus of about 3 mm on the right side of the pituitary on coronal images, with a similar but smaller one on the left side, which could reflect the enhancement pattern rather than a real lesion, as reported. The patient underwent a Ga-68 DOTATATE PET/CT scan post-operatively, which showed multiple somatostatin-receptor-positive lesions within the tail, body, and head of the pancreas and somatostatin-receptor-positive lymph nodes between the pancreatic head and the IVC. No uptake was detected in the anterior mediastinum or at the site of the thymic mass resection, and there was no evidence of positive somatostatin uptake in the soft tissue or lymph nodes elsewhere. The patient underwent inferior petrosal sinus sampling (IPSS), which proved that the ACTH secretion is, in fact, from an ectopic source. Unfortunately, the patient's serum cortisol remained elevated after surgery and failed to be suppressed by the 1 mg overnight dexamethasone suppression test and by the 2-day low-dose dexamethasone suppression test, with a high ACTH value. The patient was started on osilodrostat to treat the hypercortisolism for the time being, and his future treatment plan, Lutetium-177 DOTATATE therapy versus bilateral adrenalectomy, is to be considered in a multidisciplinary team (MDT) meeting.

Keywords: Cushing's syndrome, neuroendocrine tumor, carcinoid tumor, thymoma

Procedia PDF Downloads 66