Search results for: algorithm techniques
1275 Spatio-Temporal Analysis of Land Use Change and Green Cover Index
Authors: Poonam Sharma, Ankur Srivastav
Abstract:
Cities are complex and dynamic systems that pose a significant challenge to urban planning. The increasing size of the built-up area, owing to growing population pressure and economic growth, has led to massive land use/land cover change, resulting in the loss of natural habitat and thus reducing green cover in urban areas. Urban environmental quality is influenced by several aspects, including a city's geographical configuration, the scale and nature of the human activities occurring in it, and the environmental impacts generated. Cities and their sustainability are often discussed together, as cities stand confronted with numerous environmental concerns while the world becomes increasingly urbanized, and they are situated in a mesh of global networks in multiple senses. A rapidly transforming urban setting plays a crucial role in changing the green areas of natural habitats. This paper attempts to examine the pattern of urban growth and to measure land use/land cover change in Gurgaon, Haryana, India through the integration of geospatial techniques. Satellite images are used to measure the spatiotemporal changes that have occurred in land use and land cover, resulting in a new cityscape. The analysis shows that drastic changes in land use have occurred, with a massive rise in built-up areas and a decrease in green cover, making the sustainability of the city an important area of concern. The massive increase in built-up area has influenced localised temperatures and heat concentration. To enhance the decision-making process in urban planning, a detailed and real-world depiction of these urban spaces is the need of the hour.
Monitoring indicators of key processes in land use and economic development is essential for evaluating policy measures.
Keywords: cityscape, geospatial techniques, green cover index, urban environmental quality, urban planning
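The abstract does not state how green cover is quantified from the satellite imagery. A common approach, sketched here purely as an illustration and not as the authors' method, is to compute a per-pixel vegetation index such as NDVI from the red and near-infrared bands and report the fraction of pixels above a threshold; the band values, threshold, and function names below are assumptions.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel's reflectances."""
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

def green_cover_index(nir_band, red_band, threshold=0.3):
    """Fraction of pixels whose NDVI exceeds `threshold` (treated as green cover).

    Bands are given as flat, equal-length lists of reflectance values.
    The 0.3 threshold is illustrative; operational studies calibrate it.
    """
    pixels = list(zip(nir_band, red_band))
    green = sum(1 for n, r in pixels if ndvi(n, r) > threshold)
    return green / len(pixels)
```

Comparing the index between two image dates would then give a simple spatio-temporal measure of green cover change.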
Procedia PDF Downloads 277
1274 Video Analytics on Pedagogy Using Big Data
Authors: Jamuna Loganath
Abstract:
Education is the key to the development of any individual's personality. Today's students will be tomorrow's citizens of the global society. The education of the student is the edifice on which his/her future will be built. Schools should therefore provide an all-round development of students so as to foster a healthy society. The behaviors and attitudes of students in school play an essential role in the success of the education process. Frequent reports of misbehaviors such as clowning, harassing classmates, and verbal insults are becoming common in schools today. If this issue is left unattended, it may foster a negative attitude and increase delinquent behavior. So, the need of the hour is to find a solution to this problem. To solve this issue, it is important to monitor students' behaviors in school, give necessary feedback, and mentor them to develop a positive attitude and become successful grownups. Nevertheless, measuring students' behavior and attitude is extremely challenging. No present technology has proven effective in this measurement process, because the actions, reactions, interactions, and responses of students are rarely captured as data, owing to their complexity. The purpose of this proposal is to recommend an effective supervising system, after carrying out a feasibility study, by measuring the behavior of students. This can be achieved by equipping schools with CCTV cameras. CCTV cameras installed in various schools around the world can capture the facial expressions and interactions of students inside and outside their classrooms. The real-time raw video captured from the CCTV can be uploaded to the cloud over a network. The video feeds are distributed across nodes in the same rack, or on different racks in the same cluster, in Hadoop HDFS. The video feeds are converted into small frames and analyzed using various pattern recognition algorithms and the MapReduce algorithm.
Then, the video frames are compared with a benchmarking database of good behavior. When misbehavior is detected, an alert message can be sent to the counseling department, which helps it mentor the students. This will help in improving the effectiveness of the education process. As video feeds come from multiple geographical areas (schools from different parts of the world), big data helps in real-time analysis, as it analyzes data computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions. It can also analyze data that cannot be handled by traditional software such as RDBMSs and OODBMSs, and it has proven successful in handling human reactions with ease. Therefore, big data could certainly play a vital role in handling this issue. Thus, the effectiveness of the education process can be enhanced with the help of video analytics using the latest big data technology.
Keywords: big data, cloud, CCTV, education process
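The map/reduce aggregation described above can be sketched in miniature. The per-frame classifier output, school identifiers, and alert threshold below are hypothetical stand-ins, since the abstract does not specify them; the sketch only shows the shape of a MapReduce-style count of misbehavior events per school.

```python
from collections import defaultdict

# Hypothetical per-frame classifier output: (school_id, behavior_label).
frames = [
    ("school_A", "normal"), ("school_A", "misbehavior"),
    ("school_A", "misbehavior"), ("school_B", "normal"),
]

def map_phase(frames):
    """Map each frame to (school, 1) for misbehavior, (school, 0) otherwise."""
    for school, label in frames:
        yield school, 1 if label == "misbehavior" else 0

def reduce_phase(mapped):
    """Sum misbehavior flags per school."""
    counts = defaultdict(int)
    for school, flag in mapped:
        counts[school] += flag
    return dict(counts)

def alerts(counts, threshold=2):
    """Schools whose misbehavior count reaches the (illustrative) threshold."""
    return [s for s, n in counts.items() if n >= threshold]
```

In a real Hadoop deployment the two phases would run as distributed MapReduce jobs over frames stored in HDFS rather than in-process generators.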
Procedia PDF Downloads 240
1273 A Case Study on an Integrated Analysis of Well Control and Blowout Accidents
Authors: Yasir Memon
Abstract:
The complexity and challenges in the offshore industry are greater than in the past, and the oil and gas industry expands every day by meeting these challenges. More challenging wells, longer and deeper, are being drilled in today's environment. Blowout prevention holds a worthy place in the oil and gas world. In past years, when the oil and gas industry was still growing, drilling operations were extremely dangerous: there was no technology to determine reservoir pressure, and drilling was hence a blind operation. A blowout arises when uncontrolled reservoir pressure enters the wellbore. A potential blowout in the oil industry is a danger to both the environment and human life, causing environmental damage, regulatory penalties, and loss of capital investment. There are many cases of blowouts in the oil and gas industry that caused damage to both humans and the environment, and huge capital investment is being made all over the world to keep blowout damage to the lowest possible level. The objective of this study is to promote safety and good resources to assure safety and environmental integrity in all operations during drilling. This study shows that human error and management failure are the main causes of blowouts; therefore, proper management, with the wise use of precautions, prevention methods, and controlling techniques, can reduce the probability of a blowout to a minimum level. The study also discusses the basic procedures, concepts, and equipment involved in well control methods and the various steps used under various conditions. Furthermore, another aim of this work is to highlight the role of management in oil and gas operations.
Moreover, this study analyzes the causes of the blowout of the Macondo well that occurred in the Gulf of Mexico on April 20, 2010; delivers recommendations and analysis of various aspects of well control methods; provides a list of the mistakes and compromises that British Petroleum and its partners made during drilling and well completion; and shows that the Macondo well disaster happened due to violations of various safety and development rules. This case study concludes that the Macondo well blowout disaster could have been avoided with proper management of personnel and communication between them, and that by following safety rules and laws the environmental damage could have been kept to a minimum.
Keywords: energy, environment, oil and gas industry, Macondo well accident
Procedia PDF Downloads 187
1272 Assessment of Acute Oral Toxicity Studies and Anti-Diabetic Activity of Herbal Mediated Nanomedicine
Authors: Shanker Kalakotla, Krishna Mohan Gottumukkala
Abstract:
Diabetes is a metabolic disorder characterized by hyperglycemia and altered carbohydrate, lipid, and protein metabolism. In recent research, nanotechnology is a blazing field; lately there has been prodigious excitement in the nanomedicine and nanopharmacology area for the study of silver nanoparticle synthesis using natural products. Biological methods have been used to synthesize silver nanoparticles in the presence of medicinally active antidiabetic plants, and this intention led us to assess silver nanoparticles biologically synthesized from the seed extract of Psoralea corylifolia using 1 mM silver nitrate solution. The synthesized herbal mediated silver nanoparticles (HMSNPs) were then subjected to various characterization techniques, namely XRD, SEM, EDX, TEM, DLS, UV-vis, and FT-IR. In the current study, the silver nanoparticles were tested for in-vitro anti-diabetic activity and possible toxic effects in healthy female albino mice following OECD guideline 425. Herbal mediated silver nanoparticles were successfully obtained from the bioreduction of silver nitrate using Psoralea corylifolia plant extract. The silver nanoparticles were appropriately characterized and confirmed using different types of equipment, viz., UV-vis spectroscopy, XRD, FTIR, DLS, SEM, and EDX analysis. From the behavioral observations of the study, the female albino mice did not show sedation, respiratory arrest, or convulsions. The test compounds did not cause any mortality at the dose level tested (i.e., 2000 mg/kg body weight) through the end of the 14 days of observation and were considered safe. It may be concluded that the LD50 of the HMSNPs was 2000 mg/kg body weight; accordingly, the preferred dose range for HMSNPs falls between 200 and 400 mg/kg.
Further in-vivo pharmacological models and biochemical investigations will clearly elucidate the mechanism of action and will be helpful in projecting the currently synthesized silver nanoparticles as a therapeutic agent for treating chronic ailments.
Keywords: herbal mediated silver nanoparticles, HMSNPs, toxicity of silver nanoparticles, PTP1B in-vitro anti-diabetic assay, female albino mice, OECD guideline 425
Procedia PDF Downloads 273
1271 A Systematic Review of the Psychometric Properties of Augmentative and Alternative Communication Assessment Tools in Adolescents with Complex Communication Needs
Authors: Nadwah Onwi, Puspa Maniam, Azmawanie A. Aziz, Fairus Mukhtar, Nor Azrita Mohamed Zin, Nurul Haslina Mohd Zin, Nurul Fatehah Ismail, Mohamad Safwan Yusoff, Susilidianamanalu Abd Rahman, Siti Munirah Harris, Maryam Aizuddin
Abstract:
Objective: Malaysia has a growing number of individuals with complex communication needs (CCN). The initiation of augmentative and alternative communication (AAC) intervention may help individuals with CCN to understand and express themselves optimally and to participate actively in their daily activities. AAC is defined as the multimodal use of communication ability, allowing individuals to use every mode possible to communicate with others through a set of symbols or systems that may include symbols, aids, techniques, and strategies. It is consequently critical to evaluate deficits to inform treatment in AAC intervention. However, no measurement tools for evaluating users with CCN are available locally. Design: A systematic review (SR) was designed to analyze the psychometric properties of AAC assessments for adolescents with CCN published in peer-reviewed journals. Tools were rated by the methodological quality of the studies and the psychometric measurement qualities of each tool. Method: A literature search identified AAC assessment tools with psychometrically robust properties, and a conceptual framework was considered. Two independent reviewers screened the abstracts and full-text articles and reviewed bibliographies for further references. Data were extracted using standardized forms, and study risk of bias was assessed. Result: The review highlights the psychometric properties of AAC assessment tools usable by speech-language therapists in the Malaysian context. The work outlines how systematic review methods may be applied to the consideration of published material, providing valuable data to initiate the development of Malay-language AAC assessment tools.
Conclusion: The synthesis of evidence has provided a framework for Malaysian speech-language therapists in making informed decisions on AAC intervention within the standard operating procedure of the Ministry of Health, Malaysia.
Keywords: augmentative and alternative communication, assessment, adolescents, complex communication needs
Procedia PDF Downloads 152
1270 Lead Chalcogenide Quantum Dots for Use in Radiation Detectors
Authors: Tom Nakotte, Hongmei Luo
Abstract:
Lead chalcogenide-based (PbS, PbSe, and PbTe) quantum dots (QDs) were synthesized for the purpose of implementing them in radiation detectors. Pb-based materials have long been of interest for gamma and x-ray detection due to their high absorption cross section and atomic number. The emphasis of the studies was on exploring how to control charge carrier transport within thin films containing the QDs. The properties of the QDs themselves can be altered by changing the size, shape, composition, and surface chemistry of the dots, while the properties of carrier transport within QD films are affected by post-deposition treatment of the films. The QDs were synthesized using colloidal synthesis methods, and films were grown using multiple film coating techniques, such as spin coating and doctor blading. Current QD radiation detectors use the QDs as fluorophores in a scintillation detector. Here, the viability of using QDs in solid-state radiation detectors, in which the incident radiation causes a direct electronic response within the QD film, is explored. Achieving high sensitivity and accurate energy quantification in QD radiation detectors requires large carrier mobility and diffusion lengths in the QD films. Pb chalcogenide-based QDs were synthesized with both traditional oleic acid ligands and more weakly binding oleylamine ligands, allowing for in-solution ligand exchange and making the deposition of thick films in a single step possible. The PbS and PbSe QDs showed better air stability than PbTe. After precipitation, the QDs passivated with the shorter ligand are dispersed in 2,6-difluoropyridine, resulting in colloidal solutions with concentrations anywhere from 10-100 mg/mL for film processing applications. More concentrated colloidal solutions produce thicker films during spin coating, while an extremely concentrated solution (100 mg/mL) can be used to produce films several micrometers thick using doctor blading.
Film thicknesses of micrometers or even millimeters are needed in radiation detectors for high-energy gamma rays, which are of interest for astrophysics and nuclear security, in order to provide sufficient stopping power.
Keywords: colloidal synthesis, lead chalcogenide, radiation detectors, quantum dots
Procedia PDF Downloads 127
1269 Atomic Scale Storage Mechanism Study of the Advanced Anode Materials for Lithium-Ion Batteries
Authors: Xi Wang, Yoshio Bando
Abstract:
Lithium-ion batteries (LIBs) can deliver high energy storage density and offer long operating lifetimes, but their power density is too low for many important applications. Therefore, we developed new strategies and fabricated novel electrodes for fast Li transport with facile synthesis, including N-doped graphene-SnO2 sandwich papers, a bicontinuous nanoporous Cu/Li4Ti5O12 electrode, and binder-free N-doped graphene papers. In addition, by using advanced in-situ TEM and STEM techniques together with theoretical simulations, we systematically studied their storage mechanisms at the atomic scale, shedding new light on the reasons for the ultrafast lithium storage and high capacity of these advanced anodes. For example, using advanced in-situ TEM, we directly investigated these processes with an individual CuO nanowire anode and constructed a LIB prototype within a TEM. Although they are promising anode candidates for LIBs, transition metal oxide anodes utilizing the so-called conversion mechanism typically suffer from severe capacity fading during the first lithiation-delithiation cycle. We also report atomistic insights into GN energy storage as revealed by in-situ TEM: the lithiation process on edges and basal planes is directly visualized, the pyrrolic N "hole" defect and the perturbed solid-electrolyte-interface (SEI) configurations are observed, and the charge transfer states of three N-containing forms are also investigated.
In-situ HRTEM experiments together with theoretical calculations provide solid evidence that enlarged edge {0001} spacings and surface "hole" defects result in improved surface capacitive effects and thus high rate capability, and that the high capacity is owing to short-distance orderings at the edges during discharging and numerous surface defects; these phenomena could not previously be understood by standard electron or X-ray diffraction analyses.
Keywords: in-situ TEM, STEM, advanced anode, lithium-ion batteries, storage mechanism
Procedia PDF Downloads 352
1268 External Business Environment and Sustainability of Micro, Small and Medium Enterprises in Jigawa State, Nigeria
Authors: Shehu Isyaku
Abstract:
The general objective of the study was to investigate the relationship between the external business environment and the sustainability of micro, small and medium enterprises (MSMEs) in Jigawa state, Nigeria. Specifically, the study examined the relationship between 1) the economic environment, 2) the social environment, 3) the technological environment, and 4) the political environment and the sustainability of MSMEs in Jigawa state, Nigeria. The study drew on the Resource-Based View (RBV) and the Knowledge-Based View (KBV) theories and employed a descriptive cross-sectional survey design. A researcher-made questionnaire was used to collect data from 350 managers/owners who were selected using stratified, purposive, and simple random sampling techniques. Data analysis was done using means and standard deviations, factor analysis, correlation coefficients, and Pearson linear regression analysis. The findings revealed that the sustainability potentials of the managers/owners (economic, environmental, and social sustainability) were rated as high on a 5-point Likert scale, and mean ratings indicated the external business environment was highly effective. The results from the Pearson linear regression analysis rejected the hypothesized non-significant effect of the external business environment on the sustainability of MSMEs. Specifically, there is a positive significant relationship between 1) the economic environment and sustainability, 2) the social environment and sustainability, 3) the technological environment and sustainability, and 4) the political environment and sustainability.
The researcher concluded that MSME managers/owners have a high potential for economic, social, and environmental sustainability and that all the constructs of the external business environment (economic, social, technological, and political) have a positive significant relationship with the sustainability of MSMEs. Finally, the researcher recommended that 1) MSME managers/owners develop marketing strategies and intelligence systems to accumulate information about competitors and customers' demands, and 2) managers/owners treat customers' cultural and religious beliefs as an opportunity to be considered while formulating business strategies.
Keywords: business environment, sustainability, small and medium enterprises, external business environment
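The Pearson correlation coefficient underlying the regression analysis above can be computed directly. This is a generic sketch of the standard formula, not the authors' analysis pipeline; the sample vectors are illustrative.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples:
    covariance of the deviations divided by the product of their norms."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A value near +1 (as between, say, environment-effectiveness scores and sustainability ratings) indicates the kind of positive significant relationship the study reports, subject to a significance test on the sample size.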
Procedia PDF Downloads 53
1267 Factors Militating Against the Organization of Intramural Sport Programs in Secondary Schools: A Case Study of the Ekiti West Local Government Area of Ekiti State, Nigeria
Authors: Adewole Taiwo Adelabu
Abstract:
The study investigated the factors militating against the organization of intramural sports programs in secondary schools in Ekiti State, Nigeria. The purpose of the study was to identify the factors affecting the organization of sports in secondary schools and to proffer possible solutions. The study employed the inferential statistic of chi-square (χ²), and five research hypotheses were formulated. The population for the study was all students in government-owned secondary schools in the Ekiti West Local Government Area of Ekiti State, Nigeria. The sample was 60 students in three schools within the local government, selected through simple random sampling techniques. The instrument used for data collection was a questionnaire developed by the researcher, which was presented to experts and academics in the field of Human Kinetics and Health Education for construct and content validation. A reliability test was conducted involving 10 students who were not part of the study; the test-retest coefficient obtained was 0.74, which attested that the instrument was reliable enough for the study. The validated questionnaire was administered to the students in their various schools by the researcher with the help of two research assistants; the questionnaires were filled in and returned to the researcher immediately. The data collected were analyzed using the descriptive statistics of frequency count, percentage, and mean for the demographic data in section A of the questionnaire, while the inferential statistic of chi-square was used to test the hypotheses at the 0.05 alpha level. The results revealed that personnel, funding, and scheduling (time) were significant factors affecting the organization of intramural sports programs among secondary school students in the Ekiti West Local Government Area of the state.
The study also revealed that organizing intramural sports programs among secondary school students would improve and motivate students' participation in sports beyond the local level. However, facilities and equipment were not a significant factor affecting the organization of intramural sports among secondary school students in the Ekiti West Local Government Area.
Keywords: challenge, intramural sport, militating, programmes
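The chi-square statistic used to test the hypotheses above can be sketched for a contingency table of questionnaire responses. This is the standard test statistic, not the study's actual data; the example table is invented for illustration, and the statistic would still be compared against a χ² critical value at the 0.05 level.

```python
def chi_square(observed):
    """Chi-square statistic for a 2D contingency table (list of rows):
    sum over cells of (observed - expected)^2 / expected, where the
    expected count is row_total * col_total / grand_total."""
    row_totals = [sum(r) for r in observed]
    col_totals = [sum(c) for c in zip(*observed)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            e = row_totals[i] * col_totals[j] / grand
            stat += (o - e) ** 2 / e
    return stat
```

For a 2x2 table the statistic has one degree of freedom, so values above about 3.84 are significant at the 0.05 alpha level.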
Procedia PDF Downloads 149
1266 Creating and Questioning Research-Oriented Digital Outputs to Manuscript Metadata: A Case-Based Methodological Investigation
Authors: Diandra Cristache
Abstract:
The transition of traditional manuscript studies into the digital framework closely affects the methodological premises upon which manuscript descriptions are modeled, created, and questioned for the purpose of research. This paper explores the issue by presenting a methodological investigation into the process of modeling, creating, and questioning manuscript metadata. The investigation is founded on a close observation of the Polonsky Greek Manuscripts Project, a collaboration between the Universities of Cambridge and Heidelberg. More than just providing a realistic ground for methodological exploration, along with a complete metadata set for computational demonstration, the case study also contributes to a broader purpose: outlining general methodological principles for making the most of manuscript metadata by means of research-oriented digital outputs. The analysis mainly focuses on the scholarly approach to manuscript descriptions in the specific instance where the act of metadata recording does not have a programmatic research purpose. Close attention is paid to the encounter of 'traditional' practices in manuscript studies with the formal constraints of the digital framework: does the shift in practices (especially from the straight narrative of free writing towards the hierarchical constraints of the TEI encoding model) impact the structure of metadata and its capability to answer specific research questions? It is argued that the flexible structure of TEI and traditional approaches to manuscript description lead to a proliferation of markup: does an 'encyclopedic' descriptive approach ensure the epistemological relevance of the digital outputs to metadata? To provide further insight into the computational approach to manuscript metadata, the metadata of the Polonsky project are processed with techniques of distant reading and data networking, resulting in a new group of digital outputs (relational graphs, geographic maps).
The computational process and the digital outputs are thoroughly illustrated and discussed. Eventually, a retrospective analysis evaluates how the digital outputs respond to the scientific expectations of research and, conversely, how the requirements of research questions feed back into the creation and enrichment of metadata in an iterative loop.
Keywords: digital manuscript studies, digital outputs to manuscripts metadata, metadata interoperability, methodological issues
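A minimal sketch of how TEI-encoded manuscript metadata can feed a relational graph: parse the XML and extract (manuscript, origin place) edges. The fragment below is drastically simplified relative to real TEI manuscript descriptions, and the place names are invented; only the element names (msDesc, origPlace) and the TEI namespace follow the TEI conventions the abstract refers to.

```python
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"
XML_NS = "{http://www.w3.org/XML/1998/namespace}"

# A simplified TEI fragment; real msDesc records are far richer.
sample = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <msDesc xml:id="ms1"><history><origin><origPlace>Constantinople</origPlace></origin></history></msDesc>
  <msDesc xml:id="ms2"><history><origin><origPlace>Athens</origPlace></origin></history></msDesc>
</TEI>"""

def origin_edges(tei_xml):
    """Return (manuscript id, origin place) pairs, i.e. the edge list of a
    bipartite manuscript-to-place relational graph."""
    root = ET.fromstring(tei_xml)
    edges = []
    for ms in root.iter(TEI_NS + "msDesc"):
        ms_id = ms.get(XML_NS + "id")
        place = ms.find(".//" + TEI_NS + "origPlace")
        if place is not None:
            edges.append((ms_id, place.text))
    return edges
```

The same edge list could drive a geographic map by resolving each origPlace to coordinates with a gazetteer.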
Procedia PDF Downloads 140
1265 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data
Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin
Abstract:
The high-resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e., individual anomalies) or as clusters (i.e., colonies of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools, due to limitations of the tools and the associated sizing algorithms, and due to the detection threshold of the tools (i.e., the minimum detectable feature dimension). Quantifying the measurement error in ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy safety and economic constraints. Studies of the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have been scarcely reported in the literature and are investigated in the present study. Limitations in the ILI tool and the clustering process can sometimes cause clustering error, defined as the error introduced during the clustering process by including or excluding a single anomaly or a group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributing factors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify measurement errors in ILI-reported anomaly length by comparing ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada.
Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted Type I anomalies, is markedly less than that for anomalies with clustering error, denoted Type II anomalies. A methodology employing data mining techniques is further proposed to classify Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.
Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline
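The Type I / Type II distinction can be illustrated with a toy rule: when an ILI-reported length and the corresponding field-measured length disagree beyond some relative tolerance, clustering error is suspected. The tolerance and the rule itself are assumptions for illustration only; the study proposes a data-mining classifier, not this threshold.

```python
def classify_anomaly(ili_length_mm, field_length_mm, rel_tol=0.25):
    """Label an anomaly 'Type I' (no suspected clustering error) when the
    ILI-reported and field-measured lengths agree within `rel_tol` of the
    field length, else 'Type II'. The 25% tolerance is illustrative."""
    discrepancy = abs(ili_length_mm - field_length_mm) / field_length_mm
    return "Type I" if discrepancy <= rel_tol else "Type II"
```

A clustered colony reported by the ILI tool as one long feature, for example, can show a length discrepancy of 100% or more against the field measurement and would be flagged Type II.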
Procedia PDF Downloads 309
1264 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs
Authors: Michela Quadrini
Abstract:
Chord diagrams occur in mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation to study chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one Watson-Crick base-pairing interaction between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which are attached a number of chords with distinct endpoints. There is a natural fattening on any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus, that of its associated surface. To each chord diagram and linear chord diagram, it is possible to associate the intersection graph. It consists of a graph whose vertices correspond to the chords of the diagram, whereas chord intersections are represented by connections between the vertices. This intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modeling LCDs in terms of the relations among chords. This set is composed of three operators: crossing, nesting, and concatenation.
The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows a unique algebraic term to be associated with each linear chord diagram, while the remaining operators allow the term to be rewritten through a set of appropriate rewriting rules. Such rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than existing ones. Such an LCD equivalence class could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus, and for studying the relations among other invariants.
Keywords: chord diagrams, linear chord diagram, equivalence class, topological language
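The intersection graph construction described above is concrete enough to sketch directly: represent each chord as a pair of backbone positions, and connect two chords exactly when their endpoints interleave (the crossing relation). The chord encoding below is our own illustrative convention, not the paper's grammar.

```python
def crosses(c1, c2):
    """Two chords (a, b) and (c, d) on a linear backbone cross iff their
    endpoints interleave: a < c < b < d or c < a < d < b."""
    (a, b), (c, d) = sorted(c1), sorted(c2)
    return a < c < b < d or c < a < d < b

def intersection_graph(chords):
    """Adjacency list: one vertex per chord, an edge whenever two chords cross.
    Nested or concatenated chords produce no edge."""
    n = len(chords)
    return {i: [j for j in range(n) if j != i and crosses(chords[i], chords[j])]
            for i in range(n)}
```

In this encoding, two LCDs are in the same equivalence class of the paper when this construction yields identical intersection graphs.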
Procedia PDF Downloads 201
1263 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, presenting accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since its performance holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower measurement error without greater effort in data handling. All in all, the method developed is a substantial time saver, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
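The object delimitation and per-crystal measurement step described in this abstract can be sketched as a connected-component pass over a binarized segmentation mask. The grid values, 4-connectivity, and edge-count perimeter below are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

def label_components(mask):
    """Label 4-connected foreground regions in a binary mask (list of lists of 0/1)."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1          # start a new crystal label
                labels[r][c] = current
                queue = deque([(r, c)])
                while queue:          # flood-fill the connected region
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

def measure(labels, n):
    """Per-label area (pixel count) and perimeter (foreground edges facing background)."""
    rows, cols = len(labels), len(labels[0])
    area = [0] * (n + 1)
    perim = [0] * (n + 1)
    for r in range(rows):
        for c in range(cols):
            k = labels[r][c]
            if k:
                area[k] += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = r + dy, c + dx
                    if not (0 <= ny < rows and 0 <= nx < cols) or labels[ny][nx] != k:
                        perim[k] += 1
    return area[1:], perim[1:]
```

In practice such pixel measures would be converted to physical units via the SEM image scale before building the size-distribution histograms.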
Procedia PDF Downloads 160
1262 Mutations in rpoB, katG and inhA Genes: The Association with Resistance to Rifampicin and Isoniazid in Egyptian Mycobacterium tuberculosis Clinical Isolates
Authors: Ayman K. El Essawy, Amal M. Hosny, Hala M. Abu Shady
Abstract:
The rapid detection of TB and drug resistance both optimizes treatment and improves outcomes. In the current study, respiratory specimens were collected from 155 patients. Conventional susceptibility testing and MIC determination were performed for rifampicin (RIF) and isoniazid (INH). The Genotype MTBDRplus assay, a molecular genetic assay based on DNA-STRIP technology, together with specific gene sequencing with primers for the rpoB, katG, and mab-inhA genes, was used to detect mutations associated with resistance to rifampicin and isoniazid. In comparison to other categories, most rifampicin-resistant (61.5%) and isoniazid-resistant isolates (47.1%) were from patients who had relapsed during treatment. The genotypic profile (using the Genotype MTBDRplus assay) of multi-drug resistant (MDR) isolates showed loss of the katG wild-type 1 (WT1) band and appearance of the mutation band katG MUT2. For isoniazid mono-resistant isolates, 80% showed katG MUT1, 20% showed katG MUT1 and inhA MUT1, and 20% showed only inhA MUT1. Accordingly, 100% of isoniazid-resistant strains were detected by this assay. Out of 17 resistant strains, 16 had katG mutation bands, indicating high-level resistance to isoniazid. The assay could clearly detect rifampicin resistance in 66.7% of MDR isolates, which showed the mutation band rpoB MUT3, while 33.3% of them were classified as unknown. One mono-resistant rifampicin isolate did not show rifampicin mutation bands in the Genotype MTBDRplus assay, but DNA sequence analysis revealed an unexpected mutation in codon 531 of rpoB. Rifampicin resistance in this strain could therefore be associated with a mutation in codon 531 of rpoB (based on molecular sequencing) that the Genotype MTBDRplus assay could not detect. If the results of the Genotype MTBDRplus assay and sequencing are combined, this strain shows a hetero-resistance pattern.
Gene sequencing of eight selected isolates, previously tested by the Genotype MTBDRplus assay, detected resistance mutations mainly in codon 315 (katG gene) and at position -15 of the inhA promoter gene for isoniazid resistance, and in codon 531 (rpoB gene) for rifampicin resistance. Genotyping techniques make it possible to distinguish between recurrent cases of reinfection and reactivation and support epidemiological studies.
Keywords: M. tuberculosis, rpoB, katG, inhA, genotype MTBDRplus
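The band-pattern logic reported in this abstract (wild-type band lost or mutation band present implies resistance; katG mutations implying high-level isoniazid resistance) can be sketched as a simple rule lookup. These rules are a reduced sketch drawn only from the results above, not the assay's full interpretation chart, and the band names are written as plain strings for illustration.

```python
def interpret_mtbdrplus(bands):
    """Toy interpretation of a Genotype MTBDRplus band pattern.

    `bands` is a set of observed band names. Resistance is inferred when a
    mutation band appears or the corresponding wild-type band is missing;
    katG mutation bands are taken to indicate high-level INH resistance.
    """
    inh_resistant = ("katG MUT1" in bands or "katG MUT2" in bands
                     or "inhA MUT1" in bands or "katG WT1" not in bands)
    rif_resistant = "rpoB MUT3" in bands or "rpoB WT" not in bands
    level = "high" if any(b.startswith("katG MUT") for b in bands) else "low/unknown"
    return {"INH": inh_resistant, "RIF": rif_resistant, "INH level": level}
```

For example, the MDR profile described above (katG WT1 missing, katG MUT2 and rpoB MUT3 present) is classified as resistant to both drugs with high-level INH resistance.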
Procedia PDF Downloads 166
1261 Finite Element Analysis of Mechanical Properties of Additively Manufactured 17-4 PH Stainless Steel
Authors: Bijit Kalita, R. Jayaganthan
Abstract:
Additive manufacturing (AM) is a novel manufacturing method that provides more freedom in design, manufacturing of near-net-shaped parts on demand, lower cost of production, and faster delivery to market. Among the various metal AM techniques, Laser Powder Bed Fusion (L-PBF) is the most prominent one, providing higher accuracy and powder efficiency in comparison to other methods. In particular, 17-4 PH alloy is a martensitic precipitation-hardened (PH) stainless steel characterized by resistance to corrosion up to 300°C and strengthening that can be tailored by copper precipitates. Additively manufactured 17-4 PH stainless steel exhibits a dendritic/cellular solidification microstructure in the as-built condition. It is widely used as a structural material in marine environments, power plants, aerospace, and chemical industries. The excellent weldability of 17-4 PH stainless steel and its ability to be heat treated to improve mechanical properties make it a good material choice for L-PBF. In this study, the microstructures of martensitic stainless steels in the as-built state, as well as the effects of process parameters, build atmosphere, and heat treatments on the microstructures, are reviewed. Mechanical properties of fabricated parts are studied through micro-hardness and tensile tests. Tensile tests are carried out under different strain rates at room temperature. In addition, the effect of process parameters and heat treatment conditions on mechanical properties is critically reviewed. These studies revealed the performance of L-PBF-fabricated 17-4 PH stainless steel parts under cyclic loading, and the results indicated that fatigue properties were highly sensitive to the defects generated by L-PBF (e.g., porosity, microcracks), leading to low fracture strains and stresses under cyclic loading.
Rapid melting, solidification, and re-melting of powders during the process, combined with different combinations of processing parameters, result in a complex thermal history and a heterogeneous microstructure; high-efficiency, low-cost heat treatments are therefore necessary to better control the microstructures and properties of L-PBF PH stainless steels.
Keywords: 17-4 PH stainless steel, laser powder bed fusion, selective laser melting, microstructure, additive manufacturing
Procedia PDF Downloads 117
1260 Biodegradable Poly-ε-Caprolactone-Based Siloxane Polymer
Authors: Maria E. Fortună, Elena Ungureanu, Răzvan Rotaru, Valeria Harabagiu
Abstract:
Polymers are used in a variety of areas due to their unique mechanical and chemical properties. Natural polymers are biodegradable, whereas synthetic polymers are rarely biodegradable but can be modified. By combining the benefits of natural and synthetic polymers, biodegradable composite materials can be obtained with potential for biomedical and environmental applications. However, because of their strong resistance to degradation, conventional polymer waste can be difficult to eliminate; as a result, interest in developing biodegradable polymers has risen significantly. This research involves obtaining and characterizing two biodegradable poly-ε-caprolactone-polydimethylsiloxane copolymers. A comparison study was conducted using an aminopropyl-terminated polydimethylsiloxane macroinitiator with two distinct molecular weights. The copolymers were obtained by ring-opening polymerization of ε-caprolactone in the presence of aminopropyl-terminated polydimethylsiloxane as initiator and comonomer, with stannous 2-ethylhexanoate as a catalyst. The materials were characterized using a number of techniques, including NMR, FTIR, EDX, SEM, AFM, and DSC. Additionally, the water contact angle and water vapor sorption capacity were assessed. Furthermore, the copolymers were examined for environmental susceptibility by conducting biological tests on tomato plants (Lycopersicon esculentum), with an emphasis on biological stability and metabolism. Following the copolymer's degradation, nitrogen dynamics changed, confirming the progression of the process accompanied by the release of organic nitrogen.
The biological tests performed (germination index, average seedling height, green and dry biomass) on Lycopersicon esculentum, San Marzano variety tomato plants in direct contact with the copolymer indicated normal growth and development, suggesting a minimal toxic effect and, by extension, compatibility of the copolymer with the environment. The total chlorophyll concentration of plant leaves in contact with the copolymers was also determined, given the pigment's critical role in photosynthesis and, implicitly, in plant metabolism and physiological state.
Keywords: biodegradable, biological stability, copolymers, polydimethylsiloxane
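The germination index mentioned among these phytotoxicity tests is commonly computed relative to an untreated control. The abstract does not state which formula the authors used; the widely used Zucconi-style definition below, and all the numeric values in the usage example, are assumptions for illustration only.

```python
def germination_index(germ_sample, root_len_sample, germ_control, root_len_control):
    """Germination index (%) relative to a control, a common phytotoxicity metric:

        GI = (germinated_sample * mean_root_len_sample)
             / (germinated_control * mean_root_len_control) * 100

    GI near 100% indicates no inhibition; values well below 100% suggest toxicity.
    """
    return 100.0 * (germ_sample * root_len_sample) / (germ_control * root_len_control)

# Hypothetical counts/lengths: 18 of the copolymer-exposed seeds germinate with a
# 4.5 cm mean root, against 20 control seeds with a 5.0 cm mean root -> GI = 81%.
gi = germination_index(18, 4.5, 20, 5.0)
```

A GI in this range is typically read as only mild inhibition, consistent with the "minimal toxic effect" reported above.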
Procedia PDF Downloads 22
1259 Bi-Directional Impulse Turbine for Thermo-Acoustic Generator
Authors: A. I. Dovgjallo, A. B. Tsapkova, A. A. Shimanov
Abstract:
The paper is devoted to one type of engine with external heating: the thermoacoustic engine. In a thermoacoustic engine, heat energy is converted to acoustic energy. This acoustic energy of the oscillating gas flow must then be converted to mechanical energy, which in turn must be converted to electric energy. The most widely used way of transforming acoustic energy into electric energy is the application of a linear generator or a conventional generator with a crank mechanism. In both cases, a piston is used. The main disadvantages of piston use are friction losses, lubrication problems, and working fluid pollution, which decrease engine power and ecological efficiency. The use of a bi-directional impulse turbine as an energy converter is suggested instead. The distinctive feature of this kind of turbine is that the shock wave of the oscillating gas flow passing through the turbine is reflected and passes through the turbine again in the opposite direction, while the direction of turbine rotation does not change. Different types of bi-directional impulse turbines for thermoacoustic engines are analyzed. The Wells turbine is the simplest and least efficient of them. A radial impulse turbine has a more complicated design and is more efficient than the Wells turbine. The most appropriate type proved to be the axial impulse turbine, which has a simpler design than a radial turbine and similar efficiency. The peculiarities of the method for calculating an impulse turbine are discussed. They include changes in gas pressure and velocity as functions of time during the generation of gas oscillating flow shock waves in a thermoacoustic system. In a thermoacoustic system, pressure constantly changes according to a certain law due to acoustic wave generation. The peak pressure values define the amplitude, which determines the acoustic power.
Gas flowing in a thermoacoustic system periodically changes its direction; its mean velocity is zero, but its peak values can be used for bi-directional turbine rotation. In contrast with a conventional steadily fed turbine, the described turbine operates on unsteady oscillating flows with direction changes, which significantly influences the algorithm of its calculation. The calculated power output is 150 W at a rotational speed of 12,000 r/min and a pressure amplitude of 1.7 kPa. Next, 3D modeling and numerical research of the impulse turbine were carried out. As a result of the numerical modeling, the main parameters of the working fluid in the turbine were obtained. On the basis of the theoretical and numerical data, a model of the impulse turbine was made on a 3D printer, and an experimental unit was designed to verify the numerical modeling results. An acoustic speaker was used as the acoustic wave generator. Analysis of the acquired data shows that use of the bi-directional impulse turbine is advisable. By its characteristics as a converter, it is comparable with linear electric generators, but its life cycle will be longer and the engine itself smaller due to the turbine's rotary motion.
Keywords: acoustic power, bi-directional pulse turbine, linear alternator, thermoacoustic generator
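The statement that the pressure amplitude determines the acoustic power can be made concrete with the standard plane-wave relation P = I * A, I = p_a^2 / (2 * rho * c). The working fluid properties and the duct cross-section below are assumptions (room-temperature air, a 5 cm^2 flow area), since the abstract does not give them; only the 1.7 kPa amplitude comes from the text.

```python
def acoustic_power(p_amplitude, duct_area, rho=1.2, c=343.0):
    """Time-averaged acoustic power of a traveling plane wave.

    P = I * A, with intensity I = p_a^2 / (2 * rho * c).
    Defaults assume room-temperature air (rho in kg/m^3, c in m/s);
    the actual working fluid of the turbine is not stated in the abstract.
    """
    intensity = p_amplitude ** 2 / (2.0 * rho * c)  # W/m^2
    return intensity * duct_area                     # W

# Reported amplitude 1.7 kPa, assumed 5 cm^2 flow area -> roughly 1.8 W of
# incident acoustic power, showing the scale of the quantities involved.
p_out = acoustic_power(1700.0, 5e-4)
```

This is the power carried by the wave at the assumed section, not the 150 W shaft output, which depends on the full turbine geometry and flow conditions.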
Procedia PDF Downloads 378
1258 Airborne Particulate Matter Passive Samplers for Indoor and Outdoor Exposure Monitoring: Development and Evaluation
Authors: Kholoud Abdulaziz, Kholoud Al-Najdi, Abdullah Kadri, Konstantinos E. Kakosimos
Abstract:
The Middle East is highly affected by air pollution induced by anthropogenic and natural phenomena. There is evidence that air pollution, especially particulates, greatly affects population health. Many studies have warned of the high concentration of particulates and their effect not just around industrial and construction areas but also in the immediate working and living environment. One of the methods to study air quality is continuous and periodic monitoring using active or passive samplers. Active monitoring and sampling are the default procedures per the European and US standards. However, in many cases they have been inefficient at accurately capturing the spatial variability of air pollution due to the small number of installations, which is ultimately attributed to the high cost of the equipment and the limited availability of users with expertise and scientific background. Passive sampling is an alternative that accounts for the limitations of the active methods: it is inexpensive, requires no continuous power supply, and is easy to assemble, which makes it a more flexible, though less accurate, option. This study aims to investigate and evaluate the use of passive sampling for particulate matter pollution monitoring in dry tropical climates, like the Middle East. More specifically, a number of field measurements have been conducted, both indoors and outdoors, in Qatar, and the results have been compared with active sampling equipment and the reference methods. The samples have been analyzed, to obtain the particle size distribution, by applying existing laboratory techniques (optical microscopy) and by exploring new approaches like white light interferometry. Then the new parameters of the well-established model have been calculated in order to estimate the atmospheric concentration of particulates. Additionally, an extended literature review will search for new and better models.
The outcome of this project is expected to have an impact on the public as well, as it will raise awareness among people about the quality of life and about the importance of implementing a research culture in the community.
Keywords: air pollution, passive samplers, interferometry, indoor, outdoor
Procedia PDF Downloads 398
1257 Effectiveness of Research Promotion Organizations in Higher Education and Research (ESR)
Authors: Jonas Sanon
Abstract:
The valorization of research is becoming a transversal instrument linking different sectors (academic, public, and industrial). The practice of valorization seems to impact innovation techniques within companies, where one often finds industrial conventions of training through research (CIFRE), continuous training programs for employees, and collaborations and partnerships around joint research and R&D laboratories focused on the needs of companies to improve or develop more efficient innovations. Furthermore, many public initiatives to support innovation and technology transfer have been developed at the international, European, and national levels, with significant budget allocations. Thus, in the context of this work, we tried to analyze the way in which research transfer structures are evaluated within the Saclay ecosystem. The University Paris-Saclay is one of the best French universities; it is made up of 10 university components and more than 275 laboratories and is in partnership with the largest French research centers. This work mainly focused on how evaluations affected research transfer structures, how evaluations were conducted, and what the managers of research transfer structures thought about assessments. From the interviews conducted, it appears that the evaluations do not have a significant impact on the qualitative aspect of research and innovation but rather serve a directive purpose, determining whether structures receive the financial resources to develop certain research work. Since such work is sometimes directed and influenced by the market, some researchers might accentuate their research and experimentation on themes that are not necessarily their areas of interest, simply to comply with calls for proposed thematic projects.
The field studies also outline the primary indicators used to assess the effectiveness of valorization structures: the number of start-ups generated, the license agreements signed, the structure's patent portfolio, and the innovative products developed from public research. Finally, after mapping the actors, it became clear that the ecosystem of the University Paris-Saclay benefits from a richness allowing it to better valorize its research across the three categories of actors it has (internal, external, and transversal), united and linked by a relationship of proximity and sharing and endowed with a real opportunity for open innovation.
Keywords: research valorization, technology transfer, innovation, evaluation, impacts and performances, innovation policy
Procedia PDF Downloads 73
1256 Effect of Dose-Dependent Gamma Irradiation on the Fatty Acid Profile of Mud Crab, Scylla Serrata: A GC-FID Study
Authors: Keethadath Arshad, Kappalli Sudha
Abstract:
The mud crab, Scylla serrata, a commercially important shellfish in high global demand, appears to be a rich source of dietary fatty acids. Its increased production through aquaculture and its highly perishable nature necessitate improved techniques for proper preservation. Optimized irradiation has been identified as an effective method to provide safety and extended shelf life for a broad range of perishable food items, including finfishes and shellfishes. The present study analyzed the effects of dose-dependent gamma irradiation on the fatty acid profile of muscle derived from the candidate species (S. serrata) at both qualitative and quantitative levels. Wild-grown, average-sized, intermolt male S. serrata were gamma irradiated (⁶⁰Co, 3.8 kGy/hour) at dosages of 0.5 kGy, 1.0 kGy, and 2.0 kGy using a gamma chamber. Total lipid extracted by the Folch method was, after methylation, analyzed for fatty acids using a gas chromatograph equipped with a flame ionization detector, by comparison with authentic FAME reference standards. The tissue from non-irradiated S. serrata showed the presence of 12 SFA, 6 MUFA, 8 PUFA, and 2 trans fats (TF); the PUFA include medicinally important ω-3 fatty acids such as C18:3, C20:5, and C22:6 and ω-6 fatty acids such as γ-C18:3 and C20:2. Dose-dependent gamma irradiation reduced the number of detectable fatty acids (10, 8, and 8 SFA; 6, 6, and 5 MUFA; 7, 7, and 6 PUFA; and 1, 1, and 0 TF in the 0.5 kGy, 1.0 kGy, and 2.0 kGy irradiated samples, respectively). Major fatty acids detected in both irradiated and non-irradiated samples were as follows: SFA: C16:0, C18:0, C22:0, and C14:0; MUFA: C18:1 and C16:1; and PUFA: C18:2, C20:5, C20:2, and C22:6. Irradiation doses ranging from 1-2 kGy substantially reduced ω-6 C18:3 and ω-3 C18:3. However, the omega fatty acids C20:5, C22:6, and C20:2 survived even 2 kGy irradiation. Significantly, trans fats like C18:2T and C18:1T completely disappeared upon 2 kGy irradiation.
From the overall observations made in the present study, it is suggested that an irradiation dose of up to 1 kGy is optimal to maintain the fatty acid profile and eliminate the trans fats of muscle derived from S. serrata.
Keywords: fatty acid profile, food preservation, gamma irradiation, Scylla serrata
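The dose-dependent counts reported in this abstract can be tabulated directly, which makes the retention trend behind the 1 kGy recommendation easy to check. The counts below are taken from the abstract; the retention fraction as a summary metric is this sketch's own choice, not the authors'.

```python
# Counts of detectable fatty acids reported in the abstract, by class and dose.
detected = {
    "control": {"SFA": 12, "MUFA": 6, "PUFA": 8, "TF": 2},
    "0.5kGy":  {"SFA": 10, "MUFA": 6, "PUFA": 7, "TF": 1},
    "1.0kGy":  {"SFA": 8,  "MUFA": 6, "PUFA": 7, "TF": 1},
    "2.0kGy":  {"SFA": 8,  "MUFA": 5, "PUFA": 6, "TF": 0},
}

def retention(dose):
    """Fraction of the control's detectable fatty acids still seen at a given dose."""
    total_control = sum(detected["control"].values())
    return sum(detected[dose].values()) / total_control
```

For instance, 24 of the 28 control fatty acids remain detectable at 0.5 kGy but only 19 at 2.0 kGy, while the trans-fat count reaches zero only at 2.0 kGy, which is the trade-off the conclusion weighs.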
Procedia PDF Downloads 276
1255 Electrochemical Impedance Spectroscopy Based Label-Free Detection of TSG101 by Electric Field Lysis of Immobilized Exosomes from Human Serum
Authors: Nusrat Praween, Krishna Thej Pammi Guru, Palash Kumar Basu
Abstract:
Designing non-invasive biosensors for cancer diagnosis is essential for developing an affordable and specific tool to measure cancer-related exosome biomarkers. Exosomes, released by healthy as well as cancer cells, contain valuable information about the biomarkers of various diseases, including cancer. Despite the availability of various isolation techniques, ultracentrifugation remains the standard technique employed. After isolation, exosomes are traditionally exposed to detergents to extract their proteins, which can often lead to protein degradation. Beyond this, it is essential to develop a sensing platform that quantifies clinically relevant proteins over a wide range to ensure practicality. In this study, exosomes were immobilized on a gold screen-printed electrode (Au SPE) using EDC/NHS chemistry to facilitate binding. After immobilizing the exosomes on the SPE, we investigated the impact of an electric field by applying various voltages to induce exosome lysis and release their contents. The lysed solution was used for sensing TSG101, a crucial biomarker associated with various cancers, using both faradaic and non-faradaic electrochemical impedance spectroscopy (EIS) methods. The results of non-faradaic and faradaic EIS were comparable and showed good consistency, indicating that non-faradaic sensing can be a reliable alternative. Hence, the non-faradaic sensing technique was used for label-free quantification of the TSG101 biomarker, and the results were validated using ELISA. Our electrochemical immunosensor demonstrated a consistent response to TSG101 from 125 pg/mL to 8000 pg/mL, with a detection limit of 0.125 pg/mL at room temperature. Additionally, since non-faradaic sensing is label-free, the final sensor is easier to use and cheaper to produce.
The proposed immunosensor is capable of detecting the TSG101 protein at low levels in healthy serum with good sensitivity and specificity, making it a promising platform for biomarker detection.
Keywords: biosensor, exosome isolation on SPE, electric field lysis of exosomes, EIS sensing of TSG101
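The detection limit quoted for this immunosensor is the kind of figure usually derived from a linear calibration of sensor response versus concentration, with LOD = 3.3 * sigma_blank / slope (the common ICH-style estimate). The abstract gives no calibration data, so the fitting routine and the numbers in the usage example are illustrative assumptions.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for a calibration curve."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def limit_of_detection(blank_sd, slope):
    """ICH-style detection limit: LOD = 3.3 * sigma_blank / slope."""
    return 3.3 * blank_sd / slope
```

With a hypothetical calibration of impedance change against TSG101 concentration, the slope from fit_line and the blank's standard deviation plug straight into limit_of_detection.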
Procedia PDF Downloads 46
1254 Genotyping and Phylogeny of Phaeomoniella Genus Associated with Grapevine Trunk Diseases in Algeria
Authors: A. Berraf-Tebbal, Z. Bouznad, A.J.L. Phillips
Abstract:
Phaeomoniella is a fungal genus in the mitosporic Ascomycota that includes the species Phaeomoniella chlamydospora, associated with two declining diseases of grapevine (Vitis vinifera), namely Petri disease and esca. Recent studies have shown that several Phaeomoniella species also cause disease on many other woody crops, such as forest trees and woody ornamentals. Two new species, Phaeomoniella zymoides and Phaeomoniella pinifoliorum H.B. Lee, J.Y. Park, R.C. Summerbell et H.S. Jung, were isolated from the needle surface of Pinus densiflora Sieb. et Zucc. in Korea. The identification of species in the genus Phaeomoniella can be a difficult task if based solely on morphological and cultural characters. In this respect, the application of molecular methods, particularly PCR-based techniques, may provide an important contribution. MSP-PCR (microsatellite-primed PCR) fingerprinting has proven useful in the molecular typing of fungal strains. The high discriminatory potential of this method is particularly useful when dealing with closely related or cryptic species. In the present study, PCR fingerprinting was performed using the microsatellite primer M13 for species identification and strain typing of 84 Phaeomoniella-like isolates collected from grapevines with typical symptoms of dieback. The bands produced by the MSP-PCR profiles divided the strains into 3 clusters and 5 singletons at a reproducibility level of 80%. Representative isolates from each group and, when possible, isolates from Eutypa dieback and esca symptoms were selected for sequencing of the ITS region. The ITS sequences of the 16 isolates selected from the MSP-PCR profiles were combined and aligned with sequences of 18 isolates retrieved from GenBank, representing a selection of all known Phaeomoniella species. DNA sequences were compared with those available in GenBank using neighbor-joining (NJ) and maximum-parsimony (MP) analyses.
The phylogenetic trees of the ITS region revealed that the Phaeomoniella isolates clustered with Phaeomoniella chlamydospora reference sequences with a bootstrap support of 100%. The complexity of the vine trunk-disease pathosystems clearly shows the need to identify the fungal component unambiguously in order to allow a better understanding of the etiology of these diseases and to justify the establishment of control strategies against these fungal agents.
Keywords: genotyping, MSP-PCR, ITS, phylogeny, trunk diseases
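Distance-based tree methods such as the neighbor-joining analysis used above start from a pairwise distance matrix over the aligned ITS sequences; the simplest such distance is the p-distance (proportion of differing sites). The short toy sequences below are placeholders, not the study's ITS data.

```python
def p_distance(seq_a, seq_b):
    """Proportion of differing sites between two aligned sequences
    (gap characters, if present, are simply counted as differences)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return diffs / len(seq_a)

def distance_matrix(seqs):
    """Pairwise p-distance matrix for a list of aligned sequences,
    the input a neighbor-joining algorithm would cluster into a tree."""
    return [[p_distance(s, t) for t in seqs] for s in seqs]
```

Neighbor-joining then iteratively merges the pair minimizing the NJ criterion over this matrix; the matrix itself is the part that is simple enough to sketch here.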
Procedia PDF Downloads 480
1253 Predictors of Clinical Failure After Endoscopic Lumbar Spine Surgery During the Initial Learning Curve
Authors: Daniel Scherman, Daniel Madani, Shanu Gambhir, Marcus Ling Zhixing, Yingda Li
Abstract:
Objective: This study aims to identify clinical factors that may predict failed endoscopic lumbar spine surgery to guide surgeons with patient selection during the initial learning curve. Methods: This is an Australasian prospective analysis of the first 105 patients to undergo lumbar endoscopic spine decompression by 3 surgeons. Modified MacNab outcomes, the Oswestry Disability Index (ODI), and Visual Analogue Scale (VAS) scores were utilized to evaluate clinical outcomes at 6 months postoperatively. Descriptive statistics and ANOVA t-tests were performed to identify statistically significant (p<0.05) associations between variables using GraphPad Prism v10. Results: Patients undergoing endoscopic lumbar surgery via an interlaminar or transforaminal approach had overall good/excellent modified MacNab outcomes and a significant reduction in post-operative VAS and ODI scores. Regardless of the anatomical location of disc herniations, good/excellent modified MacNab outcomes and significant reductions in VAS and ODI were reported post-operatively; however, this was not the case for patients with calcified disc herniations. Patients with central and foraminal stenosis overall reported poor/fair modified MacNab outcomes, though there were significant reductions in VAS and ODI scores post-operatively. Patients with subarticular stenosis or an associated spondylolisthesis reported good/excellent modified MacNab outcomes and significant reductions in VAS and ODI scores post-operatively. Patients with disc herniation and concurrent degenerative stenosis had generally poor/fair modified MacNab outcomes. Conclusion: The outcomes of endoscopic spine surgery are encouraging, with low complication and reoperation rates.
However, patients with calcified disc herniations, central canal stenosis, or a disc herniation with concurrent degenerative stenosis present challenges during the initial learning curve and may benefit from traditional open or other minimally invasive techniques.
Keywords: complications, lumbar disc herniation, lumbar endoscopic spine surgery, predictors of failed endoscopic spine surgery
Procedia PDF Downloads 154
1252 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI
Authors: James Rigor Camacho, Wansu Lim
Abstract:
Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms: they can collect, process, and store data on their own, and they can run complicated algorithms like localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated into the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. Machine learning-based classifiers were used to perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In EEG signal processing, after each EEG signal has been received in real time and translated from the time to the frequency domain, the Fast Fourier Transform (FFT) technique is utilized to observe the frequency bands in each EEG signal. To appropriately capture the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed.
The next stage is to use the selected features to predict emotion in the EEG data with the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano. On the cutting edge of AI, EEG-based emotion identification can be employed in applications that can rapidly expand research and industrial adoption.
Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors
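The KNN classification step described above can be sketched as a majority vote among the nearest training points in feature space. The feature vectors below stand in for per-band EEG power features and the arousal labels are synthetic placeholders, not the study's recorded data or its actual feature set.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of (feature_vector, label) pairs; distance is Euclidean.
    This is the supervised KNN scheme named in the abstract, reduced to a
    minimal self-contained sketch.
    """
    ranked = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical 2-D band-power features (e.g., beta power, alpha power).
train = [
    ((1.0, 0.2), "high arousal"),
    ((0.9, 0.3), "high arousal"),
    ((0.1, 0.8), "low arousal"),
    ((0.2, 0.9), "low arousal"),
]
```

In the real system each incoming FFT frame would be reduced to such a feature vector and classified the same way, with training pairs drawn from the arousal/valence datasets.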
Procedia PDF Downloads 105
1251 The Challenges of Digital Crime Nowadays
Authors: Bendes Ákos
Abstract:
Digital evidence will be the most widely used type of evidence in the future. With the development of the modern world, more and more new types of crimes have evolved and transformed. For this reason, it is extremely important to examine these types of crimes in order to get a comprehensive picture of them, with which we can help the authorities' work. As early as 1865, with primitive technology, people were able to forge a picture of a quality that is hard to recognize as fake even today. With the help of today's technology, authorities receive a lot of false evidence. Officials are not able to process such a large amount of data, nor do they have the necessary technical knowledge to get a real picture of the authenticity of the given evidence. The digital world has many dangers. Unfortunately, we live in an age where we must protect everything digitally: our phones, our computers, our cars, and all the smart devices that are present in our personal lives. This is a burden not only on us, since companies, state institutions, and public utilities are also forced to do so. The training of specialists and experts is essential so that the authorities can manage incoming digital evidence at some level. When analyzing evidence, it is important to be able to examine it from the moment it was created. Establishing authenticity is a very important issue during official procedures. After the proper acquisition of the evidence, it is essential to store it safely and use it professionally; otherwise, it will not have sufficient probative value, and in case of doubt, the court will always decide in favor of the defendant. One of the most common problems in the world of digital data and evidence is doubt, which is why it is extremely important to examine the above-mentioned problems.
The most effective way to avoid digital crimes is to prevent them, for which proper education and knowledge are essential. The aim is to present the dangers inherent in the digital world and the new types of digital crime. After a comparison of Hungarian investigative techniques with international practice, modernizing proposals will be given. A sufficiently stable yet flexible legislation is needed that can keep pace with the rapid changes in the world and provide an appropriate framework in advance rather than regulate after the fact. It is also important to be able to distinguish between digital and digitalized evidence, as their probative force differs greatly. The aim of the research is to promote effective international cooperation and uniform legal regulation in the world of digital crime.
Keywords: digital crime, digital law, cyber crime, international cooperation, new crimes, skepticism
Procedia PDF Downloads 63
1250 Expectation for Professionalism Affects Reality Shock: A Qualitative and Quantitative Study of Reality Shock among New Human Service Professionals
Authors: Hiromi Takafuji
Abstract:
It is a well-known fact that health care and welfare are the foundation of human activities, and human service professionals such as nurses and child care workers support these activities. The COVID-19 pandemic has made the severity of the working environment in these fields even better known. It is high time to discuss the work of human service workers for the sustainable development of the human environment. Early turnover has been recognized as a long-standing issue in these fields: in Japan, the attrition rate within three years of graduation for these occupations has remained high, at about 40%, for more than 20 years. One of the reasons for this is reality shock (RS), which refers to the stress caused by the gap between pre-employment expectations and the post-employment reality experienced by new workers. The purpose of this study was to academically elucidate the mechanism of RS among human service professionals and to contribute to countermeasures against it. First, to explore the structure of the relationship between professionalism and workers' RS, an exploratory interview survey was conducted and analyzed by text mining and content analysis. The results showed that the expectation of professionalism influences RS as a pre-employment job expectation. Next, the expectations of professionalism were quantified and categorized, and the responses of a total of 282 human service professionals (nurses, child care workers, and caregivers) were finalized for data analysis. The data were analyzed using exploratory factor analysis, confirmatory factor analysis, multiple regression analysis, and structural equation modeling techniques. The results revealed that self-control orientation and authority orientation by qualification had a direct positive significant impact on RS.
On the other hand, interpersonal helping orientation and altruistic orientation were found to have a direct negative significant impact and an indirect positive significant impact on RS. We were thus able to clarify the structure of work expectations that affect the RS of welfare professionals, which had not been clarified in previous studies. We also explain the limitations, practical implications, and directions for future research.
Keywords: human service professional, new hire turnover, SEM, reality shock
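The direct effects described above can be illustrated with an ordinary multiple regression, the first step toward the study's path model. The factor names, simulated coefficients, and noise level below are assumptions for illustration, not the study's data; only the sample size of 282 is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 282                        # sample size reported in the abstract

# Synthetic stand-ins for the four work-expectation factors (assumed names)
self_control = rng.normal(size=n)
authority = rng.normal(size=n)
helping = rng.normal(size=n)
altruism = rng.normal(size=n)

# Simulate reality shock (RS) with positive direct effects of the first
# two factors and negative direct effects of the last two, mirroring the
# signs reported in the abstract (coefficient values are invented)
rs = (0.4 * self_control + 0.3 * authority
      - 0.3 * helping - 0.2 * altruism
      + rng.normal(scale=0.5, size=n))

# Ordinary least squares: estimate the direct effects on RS
X = np.column_stack([np.ones(n), self_control, authority, helping, altruism])
beta, *_ = np.linalg.lstsq(X, rs, rcond=None)
names = ["intercept", "self_control", "authority", "helping", "altruism"]
print(dict(zip(names, beta.round(2))))
```

A full SEM additionally models the indirect (mediated) paths that give the helping and altruism factors their reported positive indirect effects; the regression above only reproduces the direct-effect signs.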
Procedia PDF Downloads 99
1249 Revealing Single Crystal Quality by Insight Diffraction Imaging Technique
Authors: Thu Nhi Tran Caliste
Abstract:
X-ray Bragg diffraction imaging (“topography”) entered practical use when Lang designed an “easy” technical setup to characterise the defects and distortions in the high-perfection crystals produced for the microelectronics industry. The use of this technique extended to all kinds of high-quality crystals and deposited layers, and a series of publications explained, starting from the dynamical theory of diffraction, the contrast of the images of the defects. A quantitative version of “monochromatic topography”, known as “Rocking Curve Imaging” (RCI), was implemented by using synchrotron light and taking advantage of the dramatic improvement of 2D detectors and computerised image processing. The raw data consist of a number (~300) of images recorded along the diffraction (“rocking”) curve. If the quality of the crystal is such that a one-to-one relation between a pixel of the detector and a voxel within the crystal can be established (this approximation is very well fulfilled if the local mosaic spread of the voxel is < 1 mradian), software we developed provides, from the rocking curve recorded on each pixel of the detector, not only the “voxel” integrated intensity (the only data provided by the previous techniques) but also its “mosaic spread” (FWHM) and peak position. We will show, based on many examples, that these new data, never recorded before, open the field to a highly enhanced characterization of crystals and deposited layers. These examples include the characterization of dislocations and twins occurring during silicon growth, various growth features in Al2O3, GaN and CdTe (where the diffraction displays the Borrmann anomalous absorption, which leads to a new type of image), and the characterisation of the defects within deposited layers, or their effect on the substrate.
We could also observe (due to the very high sensitivity of the setup installed on BM05, which allows revealing these faint effects) that, when dealing with very perfect crystals, the Kato interference fringes predicted by dynamical theory are also associated with very small modifications of the local FWHM and peak position (of the order of the µradian). This rather unexpected (at least for us) result appears to be in keeping with preliminary dynamical theory calculations.
Keywords: rocking curve imaging, X-ray diffraction, defect, distortion
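The per-pixel rocking-curve analysis described above can be sketched in a few lines: for each detector pixel, the stack of images recorded along the rocking angle is reduced to an integrated-intensity map, a coarse FWHM map, and a peak-position map. The Gaussian test stack and the 201-point angular grid are illustrative assumptions, not the synchrotron data.

```python
import numpy as np

def rocking_curve_maps(stack, angles):
    """Reduce a stack of images (n_angles, ny, nx) recorded along the
    rocking curve to three per-pixel maps."""
    step = angles[1] - angles[0]
    integ = stack.sum(axis=0) * step            # integrated intensity map
    peak = angles[np.argmax(stack, axis=0)]     # peak-position map
    # Coarse FWHM: angular width of the region above half the pixel maximum
    above = stack >= stack.max(axis=0) / 2.0
    fwhm = above.sum(axis=0) * step
    return integ, fwhm, peak

# Synthetic check: a unit Gaussian rocking curve (sigma = 1, centred at 1.0)
angles = np.linspace(-5.0, 5.0, 201)
curve = np.exp(-0.5 * (angles - 1.0) ** 2)
stack = curve[:, None, None] * np.ones((1, 4, 4))
integ, fwhm, peak = rocking_curve_maps(stack, angles)
```

For the unit Gaussian, the integrated intensity approaches √(2π) ≈ 2.507, the FWHM approaches 2.355 σ, and the peak map recovers the 1.0 offset, so the three maps behave as expected before being applied to measured data.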
Procedia PDF Downloads 131
1248 Modeling of Foundation-Soil Interaction Problem by Using Reduced Soil Shear Modulus
Authors: Yesim Tumsek, Erkan Celebi
Abstract:
In order to simulate the infinite soil medium for the soil-foundation interaction problem, the essential geotechnical parameter on which the foundation stiffness depends is the value of the soil shear modulus. This parameter directly affects the site and structural response of the considered model under earthquake ground motions. The strain dependence of the shear modulus under cyclic loads makes it difficult to estimate the accurate value in the computation of foundation stiffness for a successful dynamic soil-structure interaction analysis. The aim of this study is to discuss in detail how to use the appropriate value of the soil shear modulus in computational analyses and to evaluate the effect of the variation in shear modulus with strain on the impedance functions used in the sub-structure method for idealizing the soil-foundation interaction problem. Herein, the impedance functions are composed of springs and dashpots to represent the frequency-dependent stiffness and damping characteristics at the soil-foundation interface. Earthquake-induced vibration energy is dissipated into the soil by both radiation and hysteretic damping. Therefore, flexible-base system damping, as well as the variability in shear strengths, should be considered in the calculation of impedance functions to achieve a more realistic dynamic soil-foundation interaction model. In this study, a MATLAB code has been written to address these purposes. The case-study example chosen for the analysis is a 4-story reinforced concrete building structure located in Istanbul, consisting of shear walls and moment-resisting frames with a total height of 12 m from the basement level. The foundation system consists of two different-sized strip footings on clayey soil with different plasticity (herein, PI = 13 and 16). In the first stage of this study, the shear modulus reduction factor was not considered in the MATLAB algorithm.
The static stiffness, dynamic stiffness modifiers, and embedment correction factors of two rigid rectangular foundations, measuring 2 m wide by 17 m long below the moment frames and 7 m wide by 17 m long below the shear walls, are obtained for the translational and rocking vibrational modes. Afterwards, their dynamic impedance functions have been calculated for the reduced shear modulus through the developed MATLAB code. The embedment effect of the foundation is also considered in these analyses. It is easy to see from the analysis results that the strain induced in the soil depends on the extent of the earthquake demand. It is clearly observed that as the strain range increases, the dynamic stiffness of the foundation medium decreases dramatically. The overall response of the structure can be affected considerably by the degradation in soil stiffness, even for a moderate earthquake. Therefore, it is very important to arrive at the corrected dynamic shear modulus for an earthquake analysis including soil-structure interaction.
Keywords: clay soil, impedance functions, soil-foundation interaction, sub-structure approach, reduced shear modulus
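A minimal sketch of how a strain-dependent reduction factor propagates into the foundation stiffness is shown below, using a Gazetas-type static vertical stiffness expression for a rigid rectangular footing on a homogeneous half-space. The formula coefficients, small-strain modulus, Poisson's ratio, and footing dimensions are illustrative assumptions, not the soil data or the implementation of this study.

```python
def kz_static(G, nu, B, L):
    """Gazetas-type static vertical stiffness of a rigid rectangular
    footing (half-width B, half-length L, with L >= B) on a half-space.
    Coefficients here are an assumed textbook-style fit, for illustration."""
    chi = B / L  # aspect-ratio parameter Ab / (4 L^2) for a 2B x 2L footing
    return 2.0 * G * L / (1.0 - nu) * (0.73 + 1.54 * chi ** 0.75)

G0 = 40e6   # assumed small-strain shear modulus of the clay, Pa
nu = 0.40   # assumed Poisson's ratio

# Apply strain-dependent G/G0 reduction factors and watch Kz degrade
for factor in (1.0, 0.5, 0.2):
    Kz = kz_static(factor * G0, nu, B=1.0, L=8.5)  # 2 m x 17 m footing
    print(f"G/G0 = {factor:.1f}: Kz = {Kz:.3e} N/m")
```

Because the static stiffness is linear in G, halving the operative shear modulus halves Kz; the frequency-dependent dynamic modifiers and dashpot coefficients of the sub-structure method then inherit this degradation.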
Procedia PDF Downloads 269
1247 Community-Based Assessment Approach to Empower Child with Disabilities: Institutional Study on Deaf Art Community in Yogyakarta, Indonesia
Authors: Mukhamad Fatkhullah, Arfan Fadli, Marini Kristina Situmeang, Siti Hazar Sitorus
Abstract:
The emergence of communities of people with disabilities, along with the various works they have produced, has made great progress in opening the public's eyes to their existence in society. This study focuses on a community thought to be one of the pioneers of this movement: the Deaf Art Community (DAC), a community of persons with disabilities based in Yogyakarta, whose deaf and speech-impaired members use sign language in everyday communication. Since knowing about the movement of disabled communities is valuable, a description of what lies behind it is important as a basis for initiating similar movements. This research focuses on the question of how communities of people with disabilities begin to take shape in different regions and interact through collaborative events. A qualitative method with in-depth interviews as the data collection technique was used to describe the process of formation and emergence of the community. The unit of analysis in the study initially focused on the subjects in the community, but in the process it developed into an institutional analysis. Therefore, some informants were determined purposively and expanded using the snowball technique. The theory used in this research is Alfred Schutz's phenomenology, which makes it possible to see reality from both the subject's and the institution's point of view. The results of this study found that the community formed because the existing educational institutions (both SLB and inclusion schools) are not well able to empower children with disabilities and make them equal members of society. Through the SLB, children with disabilities become isolated from society, especially from children of their own age. Therefore, discrimination and labeling will never be separated from society's view. Meanwhile, facilities for the basic needs of children with disabilities cannot be fully provided.
Besides that, there is no guarantee against discrimination, stares, and unpleasant behavior from children without disabilities, which indicates that the existing inclusion schools offer only symbolic acceptance. Thus, neither SLBs nor inclusive schools can empower children with disabilities. Community-based assistance, in this case, has become an alternative for actually empowering children with disabilities. Not only does it give them a place to interact; through such a community, children with disabilities are guided to discover their talents and develop their potential to be self-reliant in the future.
Keywords: children with disabilities, community-based assessment, community empowerment, social equity
Procedia PDF Downloads 263
1246 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
Solid particle distribution on an impingement surface has been simulated utilizing a graphics processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) models. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all the external forces. The previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes the deficiencies of the previous LBM-CA models and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model in simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re = 200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature.
The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re = 10,000. The simulations were conducted for L/D = 2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup of about 350 against the serial code running on a single CPU.
Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
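The cellular-automata redistribution rule described above, in which a velocity-dependent fraction of the particle count at each node moves to a downstream neighbour, can be sketched on a 1D lattice. The hop-probability rule, velocities, and counts below are illustrative assumptions, not the paper's D3Q27 stencil or force model.

```python
import numpy as np

def ca_particle_step(counts, ux, rng, p_move=0.5):
    """One probabilistic redistribution step on a 1D lattice: at each node,
    a binomial draw moves a velocity-dependent fraction of the particles
    to the downstream neighbour; hops past the boundary are deposited."""
    new = np.zeros_like(counts)
    deposited = 0
    for i, n in enumerate(counts):
        # Hop probability grows with the local fluid speed, capped at 1
        p_hop = min(1.0, abs(ux[i])) * p_move
        hops = rng.binomial(n, p_hop)
        new[i] += n - hops                   # particles that stay put
        j = i + int(np.sign(ux[i]))          # downstream neighbour index
        if 0 <= j < len(counts):
            new[j] += hops
        else:
            deposited += hops                # left the lattice: deposited
    return new, deposited

counts = np.array([100, 0, 0, 0])
ux = np.array([0.8, 0.8, 0.8, 0.8])          # assumed local fluid velocities
counts, dep = ca_particle_step(counts, ux, np.random.default_rng(0))
```

Because the scheme tracks particle counts rather than individual particles, each node needs only one random draw per hop direction, which helps the update map onto per-node GPU threads with a parallel random-number generator such as cuRAND.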
Procedia PDF Downloads 207