Search results for: predictive accuracy
331 Application of Deep Learning and Ensemble Methods for Biomarker Discovery in Diabetic Nephropathy through Fibrosis and Propionate Metabolism Pathways
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Diabetic nephropathy (DN) is a major complication of diabetes, with fibrosis and propionate metabolism playing critical roles in its progression. Identifying biomarkers linked to these pathways may provide novel insights into DN diagnosis and treatment. This study aims to identify biomarkers associated with fibrosis and propionate metabolism in DN, analyze the biological pathways and regulatory mechanisms of these biomarkers, and develop a machine learning model to predict DN-related biomarkers and validate their functional roles. Publicly available transcriptome datasets related to DN (GSE96804 and GSE104948) were obtained from the GEO database (https://www.ncbi.nlm.nih.gov/gds), and 924 propionate metabolism-related genes (PMRGs) and 656 fibrosis-related genes (FRGs) were identified. The analysis began with the extraction of DN-differentially expressed genes (DN-DEGs) and propionate metabolism-related DEGs (PM-DEGs), followed by the intersection of these with fibrosis-related genes to identify key intersected genes. Instead of relying on traditional models, we employed a combination of deep neural networks (DNNs) and ensemble methods such as Gradient Boosting Machines (GBM) and XGBoost to enhance feature selection and biomarker discovery. Recursive feature elimination (RFE) was coupled with these advanced algorithms to refine the selection of the most critical biomarkers. Functional validation was conducted using convolutional neural networks (CNNs) for gene set enrichment and immunoinfiltration analysis, revealing seven significant biomarkers: SLC37A4, ACOX2, GPD1, ACE2, SLC9A3, AGT, and PLG. These biomarkers are involved in critical biological processes such as fatty acid metabolism and glomerular development, providing a mechanistic link to DN progression. Furthermore, a TF–miRNA–mRNA regulatory network was constructed using natural language processing models to identify 8 transcription factors and 60 miRNAs that regulate these biomarkers, while a drug–gene interaction network revealed potential therapeutic targets such as UROKINASE–PLG and ATENOLOL–AGT. This integrative approach, leveraging deep learning and ensemble models, not only enhances the accuracy of biomarker discovery but also offers new perspectives on DN diagnosis and treatment, specifically targeting fibrosis and propionate metabolism pathways.
Keywords: diabetic nephropathy, deep neural networks, gradient boosting machines (GBM), XGBoost
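A minimal sketch of the RFE-plus-gradient-boosting feature-selection step described in this abstract, wrapping scikit-learn's RFE around an XGBoost classifier; the expression matrix, labels, and gene names below are placeholders, not the study's data or pipeline.

```python
# Sketch: recursive feature elimination (RFE) around XGBoost, as a stand-in
# for the biomarker-selection step. All data here is synthetic/hypothetical.
import numpy as np
from sklearn.feature_selection import RFE
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))            # 120 samples x 500 candidate genes
y = rng.integers(0, 2, size=120)           # 1 = DN sample, 0 = control
genes = np.array([f"GENE_{i}" for i in range(X.shape[1])])

selector = RFE(
    estimator=XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss"),
    n_features_to_select=7,                # the study reports seven final biomarkers
    step=0.1,                              # drop 10% of features per iteration
)
selector.fit(X, y)
print("selected candidate biomarkers:", genes[selector.support_])
```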
330 A Geometric Based Hybrid Approach for Facial Feature Localization
Authors: Priya Saha, Sourav Dey Roy Jr., Debotosh Bhattacharjee, Mita Nasipuri, Barin Kumar De, Mrinal Kanti Bhowmik
Abstract:
Biometric face recognition technology (FRT) has gained a great deal of attention due to its extensive variety of applications in both security and non-security settings. It has emerged as a secure solution for the identification and verification of personal identity. Although other biometric methods such as fingerprint scans and iris scans are available, FRT has proven to be an efficient technology owing to its user-friendliness and contact-free operation. Accurate facial feature localization plays an important role in many facial analysis applications, including biometrics and emotion recognition. However, certain factors make facial feature localization a challenging task. On the human face, expressions arise from the subtle movements of facial muscles and are influenced by internal emotional states. These non-rigid facial movements cause noticeable alterations in the locations and usual shapes of facial landmarks, and sometimes create occlusions in facial feature areas, making face recognition a difficult problem. This paper proposes a new hybrid technique for automatic landmark detection in both neutral and expressive frontal and near-frontal face images. The method uses thresholding, sequential searching, and other image processing techniques to locate the landmark points on the face. In addition, a Graphical User Interface (GUI) based software tool is designed that automatically detects 16 landmark points around the eyes, nose, and mouth, the regions most affected by changes in facial muscles. The proposed system has been tested on the widely used JAFFE and Cohn-Kanade databases, as well as on the DeitY-TU face database, which was created in the Biometrics Laboratory of Tripura University under a research project funded by the Department of Electronics & Information Technology, Govt. of India. The performance of the proposed method has been evaluated in terms of error measure and accuracy. The method achieves a detection rate of 98.82% on the JAFFE database, 91.27% on the Cohn-Kanade database, and 93.05% on the DeitY-TU database. A comparative study of the proposed method against techniques developed by other researchers has also been carried out. In future work, the located features will serve as the basis for emotion-oriented systems through Action Unit (AU) detection.
Keywords: biometrics, face recognition, facial landmarks, image processing
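One common way of turning landmark localization errors into a detection rate of the kind quoted above is to normalize point errors by the inter-ocular distance and count points below a tolerance; the coordinates below are hypothetical, not the JAFFE or Cohn-Kanade results.

```python
# Sketch: detection rate from landmark errors normalized by inter-ocular distance.
import numpy as np

def detection_rate(pred, truth, left_eye, right_eye, tol=0.1):
    """pred, truth: (n_points, 2) arrays of landmark coordinates."""
    iod = np.linalg.norm(left_eye - right_eye)        # inter-ocular distance
    err = np.linalg.norm(pred - truth, axis=1) / iod  # normalized point error
    return np.mean(err <= tol)                        # fraction counted as detected

pred = np.array([[30.5, 42.0], [61.2, 41.5], [45.8, 60.1]])
truth = np.array([[30.0, 42.0], [61.0, 42.0], [46.0, 60.0]])
print(f"detection rate: {detection_rate(pred, truth, truth[0], truth[1]):.2%}")
```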
329 Sample Preparation and Coring of Highly Friable and Heterogeneous Bonded Geomaterials
Authors: Mohammad Khoshini, Arman Khoshghalb, Meghdad Payan, Nasser Khalili
Abstract:
Most of the rocks at the Earth's crust surface are technically categorized as weak rocks or weakly bonded geomaterials. Deeply weathered, weakly cemented, friable, and easily erodible, they demonstrate complex material behaviour, and understanding the often overlooked mechanical behaviour of such materials is of particular importance in geotechnical engineering practice. Weakly bonded geomaterials are so susceptible to surface shear and moisture that conventional methods of core drilling fail to extract high-quality undisturbed samples from them. Moreover, most of these geomaterials are highly heterogeneous, which makes material characterization less reliable and feasible. In order to compensate for the unpredictability of the material response, either numerous experiments need to be conducted or large factors of safety must be applied in the design process. However, neither of these approaches is sustainable. In this study, a method for dry core drilling of such materials is introduced to take high-quality undisturbed core samples. By freezing the material at a certain moisture content, a secondary structure is developed throughout the material, which helps the whole structure remain intact during the core drilling process. Moreover, to address the heterogeneity issue, the natural material was reconstructed artificially to obtain a homogeneous material with very high similarity to the natural one from both micro- and macro-mechanical perspectives. The method is verified at both micro and macro scales. At the micro scale, pore spaces and inter-particle bonds were investigated using Scanning Electron Microscopy (SEM) and compared between the natural and artificial materials. X-ray diffraction (XRD) analyses were also performed to check the chemical composition. At the macro scale, several uniaxial compressive strength tests, as well as triaxial tests, were performed to verify the similar mechanical response of the materials. A high level of agreement is observed between the micro and macro results of the natural and artificially bonded geomaterials. The proposed methods can play an important role in cutting down the costs of experimental programs for material characterization and in improving the accuracy of numerical modelling based on experimental results.
Keywords: artificial geomaterial, core drilling, macro-mechanical behavior, micro-scale, sample preparation, SEM photography, weakly bonded geomaterials
328 Comparison between Experimental and Numerical Studies of Fully Encased Composite Columns
Authors: Md. Soebur Rahman, Mahbuba Begum, Raquib Ahsan
Abstract:
A composite column is a structural member that uses a combination of structural steel shapes, pipes, or tubes, with or without reinforcing steel bars, and reinforced concrete to provide adequate load-carrying capacity to sustain either axial compressive loads alone or a combination of axial loads and bending moments. Composite construction takes advantage of the speed of construction, the light weight and strength of steel, and the higher mass, stiffness, damping properties, and economy of reinforced concrete. The most usual types of composite columns are concrete-filled steel tubes and partially or fully encased steel profiles. A fully encased composite (FEC) column provides compressive strength, stability, stiffness, improved fire proofing, and better corrosion protection. This paper reports experimental and numerical investigations of the behaviour of concrete-encased steel composite columns subjected to short-term axial load. In this study, eleven short FEC columns with square cross sections were constructed and tested to examine the load-deflection behaviour. The main variables in the tests were concrete compressive strength, cross-sectional size, and percentage of structural steel. A nonlinear 3-D finite element (FE) model has been developed to analyse the inelastic behaviour of steel, concrete, and longitudinal reinforcement, as well as the effect of concrete confinement, in the FEC columns. The FE models have been validated against the current experimental study conducted in the laboratory and against published experimental results under concentric load. It has been observed that the FE model is able to predict the experimental behaviour of FEC columns under concentric gravity loads with good accuracy. Good agreement has been achieved between the complete experimental and numerical load-deflection behaviour in this study. The capacities of each constituent of the FEC columns, namely structural steel, concrete, and rebars, were also determined from the numerical study. Concrete is observed to provide around 57% of the total axial capacity of the column, whereas the steel I-sections contribute the rest of the capacity as well as the ductility of the overall system. The nonlinear FE model developed in this study is also used to explore the effect of concrete strength and percentage of structural steel on the behaviour of FEC columns under concentric loads. The axial capacity of FEC columns has been found to increase significantly with increasing concrete strength.
Keywords: composite, columns, experimental, finite element, fully encased, strength
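The constituent contributions discussed above can be illustrated with a simplified plastic squash-load estimate, in the spirit of the usual composite-column design formulas; the section dimensions and material strengths below are illustrative assumptions, not the tested specimens.

```python
# Sketch: simplified squash-load apportioning for a fully encased composite
# column. Section sizes and strengths are illustrative assumptions only.
f_c, f_y, f_yr = 30.0, 350.0, 420.0        # MPa: concrete, steel section, rebar
A_gross = 300.0 * 300.0                    # mm^2, square section
A_steel = 4500.0                           # mm^2, encased I-section
A_rebar = 4 * 201.0                        # mm^2, four 16 mm bars
A_conc = A_gross - A_steel - A_rebar

N_conc = 0.85 * f_c * A_conc / 1e3         # kN, concrete block contribution
N_steel = f_y * A_steel / 1e3              # kN
N_rebar = f_yr * A_rebar / 1e3             # kN
N_total = N_conc + N_steel + N_rebar

for name, n in [("concrete", N_conc), ("steel section", N_steel), ("rebar", N_rebar)]:
    print(f"{name:14s} {n:8.1f} kN  ({n / N_total:5.1%} of squash load)")
```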
327 Double Wishbone Pushrod Suspension Systems Co-Simulation for Racing Applications
Authors: Suleyman Ogul Ertugrul, Mustafa Turgut, Serkan Inandı, Mustafa Gorkem Coban, Mustafa Kıgılı, Ali Mert, Oguzhan Kesmez, Murat Ozancı, Caglar Uyulan
Abstract:
In high-performance automotive engineering, the realistic simulation of suspension systems is crucial for enhancing vehicle dynamics and handling. This study focuses on the double wishbone suspension system, prevalent in racing vehicles due to its superior control and stability characteristics. Utilizing MATLAB and Adams Car simulation software, we conduct a comprehensive analysis of displacement behaviors and damper sizing under various dynamic conditions. The initial phase involves using MATLAB to simulate the entire suspension system, allowing for the preliminary determination of damper size based on the system's response under simulated conditions. Following this, manual calculations of wheel loads are performed to assess the forces acting on the front and rear suspensions during scenarios such as braking, cornering, maximum vertical loads, and acceleration. Further dynamic force analysis is carried out using MATLAB Simulink, focusing on the interactions between suspension components during key movements such as bumps and rebounds. This simulation helps in formulating precise force equations and in calculating the stiffness of the suspension springs. To enhance the accuracy of our findings, we focus on a detailed kinematic and dynamic analysis. This includes the creation of kinematic loops, derivation of relevant equations, and computation of Jacobian matrices to accurately determine damper travel and compression metrics. The calculated spring stiffness is crucial in selecting appropriate springs to ensure optimal suspension performance. To validate and refine our results, we replicate the analyses using the Adams Car software, renowned for its detailed handling of vehicular dynamics. The goal is to achieve a robust, reliable suspension setup that maximizes performance under the extreme conditions encountered in racing scenarios. This study exemplifies the integration of theoretical mechanics with advanced simulation tools to achieve a high-performance suspension setup that can significantly improve race car performance, providing a methodology that can be adapted for different types of racing vehicles.
Keywords: FSAE, suspension system, Adams Car, kinematic
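The spring-stiffness step described above typically relates spring rate to wheel rate through the motion ratio of the pushrod linkage; a minimal sketch of that relation follows, with all masses, ratios, and frequencies as illustrative assumptions rather than the study's values.

```python
# Sketch: spring rate from a target ride frequency via the motion ratio.
# All numbers are illustrative assumptions.
import math

m_sprung_corner = 70.0      # kg of sprung mass at one corner (assumed)
motion_ratio = 0.85         # spring travel / wheel travel (assumed)
target_freq = 3.0           # Hz ride frequency typical of a race setup (assumed)

# wheel rate for the target ride frequency: f = (1/2*pi) * sqrt(k_w / m)
k_wheel = (2.0 * math.pi * target_freq) ** 2 * m_sprung_corner   # N/m
# spring rate follows from the motion ratio: k_spring = k_wheel / MR^2
k_spring = k_wheel / motion_ratio ** 2

print(f"wheel rate : {k_wheel / 1000:.2f} N/mm")
print(f"spring rate: {k_spring / 1000:.2f} N/mm")
```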
326 Copyright Clearance for Artificial Intelligence Training Data: Challenges and Solutions
Authors: Erva Akin
Abstract:
The use of copyrighted material for machine learning purposes is a challenging issue in the field of artificial intelligence (AI). While machine learning algorithms require large amounts of data to train and to improve their accuracy and creativity, the use of copyrighted material without permission from the authors may infringe on their intellectual property rights. In order to overcome the copyright hurdle to data sharing, access, and re-use, the use of copyrighted material for machine learning purposes may be considered permissible under certain circumstances. For example, if the copyright holder has given permission to use the data through a licensing agreement, then the use for machine learning purposes may be lawful. It is also argued that copying for non-expressive purposes that do not involve conveying expressive elements to the public, such as automated data extraction, should not be seen as infringing. The focus of such ‘copy-reliant technologies’ is on understanding language rules, styles, and syntax, and no creative ideas are being used. However, the non-expressive use defense sits within the framework of the fair use doctrine, which allows the use of copyrighted material for research or educational purposes. Questions arise because the fair use doctrine is not available in EU law; instead, the InfoSoc Directive provides for a rigid system of exclusive rights with a list of exceptions and limitations. One could only argue that non-expressive uses of copyrighted material for machine learning purposes do not constitute a ‘reproduction’ in the first place. Nevertheless, the use of machine learning with copyrighted material is difficult because EU copyright law applies to the mere use of the works. Two solutions can be proposed to address the problem of copyright clearance for AI training data. The first is to introduce a broad exception for text and data mining, either mandatorily or for commercial and scientific purposes, or to permit the reproduction of works for non-expressive purposes. The second is that copyright laws should permit the reproduction of works for non-expressive purposes, which opens the door to discussions regarding the transposition of the fair use principle from the US into EU law. Both solutions aim to provide more space for AI developers to operate and encourage greater freedom, which could lead to more rapid innovation in the field. The Data Governance Act presents a significant opportunity to advance these debates. Finally, issues concerning the balance between general public interests and legitimate private interests in machine learning training data must be addressed. In my opinion, it is crucial that robot-created output should fall into the public domain. Machines depend on human creativity, innovation, and expression. To encourage technological advancement and innovation, freedom of expression and business operation must be prioritised.
Keywords: artificial intelligence, copyright, data governance, machine learning
325 Hydraulic Performance of Curtain Wall Breakwaters Based on Improved Moving Particle Semi-Implicit Method
Authors: Iddy Iddy, Qin Jiang, Changkuan Zhang
Abstract:
This paper addresses the hydraulic performance of curtain wall breakwaters as coastal protection structures, based on particle-method modelling. Curtain walls function hydraulically as wave barriers: a large part of the incident waves is reflected by the vertical wall, a part is transmitted, and a further part of the wave energy is dissipated through the eddy flows formed beneath the lower end of the plate. As a Lagrangian particle method with a robust capability for numerical representation, the Moving Particle Semi-implicit (MPS) method has proven useful for the design of structures subject to free-surface hydrodynamic flow, such as wave breaking and overtopping. In this study, a vertical two-dimensional numerical model for the simulation of the violent flow associated with the interaction between curtain-wall breakwaters and progressive water waves is developed using the MPS method, in which a higher-precision pressure gradient model and a free-surface particle recognition model are proposed. The wave transmission, reflection, and energy dissipation of the vertical wall were examined experimentally and theoretically. With the numerical wave flume based on the particle method, very detailed velocity and pressure fields around the curtain walls under the action of waves can be computed at each calculation step, and the effect of different wave and structural parameters on the hydrodynamic characteristics was investigated. The simulated temporal profiles and distributions of velocity and pressure in the vicinity of the curtain-wall breakwaters are also compared with the experimental data. The numerical investigation of the hydraulic performance of curtain wall breakwaters indicated that the incident wave is largely reflected from the structure, while large eddies or turbulent flows occur beneath the curtain wall, resulting in substantial energy losses. The improved MPS method shows good agreement between numerical results and the analytical and experimental data reported in related research. It is thus verified that the improved pressure gradient model and free-surface particle recognition method are useful for enhancing the stability and accuracy of the MPS model for water waves and marine structures. Therefore, it is possible for the particle method (MPS method) to achieve an appropriate level of correctness for application in engineering fields through further study.
Keywords: curtain wall breakwaters, free surface flow, hydraulic performance, improved MPS method
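The reflection, transmission, and dissipation behaviour examined above is conventionally summarized by wave-height coefficients and an energy balance; a minimal sketch of that bookkeeping follows, with the wave heights invented purely for illustration.

```python
# Sketch: reflection, transmission and energy-dissipation coefficients for a
# curtain-wall breakwater from incident/reflected/transmitted wave heights.
# The wave heights below are illustrative, not the study's measurements.
def hydraulic_coefficients(H_i, H_r, H_t):
    Kr = H_r / H_i                        # reflection coefficient
    Kt = H_t / H_i                        # transmission coefficient
    Kd = max(0.0, 1.0 - Kr**2 - Kt**2)    # fraction of wave energy dissipated
    return Kr, Kt, Kd

Kr, Kt, Kd = hydraulic_coefficients(H_i=0.12, H_r=0.08, H_t=0.05)
print(f"Kr = {Kr:.2f}, Kt = {Kt:.2f}, energy dissipated = {Kd:.2f}")
```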
324 Improving Low English Oral Skills of 5 Second-Year English Major Students at Debark University
Authors: Belyihun Muchie
Abstract:
This study investigates the low English oral communication skills of 5 second-year English major students at Debark University. It aims to identify the key factors contributing to their weaknesses and propose effective interventions to improve their spoken English proficiency. Mixed-methods research will be employed, utilizing observations, questionnaires, and semi-structured interviews to gather data from the participants. To clearly identify these factors, structured and informal observations will be employed; the former will be used to assess fluency, pronunciation, vocabulary use, and grammar accuracy, and the latter will be suited to observing the natural interactions and communication patterns of learners in the classroom setting. The questionnaires will assess the students' self-perceptions of their skills, perceived barriers to fluency, and preferred learning styles. Interviews will also delve deeper into their experiences and explore specific obstacles faced in oral communication. Data analysis will involve both quantitative and qualitative responses. The structured observation and questionnaire data will be analyzed quantitatively, whereas the informal observation and interview transcripts will be analyzed thematically. Findings will be used to identify the major causes of low oral communication skills, such as limited vocabulary, grammatical errors, pronunciation difficulties, or lack of confidence. They will also help in developing targeted solutions addressing these causes, such as intensive pronunciation practice, conversation simulations, personalized feedback, or anxiety-reduction techniques. Finally, the findings will guide the design of an intervention plan for implementation during the action research phase. The study's outcomes are expected to provide valuable insights into the challenges faced by English major students in developing oral communication skills, contribute to the development of evidence-based interventions for improving spoken English proficiency in similar contexts, and offer practical recommendations for English language instructors and curriculum developers to enhance student learning outcomes. By addressing the specific needs of these students and implementing tailored interventions, this research aims to bridge the gap between theoretical knowledge and practical speaking ability, equipping them with the confidence and skills to flourish in English communication settings.
Keywords: oral communication skills, mixed-methods, evidence-based interventions, spoken English proficiency
323 Simulation and Characterization of Stretching and Folding in Microchannel Electrokinetic Flows
Authors: Justo Rodriguez, Daming Chen, Amador M. Guzman
Abstract:
The detection, treatment, and control of rapidly propagating, deadly viruses such as COVID-19 require the development of inexpensive, fast, and accurate devices to address the urgent needs of the population. Microfluidics-based sensors are among the detection methods and techniques that are easy to use. A micro-analyzer is defined as a microfluidics-based sensor composed of a network of microchannels with varying functions. Given their size, portability, and accuracy, micro-analyzers are proving to be more effective and convenient than other solutions. A micro-analyzer based on the concept of “Lab on a Chip” presents advantages over other, non-micro devices due to its smaller size and its better ratio between useful area and volume. The integration of multiple processes in a single microdevice reduces both the number of necessary samples and the analysis time, leading to the next generation of analyzers for the health sciences. In some applications, the flow of solution within the microchannels is driven by a pressure gradient, which can produce adverse effects on biological samples. A more efficient and less harmful way of controlling the flow in a microchannel-based analyzer is to apply an electric field to induce the fluid motion and either enhance or suppress the mixing process. Electrokinetic flows are characterized by no fewer than two non-dimensional parameters: the electric Rayleigh number and the geometrical aspect ratio. In this research, stable and unstable flows have been studied numerically (and, when possible, will be studied experimentally) in a T-shaped microchannel. Additionally, unstable electrokinetic flows at Rayleigh numbers higher than critical have been characterized. The flow mixing enhancement was quantified in relation to the stretching and folding that fluid particles undergo when they are subjected to supercritical electrokinetic flows. Computational simulations were carried out using a finite element-based program, working with the flow mixing concepts developed by Gollub and collaborators. Hundreds of seeded massless particles were tracked along the microchannel from entrance to exit for both stable and unstable flows. After post-processing, their trajectories and the folding and stretching values for the different flows were found. Numerical results show that, for supercritical electrokinetic flows, the enhancement effects of the folding and stretching processes become more apparent. Consequently, there is an improvement in the mixing process, ultimately leading to a more homogeneous mixture.
Keywords: microchannel, stretching and folding, electrokinetic flow mixing, micro-analyzer
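One simple way to quantify the stretching of tracked particles, in the spirit of the particle-tracking post-processing described above, is the growth of the separation between two initially adjacent particles along the channel; the trajectories below are synthetic stand-ins, not the simulation output.

```python
# Sketch: stretching measured as the growth of separation between two tracked
# particles. Trajectories here are synthetic, for illustration only.
import numpy as np

def stretching(traj_a, traj_b):
    """traj_*: (n_steps, 2) arrays of particle positions along the channel."""
    sep = np.linalg.norm(traj_a - traj_b, axis=1)
    return sep[-1] / sep[0]            # final / initial separation ratio

t = np.linspace(0.0, 1.0, 200)
traj_a = np.column_stack([t, 0.100 + 0.01 * t])   # particle seeded at y = 0.100
traj_b = np.column_stack([t, 0.105 + 0.08 * t])   # neighbour seeded at y = 0.105
print(f"stretching factor: {stretching(traj_a, traj_b):.2f}")
```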
322 A Novel Approach to 3D Thrust Vectoring CFD via Mesh Morphing
Authors: Umut Yıldız, Berkin Kurtuluş, Yunus Emre Muslubaş
Abstract:
Thrust vectoring, especially in military aviation, is a concept widely used to improve maneuverability in already agile aircraft. As this concept is fairly new and cost-intensive to design and test, computational methods are useful in easing the preliminary design process. Computational Fluid Dynamics (CFD) can be utilized in many forms to simulate nozzle flow, and various CFD studies exist for both 2D mechanical and 3D injection-based thrust vectoring; yet 3D mechanical thrust vectoring analyses are, at this point in time, lacking in variety. Additionally, the freely available test data are constrained to limited pitch angles and geometries. In this study, based on a test case provided by NASA, both steady and unsteady 3D CFD simulations are conducted to examine the aerodynamic performance of a mechanical thrust vectoring nozzle model and to validate the numerical model used. Steady analyses are performed to verify the flow characteristics of the nozzle at pitch angles of 0, 10, and 20 degrees, and the results are compared with experimental data. It is observed that the pressure data obtained on the inner surface of the nozzle at each specified pitch angle and under different flow conditions, with pressure ratios of 1.5, 2, and 4, as well as at azimuthal angles of 0, 45, 90, 135, and 180 degrees, exhibit a high level of agreement with the corresponding experimental results. To validate the CFD model, the insights from the steady analyses are utilized, followed by unsteady analyses covering a wide range of pitch angles from 0 to 20 degrees. Throughout the simulations, a mesh morphing method using a carefully calculated mathematical shape deformation model, which reproduces the vectored nozzle shape exactly at each point of its travel, is employed to dynamically alter the divergent part of the nozzle over time within this pitch angle range. The mesh-morphing-based vectored nozzle shapes were compared with the drawings provided by NASA, ensuring a complete match was achieved. This computational approach allowed for the creation of a comprehensive database of results without the need to generate separate solution domains. The database contains results at every 0.01° increment of nozzle pitch angle. The unsteady analyses, generated using the morphing method, are found to be in excellent agreement with experimental data, further confirming the accuracy of the CFD model.
Keywords: thrust vectoring, computational fluid dynamics, 3D mesh morphing, mathematical shape deformation model
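A mesh-morphing step of the kind described above can be pictured as a geometric rule that moves only the divergent-section nodes as the pitch angle changes; the sketch below rotates nodes downstream of a hinge line, with the coordinates, hinge location, and rotation rule being illustrative assumptions rather than the paper's deformation model.

```python
# Sketch: rotate divergent-section mesh nodes about a hinge line by the pitch
# angle, as a toy stand-in for a shape-deformation/mesh-morphing rule.
import numpy as np

def morph_divergent_section(nodes, hinge_x, pitch_deg):
    """Rotate nodes with x > hinge_x about the point (hinge_x, 0) in the x-y plane."""
    theta = np.radians(pitch_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    out = nodes.copy()
    mask = nodes[:, 0] > hinge_x                      # only the divergent part moves
    shifted = nodes[mask] - np.array([hinge_x, 0.0])
    out[mask] = shifted @ rot.T + np.array([hinge_x, 0.0])
    return out

nodes = np.array([[0.00, 0.05], [0.10, 0.04], [0.20, 0.06], [0.30, 0.08]])
print(morph_divergent_section(nodes, hinge_x=0.15, pitch_deg=10.0))
```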
321 Automatic Segmentation of 3D Tomographic Images Contours at Radiotherapy Planning in Low Cost Solution
Authors: D. F. Carvalho, A. O. Uscamayta, J. C. Guerrero, H. F. Oliveira, P. M. Azevedo-Marques
Abstract:
The creation of vector contour slices (ROIs) on body silhouettes of oncologic patients is an important step during radiotherapy planning in clinics and hospitals to ensure the accuracy of oncologic treatment. The radiotherapy planning of patients is performed by complex software packages focused on the analysis of tumor regions, the protection of organs at risk (OARs), and the calculation of radiation doses for anomalies (tumors). These packages are supplied by a few manufacturers and run on sophisticated workstations with vector processing, at a cost of approximately twenty thousand dollars. The Brazilian project SIPRAD (Radiotherapy Planning System) presents a proposal adapted to the reality of emerging countries, which generally do not have the monetary conditions to acquire radiotherapy planning workstations, resulting in waiting queues for new patients' treatment. The SIPRAD project is composed of a set of integrated and interoperable software tools that are able to execute all stages of radiotherapy planning on simple personal computers (PCs) in place of the workstations. The goal of this work is to present an image processing technique, computationally feasible, that is able to perform automatic contour delineation of patient body silhouettes (SIPRAD-Body). The SIPRAD-Body technique operates on grayscale tomography slices, extending its use to three dimensions with a greedy algorithm. SIPRAD-Body creates an irregular polyhedron with an adapted Canny edge algorithm, without the use of preprocessing filters such as contrast and brightness adjustment. In addition, when comparing SIPRAD-Body with existing solutions, a contour similarity of at least 78% is reached. Four criteria are used for this comparison: contour area, contour length, difference between the mass centers, and the Jaccard index. SIPRAD-Body was tested on a set of oncologic exams provided by the Clinical Hospital of the University of Sao Paulo (HCRP-USP). The exams came from patients with different ethnicities, ages, tumor severities, and body regions. Even in services that already have workstations, it is possible to have SIPRAD working alongside the PCs, because the interoperability of communication between both systems through the DICOM protocol improves workflow. Therefore, the conclusion is that the SIPRAD-Body technique is feasible because of its degree of similarity in both new radiotherapy planning services and existing services.
Keywords: radiotherapy, image processing, DICOM RT, Treatment Planning System (TPS)
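The Jaccard index named above as one of the contour-similarity criteria compares two binary regions by intersection over union; a minimal sketch follows, with the two body masks fabricated only to show the computation.

```python
# Sketch: Jaccard index between two binary body masks (hypothetical masks).
import numpy as np

def jaccard_index(mask_a, mask_b):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union if union else 1.0

mask_a = np.zeros((100, 100), dtype=bool); mask_a[20:80, 25:75] = True
mask_b = np.zeros((100, 100), dtype=bool); mask_b[22:82, 24:76] = True
print(f"Jaccard similarity: {jaccard_index(mask_a, mask_b):.2%}")
```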
320 The Impact of Coronal STIR Imaging in Routine Lumbar MRI: Uncovering Hidden Causes to Enhanced Diagnostic Yield of Back Pain and Sciatica
Authors: Maysoon Nasser Samhan, Somaya Alkiswani, Abdullah Alzibdeh
Abstract:
Background: Routine lumbar MRIs for back pain may yield normal results despite persistent symptoms, suggesting other causes of the pain that are not shown on the routine images. Research suggests including coronal STIR imaging to detect additional pathologies such as sacroiliitis. Objectives: This study aims to enhance diagnostic accuracy and aid in determining the treatment process for patients with persistent back pain who have normal routine lumbar MRI (T1 and T2 images) by incorporating a coronal STIR sequence into the examination. Methods: A prospective study involving 274 patients (115 males and 159 females, aged 6–92 years) reviewed their medical records and imaging data following lumbar spine MRI. The study included patients with back pain and sciatica as their primary complaints, all of whom underwent lumbar spine MRI at our hospital to identify potential pathologies. Using a GE Signa HD 1.5T MRI system, each patient received a standard MRI protocol that included T1 and T2 sagittal and axial sequences, as well as a coronal STIR sequence. We collected relevant MRI findings, including abnormalities and structural variations, from the radiology reports. We classified these findings into tables and documented them as counts and percentages, using Fisher's exact test to assess differences between categorical variables. Statistical analysis was conducted using Prism GraphPad software version 10.1.2. The study adhered to ethical guidelines, institutional review board approvals, and patient confidentiality regulations. Results: With the coronal STIR sequence excluded, 83 subjects (30.29%) would have been classified as within normal limits on MRI examination. Thirty-six patients without abnormalities on the T1 and T2 sequences showed abnormalities on the coronal STIR sequence, with 26 cases attributed to spinal pathologies and 10 to non-spinal pathologies. In addition, Fisher's exact test demonstrated a significant association between sacroiliitis diagnosis and abnormalities identified solely through the coronal STIR sequence (P < 0.0001). Conclusion: Implementing coronal STIR imaging as part of routine lumbar MRI protocols has the potential to improve patient care by facilitating a more comprehensive evaluation and management of persistent back pain.
Keywords: magnetic resonance imaging, lumbar MRI, radiology, neurology
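Fisher's exact test, as used in the analysis above, operates on a 2x2 contingency table of diagnosis versus imaging finding; the sketch below shows the call with scipy, and the cell counts are made up for illustration and do not reproduce the study's data.

```python
# Sketch: Fisher's exact test on a 2x2 table of sacroiliitis diagnosis vs.
# STIR-only abnormality. The counts below are invented for illustration.
from scipy.stats import fisher_exact

#                STIR-only abnormality   no STIR-only abnormality
table = [[12, 3],    # sacroiliitis diagnosed
         [8, 77]]    # sacroiliitis not diagnosed
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2e}")
```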
319 Effect of the Orifice Plate Specifications on Coefficient of Discharge
Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer
Abstract:
On the grounds that the orifice plate is relatively inexpensive, requires very little maintenance, and is only calibrated during plant turnarounds, it has come into very prevalent use in the gas industry. Inaccuracy of measurement in fiscal metering stations may well be the most important factor behind mischarges in the natural gas industry in Libya. Even a very small error in measurement can add a rapidly escalating financial burden to custody transfer transactions. The unaccounted gas quantity transferred annually via orifice plates in Libya could be estimated to be in the multi-million dollar range. As oil and gas wealth is the sole source of income for Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. The discharge coefficient has become pivotal in current research undertaken in this regard. Hence, increasing the knowledge of the flow field in a typical orifice meter is indispensable. Recently, and at a drastic pace, CFD has become the most time- and cost-efficient versatile tool for in-depth analysis of the fluid mechanics and heat and mass transfer of various industrial applications. Getting deeper into the underlying physical phenomena, and predicting all relevant parameters and variables with high spatial and temporal resolution, have been the greatest advantages counting for CFD. In this paper, flow phenomena for air passing through an orifice meter were numerically analyzed with CFD-code-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings, compared with vena contracta tappings. The discharge coefficients were compared with the discharge coefficients estimated by ISO 5167. The influences of orifice plate bore thickness, orifice plate thickness, bevel angle, and perpendicularity and buckling of the orifice plate were all duly investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5, and a Reynolds number of 91100 was taken as the model case. The results highlighted that the discharge coefficients were highly responsive to the variation of plate specifications, and in all cases the discharge coefficients for D and D/2 tappings were very close to those of the vena contracta tappings, which are regarded as the ideal arrangement. Also, in a general sense, it was found that the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, and thus further thorough consideration is still needed.
Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications
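The financial sensitivity argued above follows directly from the standard orifice mass-flow relation, in which the metered quantity scales linearly with the discharge coefficient; the sketch below evaluates that relation, with the operating conditions (differential pressure, gas density, Cd values) chosen purely for illustration.

```python
# Sketch: orifice mass-flow relation (as in ISO 5167) and the effect of a small
# discharge-coefficient error. Operating values are illustrative assumptions.
import math

def orifice_mass_flow(Cd, d, beta, dp, rho, eps=1.0):
    """qm = Cd / sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2 * dp * rho)."""
    return Cd / math.sqrt(1.0 - beta**4) * eps * math.pi / 4.0 * d**2 \
        * math.sqrt(2.0 * dp * rho)

D = 2 * 0.0254             # 2 in pipe diameter, m
beta = 0.5
d = beta * D               # orifice bore, m
qm = orifice_mass_flow(Cd=0.605, d=d, beta=beta, dp=25_000.0, rho=45.0)
qm_biased = orifice_mass_flow(Cd=0.610, d=d, beta=beta, dp=25_000.0, rho=45.0)
print(f"flow: {qm:.4f} kg/s; with +0.8% Cd error: {qm_biased:.4f} kg/s")
```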
318 Determinants of Youth Engagement with Health Information on Social Media Platforms in United Arab Emirates
Authors: Niyi Awofeso, Yunes Gaber, Moyosola Bamidele
Abstract:
Since most social media platforms are accessible anytime and anywhere that Internet connections and smartphones are available, the invisibility of the reader raises questions about the accuracy, appropriateness, and comprehensibility of social media communication. Furthermore, the identity and motives of the individuals and organizations who post articles on social media sites are not always transparent. In the health sector, socially networked platforms constitute a common source of health-related information, given their purported wealth of information. Nevertheless, fake blogs and sponsored postings marketing 'natural cures' pervade the most commonly used social media platforms, thus complicating readers' ability to access and understand trustworthy health-related information. This purposive sampling study of 120 participants aged 18-35 years in the UAE was conducted between September and December 2017, and explored commonly used social media platforms, the frequency of use of social media for accessing health-related information, and approaches for assessing the trustworthiness of health information on social media platforms. Results indicate that WhatsApp (95%), Instagram (87%) and YouTube (82%) were the most commonly used social media platforms among respondents. The majority of respondents (81%) indicated that they regularly access social media to get health-associated information. More than half of respondents (55%) with non-chronic health status relied on unsolicited messages to obtain health-related information. Doctors' health blogs (21%) and the social media sites of international healthcare organizations (20%) constitute the most trusted sources of health information among respondents, with UAE government health agencies' social media accounts trusted by 15% of respondents. Cardiovascular diseases, diabetes, and hypertension were the most commonly searched topics on social media (29%), followed by nutrition (20%) and skin care (16%). The majority of respondents (41%) rely on the reliability of Google search hits, 22% check health information only from 'reliable' social media sites, while 8% use 'logic' to ascertain the reliability of health information. As social media has rapidly become an integral part of the health landscape, it is important that health care policy makers, healthcare providers, and social media companies collaborate to promote the positive aspects of social media for young people, whilst mitigating the potential negatives. Utilizing popular social media platforms for posting reader-friendly health information will achieve high coverage. Improving youth digital literacy will facilitate easier access to trustworthy information on the internet.
Keywords: social media, United Arab Emirates, youth engagement, digital literacy
317 Using Signature Assignments and Rubrics in Assessing Institutional Learning Outcomes and Student Learning
Authors: Leigh Ann Wilson, Melanie Borrego
Abstract:
The purpose of institutional learning outcomes (ILOs) is to assess what students across the university know and what they do not. The issue is gathering this information in a systematic and usable way. This presentation will explain how one institution has engineered this process for both student success and maximum faculty input into curriculum and course design. At Brandman University, there are three levels of learning outcomes: course, program, and institutional. Institutional Learning Outcomes (ILOs) are mapped to specific courses. Faculty course developers write the signature assignments (SAs) in alignment with the Institutional Learning Outcomes for each course. These SAs use a specific rubric that is applied consistently by every section and every instructor. Each year, the 12-member General Education Team (GET), as a part of their work, conducts the calibration and assessment of the university-wide SAs and the related rubrics for one or two of the five ILOs. GET members, who are senior faculty and administrators representing each of the university's schools, lead the calibration meetings. Specifically, calibration is a process designed to ensure the accuracy and reliability of evaluating signature assignments by working with peer faculty to interpret rubrics and compare scoring. These calibration meetings include the full-time and adjunct faculty members who teach the course, to ensure consensus on the application of the rubric. Each calibration session is chaired by a GET representative as well as the course custodian/contact where the ILO signature assignment resides. The overall calibration process GET follows includes multiple steps, such as: contacting and inviting relevant faculty members to participate; organizing and hosting calibration sessions; and reviewing and discussing at least 10 samples of student work from class sections during the previous academic year, for each applicable signature assignment. The commitment for calibration team members consists of attending two virtual meetings, each lasting up to three hours. The first meeting focuses on interpreting the rubric, and the second meeting involves comparing scores for sample work and sharing feedback about the rubric and assignment. Participants are expected to follow all directions provided, participate actively, and respond to scheduling requests and other emails within 72 hours. The virtual meetings are recorded for future institutional use. Adjunct faculty are paid a small stipend after participating in both calibration meetings. Full-time faculty can use this work on their annual faculty report for "internal service" credit.
Keywords: assessment, assurance of learning, course design, institutional learning outcomes, rubrics, signature assignments
316 Metal Binding Phage Clones in a Quest for Heavy Metal Recovery from Water
Authors: Tomasz Łęga, Marta Sosnowska, Mirosława Panasiuk, Lilit Hovhannisyan, Beata Gromadzka, Marcin Olszewski, Sabina Zoledowska, Dawid Nidzworski
Abstract:
Toxic heavy metal ion contamination of industrial wastewater has recently become a significant environmental concern in many regions of the world. Although the majority of heavy metals are naturally occurring elements found on the earth's surface, anthropogenic activities such as mining and smelting, industrial production, and agricultural use of metals and metal-containing compounds are responsible for the majority of environmental contamination and human exposure. The permissible limits (ppm) for heavy metals in food, water, and soil are frequently exceeded, posing a hazard to humans, other organisms, and the environment as a whole. Human exposure to highly nickel-polluted environments causes a variety of pathologic effects. In 2008, nickel received the shameful name of “Allergen of the Year” (GILLETTE 2008). According to dermatologists, the frequency of nickel allergy is still growing, and it cannot be explained only by fashionable piercings and nickel devices used in medicine (such as coronary stents and endoprostheses). Effective remediation methods for removing heavy metal ions from soil and water are becoming increasingly important. Among others, methods such as chemical precipitation, micro- and nanofiltration, membrane separation, conventional coagulation, electrodialysis, ion exchange, reverse and forward osmosis, photocatalysis, and polymer or carbon nanocomposite absorbents have all been investigated so far. The importance of environmentally sustainable industrial production processes and the conservation of dwindling natural resources has highlighted the need for affordable, innovative biosorptive materials capable of recovering specific chemical elements from dilute aqueous solutions. The use of combinatorial phage display techniques for selecting and recognizing material-binding peptides with a selective affinity for any target, particularly inorganic materials, has gained considerable interest in the development of advanced bio- or nano-materials. However, due to the limitations of phage display libraries and the biopanning process, the accuracy of molecular recognition for inorganic materials remains a challenge. This study presents the isolation, identification and characterisation of metal binding phage clones that preferentially recover nickel.
Keywords: heavy metal recovery, cleaning water, phage display, nickel
315 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel
Authors: Hamed Kalhori, Lin Ye
Abstract:
In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel, remote from the impact locations, were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g., magnitude) of a stochastic force at a defined location, is extended here to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that a number of impact forces are simultaneously exerted at all potential locations, but the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of responses resulting from impact at each potential location. The problem can be categorized as under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations). The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are independently chosen to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different widths of the signal windows on the reconstructed force is examined. It is observed that the impact force generated by the instrumented impact hammer is sensitive to the impact location on the structure, having a shape ranging from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed by using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match well with the actual forces in terms of magnitude and duration.
Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction
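A Tikhonov-regularized deconvolution of the kind described above can be sketched as solving a Toeplitz linear system built from an impulse response; the impulse response, half-sine force, noise level, and regularization heuristic below are all illustrative assumptions (the paper itself selects the parameter via the L-curve or GCV).

```python
# Sketch: Tikhonov-regularized force deconvolution from a synthetic sensor
# response. All signals and parameters are illustrative, not the panel data.
import numpy as np
from scipy.linalg import toeplitz

n, dt = 400, 1e-5
t = np.arange(n) * dt
h = np.exp(-t / 2e-3) * np.sin(2 * np.pi * 3000 * t)        # assumed impulse response
f_true = np.where(t < 1e-3, np.sin(np.pi * t / 1e-3), 0.0)  # half-sine impact force
H = toeplitz(h, np.zeros(n)) * dt                           # discretized convolution
y = H @ f_true + 1e-6 * np.random.default_rng(1).normal(size=n)

A = H.T @ H
lam = 1e-3 * np.trace(A) / n        # simple heuristic; L-curve/GCV in the study
f_rec = np.linalg.solve(A + lam * np.eye(n), H.T @ y)
print(f"correlation with true force: {np.corrcoef(f_true, f_rec)[0, 1]:.3f}")
```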
314 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment
Authors: Arindam Chaudhuri
Abstract:
Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitivity to noisy samples and handles impreciseness in the training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. The different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves the effects of outliers and the problems of class imbalance and overlap, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
Keywords: FRSVM, Hadoop, MapReduce, PFRSVM
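The core idea of a fuzzy SVM with a hyperbolic tangent kernel can be sketched on a single machine by down-weighting samples far from their class centroid; this is a simplified stand-in for PFRSVM (no fuzzy rough sets, no Hadoop or MapReduce parallelization), with the dataset and membership rule chosen only for illustration.

```python
# Sketch: SVM with a hyperbolic-tangent (sigmoid) kernel where fuzzy membership
# values act as per-sample weights. Simplified single-machine illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# crude fuzzy membership: samples far from their class centroid get lower weight
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
dist = np.linalg.norm(X - centroids[y], axis=1)
membership = 1.0 - dist / (dist.max() + 1e-9)

clf = SVC(kernel="sigmoid", gamma="scale", C=1.0)
clf.fit(X, y, sample_weight=membership)
print(f"training accuracy: {clf.score(X, y):.2%}, "
      f"support vectors: {clf.n_support_.sum()}")
```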
313 Simulation Study on Polymer Flooding with Thermal Degradation in Elevated-Temperature Reservoirs
Authors: Lin Zhao, Hanqiao Jiang, Junjian Li
Abstract:
Polymers injected into elevated-temperature reservoirs inevitably suffer from thermal degradation, resulting in severe viscosity loss and poor flooding performance. However, for polymer flooding in such reservoirs, present simulators fail to provide accurate results because they lack a description of thermal degradation. In light of this, the objectives of this paper are to provide a simulation model for polymer flooding with thermal degradation and to study the effect of thermal degradation on polymer flooding in elevated-temperature reservoirs. Firstly, a thermal degradation experiment was conducted to obtain the degradation law of polymer concentration and viscosity. Different types of polymers were degraded in the thermo tank at elevated temperatures. Afterward, based on the obtained law, a streamline-assisted model was proposed to simulate the degradation process under in-situ flow conditions. Model validation was performed with field data from a well group of an offshore oilfield. Finally, the effect of thermal degradation on polymer flooding was studied using the proposed model. Experimental results showed that the polymer concentration remained unchanged, while the viscosity degraded exponentially with time. The polymer viscosity was functionally dependent on the polymer degradation time (PDT), which represents the elapsed time starting from the injection of the polymer particle. Tracing the real flow path of each polymer particle was therefore required, which is why the presented simulation model is streamline-assisted. An equation relating PDT to the time of flight (TOF) along a streamline was built from the law of polymer particle transport. Based on the field polymer samples and dynamic data, the new model proved its accuracy. The study of the degradation effect on polymer flooding indicated that: (1) the viscosity loss increases exponentially with TOF in the main body of the polymer slug and remains constant in the slug front; (2) the response time of polymer flooding is delayed, but the effective time is prolonged; (3) the breakthrough of subsequent water is eased; (4) the capacity of the polymer to adjust the injection profile is diminished; (5) the incremental recovery is reduced significantly. In general, the effect of thermal degradation on polymer flooding performance is rather negative. This paper provides a more comprehensive insight into polymer thermal degradation in both the physical process and field application. The proposed simulation model offers an effective means for simulating the polymer flooding process with thermal degradation. The negative effect of thermal degradation suggests that polymer thermal stability should be given full consideration when designing polymer flooding projects in elevated-temperature reservoirs.
Keywords: polymer flooding, elevated-temperature reservoir, thermal degradation, numerical simulation
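The exponential viscosity-degradation law and the PDT-to-TOF mapping can be pictured with a very small sketch; here the decay is taken as first-order, PDT is simply equated to TOF for particles in the slug body, and the initial viscosity and rate constant are assumed values, not the experimental fit.

```python
# Sketch: exponential viscosity loss along a streamline, assuming PDT = TOF
# in the slug body. mu0 and k are illustrative assumptions only.
import numpy as np

mu0 = 25.0   # mPa*s, viscosity of the injected polymer solution (assumed)
k = 0.01     # 1/day, first-order thermal degradation rate constant (assumed)

def viscosity_along_streamline(tof_days):
    """Viscosity at a position whose time of flight equals the particle's
    polymer degradation time (PDT), i.e. its age since injection."""
    return mu0 * np.exp(-k * np.asarray(tof_days))

tof = np.linspace(0.0, 300.0, 7)   # days of flight from the injector
print(np.round(viscosity_along_streamline(tof), 2))
```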
312 Optimizing Production Yield Through Process Parameter Tuning Using Deep Learning Models: A Case Study in Precision Manufacturing
Authors: Tolulope Aremu
Abstract:
This paper is based on the idea of using deep learning methodology for optimizing production yield by tuning a few key process parameters in a manufacturing environment. The study explicitly addresses how to maximize production yield and minimize operational costs by utilizing advanced neural network models, specifically Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs). These models were implemented using the Python-based frameworks TensorFlow and Keras. The targets of the research are precision molding processes in which temperature ranges between 150°C and 220°C, pressure ranges between 5 and 15 bar, and material flow rate ranges between 10 and 50 kg/h, critical parameters that have a great effect on yield. A dataset of 1 million production cycles spanning five continuous years was considered, with detailed logs showing the exact parameter settings and yield output. The LSTM model captured time-dependent trends in the production data, while the CNN analyzed the spatial correlations between parameters. The models were designed in a supervised learning manner, trained with an MSE loss function and optimized through the Adam optimizer. After a total of 100 training epochs, 95% accuracy was achieved by the models in recommending optimal parameter configurations. Results indicated an increase in production yield of 12% compared with traditional RSM and DOE methods. In addition, the error margin was reduced by 8%, giving consistently high product quality from the deep learning models. The monetary value was around $2.5 million annually in costs saved from material waste, energy consumption, and equipment wear as a result of implementing the optimized process parameters. The system was deployed in an industrial production environment with the help of a hybrid cloud setup: Microsoft Azure was used for data storage, while the training and deployment of the models were performed on Google Cloud AI. Real-time monitoring of the process and automatic tuning of parameters depend on this cloud infrastructure. To put it into perspective, deep learning models, especially those employing LSTM and CNN architectures, optimize production yield by fine-tuning process parameters. Future research will consider reinforcement learning with a view to further enhancing system autonomy and scalability across various manufacturing sectors.
Keywords: production yield optimization, deep learning, tuning of process parameters, LSTM, CNN, precision manufacturing, TensorFlow, Keras, cloud infrastructure, cost saving
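A minimal Keras sketch of the LSTM regressor described above (MSE loss, Adam optimizer, 100 epochs) follows; the input window length, layer sizes, and the synthetic yield function are placeholders rather than the study's architecture or data.

```python
# Sketch: a small Keras LSTM regressor mapping recent process-parameter
# histories (temperature, pressure, flow rate) to yield. Data is synthetic.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.uniform([150, 5, 10], [220, 15, 50], size=(2000, 20, 3)).astype("float32")
y = (0.4 * X[:, -1, 0] / 220 + 0.3 * X[:, -1, 1] / 15 + 0.3 * X[:, -1, 2] / 50)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20, 3)),   # 20 past cycles, 3 parameters
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),               # predicted yield
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=100, batch_size=64, validation_split=0.2, verbose=0)
```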
311 A Method to Predict the Thermo-Elastic Behavior of Laser-Integrated Machine Tools
Authors: C. Brecher, M. Fey, F. Du Bois-Reymond, S. Neus
Abstract:
Additive manufacturing has emerged as a fast-growing segment within manufacturing technologies. Established machine tool manufacturers, such as DMG MORI, recently presented machine tools combining milling and laser welding. In this way, machine tools can realize a higher degree of flexibility and a shorter production time. Still, there are challenges that have to be accounted for in terms of maintaining the necessary machining accuracy, especially due to thermal effects arising from the use of high-power laser processing units. To study the thermal behavior of laser-integrated machine tools, it is essential to analyze and simulate the thermal behavior of machine components, individually and assembled. This information will help to design a geometrically stable machine tool under the influence of high-power laser processes. This paper presents an approach to decrease the loss of machining precision due to thermal impacts. Real effects of laser machining processes are considered, enabling an optimized design of the machine tool, and of its components, in the early design phase. The core element of this approach is a matched FEM model considering all relevant variables, e.g., laser power, angle of the laser beam, reflection coefficients, and the heat transfer coefficient. Hence, a systematic approach to obtain this matched FEM model is essential. The two constituent aspects of the method are describing the thermal behavior of the structural components and predicting the laser beam path in order to determine the relevant beam intensity on those components. To match the model, both aspects have to be combined and verified empirically. In this context, an essential machine component of a five-axis machine tool, the turn-swivel table, serves as the demonstration object for the verification process. Therefore, a turn-swivel table test bench as well as an experimental set-up to measure the beam propagation were developed and are described in the paper. In addition to the empirical investigation, a simulative approach to the described types of experimental examination is presented. In conclusion, it is shown that the method, together with a good understanding of the two core aspects, the thermo-elastic machine behavior and the laser beam path, as well as their combination, helps designers to minimize the loss of precision in the early stages of the design phase.
Keywords: additive manufacturing, laser beam machining, machine tool, thermal effects
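The variables listed for the matched FEM model combine into an absorbed heat flux on the struck surface; a minimal sketch of that combination is shown below, with the laser power, spot size, reflectivity, and incidence angle all being illustrative assumptions, not the paper's values.

```python
# Sketch: absorbed laser heat flux from power, reflectivity and incidence angle,
# the kind of thermal boundary load fed into an FEM model. Values are assumed.
import math

def absorbed_heat_flux(power_w, spot_radius_m, reflectivity, incidence_deg):
    area = math.pi * spot_radius_m**2
    return power_w * (1.0 - reflectivity) * math.cos(math.radians(incidence_deg)) / area

q = absorbed_heat_flux(power_w=2000.0, spot_radius_m=1.5e-3,
                       reflectivity=0.55, incidence_deg=30.0)
print(f"absorbed heat flux: {q / 1e6:.1f} MW/m^2")
```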
Procedia PDF Downloads 265
310 Miniaturized PVC Sensors for Determination of Fe2+, Mn2+ and Zn2+ in Buffalo-Cows’ Cervical Mucus Samples
Authors: Ahmed S. Fayed, Umima M. Mansour
Abstract:
Three polyvinyl chloride (PVC) membrane sensors were developed for the electrochemical evaluation of ferrous, manganese, and zinc ions. The sensors were used to assay metal ions in the cervical mucus (CM) of Egyptian river buffalo-cows (Bubalus bubalis), as their levels vary with the cyclical hormone variation during the different phases of the estrus cycle. The presented sensors are based on the ionophores β-cyclodextrin (β-CD), hydroxypropyl β-cyclodextrin (HP-β-CD), and sulfocalix-4-arene (SCAL) for sensors 1, 2, and 3, which respond to Fe2+, Mn2+, and Zn2+, respectively. Dioctyl phthalate (DOP) was used as the plasticizer in a polymeric matrix of polyvinyl chloride (PVC). To increase the selectivity and sensitivity of the sensors, each sensor was enriched with a suitable complexing agent, which enhanced the sensor’s response. For sensor 1, β-CD was mixed with bathophenanthroline; for sensor 2, porphyrin was incorporated with HP-β-CD; while for sensor 3, oxine was used as the complexing agent with SCAL. Linear responses over 10⁻⁷ to 10⁻² M with cationic slopes of 53.46, 45.01, and 50.96 over the pH range 4-8 were obtained using coated graphite sensors for ferrous, manganese, and zinc ionic solutions, respectively. The three sensors were validated according to the IUPAC guidelines. The results obtained by the presented potentiometric procedures were statistically analyzed and compared with those obtained by an atomic absorption spectrophotometric method (AAS). No significant differences in either accuracy or precision were observed between the two techniques. The sensors were successfully applied to the determination of the three studied cations in CM in order to determine the proper time for artificial insemination (AI), and the results were compared with those obtained upon analyzing the samples by AAS. Proper detection of estrus and correct timing of AI are necessary to maximize buffalo production. In this experiment, 30 multiparous buffalo-cows in their second to third lactation and weighing 415-530 kg were synchronized with the OVSynch protocol. Samples were taken at three times around ovulation: on day 8 of the OVSynch protocol, on day 9 (20 h before AI), and on day 10 (1 h before AI). Besides the analysis of the trace elements (Fe2+, Mn2+ and Zn2+) in CM using the three sensors, the three cations and also Cu2+ were analyzed by AAS in the CM and blood samples. The results obtained were correlated with hormonal analysis of serum samples and ultrasonography in order to determine the optimum time of AI. The results showed significant differences and a strong correlation between the Zn2+ content of CM during the heat phase and the ovulation time, indicating that this parameter could be used as a tool to decide the optimal time of AI in buffalo-cows.Keywords: PVC Sensors, buffalo-cows, cyclodextrins, atomic absorption spectrophotometry, artificial insemination, OVSynch protocol
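As a minimal sketch of the electrode calibration and the statistical comparison described above, the snippet below fits the slope of a potential-versus-log-concentration line over an assumed 10⁻⁷ to 10⁻² M range and runs a paired t-test between hypothetical sensor and AAS results. All numbers are illustrative, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical calibration data: potentials (mV) measured at decade concentrations
conc_molar = np.array([1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2])
potential_mv = np.array([-12.0, 41.0, 94.5, 147.0, 200.5, 253.0])

fit = stats.linregress(np.log10(conc_molar), potential_mv)
print(f"slope = {fit.slope:.2f} mV/decade, r = {fit.rvalue:.4f}")

# Hypothetical replicate results (mg/L) from the sensor and from AAS on the same samples
sensor = np.array([2.31, 2.45, 2.38, 2.50, 2.42])
aas    = np.array([2.35, 2.41, 2.40, 2.47, 2.44])
t_stat, p_value = stats.ttest_rel(sensor, aas)
print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference
```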
Procedia PDF Downloads 219
309 A Method against Obsolescence of Three-Dimensional Archaeological Collection. Two Cases of Study from Qubbet El-Hawa Necropolis, Aswan, Egypt
Authors: L. Serrano-Lara, J.M Alba-Gómez
Abstract:
The Qubbet el-Hawa Project has documented archaeological artifacts as 3D models by laser scanning since 2015. Currently, the research has established a methodology for developing a high-accuracy photographic texture for each geometrical 3D model. Furthermore, a methodology for embedding the complete digital surrogate into a 3DPDF document has been obtained; this document is used as a catalogue worksheet that carries the archaeological data and, at the same time, allows precise measurements, volume calculations, and cross-section mapping of each scanned artifact. This validated archaeological documentation is the first step towards dissemination, application in the Qubbet el-Hawa Virtual Museum and, moreover, a multi-sensory experience through 3D-printed archaeological artifacts. Material culture from four funerary complexes constructed in West Aswan has thus become physical replicas, opening up the archaeological research process itself and offering creative possibilities for museology and educational projects. This paper shares a method of acquiring texture for the scanner's output in order to achieve a 3DPDF archaeological catalogue and, on the other hand, to allow colorful 3D printing of singular archaeological artifacts. The proposed method has been applied to two concrete cases: a polychrome wooden ushabti and a cartonnage mask belonging to a lady, both recovered from the intact tomb QH34aa. Both resulting 3D models have been implemented in three main applications: an archaeological 3D catalogue, public dissemination activities, and the use of the 3D artifact model in a bachelor education program. Through these three applications, productive interaction between spectators and the three-dimensional artifacts has increased; moreover, their functionality as archaeological documentation has been consolidated. By finding the right methodology to assign a specific color to each vertex of the geometric 3D model, two essential archaeological applications were achieved: firstly, the 3DPDF as a display document for an archaeological catalogue; secondly, the possibility of obtaining a colored 3D-printed object to be displayed in public exhibitions. Obsolescent 3D models have thus become updated archaeological documentation of the cultural material of tomb QH34aa. Therefore, the Qubbet el-Hawa Project has actualized the educational potential of its results thanks to a multi-sensory experience arising from 3D-scanned archaeological artifacts.Keywords: 3D printed, 3D scanner, Middle Kingdom, Qubbet el-Hawa necropolis, virtual archaeology
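The core technical step mentioned above, assigning a specific color to each vertex of a geometric 3D model, can be illustrated with the minimal sketch below: given a mesh's UV coordinates and a texture photograph, each vertex takes the color of its nearest pixel. The array names, the synthetic data, and the nearest-pixel sampling are assumptions for illustration, not the project's actual pipeline.

```python
import numpy as np

def sample_vertex_colors(uv_coords, texture_rgb):
    """Assign one RGB color per vertex by nearest-pixel lookup in a texture image.

    uv_coords   : (n_vertices, 2) array of UV coordinates in [0, 1]
    texture_rgb : (height, width, 3) image array
    """
    height, width, _ = texture_rgb.shape
    # Map UV space to pixel indices (the V axis is flipped in most image conventions)
    cols = np.clip((uv_coords[:, 0] * (width - 1)).round().astype(int), 0, width - 1)
    rows = np.clip(((1.0 - uv_coords[:, 1]) * (height - 1)).round().astype(int), 0, height - 1)
    return texture_rgb[rows, cols]

# Tiny synthetic example: 3 vertices sampled from a 4x4 texture
texture = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
uv = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
print(sample_vertex_colors(uv, texture))
```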
Procedia PDF Downloads 141
308 Fracture Behaviour of Functionally Graded Materials Using Graded Finite Elements
Authors: Mohamad Molavi Nojumi, Xiaodong Wang
Abstract:
In this research, the fracture behaviour of linear elastic, isotropic functionally graded materials (FGMs) is investigated using a modified finite element method (FEM). FGMs are advantageous because they enhance the bonding strength of two incompatible materials and reduce residual and thermal stresses. Ceramic/metal composites are a main type of FGM. Since ceramic materials are brittle, there is a high possibility of cracks appearing during fabrication or in-service loading. In addition, damage analysis is necessary for a safe and efficient design. FEM is a strong numerical tool for analyzing complicated problems; thus, FEM is used to investigate the fracture behaviour of FGMs. Here, an accurate 9-node biquadratic quadrilateral graded element is proposed in which the variation of material properties is accounted for at the element level. The stiffness matrix of the graded elements is obtained using the principle of minimum potential energy. The implementation of graded elements avoids the forced sudden jump of material properties that occurs when traditional homogeneous finite elements are used to model FGMs. Numerical results are verified against existing solutions. Different numerical simulations are carried out to model stationary crack problems in nonhomogeneous plates. In these simulations, material variation is assumed to occur in directions perpendicular and parallel to the crack line. Linear and exponential functions, the two forms most commonly discussed in the literature, are utilized to model the material gradient. Various crack lengths are also considered. A major difference between the fracture behaviour of FGMs and that of homogeneous materials is related to the break of material symmetry. For example, when the material gradation direction is normal to the crack line, coupled mode I and mode II fracture exists even under pure mode I loading, originating from the shear induced in the model. Therefore, proper modelling of the material variation is necessary to capture the fracture behaviour of FGMs, especially when the material gradient index is high. Fracture properties such as mode I and mode II stress intensity factors (SIFs), energy release rates, and field variables near the crack tip are investigated and compared with results obtained using conventional homogeneous elements. It is revealed that graded elements provide higher accuracy with less effort in comparison with conventional homogeneous elements.Keywords: finite element, fracture mechanics, functionally graded materials, graded element
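The essential idea of a graded element, evaluating the spatially varying material property at each integration point when assembling the stiffness matrix, can be sketched in one dimension. The snippet below, reduced to a 2-node graded bar element with an assumed exponential modulus variation for brevity, illustrates that idea only; it is not the authors' 9-node biquadratic formulation.

```python
import numpy as np

def graded_bar_stiffness(x1, x2, area, E_func, n_gauss=2):
    """Stiffness matrix of a 2-node bar element with a spatially varying modulus E(x).

    Graded-element idea: E is evaluated at each Gauss point instead of being held
    constant over the element, so k = sum_g w_g * E(x_g) * A * B^T B * detJ.
    """
    pts, wts = np.polynomial.legendre.leggauss(n_gauss)  # Gauss points/weights on [-1, 1]
    length = x2 - x1
    detJ = length / 2.0
    B = np.array([[-1.0 / length, 1.0 / length]])        # strain-displacement matrix of a bar
    k = np.zeros((2, 2))
    for xi, w in zip(pts, wts):
        x = x1 + (xi + 1.0) / 2.0 * length               # physical coordinate of the Gauss point
        k += w * E_func(x) * area * (B.T @ B) * detJ
    return k

# Assumed exponential gradient E(x) = E0 * exp(beta * x), e.g. a ceramic-to-metal transition
E0, beta = 200e9, -5.0
k = graded_bar_stiffness(0.0, 0.1, area=1e-4, E_func=lambda x: E0 * np.exp(beta * x))
print(k)
```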
Procedia PDF Downloads 174
307 Social Media Data Analysis for Personality Modelling and Learning Styles Prediction Using Educational Data Mining
Authors: Srushti Patil, Preethi Baligar, Gopalkrishna Joshi, Gururaj N. Bhadri
Abstract:
In designing learning environments, instructional strategies can be tailored to the learning style of an individual to ensure effective learning. In this study, information shared on social media, namely Facebook, is used to predict the learning style of a learner. Previous research has shown that Facebook data can be used to predict user personality: users with a particular personality exhibit an inherent pattern in their digital footprint on Facebook. The proposed work aims to correlate the users' personalities, predicted from Facebook data, with their learning styles, predicted through questionnaires. For millennial learners, Facebook has become a primary means of information sharing and interaction with peers; thus, it can serve as a rich bed for research and direct the design of learning environments. The authors conducted this study in an undergraduate freshman engineering course. Data from 320 freshman Facebook users was collected, and the same users also participated in the learning style and personality prediction surveys. Kolb's Learning Style questionnaire and the Big Five personality inventory were adopted for the survey. The users agreed to participate in this research and signed individual consent forms. A dedicated Facebook page was created to collect user data such as personal details, status updates, comments, demographic characteristics, and egocentric network parameters. This data was captured by an application written in Python. The data captured from Facebook was subjected to text analysis using the Linguistic Inquiry and Word Count dictionary. Analysis of the questionnaire data reveals each student's personality and learning style. The results obtained from the analysis of the Facebook, learning style, and personality data were then fed into an automatic classifier trained using data mining techniques, namely rule-based classifiers and decision trees, which predict user personality and learning styles by analysing common patterns. Rule-based classifiers applied to the text analysis help categorize the Facebook data into positive, negative, and neutral. In total, two models were trained: one to predict personality from Facebook data and another to predict learning styles from the personalities. The results show that the classifier models have high accuracy, which makes the proposed method a reliable one for predicting user personality and learning styles.Keywords: educational data mining, Facebook, learning styles, personality traits
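A minimal sketch of the second model described above, a decision tree mapping Big Five personality scores to a Kolb learning-style category, is given below. The synthetic scores, labels, and scikit-learn's DecisionTreeClassifier are assumed stand-ins; the study's actual features, data, and tree implementation are not specified.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in data: Big Five scores (0-1) and Kolb learning-style labels
rng = np.random.default_rng(0)
X = rng.random((320, 5))  # openness, conscientiousness, extraversion, agreeableness, neuroticism
y = rng.choice(["diverging", "assimilating", "converging", "accommodating"], size=320)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0)  # a shallow tree keeps the rules interpretable
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```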
Procedia PDF Downloads 231
306 Linkage Disequilibrium and Haplotype Blocks Study from Two High-Density Panels and a Combined Panel in Nelore Beef Cattle
Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari
Abstract:
Genotype imputation has been used to reduce genomic selection costs. In order to increase haplotype detection accuracy in methods that consider linkage disequilibrium, another approach can be used, such as combining genotype data from different panels. Therefore, this study aimed to evaluate linkage disequilibrium and haplotype blocks in two high-density panels before and after imputation to a combined panel in Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip (IHD), of which 93 animals (23 bulls and 70 progenies) were also genotyped with the Affymetrix Axiom Genome-Wide BOS 1 Array Plate (AHD). After quality control, 809 IHD animals (509,107 SNPs) and 93 AHD animals (427,875 SNPs) remained for analysis. The combined genotype panel (CP) was constructed by merging both panels after quality control, resulting in 880,336 SNPs. Imputation analysis was conducted using the software FImpute v.2.2b. The reference (CP) and target (IHD) populations consisted of 23 bulls and 786 animals, respectively. The linkage disequilibrium and haplotype block studies were carried out for IHD, AHD, and the imputed CP. Two linkage disequilibrium measures were considered: the correlation coefficient between alleles at two loci (r²) and |D’|. Both measures were calculated using the software PLINK, and the haplotype blocks were estimated using the software Haploview. The r² measure showed a different decay with distance than |D’|, whereas AHD and IHD had almost the same decay. For r², even with possible overestimation due to the small AHD sample size (93 animals), IHD presented higher values than AHD at shorter distances, but as distance increased both panels presented similar values. The r² measure is influenced by the minor allele frequencies of the SNP pair, which can explain the observed difference between the r² and |D’| decays. As a sum of the combinations between the Illumina and Affymetrix panels, the CP presented a decay equivalent to the mean of these combinations. The numbers of haplotype blocks detected for IHD, AHD, and CP were 84,529, 63,967, and 140,336, respectively. The IHD blocks had a mean length of 137.70 ± 219.05 kb, the AHD blocks 102.10 ± 155.47 kb, and the CP blocks 107.10 ± 169.14 kb. The majority of the haplotype blocks in these three panels were composed of fewer than 10 SNPs, with only 3,882 (IHD), 193 (AHD), and 8,462 (CP) haplotype blocks composed of 10 SNPs or more. There was an increase in the number of chromosomes covered with long haplotypes when CP was used, as well as an increase in haplotype coverage of the short chromosomes (23-29), which can contribute to studies that explore haplotype blocks. In general, using CP could be an alternative to increase density and the number of haplotype blocks, increasing the probability of obtaining a marker close to a quantitative trait locus of interest.Keywords: Bos taurus indicus, decay, genotype imputation, single nucleotide polymorphism
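For readers less familiar with the two linkage disequilibrium measures compared above, the sketch below computes r² and |D'| for one pair of biallelic SNPs from assumed haplotype and allele frequencies, using the standard textbook formulas; it is not PLINK's implementation and the frequencies are illustrative.

```python
def ld_measures(p_ab, p_a, p_b):
    """Compute r^2 and |D'| for two biallelic loci from haplotype/allele frequencies.

    p_ab : frequency of the A-B haplotype
    p_a  : frequency of allele A at locus 1
    p_b  : frequency of allele B at locus 2
    """
    d = p_ab - p_a * p_b
    r2 = d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return r2, abs(d / d_max)

# Assumed example frequencies for one SNP pair
r2, d_prime = ld_measures(p_ab=0.45, p_a=0.6, p_b=0.55)
print(f"r^2 = {r2:.3f}, |D'| = {d_prime:.3f}")
```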
Procedia PDF Downloads 280
305 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators
Authors: M. A. Okezue, K. L. Clase, S. R. Byrn
Abstract:
The requirement to maintain data integrity in laboratory operations is critical for regulatory compliance, and automation of procedures reduces the incidence of human error. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, the relevant formulae were entered into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validation was conducted using five data sets of manually computed assay results, and the acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at a 95% confidence interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in the titrimetric evaluation of ZnSO4 tablets. Human calculation errors were minimized when the procedures were automated in quality control laboratories, and the assay procedure for the formulation was completed in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets
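As a minimal sketch of the two calculations the abstract automates, the snippet below standardizes an approximately 0.1 M EDTA solution against a zinc primary standard and computes the percent of label claim from one assay titration, assuming a 1:1 Zn:EDTA complex and illustrative masses, volumes, and label claim. It is a hedged illustration of the arithmetic only, not the USP worksheet or the authors' spreadsheet.

```python
# Assumed molar masses (g/mol); a real worksheet would use compendial values
MW_ZN = 65.38
MW_ZNSO4_7H2O = 287.56

def edta_molarity(zinc_std_mass_g, titre_volume_ml):
    """Standardize EDTA against a zinc primary standard, assuming 1:1 complexation."""
    moles_zn = zinc_std_mass_g / MW_ZN
    return moles_zn / (titre_volume_ml / 1000.0)   # mol/L

def percent_label_claim(titre_volume_ml, edta_molarity_m, sample_mass_g,
                        avg_tablet_mass_g, label_claim_mg):
    """Percent of label claim for ZnSO4·7H2O per tablet from one titration."""
    moles_zn = edta_molarity_m * titre_volume_ml / 1000.0       # 1:1 Zn:EDTA
    mg_znso4_in_sample = moles_zn * MW_ZNSO4_7H2O * 1000.0
    mg_per_tablet = mg_znso4_in_sample * avg_tablet_mass_g / sample_mass_g
    return 100.0 * mg_per_tablet / label_claim_mg

# Illustrative numbers only
m = edta_molarity(zinc_std_mass_g=0.1650, titre_volume_ml=25.10)
print(f"EDTA molarity: {m:.4f} M")
print(f"Label claim: {percent_label_claim(13.90, m, 2.000, 0.480, 96.0):.1f} %")
```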
Procedia PDF Downloads 169
304 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)
Authors: Azimollah Aleshzadeh, Enver Vural Yavuz
Abstract:
The main goal of this study is to produce landslide susceptibility maps using different data-driven bivariate statistical approaches, namely the entropy weight method (EWM), the evidence belief function (EBF), and the information content model (ICM), for Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from the interpretation of high-resolution satellite images and earlier reports, as well as from field surveys. In total, 42 landslide polygons were mapped using ArcGIS 10.4.1 software and randomly split into a construction dataset of 70% (30 landslides) used to build the EWM, EBF, and ICM models, while the remaining 30% (12 landslides) were used for verification. Twelve layers of landslide-predisposing parameters were prepared, including total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, orientation of slope, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the different statistical models (EWM, EBF, and ICM). The model results were validated with the landslide incidences that were not used during model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success (construction data) and prediction (verification data) rate curves. The results revealed that the AUCs of the success rate curves are 0.7055, 0.7221, and 0.7368, while those of the prediction rate curves are 0.6811, 0.6997, and 0.7105 for the EWM, EBF, and ICM models, respectively. Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the proportion of construction and verification landslides falling in the high and very high susceptibility classes of each map was determined. The results showed that the EWM, EBF, and ICM models produced satisfactory accuracy. The obtained landslide susceptibility maps may be useful for future natural hazard mitigation studies and planning purposes for environmental protection.Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping
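A minimal sketch of the bivariate statistic underlying one of the compared approaches is given below: an information-content-style weight per parameter class, computed as the log of the ratio of the class landslide density to the overall landslide density from assumed pixel counts. This is one common formulation of the information content model, not necessarily the authors' exact implementation, and the counts are illustrative.

```python
import numpy as np

def information_content(landslide_pixels, class_pixels):
    """Information content weight per parameter class: ln(class density / overall density)."""
    landslide_pixels = np.asarray(landslide_pixels, dtype=float)
    class_pixels = np.asarray(class_pixels, dtype=float)
    class_density = landslide_pixels / class_pixels
    overall_density = landslide_pixels.sum() / class_pixels.sum()
    return np.log(class_density / overall_density)

# Assumed pixel counts for five slope-gradient classes
weights = information_content(landslide_pixels=[5, 40, 120, 90, 25],
                              class_pixels=[50000, 120000, 150000, 60000, 20000])
print(np.round(weights, 3))  # positive weight -> class more landslide-prone than average
```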
Procedia PDF Downloads 132
303 Imputation of Incomplete Large-Scale Monitoring Count Data via Penalized Estimation
Authors: Mohamed Dakki, Genevieve Robin, Marie Suet, Abdeljebbar Qninba, Mohamed A. El Agbani, Asmâa Ouassou, Rhimou El Hamoumi, Hichem Azafzaf, Sami Rebah, Claudia Feltrup-Azafzaf, Nafouel Hamouda, Wed a.L. Ibrahim, Hosni H. Asran, Amr A. Elhady, Haitham Ibrahim, Khaled Etayeb, Essam Bouras, Almokhtar Saied, Ashrof Glidan, Bakar M. Habib, Mohamed S. Sayoud, Nadjiba Bendjedda, Laura Dami, Clemence Deschamps, Elie Gaget, Jean-Yves Mondain-Monval, Pierre Defos Du Rau
Abstract:
In biodiversity monitoring, large datasets are becoming more and more widely available and are increasingly used globally to estimate species trends and conservation status. These large-scale datasets challenge existing statistical analysis methods, many of which are not adapted to their size, incompleteness and heterogeneity. The development of scalable methods to impute missing data in incomplete large-scale monitoring datasets is crucial to balance sampling in time or space and thus better inform conservation policies. We developed a new method based on penalized Poisson models to impute and analyse incomplete monitoring data in a large-scale framework. The method allows parameterization of (a) space and time factors, (b) the main effects of predictor covariates, as well as (c) space–time interactions. It also benefits from robust statistical and computational capability in large-scale settings. The method was tested extensively on both simulated and real-life waterbird data, with the findings revealing that it outperforms six existing methods in terms of missing data imputation errors. Applying the method to 16 waterbird species, we estimated their long-term trends for the first time at the entire North African scale, a region where monitoring data suffer from many gaps in space and time series. This new approach opens promising perspectives to increase the accuracy of species-abundance trend estimations. We made it freely available in the R package ‘lori’ (https://CRAN.R-project.org/package=lori) and recommend its use for large-scale count data, particularly in citizen science monitoring programmes.Keywords: biodiversity monitoring, high-dimensional statistics, incomplete count data, missing data imputation, waterbird trends in North-Africa
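To make the modelling idea concrete, the sketch below fits a plain (unpenalized) Poisson log-linear model with site and year main effects to the observed cells of a synthetic count matrix and uses its predictions to fill the missing cells. This is only the simple main-effects part of the approach described above, with statsmodels as an assumed stand-in; it is not the penalized estimator provided by the 'lori' package.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic site x year count table with missing surveys (NaN)
rng = np.random.default_rng(1)
sites, years = [f"s{i}" for i in range(6)], list(range(2010, 2020))
df = pd.DataFrame([(s, y, rng.poisson(30)) for s in sites for y in years],
                  columns=["site", "year", "count"])
df["count"] = df["count"].astype("float64")
df.loc[rng.choice(len(df), size=12, replace=False), "count"] = np.nan  # simulate survey gaps

# Fit a Poisson GLM with additive site and year effects on the observed cells only
observed = df.dropna()
model = smf.glm("count ~ C(site) + C(year)", data=observed,
                family=sm.families.Poisson()).fit()

# Impute the missing cells with the model's expected counts
missing = df["count"].isna()
df.loc[missing, "count"] = model.predict(df.loc[missing, ["site", "year"]])
print(df.head())
```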
Procedia PDF Downloads 156
302 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting
Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas
Abstract:
The paper presents the results and industrial applications of production setup period estimation based on industrial data from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the industrial partner applying the method must meet high technical requirements, as they are used in the medical, measurement instrumentation, and painting industry branches. Typically, 20% of these parts are new work, which means that every five years almost the entire product portfolio is replaced in their low-series manufacturing environment. Consequently, a flexible production system is required, in which estimating the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters have been studied and grouped to create an adequate training information set for an artificial neural network as a basis for estimating the individual setup periods. The first group collects product information such as the product name and number of items. The second group contains material data such as material type and colour. The third group collects surface quality and tolerance information, including the finest surface and the tightest (or narrowest) tolerance. The fourth group contains setup data such as machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner's estimations were previously based. The artificial neural network model was trained on several thousand real industrial records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was improved by 50%. Furthermore, the influence of the mentioned parameter groups, considering the manufacturing order, was also investigated. The paper also highlights the experiences of the manufacturing introduction and further improvements of the proposed methods, both on the shop floor and in quotation preparation. More than 100 real industrial setup events occur every week, and the related data are collected.Keywords: artificial neural network, low series manufacturing, polymer cutting, setup period estimation
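As a minimal sketch of the kind of model described above, the snippet below trains a small neural-network regressor that maps features from the four parameter groups (product, material, surface quality/tolerance, setup data) to a setup period length. The feature names, synthetic data, one-hot encoding, and scikit-learn's MLPRegressor are assumptions for illustration; the paper does not specify its network architecture or preprocessing.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for MES/CAD-derived setup records
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "material_type": rng.choice(["PTFE", "POM", "PEEK"], n),       # material group
    "machine_type": rng.choice(["M1", "M2"], n),                   # setup group
    "n_items": rng.integers(1, 200, n),                            # product group
    "tightest_tolerance_mm": rng.uniform(0.01, 0.5, n),            # tolerance group
    "n_tools": rng.integers(1, 12, n),
})
y = 10 + 3 * df["n_tools"] + 20 * (df["tightest_tolerance_mm"] < 0.05) + rng.normal(0, 2, n)

pre = ColumnTransformer([
    ("cat", OneHotEncoder(), ["material_type", "machine_type"]),
    ("num", StandardScaler(), ["n_items", "tightest_tolerance_mm", "n_tools"]),
])
model = Pipeline([("prep", pre),
                  ("net", MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))])

X_train, X_test, y_train, y_test = train_test_split(df, y, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out setups:", round(model.score(X_test, y_test), 3))
```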
Procedia PDF Downloads 245