Search results for: endodontic files
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 261

231 Surge Analysis of Water Transmission Mains in Una, Himachal Pradesh, India

Authors: Baldev Setia, Raj Rajeshwari, Maneesh Kumar

Abstract:

This paper presents a surge analysis of a failed water transmission main, carried out with a basic software package known as the Surge Analysis Program (SAP). It is a real-time failure case study of a pipeline laid in Una, Himachal Pradesh. The transmission main is a 13-kilometre-long pipe, of which 7.9 kilometres are a pumping main and 5.1 kilometres a gravitational main. The analysis deals mainly with the pumping main. The results are available in two text files; in addition, several files are prepared specifically to present the results in graphical form. These results help to observe the pressure difference and the occurrence of surge at different locations along the pipe profile, which in turn helps to redesign the transmission main with different but suitable safety measures against possible surge. A technically viable and economically feasible design has been provided as per the relevant manual and standard code of practice.
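The surge phenomenon the paper analyzes can be sketched numerically. The following is a minimal illustration, not part of the SAP software described above, using the classical Joukowsky relation for the pressure rise caused by a sudden change in flow velocity:

```python
# Hypothetical illustration (not part of the SAP software described above):
# the Joukowsky equation estimates the pressure surge caused by an
# instantaneous change of flow velocity in a pipeline.

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def joukowsky_surge(wave_speed, delta_velocity):
    """Return (pressure rise in Pa, head rise in m) for a sudden velocity
    change delta_velocity (m/s) given the pressure-wave speed (m/s)."""
    dp = RHO * wave_speed * delta_velocity
    dh = wave_speed * delta_velocity / G
    return dp, dh

# Example: stopping a 1 m/s flow with a 1000 m/s pressure-wave speed.
dp, dh = joukowsky_surge(1000.0, 1.0)
print(f"pressure rise: {dp / 1000:.0f} kPa, head rise: {dh:.1f} m")
```

Even this crude estimate shows why an unprotected pumping main can fail: a modest velocity change produces a head rise of the order of a hundred metres.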

Keywords: surge, water hammer, transmission mains, SAP 2000

Procedia PDF Downloads 366
229 Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools

Authors: M. Kaya, M. Eris

Abstract:

Internet use, intelligent communication tools, and social media have become an integral part of our daily life as a result of rapid developments in information technology. However, this widespread use has increased crime committed in the digital environment, and digital forensics, which deals with such crimes, has therefore become an important research topic. It is within the scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, DVDs, etc., and to report whether it contains any crime-related elements. Many software and hardware tools have been developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools work by finding all the data in the evidence that match specified criteria and presenting them to the investigator (e.g. text files, files starting with the letter A, etc.). Digital forensics experts then analyze these data to determine whether they are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, and because the outcome depends on the examiner's expertise and experience, relevant data may be overlooked and results may vary between cases. In this study, a hash-based matching and digital evidence evaluation method is proposed, aimed at automatically classifying evidence containing criminal elements, thereby shortening the digital evidence examination process and preventing human errors.
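The proposed hash-based block matching can be sketched in a few lines. The following Python fragment is an assumed simplification (the block size and hash algorithm are illustrative choices, not taken from the paper): the evidence is split into fixed-size blocks, each block is hashed, and the digests are matched against a known hash list.

```python
# Sketch of hash-based block matching: split data into fixed-size blocks,
# hash each block, and match the digests against a known-bad hash list.

import hashlib

BLOCK_SIZE = 4096  # bytes per block (an assumed value)

def block_hashes(data, block_size=BLOCK_SIZE):
    """Return the SHA-256 hex digest of each fixed-size block of data."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def match_blocks(evidence, known_hashes):
    """Return indices of evidence blocks whose hash is in the known list."""
    known = set(known_hashes)
    return [i for i, h in enumerate(block_hashes(evidence)) if h in known]
```

A hash-list lookup is O(1) per block, which is what makes the approach attractive for terabyte-scale evidence compared with content inspection.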

Keywords: block matching, digital evidence, hash list, evaluation of digital evidence

Procedia PDF Downloads 255
228 Musical Composition by Computer with Inspiration from Files of Different Media Types

Authors: Cassandra Pratt Romero, Andres Gomez de Silva Garza

Abstract:

This paper describes a computational system designed to imitate human inspiration during musical composition. The system is called MIS (Musical Inspiration Simulator). MIS draws inspiration from the media to which human beings are exposed daily (visual, textual, or auditory) to create new musical compositions based on the emotions detected in that media. After building the system, we carried out a series of evaluations with volunteer users who used MIS to compose music based on images, texts, and audio files. The volunteers were asked to judge the harmoniousness and innovation of the system's compositions. An analysis of the results points to the difficulty of computationally analyzing the characteristics of the media to which we are exposed daily, as human emotions are subjective in character. This observation will direct future improvements to the system.

Keywords: human inspiration, musical composition, musical composition by computer, theory of sensation and human perception

Procedia PDF Downloads 183
227 DURAFILE: A Collaborative Tool for Preserving Digital Media Files

Authors: Santiago Macho, Miquel Montaner, Raivo Ruusalepp, Ferran Candela, Xavier Tarres, Rando Rostok

Abstract:

During our lives, we generate a lot of personal information such as photos, music, text documents, and videos that link us to our past. This data, which used to be tangible, is now digital information stored in our computers, which implies a dependence on software to keep it accessible in the future. Technology, however, constantly evolves and goes through regular shifts, quickly rendering various file formats obsolete. The need to access data in the future affects not only personal users but also organizations. In a digital environment, a reliable preservation plan and the ability to adapt to fast-changing technology are essential for maintaining data collections in the long term. We present in this paper the European FP7 project DURAFILE, which provides the technology to preserve media files for personal users and organizations while maintaining their quality.
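As a small illustration of the preservation problem (and not of the DURAFILE technology itself), a preservation tool must first recognize what format a file is in before deciding whether it risks obsolescence; a common heuristic is checking the file's leading "magic" bytes:

```python
# Illustrative sketch: guess a file's format from its magic bytes.
# The table is a tiny, assumed sample of well-known signatures.

MAGIC_NUMBERS = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF": "PDF document",
    b"ID3": "MP3 audio (with ID3 tag)",
    b"PK\x03\x04": "ZIP container (also DOCX/ODT/EPUB)",
}

def identify_format(header_bytes):
    """Guess a format from the first bytes of a file, else None."""
    for magic, name in MAGIC_NUMBERS.items():
        if header_bytes.startswith(magic):
            return name
    return None
```

Real preservation systems use far larger signature registries, but the principle of format identification as the first step of a preservation plan is the same.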

Keywords: artificial intelligence, digital preservation, social search, digital preservation plans

Procedia PDF Downloads 445
226 A Hybrid Watermarking Model Based on Frequency of Occurrence

Authors: Hamza A. A. Al-Sewadi, Adnan H. M. Al-Helali, Samaa A. K. Khamis

Abstract:

Ownership proofs of multimedia such as text, image, audio, or video files can be achieved by burying a watermark in them. This is done by introducing modifications into these files that are imperceptible to the human senses but easily recoverable by a computer program. These modifications may be in the time domain, the frequency domain, or both. This paper presents a procedure for watermarking that mixes amplitude modulation with a frequency-transformation histogram; namely, a specific value is used to modulate the intensity component Y of the YIQ components of the carrier image. This scheme is referred to as the histogram embedding technique (HET). Comparison with the results of other techniques such as the discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) has shown enhanced efficiency in terms of ease and performance. The scheme has manifested a good degree of robustness against various environmental effects such as resizing, rotation, and different kinds of noise. This method would prove a very useful technique for copyright protection and ownership judgment.
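The Y-modulation step mentioned above can be sketched as follows. This is a minimal illustration under assumptions: the modulation constant is arbitrary and the histogram step is omitted; only the RGB-to-YIQ conversion matrices (the standard NTSC definition) are taken as given.

```python
# Minimal sketch of amplitude-modulating the Y (luminance) channel of an
# image in YIQ space wherever a binary watermark bit is 1. The constant
# alpha and the omission of the histogram step are simplifications.

import numpy as np

# RGB <-> YIQ conversion matrices (NTSC definition).
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def embed(rgb, mark, alpha=0.02):
    """Scale the Y component by (1 + alpha) wherever the mark is 1."""
    yiq = rgb @ RGB2YIQ.T            # per-pixel RGB -> YIQ
    yiq[..., 0] *= 1.0 + alpha * mark  # modulate luminance only
    return yiq @ YIQ2RGB.T           # back to RGB
```

Because only Y is scaled slightly, the change is hard to perceive but recoverable by comparing luminance statistics against the unmarked histogram.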

Keywords: authentication, copyright protection, information hiding, ownership, watermarking

Procedia PDF Downloads 565
225 A Graph Library Development Based on the Service-‎Oriented Architecture: Used for Representation of the ‎Biological ‎Systems in the Computer Algorithms

Authors: Mehrshad Khosraviani, Sepehr Najjarpour

Abstract:

Considering the usage of graph-based approaches in systems and synthetic biology, and the various types of graphs they employ, a comprehensive graph library based on the three-tier architecture (3TA) was previously introduced for full representation of biological systems. Despite having proposed a 3TA-based graph library, the following three reasons motivated us to redesign it based on the service-oriented architecture (SOA): (1) Maintaining the accuracy of the data related to an input graph (its edges, vertices, topology, etc.) without involving the end user: with 3TA, the library files are available to the end users, so they may be used incorrectly and, consequently, invalid graph data may be supplied to the computer algorithms. With the SOA, by contrast, graph registration is specified as a service that encapsulates the library files; all the control operations needed to register valid data become the responsibility of the services. (2) Partitioning the library product into separate parts: with 3TA, the library was provided as a whole, whereas here the product can be divided into smaller pieces, such as an AND/OR graph drawing service, each provided individually. As a result, the end user can add selected parts of the library product to a project instead of all of its features. (3) Reducing complexity: whereas with 3TA several other libraries must be added just to connect to the database, in the SOA-based graph library the services themselves are responsible for provisioning the resources they need, so the end user is not exposed to this complexity.
In the end, in order to make the library easier to control in the system and to restrict the end user from accessing the files, the service-oriented architecture (SOA) was preferred over the three-tier architecture (3TA), and the previously proposed graph library was redeveloped on that basis.
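The "graph registration as a service" idea in point (1) can be sketched as follows; the class name and validation checks are hypothetical illustrations, not the library's actual API:

```python
# Hypothetical sketch of a graph-registration service: the service, not
# the end user, performs all validity checks before a graph is accepted,
# so invalid data never reaches the downstream algorithms.

class GraphRegistrationService:
    def register(self, vertices, edges):
        """Accept a graph only if every edge refers to known vertices."""
        vset = set(vertices)
        if len(vset) != len(vertices):
            raise ValueError("duplicate vertex identifiers")
        for u, v in edges:
            if u not in vset or v not in vset:
                raise ValueError(f"edge ({u}, {v}) references an unknown vertex")
        return {"vertices": sorted(vset), "edges": list(edges)}
```

Encapsulating validation this way is precisely the benefit claimed over the 3TA design, where a caller with direct access to the library files could bypass such checks.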

Keywords: bio-design automation, biological system, graph library, service-oriented architecture, systems and synthetic biology

Procedia PDF Downloads 311
224 Frequency of Occurrence Hybrid Watermarking Scheme

Authors: Hamza A. Ali, Adnan H. M. Al-Helali

Abstract:

Generally, a watermark is information that identifies the ownership of multimedia (text, image, audio, or video files). Watermarking is achieved by introducing modifications into these files that are imperceptible to the human senses but easily recoverable by a computer program. These modifications are made according to a secret key in a descriptive model, either in the time domain, the frequency domain, or both. This paper presents a procedure for watermarking that mixes amplitude modulation with a frequency-transformation histogram; namely, a specific value is used to modulate the intensity component Y of the YIQ components of the carrier image. This scheme is referred to as the histogram embedding technique (HET). Comparison with the results of other techniques such as the discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) has shown enhanced efficiency in terms of ease and performance. The scheme has manifested a good degree of robustness against various environmental effects such as resizing, rotation, and different kinds of noise. This method would prove a very useful technique for copyright protection and ownership judgment.

Keywords: watermarking, ownership, copyright protection, steganography, information hiding, authentication

Procedia PDF Downloads 368
223 The Effectiveness of Using MS SharePoint for the Curriculum Repository System

Authors: Misook Ahn

Abstract:

This study examines the Institutional Curriculum Repository (ICR) developed with MS SharePoint. The purpose of using MS SharePoint is to organize, share, and manage the curriculum data. The ICR aims to build a centralized curriculum infrastructure, preserve all curriculum materials, and provide academic service to users (faculty, students, or other agencies). The ICR collection includes core language curriculum materials developed by each language school—foreign language textbooks, language survival kits, and audio files currently in or not in use at the schools. All core curriculum materials with audio and video files have been coded, collected, and preserved at the ICR. All metadata for the collected curriculum materials have been input by language, code, year, book type, level, user, version, and current status (in use/not in use). The qualitative content analysis, including the survey data, is used to evaluate the effectiveness of using MS SharePoint for the repository system. This study explains how to manage and preserve curriculum materials with MS SharePoint, along with challenges and suggestions for further research. This study will be beneficial to other universities or organizations considering archiving or preserving educational materials.

Keywords: digital preservation, ms sharepoint, repository, curriculum materials

Procedia PDF Downloads 105
222 A Novel Methodology for Browser Forensics to Retrieve Searched Keywords from Windows 10 Physical Memory Dump

Authors: Dija Sulekha

Abstract:

Nowadays, a good percentage of reported cybercrimes involve the usage of the Internet, directly or indirectly, in committing the crime. Web browsers usually leave traces of browsing activities on the host computer's hard disk, which investigators can use to identify the suspect's internet-based activities. But criminals involved in organized crime disable the browser's file-generation features to hide the evidence while carrying out illegal activities through the Internet. In such cases, even though browser files are not generated in the storage media of the system, traces of recent and ongoing activities are still created in the physical memory of the system. As a result, analysis of a physical memory dump collected from the suspect's machine retrieves a wealth of forensically crucial information related to the suspect's browsing history. This information enables cyber forensic investigators to concentrate on a few highly relevant artefacts while doing offline forensic analysis of the storage media. This paper addresses the reconstruction of web browsing activities by conducting live forensics to identify searched terms, downloaded files, visited sites, email headers, email ids, etc. from physical memory dumps collected from Windows 10 systems. Well-known entry points are available for retrieving all of the above artefacts except searched terms. The paper describes a novel methodology to retrieve searched terms from Windows 10 physical memory. The searched terms retrieved in this way can be used for advanced file and keyword searches in the storage media files reconstructed from file system recovery in offline forensics.
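One generic way to mine a raw dump for searched terms, which is not the paper's actual methodology but illustrates the idea, is to scan the bytes for URL query patterns left behind by search engines; a hedged sketch:

```python
# Illustrative sketch: scan a raw memory dump for URL query strings such
# as "?q=term". The parameter names and character class are assumptions
# about common search-engine URLs, not the paper's entry points.

import re

QUERY_RE = re.compile(rb"[?&](?:q|query|search)=([A-Za-z0-9+%_.-]{2,64})")

def extract_search_terms(dump_bytes):
    """Return decoded, de-duplicated query values found in a memory dump."""
    terms = []
    for m in QUERY_RE.finditer(dump_bytes):
        term = m.group(1).replace(b"+", b" ").decode("ascii", "replace")
        if term not in terms:
            terms.append(term)
    return terms
```

Terms recovered this way could then seed the advanced keyword search over the reconstructed file system that the abstract describes.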

Keywords: browser forensics, digital forensics, live Forensics, physical memory forensics

Procedia PDF Downloads 116
221 Building a Comprehensive Repository for Montreal Gamelan Archives

Authors: Laurent Bellemare

Abstract:

After the showcase of traditional Indonesian performing arts at the Vancouver Expo 1986, Canadian universities inherited sets of Indonesian gamelan orchestras and soon began offering courses for music students interested in learning these diverse traditions. Among them, Université de Montréal was offered two sets of Balinese orchestras, a novelty that allowed a community of Montreal gamelan enthusiasts to form and engage with this music. A few generations later, a large body of archives has amassed, framing the history of this niche community's achievements. This data, scattered across public and private archive collections, comes in various formats: Digital Audio Tape, audio cassettes, Video Home System videotape, digital files, photos, reel-to-reel audiotape, posters, concert programs, letters, TV shows, reports, and more. Attempting to study these documents in order to unearth a chronology of gamelan in Montreal has proven challenging, since no suitable platform for preservation, storage, and research currently exists. These files are therefore hard to find due to their decentralized locations. Additionally, most of the documents in older formats have yet to be digitized. In the case of recent digital files, such as pictures or rehearsal recordings, their locations can be even messier and their quantity overwhelming. Aside from the basic issue of choosing a suitable repository platform, questions of legal rights and methodology arise. For posterity, these documents should nonetheless be digitized, organized, and stored in an easily accessible online repository. This paper aims to underline the various challenges encountered in the early stages of such a project, as well as to suggest ways of overcoming the obstacles to a thorough archival investigation.

Keywords: archival work, archives, Balinese gamelan, Canada, Gamelan, Indonesia, Javanese gamelan, Montreal

Procedia PDF Downloads 119
220 Preparing Curved Canals Using Mtwo and RaCe Rotary Instruments: A Comparison Study

Authors: Mimoza Canga, Vito Malagnino, Giulia Malagnino, Irene Malagnino

Abstract:

Objective: The objective of this study was to compare the effectiveness of Mtwo and RaCe rotary instruments in cleaning and shaping curved root canals. Material and Method: The present study was conducted on 160 simulated canals in resin blocks, with a curvature angle of 15°-30°. These 160 simulated canals were divided into two groups of 80 blocks each, and each group was divided into two subgroups (n=40 canals each). The simulated canal subgroups were prepared with Mtwo and RaCe rotary nickel-titanium instruments. The root canals were measured at four different points of reference, starting at 13 mm from the orifice. In the first group, the canals were prepared using the Mtwo rotary system (VDW, Munich, Germany). The Mtwo files used were 10/0.04, 15/0.05, 20/0.06, and 25/0.06. These instruments entered the full length of the canal, and each file was rotated in the canal until it reached the apical point. In the second group, the canals were prepared using RaCe instruments (La Chaux-de-Fonds, Switzerland) with the crown-down technique, using a torque-controlled electric motor (VDW, Munich, Germany) at 600 rpm and 2 N·cm, as follows: #40/0.10, #35/0.08, #30/0.06, #25/0.04, #25/0.02. The data were recorded using SPSS version 23 software (IBM Corp., Armonk, NY, USA). Data analysis was done using the ANOVA test. Results: The results obtained with the Mtwo rotary instruments showed that they were able to clean and shape curved canals in a right-to-left motion at different levels, without any deviation and in perfect symmetry (P<0.001). The data showed that the greater the depth of the root canal, the greater the deviations of the RaCe rotary instruments. These deviations occurred at three levels: S2 (P=0.004), S3 (P=0.007), and S4 (P=0.009). The Mtwo files can go deeper and create a greater angle at level S4 (21°-28°) compared to the RaCe instruments (19°-24°).
Conclusion: The present study noted a clinically significant difference between Mtwo rotary instruments and RaCe rotary files used for canal preparation and indicated that Mtwo instruments are a better choice for curved canals.
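The ANOVA comparison reported above can be illustrated with a hand-rolled one-way F statistic; the deviation measurements below are invented for demonstration only (the study itself used SPSS):

```python
# Hypothetical illustration of a one-way ANOVA F statistic comparing
# canal-deviation measurements between two instrument groups. The data
# are made up; the formula is the standard between/within variance ratio.

def one_way_anova_f(*groups):
    """Return the F statistic for a one-way ANOVA over the given groups."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Made-up deviation measurements (degrees) for two instrument groups.
mtwo = [0.1, 0.2, 0.15, 0.1]
race = [0.4, 0.5, 0.45, 0.35]
print(one_way_anova_f(mtwo, race))
```

A large F value relative to the F distribution's critical value corresponds to the small P values the study reports.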

Keywords: canal curvature, canal preparation, Mtwo, RaCe, resin blocks

Procedia PDF Downloads 122
219 MSIpred: A Python 2 Package for the Classification of Tumor Microsatellite Instability from Tumor Mutation Annotation Data Using a Support Vector Machine

Authors: Chen Wang, Chun Liang

Abstract:

Microsatellite instability (MSI) is characterized by a high degree of polymorphism in microsatellite (MS) length due to a deficiency in the mismatch repair (MMR) system. MSI is associated with several tumor types, and its status can be considered an important indicator of tumor prognosis. Conventional clinical diagnosis of MSI examines PCR products of a panel of MS markers using electrophoresis (MSI-PCR), which is laborious, time-consuming, and less reliable. This study releases MSIpred, a Python 2 package for automatic MSI classification. It computes important somatic mutation features from files in mutation annotation format (MAF) generated from paired tumor-normal exome sequencing data, and subsequently uses these to predict tumor MSI status with a support vector machine (SVM) classifier trained on MAF files of 1074 tumors of four types. Evaluation of MSIpred on an independent 358-tumor test set achieved an overall accuracy of over 98% and an area under the receiver operating characteristic (ROC) curve of 0.967. These results indicate that MSIpred is a robust pan-cancer MSI classification tool and can serve as a complement to MSI-PCR in MSI diagnosis.
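The SVM classification step can be sketched as follows; the feature names and toy data below are invented for illustration, and the real MSIpred computes its features from MAF files:

```python
# Simplified sketch of SVM-based MSI classification. The two features
# and the training labels are toy assumptions, not MSIpred's real
# feature set, which is derived from mutation annotation format files.

from sklearn.svm import SVC

# Toy feature vectors: [frameshift indel rate, SNV rate] per tumor.
X_train = [[0.9, 0.2], [0.8, 0.3], [0.1, 0.4], [0.2, 0.5]]
y_train = ["MSI-H", "MSI-H", "MSS", "MSS"]

# A linear kernel suffices for this separable toy example.
clf = SVC(kernel="linear").fit(X_train, y_train)
print(clf.predict([[0.85, 0.25]]))
```

In the real package, the same fit/predict pattern is applied to a much richer feature vector extracted per tumor from its MAF file.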

Keywords: microsatellite instability, pan-cancer classification, somatic mutation, support vector machine

Procedia PDF Downloads 173
218 Comprehensive Review of Adversarial Machine Learning in PDF Malware

Authors: Preston Nabors, Nasseh Tabrizi

Abstract:

Portable Document Format (PDF) files have gained significant popularity for sharing and distributing documents due to their universal compatibility. However, the widespread use of PDF files has made them attractive targets for cybercriminals, who exploit vulnerabilities to deliver malware and compromise the security of end-user systems. This paper reviews notable contributions in PDF malware detection, including static, dynamic, signature-based, and hybrid analysis. It presents a comprehensive examination of PDF malware detection techniques, focusing on the emerging threat of adversarial sampling and the need for robust defense mechanisms. The paper highlights the vulnerability of machine learning classifiers to evasion attacks. It explores adversarial sampling techniques in PDF malware detection to produce mimicry and reverse mimicry evasion attacks, which aim to bypass detection systems. Improvements for future research are identified, including accessible methods, applying adversarial sampling techniques to malicious payloads, evaluating other models, evaluating the importance of features to malware, implementing adversarial defense techniques, and conducting comprehensive examination across various scenarios. By addressing these opportunities, researchers can enhance PDF malware detection and develop more resilient defense mechanisms against adversarial attacks.

Keywords: adversarial attacks, adversarial defense, adversarial machine learning, intrusion detection, PDF malware, malware detection, malware detection evasion

Procedia PDF Downloads 39
217 Digital Curriculum Preservation Planning, Actions, and Challenges

Authors: Misook Ahn

Abstract:

This study examined the Digital Curriculum Repository (DCR) project initiated at the Defense Language Institute Foreign Language Center (DLIFLC). The purpose of the DCR is to build a centralized curriculum infrastructure, preserve all curriculum materials, and provide academic service to users (faculty, students, or other agencies). The DCR collection includes core language curriculum materials developed by each language school—foreign language textbooks, language survival kits, and audio files currently in or not in use at the schools. All core curriculum materials with audio and video files have been coded, collected, and preserved in the DCR. The DCR website was designed with MS SharePoint for easy accessibility by DLIFLC's faculty and students. All metadata for the collected curriculum materials have been input by language, code, year, book type, level, user, version, and current status (in use/not in use). The study documents digital curriculum preservation planning, actions, and challenges, including collecting, coding, collaborating, designing the DCR SharePoint site, and policymaking. DCR survey data were also collected and analyzed for this research. Based on the findings, the study concludes that a mandatory policy for the DCR system and collaboration with school leadership are critical elements of a successful repository system. Sample collected items, metadata, and the DCR SharePoint site are presented in the evaluation section.

Keywords: MS SharePoint, digital preservation, repository, policy

Procedia PDF Downloads 159
216 Correlation between Speech Emotion Recognition Deep Learning Models and Noises

Authors: Leah Lee

Abstract:

This paper examines the correlation between deep learning models and emotions under noise, to determine whether or not noise masks emotions. The deep learning models used are plain convolutional neural networks (CNN), an auto-encoder, long short-term memory (LSTM), and Visual Geometry Group-16 (VGG-16). The emotion datasets used are the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Crowd-sourced Emotional Multimodal Actors Dataset (CREMA-D), the Toronto Emotional Speech Set (TESS), and Surrey Audio-Visual Expressed Emotion (SAVEE). To make the combined dataset four times bigger, the audio files are augmented with stretch and pitch transformations. From the augmented datasets, five different features are extracted as inputs to the models. There are eight different emotions to be classified. The noise variations are white noise, dog barking, and cough sounds, and the signal-to-noise ratio (SNR) is varied over 0, 20, and 40. In summary, for each deep learning model, nine different sets with noise and SNR variations, plus the augmented audio files without any noise, are used in the experiment. To compare the results of the deep learning models, accuracy and the receiver operating characteristic (ROC) are checked.
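The noise-mixing step at a target SNR can be sketched as follows; the power-ratio scaling used here is a standard formulation and is assumed, not quoted from the paper:

```python
# Sketch of mixing noise into a signal at a target SNR (in dB): the
# noise is scaled so that signal power / noise power equals 10^(SNR/10),
# then added sample-by-sample. Lists stand in for audio arrays.

import math

def mix_at_snr(signal, noise, snr_db):
    """Scale noise to the target SNR (dB) and add it to the signal."""
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    scale = math.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
    return [s + scale * n for s, n in zip(signal, noise)]
```

At 0 dB the scaled noise carries as much power as the speech; at 40 dB it is ten-thousand times weaker, which is why the three SNR settings probe very different masking conditions.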

Keywords: auto-encoder, convolutional neural networks, long short-term memory, speech emotion recognition, visual geometry group-16

Procedia PDF Downloads 75
215 Podcasting: A Tool for an Enhanced Learning Experience of Introductory Courses to Science and Engineering Students

Authors: Yaser E. Greish, Emad F. Hindawy, Maryam S. Al Nehayan

Abstract:

Introductory courses such as General Chemistry I, General Physics I, and General Biology need special attention, as students taking these courses are usually in their first year of university. In addition to the language barrier most of them face, they also encounter other difficulties if these elementary courses are taught in the traditional way. Changing the routine method of teaching these courses is therefore mandated. In this regard, podcasting of chemistry lectures was used as an add-on to the traditional and non-traditional methods of teaching chemistry to science and non-science students. Podcasts are video files distributed in digital format through the Internet using personal computers or mobile devices. Pedagogical strategy is another way of characterizing podcasts: three distinct teaching approaches are evident in the current literature, namely receptive viewing, problem-solving, and created video podcasts. Although the digital format and distribution of video podcasts have stabilized over the past eight years, the types of podcasts vary considerably according to their purpose, degree of segmentation, pedagogical strategy, and academic focus. In this regard, the whole syllabus of the 'General Chemistry I' course was developed as podcasts, which were delivered to students throughout the semester. Students used the podcast files extensively during their studies, especially in preparing for exams. Student feedback strongly supported the idea of podcasting, reflecting its effect on the overall understanding of the subject and a consequent improvement in their grades.

Keywords: podcasting, introductory course, interactivity, flipped classroom

Procedia PDF Downloads 265
214 Using Autoencoder as Feature Extractor for Malware Detection

Authors: Umm-E-Hani, Faiza Babar, Hanif Durad

Abstract:

Malware-detection approaches suffer from many limitations, due to which no anti-malware solution has proven reliable enough to detect zero-day malware. Signature-based solutions depend on signatures that can be generated only after malware has surfaced at least once in the cyber world. Another approach, detecting the anomalies malware causes in its environment, can easily be defeated by diligently and intelligently written malware. Solutions trained to observe behavior in order to detect malicious files have failed against malware capable of detecting a sandboxed or protected environment. Machine learning and deep learning-based approaches suffer greatly when their models are trained with either an imbalanced dataset or an inadequate number of samples. AI-based anti-malware solutions that have been trained with enough samples have targeted a selected feature vector, ignoring the contribution of the leftover features to the maliciousness of malware, just to cope with limited underlying hardware processing power. Our research focuses on producing an anti-malware solution for detecting malicious PE files that circumvents the aforementioned shortcomings. Our proposed framework, based on automated feature engineering through autoencoders, trains its model over a fairly large dataset. It focuses on the visual patterns of malware samples to automatically extract the meaningful part of the visual pattern. Our experiment has successfully produced a state-of-the-art accuracy of 99.54% on test data.
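The "visual pattern" preprocessing hinted at above is commonly done by reshaping a file's raw bytes into a grayscale image; a hedged sketch (the row width and zero-padding are assumptions, not the paper's stated parameters):

```python
# Sketch of turning a PE file's raw bytes into a 2-D grayscale "image"
# (each byte becomes a pixel value 0-255). An autoencoder can then
# compress such images into compact feature vectors for a classifier.

import math

def bytes_to_image(data, width=64):
    """Pad raw bytes and reshape them into rows of `width` pixels."""
    rows = math.ceil(len(data) / width)
    padded = data + b"\x00" * (rows * width - len(data))
    return [list(padded[r * width:(r + 1) * width]) for r in range(rows)]
```

The encoder half of a trained autoencoder applied to such arrays yields the automatically engineered features that replace hand-picked feature vectors.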

Keywords: malware, autoencoders, automated feature engineering, classification

Procedia PDF Downloads 72
213 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach since it accurately simulates flows down to smallest dissipative scales, i.e., Kolmogorov’s scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases in the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes on an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially with a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information that is required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer type information with a Stream Binary format in pre-files that are portable between machines. The files are generated to ensure fast read performance on different file-systems, such as Lustre and General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in a way that each MPI rank acquires its information from the file in parallel. 
In the case of GPFS, a single MPI rank on each computational node reads the data from a file generated specifically for that node and sends it to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node, and messages do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory's Mira (GPFS), the National Center for Supercomputing Applications' Blue Waters (Lustre), the San Diego Supercomputer Center's Comet (Lustre), and UIC's Extreme (Lustre). The tests showed that one file per node is best suited to GPFS, while parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, to calculate the solution at every time step. For this, the code can use its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact, together with the discontinuous nature of the method, makes the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed scalable and efficient parallel performance of the code.
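The per-rank parallel read described above requires each rank to know which byte range of the startup file belongs to it. As a hedged illustration only (the actual DSEM pre-file layout is not given in the abstract), the following sketch computes contiguous per-rank read windows for a stream-binary file holding one fixed-size record per element; the record size and partitioning rule are assumptions.

```python
# Hypothetical sketch: compute the byte offset and length each MPI rank
# would read from a stream-binary pre-file, assuming one fixed-size
# integer record per element and a contiguous, nearly even partition of
# elements across ranks. Names and layout are illustrative, not the
# actual DSEM pre-file format.

RECORD_BYTES = 8  # one 64-bit integer record per element (assumed)

def rank_read_window(n_elements: int, n_ranks: int, rank: int):
    """Return (offset_bytes, length_bytes) for this rank's slice.

    Elements are split as evenly as possible; the first
    (n_elements % n_ranks) ranks receive one extra element.
    """
    base, extra = divmod(n_elements, n_ranks)
    count = base + (1 if rank < extra else 0)
    first = rank * base + min(rank, extra)
    return first * RECORD_BYTES, count * RECORD_BYTES

# Example: 10 elements over 4 ranks -> slices of 3, 3, 2, 2 elements.
windows = [rank_read_window(10, 4, r) for r in range(4)]
```

In an actual MPI I/O implementation, each rank would pass its offset to a collective read call so all ranks access the file concurrently without overlap.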

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 133
212 Anomaly Detection in Log Analysis Using Data Visualization Techniques for Digital Forensics Audit and Investigation

Authors: Mohamed Fadzlee Sulaiman, Zainurrasyid Abdullah, Mohd Zabri Adil Talib, Aswami Fadillah Mohd Ariffin

Abstract:

In common digital forensics cases, an investigation may rely on the analysis of the specific, relevant exhibits involved. Usually, the investigating officer defines and advises the digital forensic analyst on the goals and objectives to be achieved in reconstructing the trail of evidence while maintaining the specific scope of the investigation. With the growth of technology, people are starting to realize the importance of cyber security to their organizations, and this new perspective creates awareness that digital forensics auditing must be put in place in order to measure possible threats or attacks on their cyber-infrastructure. Instead of performing investigations on an incident basis, auditing may broaden the scope of investigation to the level of anomaly detection in the daily operation of an organization's cyberspace. When handling huge amounts of data such as log files, performing a digital forensics audit for a large organization has proven to be an onerous task for the analyst, both in analyzing the huge files and in translating the findings in a way that stakeholders can clearly understand. Data visualization can address both needs in conducting digital forensic audits and investigations. This study identifies the important factors that should be considered when applying data visualization techniques in order to detect anomalies that meet the digital forensic audit and investigation objectives.
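The kind of anomaly detection over log files discussed above is often built on simple aggregate summaries that a visualization then makes legible. As an illustrative sketch (not taken from the study), the following buckets log events per hour and flags buckets whose counts deviate strongly from the mean, the sort of series one would plot as a highlighted bar chart; the timestamp format and threshold are assumptions.

```python
# Illustrative sketch: flag anomalous activity in a log by bucketing
# events per hour and marking buckets whose counts lie far from the
# mean -- the kind of summary an audit visualization would be built on.
from collections import Counter
from statistics import mean, pstdev

def anomalous_hours(timestamps, threshold=2.0):
    """timestamps: iterable of 'YYYY-MM-DD HH:MM:SS' strings.
    Returns the hours whose event count deviates more than `threshold`
    standard deviations from the mean hourly count."""
    counts = Counter(ts[:13] for ts in timestamps)  # bucket by hour prefix
    mu, sigma = mean(counts.values()), pstdev(counts.values())
    if sigma == 0:  # perfectly uniform activity: nothing to flag
        return []
    return sorted(h for h, c in counts.items() if abs(c - mu) > threshold * sigma)
```

A plotting library would then render `counts` with the flagged hours highlighted, letting a non-technical stakeholder spot the spike at a glance.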

Keywords: digital forensic, data visualization, anomaly detection, log analysis, forensic audit, visualization techniques

Procedia PDF Downloads 287
211 Assessing Online Learning Paths in a Learning Management System Using a Data Mining and Machine Learning Approach

Authors: Alvaro Figueira, Bruno Cabral

Abstract:

Nowadays, students are used to being assessed through an online platform. Educators have stepped up from a period in which they endured the transition from paper to digital. The use of a diversified set of question types, ranging from quizzes to open questions, is currently common in most university courses. In many courses today, the evaluation methodology also fosters students' online participation in forums, the download and upload of modified files, or even participation in group activities. At the same time, new pedagogical theories that promote the active participation of students in the learning process, and the systematic use of problem-based learning, are being adopted using an eLearning system for that purpose. However, although these activities can generate a lot of feedback for students, it is usually restricted to the assessment of well-defined online tasks. In this article, we propose an automatic system that informs students of abnormal deviations from a 'correct' learning path in the course. Our approach is based on the premise that obtaining this information earlier in the semester may give students and educators an opportunity to resolve an eventual problem with the student's current online actions in the course. Our goal is to prevent situations that have a significant probability of leading to a poor grade and, eventually, to failing. In the major learning management systems (LMS) currently available, the interaction between the students and the system itself is registered in log files, in the form of records that mark the beginning of each action performed by the user. Our proposed system uses that logged information to derive new information: the time each student spends on each activity, the time and order of the resources used by the student and, finally, the online resource usage pattern.
Then, using the grades assigned to students in previous years, we build a learning dataset that is used to feed a machine learning meta-classifier. The resulting classification model is then used to predict the grade a learning path is heading towards in the current year. This approach serves not only the teacher but also the student, who receives automatic feedback on her current situation, with past years as a perspective. Our system can be applied to online courses that use an online platform that stores user actions in a log file and that has access to other students' evaluations. The system is based on a data mining process over the log files and on a self-feedback machine learning algorithm that works paired with the Moodle LMS.
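A key derived feature mentioned above is the time each student spends on each activity, which must be reconstructed from logs that record only the start of each action. As a hedged sketch (field names and the gap cap are illustrative assumptions, not Moodle's actual log schema), this could be done by treating the gap until the next logged action as the duration of the current one:

```python
# Hedged sketch of the feature-derivation step: LMS logs record only
# the start of each action, so time spent on an activity is
# approximated as the gap until the next logged action. Long gaps are
# capped, since they more likely mean the student left the platform.
from datetime import datetime

def time_per_activity(log, cap_seconds=1800):
    """log: list of (iso_timestamp, activity) tuples in chronological
    order. Returns total estimated seconds per activity."""
    totals = {}
    for (ts, activity), (next_ts, _) in zip(log, log[1:]):
        gap = (datetime.fromisoformat(next_ts)
               - datetime.fromisoformat(ts)).total_seconds()
        totals[activity] = totals.get(activity, 0) + min(gap, cap_seconds)
    return totals
```

Feature vectors like these, paired with final grades from past years, would form the rows of the training dataset fed to the meta-classifier.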

Keywords: data mining, e-learning, grade prediction, machine learning, student learning path

Procedia PDF Downloads 122
210 Design of an Improved Distributed Framework for Intrusion Detection System Based on Artificial Immune System and Neural Network

Authors: Yulin Rao, Zhixuan Li, Burra Venkata Durga Kumar

Abstract:

Intrusion detection refers to monitoring the actions of internal and external intruders on a system and detecting, in real time, the behaviours that violate security policies. In intrusion detection, there has been much discussion about the application of neural network technology and the artificial immune system (AIS). However, many solutions use static methods (signature-based detection and stateful protocol analysis) or centralized intrusion detection systems (CIDS), which are unsuitable for real-time intrusion detection systems that need to process large amounts of data and detect unknown intrusions. This article proposes a framework for a distributed intrusion detection system (DIDS) with multiple agents, based on the concepts of AIS and neural network technology, to detect anomalies and intrusions. In this framework, multiple agents are assigned to each host and work together, improving the system's detection efficiency and robustness. The trainer agent in the central server of the framework uses an artificial neural network (ANN), rather than the negative selection algorithm of AIS, to generate mature detectors. Mature detectors can distinguish between self-files and non-self-files after learning. Our analyzer agents use genetic algorithms to generate memory-cell detectors. This kind of detector effectively reduces false positive and false negative errors and acts quickly on known intrusions.
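For background on the AIS mechanism the framework departs from: in the classic negative selection algorithm, candidate detectors are generated and any detector that matches a "self" pattern is censored, so survivors react only to non-self (anomalous) input. The sketch below illustrates only this underlying self/non-self idea with the common r-contiguous-bits matching rule; it is not the ANN-based training the paper proposes, and the binary encoding is an illustrative assumption.

```python
# Background sketch of classic AIS negative selection with the
# r-contiguous-bits matching rule (the proposed framework trains its
# mature detectors with an ANN instead; this shows only the underlying
# self/non-self discrimination idea).
def matches(detector: str, pattern: str, r: int) -> bool:
    """True if detector and pattern agree on r contiguous bits."""
    run = 0
    for d, p in zip(detector, pattern):
        run = run + 1 if d == p else 0
        if run >= r:
            return True
    return False

def censor(candidates, self_set, r):
    """Negative selection: discard candidates matching any self string;
    the survivors fire only on non-self (anomalous) patterns."""
    return [c for c in candidates if not any(matches(c, s, r) for s in self_set)]
```

Replacing this censoring step with a trained ANN, as the framework does, avoids generating and discarding large numbers of random candidates.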

Keywords: artificial immune system, distributed artificial intelligence, multi-agent, intrusion detection system, neural network

Procedia PDF Downloads 109
209 Developing a Virtual Reality System to Assist in Anatomy Teaching and Evaluating the Effectiveness of That System

Authors: Tarek Abdelkader, Suresh Selvaraj, Prasad Iyer, Yong Mun Hin, Hajmath Begum, P. Gopalakrishnakone

Abstract:

Nowadays, more and more educational institutes, as well as students, rely on 3D anatomy programs as an important tool that helps students correlate the actual locations of anatomical structures in three dimensions. Lately, virtual reality (VR) has been gaining favor among the younger generations due to its highly interactive mode. As a result, using virtual reality as a gamified learning platform for anatomy became the current goal. We present a model in which a Virtual Human Anatomy Program (VHAP) was developed to assist the anatomy learning experience of students. The anatomy module has been built mostly from real patient CT scans. Segmentation and surface rendering were used to create the 3D model, by directly segmenting the CT scans for each organ individually and exporting the result as a 3D file. After acquiring the 3D files for all the needed organs, they were imported into a virtual reality environment as a complete body anatomy model. In this ongoing experiment, students from different Allied Health orientations are testing the VHAP. Specifically, the cardiovascular system was selected as the focus of study, since all of our students finished learning about it in the first trimester. The initial results suggest that the VHAP system adds value to the learning process of our students, encouraging them to get more involved and to ask more questions. Comments from the involved students show that they are excited about the VHAP system, praising its interactivity as well as the ability to use it on their own as a self-learning aid in combination with the lectures. Some students also experienced minor side effects such as dizziness.

Keywords: 3D construction, health sciences, teaching pedagogy, virtual reality

Procedia PDF Downloads 157
208 Factors Influencing Antipsychotic Drug Usage and Substitution among Nigerian Schizophrenic Patients

Authors: Ubaka Chukwuemeka Michael, Ukwe Chinwe Victoria

Abstract:

Background: The use of antipsychotic monotherapy remains the standard for schizophrenic disorders, as does a prescription switch from the older typical to the newer atypical classes of antipsychotics on the basis of better efficacy and tolerability. However, surveys on the quality of antipsychotic drug use and substitution in developing countries are very scarce. This study was intended to evaluate the quality of, and the factors that drive, the prescription and substitution of antipsychotic drugs among schizophrenic patients visiting a regional psychiatric hospital. Methods: Case files of patients visiting a federal-government-funded neuropsychiatric hospital between July 2012 and July 2014 were systematically retrieved. Patient demographic characteristics, clinical details, and drug management data were collected and subjected to descriptive and inferential analysis to determine the quality and predictors of utilization. Results: Of the 600 case files used, there were more male patients (55.3%), with an overall mean age of 33.7±14.4 years. Typical antipsychotic agents accounted for over 85% of prescriptions, with the majority of patients receiving more than two drugs in at least one visit (80.9%). Fluphenazine (25.2%) and haloperidol (18.8%) were the antipsychotics most often given at treatment initiation, while olanzapine (23.0%) and benzhexol (18.3%) were the most currently prescribed. Nearly half (42%, 252/600) of these patients were switched from one class to another, with 34.5% (207/600) switched from the typical to the atypical class. No demographic or clinical factors influenced drug substitution, but a younger age and being married influenced being prescribed a polypharmacy regimen (more than two drugs) and an injectable antipsychotic agent. Conclusion: The prevalence of antipsychotic polypharmacy and of the use of typical agents among these patients was high. However, only age and marital status affected the quality of antipsychotic prescriptions among these patients.

Keywords: antipsychotics, drug substitution, pharmacoepidemiology, polypharmacy

Procedia PDF Downloads 475
207 The Recording of Personal Data in the Spanish Criminal Justice System and Its Impact on the Right to Privacy

Authors: Deborah García-Magna

Abstract:

When a person goes through the criminal justice system, whether as a suspect, arrested, prosecuted, or convicted, certain personal data are recorded, and a wide range of persons and organizations may have access to them. The recording of data can have a great impact on the daily life of the person concerned during the period of time determined by the legislation. In addition, this registered information can refer to various aspects not strictly related to the alleged or actually committed infraction. In some areas, the Spanish legislation does not clearly determine the cancellation period of the registers, nor what happens when they are cancelled, since some of the files are not really erased and remain recorded even if their consultation is no longer allowed or it is stated that they should not be taken into account. Thus, access to the recorded data of arrested or convicted persons may reduce their possibilities of reintegration into society. In this research, some of the areas in which data recording has a special impact on the lives of the affected persons are analyzed in a critical manner, taking into account Spanish legislation and jurisprudence, and the influence of the European Court of Human Rights, the Council of Europe, and other supranational instruments. In particular, the analysis covers the scope of video surveillance in public spaces, the police record, the recording of personal data for the purposes of police investigation (especially DNA and psychological profiles), the registry of administrative and minor offenses (especially as they are taken into account to impose aggravating circumstances), criminal records (of adults, minors, and legal entities), and the registration of special circumstances occurring during the execution of the sentence (files of inmates under special surveillance –FIES–, disciplinary sanctions, special therapies in prison, etc.).

Keywords: ECHR jurisprudence, formal and informal criminal control, privacy, disciplinary sanctions, social reintegration

Procedia PDF Downloads 144
206 An Observation Approach to Reading Order for Single Column and Two Column Layout Templates

Authors: In-Tsang Lin, Chiching Wei

Abstract:

Reading order is an important task in many digitization scenarios involving the preservation of the logical structure of a document. A survey of the literature shows that state-of-the-art algorithms cannot reliably determine the reading order in portable document format (PDF) files with rich formatting and diverse layout arrangements. In recent years, most studies on reading-order analysis have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to the problem of detecting reading-order relationships between logical components, such as cross-references. Over three years of development, the company Foxit has refined its layout recognition (LR) engine, demonstrated in revision 20601, in pursuit of accurate reading order. The bounding box of each paragraph can be obtained correctly by the Foxit LR engine, but the resulting reading order is not always correct for single-column and two-column layout formats, due to issues with tables, formulas, footers, and multiple small separated bounding boxes. Thus, an algorithm has been developed to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, a creative observation method (here called the MESH method) is proposed, opening a new direction in reading-order research. Two important parameters are introduced: one is the number of bounding boxes on the right side of the present bounding box (NRight), and the other is the number of bounding boxes under the present bounding box (Nunder). The normalized x-value (x divided by the whole width), the normalized y-value (y divided by the whole height), and the x- and y-positions of each bounding box are also taken into consideration.
Initial experimental results for the single-column layout format demonstrate a 19.33% absolute improvement in reading-order accuracy over 7 PDF files (150 pages in total) using our proposed method based on the LR structure, compared with the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 72%. For the two-column layout format, preliminary results demonstrate a 44.44% absolute improvement over 2 PDF files (18 pages in total), compared with the same baseline, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple-small-separated-bounding-box issue can be solved by the MESH method. However, three issues remain unsolved: tables, formulas, and randomly occurring multiple small separated bounding boxes. The detection of the table position and the recognition of the table structure are out of the scope of this paper and require separate research. In future work, the chosen tasks are to detect the position of a table on the page and to extract its content.
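The two MESH parameters described above can be sketched concretely. Since the abstract does not give the full ordering algorithm, the snippet below computes only NRight and Nunder for each paragraph bounding box; the geometry convention (x grows rightward, y grows downward) and the overlap test used to decide "to the right of" and "under" are illustrative assumptions.

```python
# Hedged sketch of the two MESH parameters. For each box, NRight counts
# boxes that start to its right and share vertical extent with it;
# Nunder counts boxes that start below it and share horizontal extent.
# Coordinate conventions and overlap tests are assumptions.
def mesh_params(boxes):
    """boxes: list of (x0, y0, x1, y1) with y growing downward.
    Returns a list of (NRight, Nunder) pairs, one per box."""
    out = []
    for bx0, by0, bx1, by1 in boxes:
        n_right = sum(1 for (x0, y0, x1, y1) in boxes
                      if x0 >= bx1 and y0 < by1 and y1 > by0)
        n_under = sum(1 for (x0, y0, x1, y1) in boxes
                      if y0 >= by1 and x0 < bx1 and x1 > bx0)
        out.append((n_right, n_under))
    return out
```

On a clean two-column page, the top-left paragraph gets a nonzero NRight and Nunder while the bottom-right paragraph gets (0, 0), which is the kind of signal an ordering rule can exploit.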

Keywords: document processing, reading order, observation method, layout recognition

Procedia PDF Downloads 181
205 ABET Accreditation Process for Engineering and Technology Programs: Detailed Process Flow from Criterion 1 to Criterion 8

Authors: Amit Kumar, Rajdeep Chakrabarty, Ganesh Gupta

Abstract:

This paper illustrates the detailed accreditation process of the Accreditation Board for Engineering and Technology (ABET) for accrediting engineering and technology programs. ABET is a non-governmental agency that accredits programs in engineering and technology, applied and natural sciences, and computing sciences. ABET was founded on 10th May 1932 by the Institute of Electrical and Electronics Engineers. International industries accept ABET-accredited institutes as having the highest standards in their academic programs. In this accreditation, there are eight criteria in general: criterion 1 describes the evaluation of students, criterion 2 measures the program educational objectives, criterion 3 covers the student outcomes calculated from the marks obtained by students, criterion 4 establishes continuous improvement, criterion 5 focuses on the curriculum of the institute, criterion 6 concerns the faculty of the institute, criterion 7 measures the facilities provided by the institute, and finally, criterion 8 focuses on institutional support towards the staff of the institute. In this paper, we focus on the calculative part of each criterion, with equations and suitable examples, on the files and documentation required for each criterion, and on the total workflow of the process. The references and the values used to illustrate the calculations are all taken from the samples provided on ABET's official website. In the final section, we also discuss the criterion-wise score weightage, followed by the evaluation with timeframes and deadlines.
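The calculative part mentioned above can be illustrated with a small sketch. This is not ABET's official formula: one common way programs quantify a student outcome under criterion 3 is the percentage of students scoring at or above a target on assessments mapped to that outcome, then mapped to an attainment level; the 60% target and the level cutoffs here are assumptions for illustration.

```python
# Illustrative sketch (not ABET's official formula): quantify a student
# outcome as the fraction of students reaching a target score on mapped
# assessments, then bin that fraction into an attainment level.
def outcome_attainment(scores, max_score, target_fraction=0.6):
    """Fraction of students scoring >= target_fraction of max_score."""
    reached = sum(1 for s in scores if s >= target_fraction * max_score)
    return reached / len(scores)

def attainment_level(fraction, levels=(0.4, 0.6, 0.8)):
    """Map an attainment fraction to a level 0-3 (assumed cutoffs)."""
    return sum(fraction >= cutoff for cutoff in levels)

# Example: 3 of 5 students reach 60 of 100 marks -> 0.6 -> level 2.
level = attainment_level(outcome_attainment([50, 70, 90, 40, 65], 100))
```

Per-outcome levels like this, tracked across offerings, are the raw material for the continuous-improvement loop of criterion 4.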

Keywords: Engineering Accreditation Committee, Computing Accreditation Committee, performance indicator, Program Educational Objective, ABET Criterion 1 to 7, IEEE, National Board of Accreditation, MOOCS, Board of Studies, stakeholders, course objective, program outcome, articulation, attainment, CO-PO mapping, CO-PO-SO mapping, PDCA cycle, degree certificates, course files, course catalogue

Procedia PDF Downloads 59
204 Effects of Tenofovir Disoproxil Fumarate on the Renal Sufficiency of HIV Positive Patients

Authors: Londeka Ntuli, Frasia Oosthuizen

Abstract:

Background: Tenofovir disoproxil fumarate (TDF) is a nephrotoxic drug and has been proven to contribute to renal insufficiency, necessitating intensive monitoring and management of the adverse effects arising from prolonged exposure to the drug. TDF is one of the preferred first-line drugs used in combination therapy in most regions, with an estimated 300 000 patients initiated on the efavirenz/TDF/emtricitabine first-line regimen annually in South Africa. It is against this background that this study aims to investigate the effects of TDF on the renal sufficiency of HIV-positive patients. Methodology: A retrospective quantitative study was conducted, analysing the clinical charts of HIV-positive patients older than 18 years of age who had been on a TDF-containing regimen for more than 1 year. Data were obtained from the analysis of patient files and transcribed into a Microsoft® Excel® spreadsheet. The extracted data were coded, categorised, and analysed using STATA®. Results: A total of 275 patient files were included in this study. Renal function started decreasing after 3 months of treatment (with 93.5% of patients having a normal eGFR) and kept decreasing as time progressed, with only 39.6% having normal renal function at year 4. Additional risk factors for renal insufficiency included age below 25, female gender, and additional medication. Conclusion: It is clear from this study that the use of TDF necessitates intensive monitoring and management of the adverse effects arising from prolonged exposure to the drug. The findings from this study generated pertinent information on the safety profile of TDF in the resource-limited setting of a public health institution. Appropriate management is of tremendous importance in the South African context, where the majority of HIV-positive individuals are on a TDF-containing regimen; it is thus beneficial to ascertain the possible level of toxicity these patients may be experiencing.
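The abstract classifies patients by eGFR but does not state which estimating equation the study used. Purely as an illustration of the kind of calculation involved, the sketch below uses the Cockcroft-Gault creatinine-clearance estimate, one common bedside formula; the staging cutoffs are assumed for illustration and are not the study's classification.

```python
# Illustration only: Cockcroft-Gault creatinine-clearance estimate
# (age in years, weight in kg, serum creatinine in mg/dL), with
# assumed impairment bands. The study's actual eGFR equation and
# cutoffs are not stated in the abstract.
def cockcroft_gault(age, weight_kg, scr_mg_dl, female):
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

def renal_stage(crcl):
    """Rough impairment bands in mL/min (assumed cutoffs)."""
    if crcl >= 90:
        return "normal"
    if crcl >= 60:
        return "mild"
    if crcl >= 30:
        return "moderate"
    return "severe"
```

Tracking such an estimate at each clinic visit is what makes the 3-month and year-4 comparisons in the abstract possible.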

Keywords: renal insufficiency, tenofovir, HIV, risk factors

Procedia PDF Downloads 121
203 Using Log Files to Improve Work Efficiency

Authors: Salman Hussam

Abstract:

As a monitoring system to manage employees' time and the employer's business, this system (logger) monitors employees at work and alerts them if they spend too much time on social media (even if they are using a proxy, it will catch them). In this way, people will spend less time on social media at work and more time with family.
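The core of such a logger is matching log entries against known social-media hosts, including subdomains. As a minimal sketch (the abstract does not specify the logger's log format, so the space-separated "user timestamp host" layout and the domain list are assumptions), this could look like:

```python
# Minimal sketch of the monitoring idea: count per-user requests whose
# host, or any parent domain of it, is a known social-media site.
# Matching parent domains also catches subdomains like m.facebook.com.
SOCIAL = {"facebook.com", "twitter.com", "instagram.com", "tiktok.com"}

def social_hits(log_lines):
    """log_lines: 'user timestamp host' strings (assumed format).
    Returns a dict of per-user counts of social-media requests."""
    counts = {}
    for line in log_lines:
        user, _ts, host = line.split()
        parts = host.split(".")
        # every suffix of the hostname: m.facebook.com, facebook.com, com
        suffixes = {".".join(parts[i:]) for i in range(len(parts))}
        if suffixes & SOCIAL:
            counts[user] = counts.get(user, 0) + 1
    return counts
```

Catching users behind a proxy, as the abstract claims, would additionally require inspecting the proxy's own logs or CONNECT targets rather than only the client-side hostnames shown here.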

Keywords: clients, employees, employers, family, monitoring, systems, social media, time

Procedia PDF Downloads 494
202 Comparison of Incidence and Risk Factors of Early Onset and Late Onset Preeclampsia: A Population Based Cohort Study

Authors: Sadia Munir, Diana White, Aya Albahri, Pratiwi Hastania, Eltahir Mohamed, Mahmood Khan, Fathima Mohamed, Ayat Kadhi, Haila Saleem

Abstract:

Preeclampsia is a major complication of pregnancy, and its prediction and management are a challenge for obstetricians. To our knowledge, no major progress has been achieved in the prevention and early detection of preeclampsia, and very little is known about a clear treatment path for this disorder. Preeclampsia puts both mother and baby at risk of several short-term and long-term health problems later in life, and there is a huge cost burden on the health care system associated with preeclampsia and its complications. Preeclampsia is divided into two different types: early onset preeclampsia develops before 34 weeks of gestation, and late onset preeclampsia develops at or after 34 weeks of gestation. Different genetic and environmental factors, prognoses, heritability, and biochemical and clinical features are associated with early and late onset preeclampsia. The prevalence of preeclampsia varies greatly all over the world and depends on the ethnicity of the population and the geographic region. To the authors' best knowledge, no published data on preeclampsia exist for Qatar. In this study, we report the incidence of preeclampsia in Qatar, and the purpose of this study is to compare the incidence and risk factors of early onset and late onset preeclampsia in Qatar. This retrospective longitudinal cohort study was conducted using data from the hospital records of the Women's Hospital, Hamad Medical Corporation (HMC), from May 2014 to May 2016. The data collection tool, which was approved by HMC, was a researcher-made extraction sheet that included information such as blood pressure during admission, sociodemographic characteristics, delivery mode, and newborn details. A total of 1929 patient files were identified by the hospital information management through the diagnostic codes for preeclampsia.
Out of the 1929 files, 878 had significant gestational hypertension without proteinuria, 365 had preeclampsia, 364 had severe preeclampsia, and 188 had preexisting hypertension with superimposed proteinuria. In this study, 78% of the data was obtained from the hospital electronic system (Cerner) and the remaining 22% from patients' paper records. We carried out detailed data extraction from 560 files. Initial data analysis revealed that 15.02% of pregnancies were complicated by preeclampsia from May 2014 to May 2016. We analyzed the differences between the two disease entities in ethnicity, maternal age, severity of hypertension, mode of delivery, and infant birth weight, and identified promising differences in the risk factors of early onset and late onset preeclampsia. The data from these clinical findings will contribute to increased knowledge about the two disease entities, their etiology, and their similarities and differences. The findings of this study can also be used in predicting health challenges, improving the health care system, setting up guidelines, and providing the best care for women suffering from preeclampsia.

Keywords: preeclampsia, incidence, risk factors, maternal

Procedia PDF Downloads 141