Search results for: computer modelling
630 Error Detection and Correction for Onboard Satellite Computers Using Hamming Code
Authors: Rafsan Al Mamun, Md. Motaharul Islam, Rabana Tajrin, Nabiha Noor, Shafinaz Qader
Abstract:
In an attempt to enrich the lives of billions of people by providing proper information, security and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes capable of protecting the data onboard satellites. The paper is aimed at detecting and correcting such errors using a special algorithm called the Hamming Code, which uses the concept of parity and parity bits to prevent single-bit errors onboard a satellite in Low Earth Orbit. This paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming Code matrix to be used for EDAC using computer programs. The most effective version of Hamming Code generated was the Hamming (16, 11, 4) version using MATLAB, and the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming Codes and Cyclic Redundancy Check (CRC), and discusses the limitations of this scheme. This particular version of the Hamming Code guarantees single-bit error correction as well as double-bit error detection. Furthermore, this version of the Hamming Code has proved to be fast, with a checking time of 5.669 nanoseconds; it has a relatively higher code rate and lower bit overhead compared to the other versions and can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with the proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
Keywords: bit-flips, Hamming code, low earth orbit, parity bits, satellite, single error upset
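For illustration, the following is a minimal Python sketch of an extended Hamming(16,11) single-error-correcting, double-error-detecting codec. The bit layout (parity bits at power-of-two positions plus one overall parity bit) is the textbook construction and is assumed here; the paper's MATLAB generator matrix is not reproduced in the abstract.

```python
def hamming_encode(data_bits):
    """Encode 11 data bits into an extended Hamming(16,11) codeword (SECDED)."""
    assert len(data_bits) == 11
    code = [0] * 16  # index 0 holds the overall parity; 1..15 is Hamming(15,11)
    data_pos = [p for p in range(1, 16) if p & (p - 1)]  # non-powers-of-two
    for p, b in zip(data_pos, data_bits):
        code[p] = b
    for parity in (1, 2, 4, 8):  # each parity bit covers positions sharing its bit
        code[parity] = sum(code[p] for p in range(1, 16) if p & parity) % 2
    code[0] = sum(code) % 2      # overall parity enables double-error detection
    return code

def hamming_decode(code):
    """Return (data_bits, status); corrects single-bit errors, flags double."""
    syndrome = 0
    for p in range(1, 16):
        if code[p]:
            syndrome ^= p
    overall = sum(code) % 2
    if syndrome and overall:       # single-bit error at position `syndrome`
        code[syndrome] ^= 1
        status = "corrected"
    elif syndrome:                 # overall parity consistent but syndrome set
        status = "double-bit error detected"
    else:
        status = "ok" if not overall else "overall parity bit corrected"
    data_pos = [p for p in range(1, 16) if p & (p - 1)]
    return [code[p] for p in data_pos], status

word = hamming_encode([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0])
word[9] ^= 1  # simulate a single-event upset (bit-flip)
print(hamming_decode(word))  # -> original data bits, 'corrected'
```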
Procedia PDF Downloads 130
629 Modeling Palm Oil Quality During the Ripening Process of Fresh Fruits
Authors: Afshin Keshvadi, Johari Endan, Haniff Harun, Desa Ahmad, Farah Saleena
Abstract:
Experiments were conducted to develop a model for analyzing the ripening process of oil palm fresh fruits in relation to the oil yield and oil quality of the palm oil produced. This research was carried out on 8-year-old Tenera (Dura × Pisifera) palms planted in 2003 at the Malaysian Palm Oil Board Research Station. Fresh fruit bunches were harvested from designated palms from January to May 2010. The bunches were divided into three regions (top, middle and bottom), and fruits from the outer and inner layers were randomly sampled for analysis at 8, 12, 16 and 20 weeks after anthesis to establish relationships between maturity and oil development in the mesocarp and kernel. Computations on data related to ripening time, oil content and oil quality were performed using several computer software programs (MSTAT-C, SAS and Microsoft Excel). Nine nonlinear mathematical models were fitted to the collected data using MATLAB software. The results showed mean mesocarp oil percent increased from 1.24 % at 8 weeks after anthesis to 29.6 % at 20 weeks after anthesis. Fruits from the top part of the bunch had the highest mesocarp oil content of 10.09 %. The lowest kernel oil percent of 0.03 % was recorded at 12 weeks after anthesis. Palmitic acid and oleic acid comprised more than 73 % of total mesocarp fatty acids at 8 weeks after anthesis, and increased to more than 80 % at fruit maturity at 20 weeks. The logistic model, with the highest R² and the lowest root mean square error, was found to be the best-fit model.
Keywords: oil palm, oil yield, ripening process, anthesis, fatty acids, modeling
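As a sketch of the model-fitting step, the snippet below fits a three-parameter logistic curve to mesocarp oil content versus weeks after anthesis with SciPy. Only the 8- and 20-week oil values come from the abstract; the two intermediate points are illustrative assumptions, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, k, t0):
    """Logistic growth curve: a / (1 + exp(-k * (t - t0)))."""
    return a / (1.0 + np.exp(-k * (t - t0)))

weeks = np.array([8.0, 12.0, 16.0, 20.0])  # weeks after anthesis
oil = np.array([1.24, 6.0, 20.0, 29.6])    # mesocarp oil %; middle two assumed

params, _ = curve_fit(logistic, weeks, oil, p0=[30.0, 0.5, 14.0])
pred = logistic(weeks, *params)
rmse = float(np.sqrt(np.mean((oil - pred) ** 2)))
r2 = 1.0 - np.sum((oil - pred) ** 2) / np.sum((oil - oil.mean()) ** 2)
print(f"a={params[0]:.2f}, k={params[1]:.2f}, t0={params[2]:.2f}, "
      f"R2={r2:.3f}, RMSE={rmse:.3f}")
```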
Procedia PDF Downloads 313
628 Synthesis and Characterization of Some Novel Carbazole Schiff Bases (OLED)
Authors: Baki Cicek, Umit Calisir
Abstract:
Carbazoles have been the subject of many studies from the 1960s to the present, and this work still continues. In 1987, the first diode device was developed. Thanks to that study, light-emitting devices have been investigated, developed and used in commercial applications. Nowadays, OLED (Organic Light Emitting Diode) technology is used in many electronic screens (mobile phones, computer monitors, televisions, etc.). Carbazoles have been the subject of much study as semiconductor materials. Although this technology is in wide commercial use, it is still at the development stage. Metal complexes of these compounds are used as pigment dyes because of their colored substances, and in polymer technology, the medical industry, agriculture, the preparation of rocket fuel-oil, the determination of some biological events, etc. Besides all of this, the preparation of Schiff base syntheses continues intensely. In this study, some novel carbazole Schiff bases were synthesized starting from carbazole. For that purpose, carbazole was first alkylated. After purification, the N-substituted carbazole was nitrated to synthesize 3-nitro-N-substituted and 3,6-dinitro-N-substituted carbazoles. At the next step, the nitro group/groups were reduced to amines. The products were purified using silica gel column chromatography. At the last step of our study, the synthesized 3,6-diamino-N-substituted carbazoles and 3-amino-N-substituted carbazoles were reacted with aldehydes in condensation reactions. 3-(imino-p-hydroxybenzyl)-N-isobutyl-carbazole, 3-(imino-2,3,4-trimethoxybenzene)-N-butylcarbazole, 3-(imino-3,4-dihydroxybenzene)-N-octylcarbazole, 3-(imino-2,3-dihydroxybenzene)-N-octylcarbazole and 3,6-di(α-imino-β-naphthol)-N-hexylcarbazole compounds were synthesized. All of the synthesized compounds were characterized with FT-IR, 1H-NMR, 13C-NMR, and LC-MS.
Keywords: carbazole, carbazole Schiff base, condensation reactions, OLED
Procedia PDF Downloads 441
627 Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images
Authors: Zonglin Yang, Tatsuya Akiyama, Kerry S. Williamson, Michael J. Franklin, Thiruvarangan Ramaraj
Abstract:
Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is to quantify the cellular concentrations of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates with the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescent microscopy FISH images. In this work, the initial steps toward these goals were taken by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions from FISH images that are counterstained with fluorescent dyes. This methodology provides advances over other computational methods, allowing subtraction of spurious signals and non-biological fluorescent substrata. This method will be a robust and user-friendly approach which will enable users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescent images for quantitative analysis of biofilm heterogeneity.
Keywords: image informatics, Pseudomonas aeruginosa, biofilm, FISH, computer vision, data visualization
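A minimal sketch of the thresholding idea is shown below using scikit-image: Otsu's method separates fluorescent signal from background, and small spurious objects are removed before region intensities are measured. The paper's actual pipeline (including subtraction of non-biological substrata and semi-automatic boundary editing) is richer, so the steps and parameters here are assumptions.

```python
import numpy as np
from skimage import filters, measure, morphology

def detect_biofilm_regions(fish_image, min_area=100):
    """Segment candidate biofilm regions in a counterstained FISH image.

    Returns (label, area_px, mean_intensity) per connected region above
    an Otsu threshold, after removing small spurious objects.
    """
    mask = fish_image > filters.threshold_otsu(fish_image)
    mask = morphology.remove_small_objects(mask, min_size=min_area)
    labels = measure.label(mask)
    regions = measure.regionprops(labels, intensity_image=fish_image)
    return [(r.label, r.area, r.mean_intensity) for r in regions]

# Example on a synthetic image: one bright square 'biofilm' on dim background.
img = np.random.default_rng(1).normal(10, 2, (256, 256))
img[64:160, 64:160] += 40
print(detect_biofilm_regions(img)[:3])
```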
Procedia PDF Downloads 133
626 An Image Processing Scheme for Skin Fungal Disease Identification
Authors: A. A. M. A. S. S. Perera, L. A. Ranasinghe, T. K. H. Nimeshika, D. M. Dhanushka Dissanayake, Namalie Walgampaya
Abstract:
Nowadays, skin fungal diseases are mostly found in people of tropical countries like Sri Lanka. A skin fungal disease is a particular kind of illness caused by fungus. These diseases have various dangerous effects on the skin and keep spreading over time. It therefore becomes important to identify these diseases at their initial stage to keep them from spreading. This paper presents an automated skin fungal disease identification system implemented to speed up the diagnosis process by identifying skin fungal infections in digital images. An image of the diseased skin lesion is acquired, and a comprehensive computer vision and image processing scheme is used to process the image for the disease identification. This includes colour analysis using RGB and HSV colour models; texture classification using the Grey Level Run Length Matrix, Grey Level Co-Occurrence Matrix and Local Binary Pattern; object detection; shape identification; and more. This paper presents the approach and its outcome for identification of four of the most common skin fungal infections, namely, Tinea Corporis, Sporotrichosis, Malassezia and Onychomycosis. The main intention of this research is to provide an automated skin fungal disease identification system that increases the diagnostic quality, shortens the time-to-diagnosis and improves the efficiency of detection and successful treatment for skin fungal diseases.
Keywords: Circularity Index, Grey Level Run Length Matrix, Grey Level Co-Occurrence Matrix, Local Binary Pattern, object detection, ring detection, shape identification
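As an illustration of the texture-classification step, the sketch below extracts Grey Level Co-Occurrence Matrix descriptors from a grayscale lesion patch with scikit-image; the distances, angles and property set are assumptions, since the abstract does not specify them.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_patch):
    """GLCM texture descriptors (contrast, homogeneity, energy, correlation)
    over two distances and four orientations, flattened into one vector."""
    glcm = graycomatrix(gray_patch,
                        distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

patch = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
print(glcm_features(patch).shape)  # (32,) = 4 properties x 2 distances x 4 angles
```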
Procedia PDF Downloads 232
625 Design Optimization of Miniature Mechanical Drive Systems Using Tolerance Analysis Approach
Authors: Eric Mxolisi Mkhondo
Abstract:
Geometrical deviations and the interaction of mechanical parts influence the performance of miniature systems. These deviations tend to cause costly problems during assembly due to imperfections of components, which are invisible to the naked eye. They also tend to cause unsatisfactory performance during operation due to deformation caused by environmental conditions. One of the effective tools to manage the deviations and interaction of parts in the system is tolerance analysis. This is a quantitative tool for predicting the tolerance variations which are defined during the design process. Traditional tolerance analysis assumes that the assembly is static and that the deviations come from manufacturing discrepancies, overlooking the functionality of the whole system and the deformation of parts due to environmental conditions. This paper presents an integrated tolerance analysis approach for a miniature system in operation. In this approach, a computer-aided design (CAD) model is developed from the system's specification. The CAD model is then used to specify the geometrical and dimensional tolerance limits (upper and lower limits) that vary the components' geometries and sizes while conforming to functional requirements. Worst-case tolerances are analyzed to determine the influence of dimensional changes due to the effects of operating temperatures. The method is used to evaluate the nominal conditions and the worst-case conditions in maximum and minimum dimensions of assembled components. These three conditions are evaluated under specific operating temperatures (-40°C, -18°C, 4°C, 26°C, 48°C, and 70°C). A case study on the mechanism of a zoom lens system is used to illustrate the effectiveness of the methodology.
Keywords: geometric dimensioning, tolerance analysis, worst-case analysis, zoom lens mechanism
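The following is a minimal sketch of a worst-case stack-up with a linear thermal-expansion term, the core arithmetic behind the approach described above. The part dimensions, tolerances and expansion coefficients are illustrative assumptions, not the zoom-lens case-study values.

```python
def worst_case_stack(parts, delta_t=0.0):
    """Worst-case tolerance stack-up of a 1-D dimension chain.

    parts: iterable of (nominal_mm, tolerance_mm, cte_per_degC).
    delta_t: temperature change from the reference temperature (degC).
    Returns (minimum, nominal, maximum) stack length in mm; each nominal
    expands linearly with temperature, and tolerances add worst-case.
    """
    nominal = sum(n * (1.0 + cte * delta_t) for n, _, cte in parts)
    tol = sum(t for _, t, _ in parts)
    return nominal - tol, nominal, nominal + tol

# Illustrative three-part chain (aluminium, aluminium, steel) evaluated at
# -40 degC, i.e. 60 degC below a 20 degC reference temperature:
chain = [(12.0, 0.020, 23e-6), (8.5, 0.015, 23e-6), (20.0, 0.030, 12e-6)]
print(worst_case_stack(chain, delta_t=-60.0))
```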
Procedia PDF Downloads 165
624 Analyzing the Effect of Materials’ Selection on Energy Saving and Carbon Footprint: A Case Study Simulation of Concrete Structure Building
Authors: M. Kouhirostamkolaei, M. Kouhirostami, M. Sam, J. Woo, A. T. Asutosh, J. Li, C. Kibert
Abstract:
Construction is one of the most energy-consuming activities in the urban environment and results in a significant amount of greenhouse gas emissions around the world; the impact of the construction industry on global warming is undeniable. Thus, reducing building energy consumption and mitigating carbon production can slow the rate of global warming. The purpose of this study is to determine the amount of energy consumption and carbon dioxide production during the operation phase and the impact of using new shells on energy saving and carbon footprint. A residential building with a reinforced concrete structure was selected in Babolsar, Iran. DesignBuilder software has been used to simulate one year of building operation and calculate the amount of carbon dioxide production and energy consumption in the operation phase of the building. The primary results show the building uses 61,750 kWh of energy each year. Computer simulation analyzes the effect of changing the building shell (using XPS polystyrene and new electrochromic windows) as well as changing the type of lighting on energy consumption reduction and the subsequent carbon dioxide production. The results show that the amount of energy and carbon production during building operation is reduced by approximately 70% by applying the proposed changes, lowering emissions to 11,345 kg CO2e/yr. The results of this study help designers and engineers to consider the material selection process as one of the most important stages of design for improving the energy performance of buildings.
Keywords: construction materials, green construction, energy simulation, carbon footprint, energy saving, concrete structure, DesignBuilder
Procedia PDF Downloads 198
623 Adding a Few Language-Level Constructs to Improve OOP Verifiability of Semantic Correctness
Authors: Lian Yang
Abstract:
Object-oriented programming (OOP) is the dominant programming paradigm in today’s software industry, and it has enabled average software developers to build millions of commercial-strength software applications in the era of the Internet revolution over the past three decades. On the other hand, the lack of a strict mathematical model and of domain-constraint features at the language level has long perplexed the computer science academia and the OOP engineering community. This situation has resulted in inconsistent system quality and hard-to-understand designs in some OOP projects. The difficulties of fixing the current situation are also well known. Although the power of OOP lies in its unbridled flexibility and enormously rich data modeling capability, we argue that the ambiguity and the implicit facade surrounding the conceptual model of a class and an object should be eliminated as much as possible. We list five major usages of a class and propose to separate them through new language constructs. Using the well-established theories of sets and finite state machines (FSM), we propose to apply certain simple, generic, and yet effective constraints at the OOP language level in an attempt to find a possible solution to the above-mentioned issues. The goal is to make OOP more theoretically sound as well as to help programmers uncover warning signs of irregularities and domain-specific issues early in the development stage and catch semantic mistakes at runtime, improving the correctness verifiability of software programs. That said, the aim of this paper is more practical than theoretical.
Keywords: new language constructs, set theory, FSM theory, user defined value type, function groups, membership qualification attribute (MQA), check-constraint (CC)
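The paper only names its proposed constructs (user-defined value types, MQA, check-constraints); the Python descriptor below is a hedged analogue of a check-constraint that enforces a domain predicate at assignment time, so semantic mistakes surface at runtime, which is the behaviour the abstract describes.

```python
class Constrained:
    """A value attribute guarded by a check-constraint predicate (analogue only)."""

    def __init__(self, predicate, description):
        self.predicate, self.description = predicate, description

    def __set_name__(self, owner, name):
        self.storage = "_" + name

    def __get__(self, obj, objtype=None):
        return getattr(obj, self.storage)

    def __set__(self, obj, value):
        if not self.predicate(value):  # the runtime semantic check
            raise ValueError(f"{value!r} violates constraint: {self.description}")
        setattr(obj, self.storage, value)

class Account:
    balance = Constrained(lambda v: v >= 0, "balance must be non-negative")

    def __init__(self, balance):
        self.balance = balance  # the constraint also applies in the constructor

acct = Account(100)
acct.balance = 50      # accepted
# acct.balance = -1    # would raise ValueError at runtime
```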
Procedia PDF Downloads 239
622 Optimal Image Representation for Linear Canonical Transform Multiplexing
Authors: Navdeep Goel, Salvador Gabarda
Abstract:
Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted in a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4×4 pixel block of the image is transformed by a suitable polynomial approximation into a minimal number of coefficients. Using fewer than 4×4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. Polynomial coefficients have then been encoded and handled for generating chirps at a target rate of about two chirps per 4×4 pixel block, and then submitted to a transmission multiplexing operation in the time-frequency domain.
Keywords: chirp signals, image multiplexing, image transformation, linear canonical transform, polynomial approximation
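As a sketch of the block-approximation step, the snippet below fits a low-degree 2-D polynomial to a 4×4 block by least squares and reports the PSNR of the reconstruction; the monomial basis and degree are assumptions standing in for the several representations the paper compares.

```python
import numpy as np

def block_poly_approx(block, degree=2):
    """Least-squares 2-D polynomial fit of a 4x4 pixel block.

    Fits z = sum_{i+j<=degree} c_ij x^i y^j, compressing 16 gray levels
    into (degree+1)(degree+2)/2 coefficients (6 for degree 2).
    Returns (coefficients, PSNR of the reconstruction in dB).
    """
    y, x = np.mgrid[0:4, 0:4]
    basis = [(x**i * y**j).ravel()
             for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack(basis, axis=1)  # 16 x n_coeff design matrix
    coeffs, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    recon = (A @ coeffs).reshape(4, 4)
    mse = np.mean((block - recon) ** 2)
    psnr = 10 * np.log10(255.0**2 / mse) if mse > 0 else np.inf
    return coeffs, psnr

block = np.arange(16).reshape(4, 4) * 15  # a smooth test block
coeffs, psnr = block_poly_approx(block)
print(len(coeffs), "coefficients, PSNR =", round(psnr, 1), "dB")
```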
Procedia PDF Downloads 412
621 Applying Kinect on the Development of a Customized 3D Mannequin
Authors: Shih-Wen Hsiao, Rong-Qi Chen
Abstract:
In the field of fashion design, the 3D mannequin is an assisting tool which can rapidly realize design concepts. When the concept of the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. It is therefore critical to develop a 3D mannequin module which corresponds to the needs of fashion design. This research proposes a concrete plan for developing and constructing a 3D mannequin system with Kinect. In this system, ergonomic measurements of objective human features can be attained in real time through the depth camera of the Kinect, and mesh morphing can then be implemented by transforming the locations of the control points on the model according to those ergonomic data, yielding an exclusive 3D mannequin model. In the proposed methodology, after the scanned points from the Kinect are revised for accuracy and smoothed, a complete human feature is reconstructed by the iterative closest point (ICP) algorithm together with image processing methods. The objective human feature can then be recognized to analyze and obtain real measurements. Furthermore, the ergonomic measurement data can be applied to shape morphing for the subdivision of the 3D mannequin reconstructed by feature curves. Since a standardized and customer-oriented 3D mannequin can be generated through subdivision, the research can be applied to fashion design or to the presentation and display of 3D virtual clothes. In order to examine the practicality of the research structure, a 3D mannequin system was constructed with a Java program in this study. Experiments verified the practicability of the proposed approach.
Keywords: 3D mannequin, Kinect scanner, iterative closest point, shape morphing, subdivision
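Below is a minimal sketch of the ICP alignment step used when registering depth scans: nearest-neighbour matching with a KD-tree followed by the Kabsch (SVD) solution for the rigid transform. The paper's outlier rejection and smoothing steps are omitted, so the loop structure and tolerances are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, max_iters=50, tol=1e-6):
    """Rigidly align an (N,3) source cloud to an (M,3) target cloud."""
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(max_iters):
        dist, idx = tree.query(src)      # closest target point per source point
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)      # Kabsch: optimal rotation from SVD
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:    # stop once the error plateaus
            break
        prev_err = err
    return src

# Quick check: align a rotated, shifted copy of a random cloud back onto it.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3))
theta = 0.2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
aligned = icp(cloud @ Rz.T + 0.1, cloud)
print(f"mean residual vs. original cloud: "
      f"{np.linalg.norm(aligned - cloud, axis=1).mean():.4f}")
```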
Procedia PDF Downloads 306
620 Collaborative Platform for Learning Basic Programming (Algorinfo)
Authors: Edgar Mauricio Ruiz Osuna, Claudia Yaneth Herrera Bolivar, Sandra Liliana Gomez Vasquez
Abstract:
Industry's need for professionals with software development skills is constantly increasing; therefore, an educational process aligned with the strengthening of these competencies is part of the responsibility of universities with programs related to the area of informatics and systems. In this sense, it is important to consider that the National Science, Technology and Innovation Plan for the development of the Electronics, Information Technologies and Communications sectors (2013) establishes, as a weakness in the SWOT analysis of the software and services sector, deficiencies in training and professional development. Accordingly, UNIMINUTO's Computer Technology Program has addressed the analysis of students' performance in software development, identifying various problems such as dropout in programming subjects and low academic averages, as well as deficiencies in the strategies and competencies developed in the area of programming. As a result of this analysis, it was decided to design a collaborative learning platform for basic programming that uses heat maps as a tool to support didactic feedback. The pilot phase evaluates the ALGORINFO platform as a didactic resource in a programming course, through an interactive and collaborative environment where students can carry out basic programming exercises and, in turn, receive feedback through the analysis of time patterns and frequent difficulties in certain segments or program cycles, by means of heat maps. The result gives the teacher tools to reinforce and advise on the critical points shown on the map, so that students and graduates improve their skills as software developers.
Keywords: collaborative platform, learning, feedback, programming, heat maps
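A hedged sketch of the heat-map feedback idea follows: rows are students, columns are exercise segments, and cell values are error counts, rendered with Matplotlib. The segment names and error data are illustrative, not taken from the platform.

```python
import matplotlib.pyplot as plt
import numpy as np

segments = ["I/O", "conditionals", "loops", "arrays", "functions"]
# Illustrative error counts per student (rows) and segment (columns).
errors = np.random.default_rng(0).poisson(lam=[1, 3, 2, 5, 1], size=(12, 5))

fig, ax = plt.subplots()
im = ax.imshow(errors, cmap="hot", aspect="auto")
ax.set_xticks(range(len(segments)))
ax.set_xticklabels(segments, rotation=45, ha="right")
ax.set_ylabel("student")
ax.set_title("Errors per program segment")
fig.colorbar(im, ax=ax, label="error count")
fig.tight_layout()
plt.show()
```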
Procedia PDF Downloads 162
619 Polyhouse Farming: An Integrated Approach to Organic Farming
Authors: Promila Dahiya, Kiran Singh
Abstract:
Indian agriculture has come a long way from an era of frequent droughts and vulnerability to food shortages to becoming a significant exporter of agricultural commodities. Polyhouses are essentially microcosms aimed at providing a physical environment suitable for the survival and growth of plants, with a high degree of control over temperature, humidity and carbon dioxide. The present study was conducted in 21 districts of Haryana State to review polyhouse farming as an alternative form of farming that can fulfil the needs of the population with minimum use of land, water and energy. Information regarding the number, area and type of polyhouses and the subsidy provided by the Government of India and Haryana for polyhouse farming was collected from the respective district horticulture offices of Haryana State. Four different types of polyhouses were studied: the hi-technology polyhouse (Hi-tech), the anti-insect net shade house (AINSH), the naturally ventilated polyhouse (NVPH) and the walk-in tunnel (WIT). The study found that in the walk-in tunnel (WIT) and naturally ventilated polyhouse (NVPH), the temperature was 69.54% and 52.29% higher and the humidity 96.37% and 85.19% higher, respectively, in comparison to open farming in the months of January and May. No significant difference was found in temperature, humidity, dust, solar radiation and CO2 level between open farming and the anti-insect net shade house (AINSH). In the Hi-tech polyhouse, the environment was totally controlled by computer and was not found to be very strenuous. The health status of workers was checked by a doctor, and it was found that in polyhouse farming, workers were more prone to problems of allergy and asthma.
Keywords: polyhouse, unfavorable climate, walk-in-tunnel, psychological aspect
Procedia PDF Downloads 464
618 Taleb's Complexity Theory Concept of 'Antifragility' Has a Significant Contribution to Make to Positive Psychology as Applied to Wellbeing
Authors: Claudius Peter Van Wyk
Abstract:
Given the increasingly manifest phenomena, as described in complexity theory, of volatility, uncertainty, complexity and ambiguity (VUCA), Taleb's notion of 'antifragility' has a significant contribution to make to positive psychology applied to wellbeing. Antifragility is argued to be fundamentally different from the concepts of resiliency, the ability to recover from failure, and robustness, the ability to resist failure. Rather, it describes the capacity to reorganise in the face of stress in such a way as to cope more effectively with systemic challenges. The concept, which has been applied in disciplines ranging from physics, molecular biology, planning, and engineering to computer science, can now be considered for its application in individual human and social wellbeing. There are strong correlations to Antonovsky's model of 'salutogenesis', in which an attitude and competencies are developed for transforming burdening factors into greater resourcefulness. We demonstrate, from the perspective of neuroscience, how technology measuring nervous system coherence can be coupled with acquired psychodynamic approaches not only to identify contextual stressors and utilise biofeedback instruments for facilitating greater coherence, but also to apply these insights to specific life stressors that compromise wellbeing. Employing an ongoing case study with BMW South Africa, the neurological mapping is demonstrated together with 'reframing' and emotional anchoring techniques from neurolinguistic programming. The argument is contextualised in the discipline of psychoneuroimmunology, which describes the stress pathways from the CNS and endocrine systems and their impact on immune function and the capacity to restore homeostasis.
Keywords: antifragility, complexity, neuroscience, psychoneuroimmunology, salutogenesis, volatility
Procedia PDF Downloads 376
617 Dosimetric Analysis of Intensity Modulated Radiotherapy versus 3D Conformal Radiotherapy in Adult Primary Brain Tumors: Regional Cancer Centre, India
Authors: Ravi Kiran Pothamsetty, Radha Rani Ghosh, Baby Paul Thaliath
Abstract:
Radiation therapy has undergone many advancements and evolved from 2D to 3D. Recently, the rapid pace of drug discovery, cutting-edge technology, and clinical trials has driven innovative advancements in computer technology and treatment planning, upgrading practice to intensity modulated radiotherapy (IMRT), which delivers a homogeneous dose to the tumor while sparing normal tissues. The present study was a hospital-based experience comparing two different conformal radiotherapy techniques for brain tumors. This analytical study was conducted at the Regional Cancer Centre, India, from January 2014 to January 2015. Ten patients were selected after applying inclusion and exclusion criteria. All the patients were treated on an Artiste Siemens linear accelerator. The tolerance level for maximum dose was 6.0 Gy for the lenses and 54.0 Gy for the brain stem, optic chiasm and optic nerves, as per RTOG criteria. Mean and standard deviation values of PTV98%, PTV95% and PTV2% for IMRT were 93.16±2.9, 95.01±3.4 and 103.1±1.1, respectively; for 3D-CRT they were 91.4±4.7, 94.17±2.6 and 102.7±0.39, respectively. PTV max dose (%) for IMRT and 3D-CRT was 104.7±0.96 and 103.9±1.0, respectively. The maximum dose to the tumor can be delivered with IMRT within acceptable toxicity limits. Variables such as expertise, location of the tumor, patient condition, and TPS influence the outcome of the treatment.
Keywords: brain tumors, intensity modulated radiotherapy (IMRT), three dimensional conformal radiotherapy (3D-CRT), radiation therapy oncology group (RTOG)
Procedia PDF Downloads 239
616 Osteoarthritis (OA): A Total Knee Replacement Surgery
Authors: Loveneet Kaur
Abstract:
Introduction: Osteoarthritis (OA) is one of the leading causes of disability, and the knee is the most commonly affected joint in the body. The last resort for treatment of knee OA is Total Knee Replacement (TKR) surgery. Despite numerous advances in prosthetic design, patients do not reach normal function after surgery. Current surgical decisions are made on 2D radiographs and patient interviews. Aims: The aim of this study was to compare knee kinematics pre- and post-TKR surgery using computer-animated images of patient-specific models under everyday conditions. Methods: 7 subjects were recruited for the study. Subjects underwent 3D gait analysis during 4 everyday activities and medical imaging of the knee joint pre-surgery and one month post-surgery. A 3D model was created from each of the scans, and the kinematic gait analysis data were used to animate the images. Results: Improvements were seen in range of motion in all 4 activities 1 year post-surgery. The preoperative 3D images provide detailed information on the anatomy of the osteoarthritic knee. The postoperative images demonstrate potential future problems associated with the implant. Although not accurate enough to be of clinical use, the animated data can provide valuable insight into what conditions cause damage to both the osteoarthritic and prosthetic knee joints. As the animated data do not require specialist training to view, the images can be utilized across the fields of health professionals and manufacturing in the assessment and treatment of patients pre and post knee replacement surgery. Future improvements in the collection and processing of data may yield clinically useful data. Conclusion: Although not yet of clinical use, the potential application of 3D animations of the knee joint pre and post-surgery is widespread.
Keywords: osteoporosis, osteoarthritis, knee replacement, TKR
Procedia PDF Downloads 48
615 Role of Radiologic Technologist Specialist in Plain Image Interpretation of Adults in the Middle East: A Radiologist’s Perspective
Authors: Awad Mohamed Elkhadir, Rajab M. Ben Yousef
Abstract:
Background/Aim: Radiological technologists are medical professionals who perform diagnostic imaging tests such as X-rays, magnetic resonance imaging (MRI) scans, and computed tomography (CT) scans. Despite the recognition of image interpretation by British radiologists, it is still considered a problem in the Arab world. This study evaluates the perceptions of radiologists in the Middle East concerning plain image interpretation of adults by radiologic technologist specialists. Methods: This is a cross-sectional study following a quantitative approach. A close-ended questionnaire was distributed among 103 participants who were radiologists by profession from various hospitals in Saudi Arabia and Sudan. The gathered data were then analyzed with the Statistical Package for the Social Sciences (SPSS). Results: The results showed that 29% recognized the Radiologic Technologist Specialist (RTS) role of writing image reports, while 61% did not. A total of 38% of participants believed that RTS image interpretation would help diagnose unreported radiographs. 47% of the sample responded that the workload and stress on radiologists would be reduced by allowing reporting by RTS, while 37% did not. Lastly, 43% believe that image interpretation by RTS can be introduced into the Middle East in the future. Conclusion: The study's findings reveal that the combination of image reporting and radiography improves the care of patients. The study's outcomes also show that the burden on medical practitioners is reduced by image reporting by radiographers. Further research needs to be conducted in the Arab world to obtain and measure the associated factors of the desired criteria.
Keywords: Arab world, image interpretation, radiographer, radiologist, Saudi Arabia, Sudan
Procedia PDF Downloads 100
614 Evaluation of the Dry Compressive Strength of Refractory Bricks Developed from Local Kaolin
Authors: Olanrewaju Rotimi Bodede, Akinlabi Oyetunji
Abstract:
Modeling the dry compressive strength of sodium silicate bonded kaolin refractory bricks was studied. The materials used for this research work included refractory clay obtained from the Ijero-Ekiti kaolin deposit at coordinates 7º 49´N and 5º 5´E and sodium silicate obtained from the open market in Lagos at coordinates 6°27′11″N 3°23′45″E, all in the South Western part of Nigeria. The mineralogical composition of the kaolin clay was determined using an Energy Dispersive X-Ray Fluorescence Spectrometer (ED-XRF). The clay samples were crushed and sieved using a laboratory pulveriser, ball mill and sieve shaker, respectively, to obtain 100 μm diameter particles. A manual pipe extruder of dimensions 30 mm diameter by 43.30 mm height was used to prepare the samples with varying percentage volumes of sodium silicate (5 %, 7.5 %, 10 %, 12.5 %, 15 %, 17.5 %, 20 % and 22.5 %), while kaolin and water were kept at 50 % and 5 %, respectively, for the compressive test. The samples were left to dry in the open laboratory atmosphere for 24 hours to remove moisture and were then fired in an electrically powered muffle furnace. Firing was done at the following temperatures: 700ºC, 750ºC, 800ºC, 850ºC, 900ºC, 950ºC, 1000ºC and 1100ºC. A compressive strength test was carried out on the dried samples using a Testometric Universal Testing Machine (TUTM) equipped with a computer and printer; optimum compression of 4.41 kN/mm2 was obtained at 12.5 % sodium silicate. The experimental results were modeled with the MATLAB and Origin packages using polynomial regression equations that predicted the estimated values of dry compressive strength, and were later validated with Pearson's correlation coefficient, yielding a very high positive correlation value of 0.97.
Keywords: dry compressive strength, kaolin, modeling, sodium silicate
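As a sketch of the regression-and-validation step, the snippet below fits a quadratic in silicate content with NumPy and checks the Pearson correlation between measured and predicted strengths. Only the silicate levels and the 4.41 optimum at 12.5 % come from the abstract; the other strength values are illustrative placeholders.

```python
import numpy as np

silicate_pct = np.array([5.0, 7.5, 10.0, 12.5, 15.0, 17.5, 20.0, 22.5])
# Measured strengths (kN/mm^2); only the 12.5 % value is from the abstract.
strength = np.array([2.1, 3.0, 3.9, 4.41, 4.1, 3.6, 3.0, 2.4])

coeffs = np.polyfit(silicate_pct, strength, deg=2)  # quadratic regression
predicted = np.polyval(coeffs, silicate_pct)

r = np.corrcoef(strength, predicted)[0, 1]          # Pearson's r, measured vs predicted
optimum = -coeffs[1] / (2 * coeffs[0])              # vertex of the fitted parabola
print(f"r = {r:.2f}, predicted optimum at {optimum:.1f} % sodium silicate")
```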
Procedia PDF Downloads 455
613 Perceived and Performed E-Health Literacy: Survey and Simulated Performance Test
Authors: Efrat Neter, Esther Brainin, Orna Baron-Epel
Abstract:
Background: Connecting end-users to newly developed ICT technologies and channeling patients to new products requires an assessment of compatibility. The end user's assessment is conveyed in the concept of eHealth literacy. The study examined the association between perceived and performed eHealth literacy (EHL) in an age-heterogeneous sample in Israel. Methods: Participants included 100 Israeli adults (mean age 43, SD 13.9) who were first interviewed by phone and then tested on a computer simulation of health-related Internet tasks. Performed, perceived and evaluated EHL were assessed. Levels of successful completion of tasks represented performed EHL, and evaluated EHL included observed motivation, confidence, and the amount of help provided. Results: The skills of accessing, understanding, appraising, applying, and generating new information had decreasing successful completion rates as task complexity increased. Generating new information, though correlated with all other skills, was the least correlated with them. Perceived and performed EHL were correlated (r=.40, P=.001), while facets of performance (i.e., digital literacy and EHL) were highly correlated (r=.89, P<.001). Participants low and high in performed EHL were significantly different: low performers were older, had attained less education, had used the Internet for less time and perceived themselves as less healthy. They also encountered more difficulties, required more assistance, were less confident in their conduct and exhibited less motivation than high performers. Conclusions: The association in this age-heterogeneous sample was larger than in previous age-homogeneous samples. The moderate association between perceived and performed EHL indicates that the two are associated yet distinct, the latter requiring separate assessment. Features of future rapid performed EHL tools are discussed.
Keywords: eHealth, health literacy, performance, simulation
Procedia PDF Downloads 236
612 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis
Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha
Abstract:
Face recognition systems find many applications in surveillance and human-computer interaction systems. As these applications are of much importance and demand more accuracy, more robustness is expected of the face recognition system with less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelet and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead of the Gabor filters. This image is convolved with a bank of Gabor filters with varying scales and orientations. LDA, a subspace analysis technique, is used to reduce the intra-class space and maximize the inter-class space. The techniques used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), and Weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt (2D)2 LDA). LDA reduces the feature dimension by extracting the features with greater variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions, as the Gabor features are illumination invariant. This approach also aims at a better recognition rate using fewer features for varying expressions. The performance of the proposed HGWLDA approach is evaluated using the AT&T database, the MIT-India face database and the faces94 database. It is found that the proposed HGWLDA approach provides better results than the existing Gabor approach.
Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier
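A hedged sketch of the pipeline follows: a small Gabor filter bank feeds a discriminant projection and a 1-NN classifier. scikit-learn's classical LDA stands in here for the 2D-LDA variants named above, and the scale/orientation counts are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.filters import gabor_kernel
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def gabor_features(img, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    """Magnitude responses of a Gabor bank, downsampled and flattened."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            kern = gabor_kernel(frequency=f, theta=k * np.pi / n_orient)
            resp = np.abs(fftconvolve(img, kern, mode="same"))
            feats.append(resp[::4, ::4].ravel())  # downsample to curb dimension
    return np.concatenate(feats)

def train_hgwlda(train_imgs, train_labels):
    """Fit an LDA projection + 1-NN on Gabor features of grayscale faces."""
    F = np.array([gabor_features(im) for im in train_imgs])
    lda = LinearDiscriminantAnalysis().fit(F, train_labels)
    knn = KNeighborsClassifier(n_neighbors=1).fit(lda.transform(F), train_labels)
    return lda, knn

def predict(lda, knn, img):
    """Classify one test image by nearest neighbour in the LDA subspace."""
    return knn.predict(lda.transform(gabor_features(img)[None, :]))[0]
```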
Procedia PDF Downloads 467
611 Graphene Based Materials as Novel Membranes for Water Desalination and Boron Separation
Authors: Francesca Risplendi, Li-Chiang Lin, Jeffrey C. Grossman, Giancarlo Cicero
Abstract:
Desalination is one of the most widely employed approaches to supply water in the context of a rapidly growing global water shortage. However, the most popular water filtration method available, the reverse osmosis (RO) technique, still suffers from important drawbacks, such as large energy demands and high process costs, and some serious limitations have recently been identified. Among them, the boron problem seems to be critical. Boron has been found to have a dual effect on living systems on Earth, and the difference between boron deficiency and boron toxicity levels is quite small. The aim of this project is to develop a new generation of RO membranes based on porous graphene or reduced graphene oxide (rGO) able to remove salts from seawater and to reduce boron concentrations in the permeate to a level that meets drinking or process water requirements, by means of a theoretical approach based on density functional theory and classical molecular dynamics. Computer simulations have been employed to investigate the relationship between the atomic structure of a nanoporous graphene or rGO monolayer and its membrane properties in RO applications (i.e., water permeability and resilience at RO pressures). In addition, emphasis has been given to multilayer nanoporous rGO and rGO flake based membranes. By means of non-equilibrium MD simulations, we investigated the mechanism of water transport through such multilayer membranes, focusing on the effect of slit widths and sheet geometries. These simulations allowed us to establish the promise of these graphene-based materials as membranes for desalination plants and for boron filtration.
Keywords: boron filtration, desalination, graphene membrane, reduced graphene oxide membrane
Procedia PDF Downloads 299
610 Hydrocarbon Source Rocks of the Maragh Low
Authors: Elhadi Nasr, Ibrahim Ramadan
Abstract:
Biostratigraphical analyses of well sections from the Maragh Low in the Eastern Sirt Basin have allowed high resolution correlations to be undertaken. Full integration of these data with available palaeoenvironmental, lithological, gravity, seismic, aeromagnetic, igneous, radiometric and wireline log information and a geochemical analysis of source rock quality and distribution has led to a more detailed understanding of the geological and structural history of this area. Pre Sirt Unconformity, two superimposed rifting cycles have been identified. The oldest is represented by the Amal Group of sediments and is of Late Carboniferous, Kasimovian / Gzelian to Middle Triassic, Anisian age. Unconformably overlying is a younger rift cycle which is represented by the Sarir Group of sediments and is of Early Cretaceous, late Neocomian to Aptian age. Overlying the Sirt Unconformity is the marine Late Cretaceous section. An assessment of pyrolysis results and a palynofacies analysis has allowed hydrocarbon source facies and quality to be determined. There are a number of hydrocarbon source rock horizons in the Maragh Low; these are sometimes vertically stacked, and they are of fair to excellent quality. The oldest identified source rock is the Triassic Shale; this unit is unconformably overlain by sandstones belonging to the Sarir Group and conformably overlies a Triassic Siltstone unit. Palynological dating of the Triassic Shale unit indicates a Middle Triassic, Anisian age. The Triassic Shale is interpreted to have been deposited in a lacustrine palaeoenvironment. This is particularly evidenced by the dark, fine grained, organic rich nature of the sediment and is supported by palynofacies analysis and by the recovery of fish fossils. Geochemical analysis of the Triassic Shale indicates total organic carbon varying between 1.37% and 3.53%. S2 pyrolysate yields vary between 2.15 mg/g and 6.61 mg/g and hydrogen indices vary between 156.91 and 278.91. The source quality of the Triassic Shale varies from fair to very good / rich. Linked to thermal maturity, it is now a very good source for light oil and gas; it was once a very good to rich oil source. The Early Barremian Shale was also deposited in a lacustrine palaeoenvironment. Recovered palynomorphs indicate an Early Cretaceous, late Neocomian to early Barremian age. The Early Barremian Shale is conformably underlain and overlain by sandstone units belonging to the Sarir Group of sediments which are also of Early Cretaceous age. Geochemical analysis of the Early Barremian Shale indicates that it is a good oil source and was originally very good. Total organic carbon varies between 3.59% and 7%. S2 varies between 6.30 mg/g and 10.39 mg/g and the hydrogen indices vary between 148.4 and 175.5. A Late Barremian Shale unit has also been identified in the central Maragh Low. Geochemical analyses indicate that total organic carbon varies between 1.05% and 2.38%, S2 pyrolysate between 1.6 and 5.34 mg/g and the hydrogen index between 152.4 and 224.4. It is a good oil source rock which is now mature. In addition to the non-marine hydrocarbon source rocks pre Sirt Unconformity, three formations in the overlying Late Cretaceous section also provide hydrocarbon quality source rocks. Interbedded shales within the Rachmat Formation of Late Cretaceous, early Campanian age have total organic carbon ranging between 0.7% and 1.47%, S2 pyrolysate varying between 1.37 and 4.00 mg/g and hydrogen indices varying between 195.7 and 272.1. The indication is that this unit would provide a fair gas source to a good oil source. Geochemical analyses of the overlying Tagrifet Limestone indicate that total organic carbon varies between 0.26% and 1.01%. S2 pyrolysate varies between 1.21 and 2.16 mg/g and hydrogen indices vary between 195.7 and 465.4. For the overlying Sirt Shale Formation of Late Cretaceous, late Campanian age, total organic carbon varies between 1.04% and 1.51%, S2 pyrolysate varies between 4.65 mg/g and 6.99 mg/g and the hydrogen indices vary between 151 and 462.9. The study has proven that both the Sirt Shale Formation and the Tagrifet Limestone are good to very good and rich sources for oil in the Maragh Low. High resolution biostratigraphical interpretations have been integrated and calibrated with thermal maturity determinations (Vitrinite Reflectance (%Ro), Spore Colour Index (SCI) and Tmax (ºC)) and the determined present day geothermal gradient of 25ºC/km for the Maragh Low. Interpretation of the generated basin modelling profiles allows a detailed prediction of the timing of maturation of these source horizons and leads to a determination of the amounts of missing section at major unconformities. From the results, the top of the oil window (0.72% Ro) is picked as high as 10,700’, and the base of the oil window (1.35% Ro), assuming a linear trend and by projection, is picked as low as 18,000’ in the Maragh Low. For the Triassic Shale, the early phase of oil generation was in the Late Palaeocene / Early to Middle Eocene and the main phase of oil generation was in the Middle to Late Eocene. The Early Barremian Shale reached the main phase of oil generation in the Early Oligocene, with late generation being reached in the Middle Miocene. For the Rakb Group section (Rachmat Formation, Tagrifet Limestone and Sirt Shale Formation), the early phase of oil generation started in the Late Eocene, with the main phase of generation being between the Early Oligocene and the Early Miocene. From studying maturity profiles and from regional considerations, it can be predicted that up to 500’ of sediment may have been deposited and eroded by the Sirt Unconformity in the central Maragh Low, while up to 2000’ of sediment may have been deposited and then eroded to the south of the trough.
Keywords: geochemical analysis of the source rocks from wells in the Eastern Sirt Basin
Procedia PDF Downloads 408
609 Competency and Strategy Formulation in Automobile Industry
Authors: Chandan Deep Singh
Abstract:
Nowadays, companies face rapid competition in terms of customer requirements to be satisfied, new technologies to be integrated into future products, new safety regulations to be followed, and new computer-based tools to be introduced into design activities, which thereby become more scientific. In today's highly competitive market, survival focuses on various factors such as quality, innovation, adherence to standards, and rapid response as the basis for competitive advantage. For competitive advantage, companies have to develop various competencies: for improving the capability of suppliers and for strengthening the process of integrating technology. To be more competitive, organizations should operate in a strategy-driven way and have a strategic architecture for developing core competencies. Traditional ways of acquiring such experience and developing competencies tend to take a lot of time, and they are expensive. A new learning environment, built around a gaming engine, supports the development of competencies in specific subject areas. Technology competencies have a significant role in firm innovation and competitiveness; they interact with the competitive environment. Technological competencies vary according to the type of competitive environment, thus enhancing firm innovativeness. Technological competency is gained through extensive experimentation and learning in research, development and employment in manufacturing. This is a review paper on competency and strategic success in the automobile industry. The aim here is to study strategy formulation and competency tools in the industry. This work reviews 34 papers related to competency and strategy in the automobile industry.
Keywords: manufacturing competency, strategic success, competitiveness, strategy formulation
Procedia PDF Downloads 311
608 Orthogonal Metal Cutting Simulation of Steel AISI 1045 via Smoothed Particle Hydrodynamic Method
Authors: Seyed Hamed Hashemi Sohi, Gerald Jo Denoga
Abstract:
Machining, or metal cutting, is one of the most widely used production processes in industry. The quality of the process and the resulting machined product depends on parameters like tool geometry, material, and cutting conditions. However, the relationships of these parameters to the cutting process are often based mostly on empirical knowledge. In this study, computer modeling and simulation using LS-DYNA software and a Smoothed Particle Hydrodynamics (SPH) methodology was performed on the orthogonal metal cutting process to analyze the three-dimensional deformation of AISI 1045 medium carbon steel during machining. The simulation was performed using the following constitutive models: the Power Law model, the Johnson-Cook model, and the Zerilli-Armstrong model (Z-A). The outcomes were compared against the simulated results obtained by Cenk Kiliçaslan using the Finite Element Method (FEM) and the empirical results of Jaspers and Filice. The analysis shows that the SPH method combined with the Zerilli-Armstrong constitutive model is a viable alternative for simulating the metal cutting process. The tangential force was overestimated by 7%, and the normal force was underestimated by 16% when compared with empirical values. The simulated values for flow stress versus strain at various temperatures were also validated against empirical values. The SPH method using the Z-A model has also proven to be robust against issues of time-scaling. Experimental work was also done to investigate the effects of friction, rake angle and tool tip radius on the simulation.
Keywords: metal cutting, smoothed particle hydrodynamics, constitutive models, experimental, cutting forces analyses
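For reference, one of the constitutive models named above can be sketched in a few lines: the Johnson-Cook flow stress combines strain hardening, strain-rate sensitivity and thermal softening. The AISI 1045 constants below are commonly cited literature values (close to the Jaspers data set) and should be treated as assumptions rather than the paper's calibrated parameters.

```python
import numpy as np

def johnson_cook_stress(strain, strain_rate, temp_k,
                        A=553.1e6, B=600.8e6, n=0.234, C=0.0134, m=1.0,
                        eps0=1.0, t_room=293.0, t_melt=1733.0):
    """Johnson-Cook flow stress in Pa:
    sigma = (A + B*eps^n) * (1 + C*ln(eps_dot/eps0)) * (1 - T*^m),
    where T* = (T - T_room) / (T_melt - T_room) is the homologous temperature.
    """
    t_star = np.clip((temp_k - t_room) / (t_melt - t_room), 0.0, 1.0)
    return ((A + B * strain**n)
            * (1.0 + C * np.log(np.maximum(strain_rate, 1e-12) / eps0))
            * (1.0 - t_star**m))

# Flow stress at 10 % strain, 10^3 /s strain rate, 500 K:
print(f"{johnson_cook_stress(0.10, 1e3, 500.0) / 1e6:.0f} MPa")
```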
Procedia PDF Downloads 261
607 The Morphological and Morphometrical Evaluation of the Bores That Transmit Emissary Veins in Terms of Surgery
Authors: Fikri Turk, Sahika Pinar Akyer, Mevci Ozdemir, Mehmet Bulent Ozdemir, Ilgaz Akdogan
Abstract:
Complications such as bleeding, thrombosis and air embolism that depend on injuries to emissary veins are often encountered in surgery. Detailed descriptions of the mastoid foramen, occipital foramen, parietal foramen, posterior condylar canal and foramen vesalius are lacking in the literature. For this reason, the purpose of our study was to explore and represent the morphology and morphometry of these emissary foramina in order to prevent complications and to guide surgeons. The present study was made on 60 dry human skulls in the laboratories of Pamukkale University, Faculty of Medicine, Department of Anatomy. After the emissary foramina were photographed with a Canon 650D professional camera, their evaluation and measurement were performed on computer with the MATLAB program. The overall prevalence of the mastoid foramen was 90.52%, the occipital foramen 72.52%, the parietal foramen 42.85%, the posterior condylar canal 91.25% and the foramen vesalius 78.26%. The mean diameter of the mastoid foramen was 1.81±0.76 mm, of the occipital foramen 1.20±0.25 mm, of the parietal foramen 1.49±0.46 mm, of the posterior condylar canal 2.83±1.33 mm and of the foramen vesalius 1.74±0.60 mm. Distances between the emissary foramina and fixed bony landmarks were measured. Emissary veins are important in clinical practice and surgical procedures because they act as a route of spread of extracranial infection to intracranial structures, they may be a source of significant bleeding during surgery of the skull, and they can be a source of thrombosis and air embolism. Detailed anatomical knowledge of these veins and foramina may help to prevent complications and guide surgeons.
Keywords: emissary foramina, mastoid foramen, occipital foramen, parietal foramen, posterior condylar canal, foramen vesalius, morphology, morphometry
Procedia PDF Downloads 365
606 The Influence of Cultural Perceptions in the Preference and Choice of STEM Programs
Authors: Priscilla Adoley Moffat
Abstract:
This study explored perceptions rooted in and acquired from the cultures of many developing countries and how they impact applicants’ preferences and choices of STEM programs. The context of developing countries was chosen for this study because gender role socialization continues to maintain an important place in most of these cultures. This study’s relevance rests in the fact that, as the world takes steps to encourage and promote the choice and study of STEM programs, especially among females, there is a need for efforts towards understanding various cultural perceptions towards some programs of study, particularly STEM programs, which have diverse gender attributions in many developing cultures. Also, as the world strives to achieve gender equity in education, such a study comes in handy, as it provides a useful understanding of the underlying cultural factors that affect study program preferences of applicants, particularly in developing countries like Ghana as well as others in Africa. The study analyzed the admission application data of five public universities in Ghana. 1600 randomly-sampled final-year students of 32 randomly-selected senior high schools from the 16 regions of Ghana were interviewed. Since parents and teachers often guide and influence the study program choices of applicants, the study examined the perceptions of 180 teachers and 360 parents. The study found, among other things, that STEM programs are commonly perceived to pose much more difficulty to females than they do to males. As a result, many female applicants are discouraged from choosing these programs. While nursing programs are perceived more as programs for females, with the justification that females are better caregivers, males are perceived to be better medical doctors, engineers, and computer technicians. Thus, many females are less encouraged to choose Technology and Engineering programs.
Keywords: culture, perceptions, STEM, choice, preference
Procedia PDF Downloads 84
605 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem
Authors: Ouafa Amira, Jiangshe Zhang
Abstract:
Clustering is an unsupervised machine learning technique; its aim is to extract the data structures, in which similar data objects are grouped in the same cluster, whereas dissimilar objects are grouped in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means clustering (FCM) is one of the most well-known fuzzy clustering methods. It is based on solving an optimization problem in which a given cost function is minimized. This minimization aims to decrease the dissimilarity inside clusters, where the dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership function whose values lie in the interval [0, 1]. In FCM clustering, the membership degree is constrained by the condition that the sum of a data object's memberships in all clusters must equal one. This constraint can cause several problems, especially when the data objects lie in a noisy space. Regularization has played a part in the fuzzy c-means technique; it introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by a relative entropy approach, where the optimization problem still aims to minimize the dissimilarity inside clusters. Finding an appropriate membership degree for each data object is our objective, because an appropriate membership degree leads to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that our proposed model achieves good accuracy.
Keywords: clustering, fuzzy c-means, regularization, relative entropy
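A minimal sketch of entropy-regularized fuzzy c-means follows. Adding a relative-entropy term λ Σ uᵢⱼ log(uᵢⱼ/πⱼ) to the within-cluster dissimilarity yields the closed-form update uᵢⱼ ∝ πⱼ exp(−dᵢⱼ/λ); this is the standard form of the idea and an assumption here, not necessarily the authors' exact objective.

```python
import numpy as np

def entropy_regularized_fcm(X, n_clusters, lam=1.0, n_iter=100, seed=0):
    """Fuzzy c-means with relative-entropy regularization.

    X: (n_samples, n_features) data. Returns (memberships U, centers).
    Each row of U sums to one; lam controls the fuzziness of memberships.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    priors = np.full(n_clusters, 1.0 / n_clusters)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
        U = priors * np.exp(-d / lam)                # u_ij ~ pi_j * exp(-d_ij/lam)
        U /= U.sum(axis=1, keepdims=True)            # enforce sum-to-one per object
        centers = (U.T @ X) / U.sum(axis=0)[:, None] # membership-weighted centroids
        priors = U.mean(axis=0)                      # cluster proportions
    return U, centers

# Two well-separated Gaussian blobs:
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
U, centers = entropy_regularized_fcm(X, n_clusters=2)
print(centers.round(2))
```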
Procedia PDF Downloads 259
604 An End-to-end Piping and Instrumentation Diagram Information Recognition System
Authors: Taekyong Lee, Joon-Young Kim, Jae-Min Cha
Abstract:
A piping and instrumentation diagram (P&ID) is an essential design drawing describing the interconnection of process equipment and the instrumentation installed to control the process. P&IDs are modified and managed throughout the whole life cycle of a process plant. For ease of data transfer, P&IDs are generally handed over from a design company to an engineering company in portable document format (PDF), which is hard to modify. Therefore, engineering companies have to deploy a great deal of time and human resources only for manually converting P&ID images into a computer-aided design (CAD) file format. To reduce the inefficiency of the P&ID conversion, the various symbols and texts in P&ID images should be automatically recognized. However, recognizing information in P&ID images is not an easy task. A P&ID image usually contains hundreds of symbol and text objects. Most objects are quite small compared to the size of the whole image and are densely packed together. Traditional recognition methods based on geometrical features are not capable enough to recognize every element of a P&ID image. To overcome these difficulties, the state-of-the-art deep learning models RetinaNet and the connectionist text proposal network (CTPN) were used to build a system for recognizing symbols and texts in a P&ID image. Using the RetinaNet and CTPN models, carefully modified and tuned for the P&ID image dataset, the developed system recognizes texts, equipment symbols, piping symbols and instrumentation symbols from an input P&ID image and saves the recognition results in a pre-defined extensible markup language (XML) format. In a test using a commercial P&ID image, the P&ID information recognition system correctly recognized 97% of the symbols and 81.4% of the texts.
Keywords: object recognition system, P&ID, symbol recognition, text recognition
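As an illustration of the final serialization step, the sketch below writes detector output to XML with Python's standard library. The element names and the (label, score, box) detection format are assumptions, since the abstract does not publish its schema.

```python
import xml.etree.ElementTree as ET

def detections_to_xml(detections, out_path):
    """Serialize (label, confidence, (x1, y1, x2, y2)) detections to XML."""
    root = ET.Element("pid")
    for label, score, (x1, y1, x2, y2) in detections:
        obj = ET.SubElement(root, "object", type=label, confidence=f"{score:.2f}")
        ET.SubElement(obj, "bndbox", xmin=str(x1), ymin=str(y1),
                      xmax=str(x2), ymax=str(y2))
    ET.ElementTree(root).write(out_path, encoding="utf-8", xml_declaration=True)

# Illustrative output from the symbol and text detectors:
detections = [("gate_valve", 0.98, (120, 340, 160, 380)),
              ("instrument_bubble", 0.95, (410, 210, 450, 250)),
              ("text", 0.91, (200, 50, 320, 70))]
detections_to_xml(detections, "pid_result.xml")
```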
Procedia PDF Downloads 153
603 Developing an Exhaustive and Objective Definition of Social Enterprise through Computer Aided Text Analysis
Authors: Deepika Verma, Runa Sarkar
Abstract:
One of the prominent debates in the social entrepreneurship literature has been whether entrepreneurial work for social well-being by for-profit organizations can be classified as social entrepreneurship. Of late, the scholarship has reached a consensus: there seems little sense in confining social entrepreneurship to non-profit organizations. Boosted by this research, many businesses engaged in filling the social infrastructure gaps in developing countries are increasingly calling themselves social enterprises. These organizations are diverse in their ownership, size, objectives, operations and business models. The lack of a comprehensive definition of social enterprise leads to three issues. Firstly, researchers may face difficulty in creating a database of social enterprises, because the choice of an entity as a social enterprise becomes subjective or based on pre-defined parameters set by the researcher, which is not replicable. Secondly, practitioners who use 'social enterprise' in their vision or mission statement(s) may find it difficult to adjust their business models accordingly, especially when they face the dilemma of choosing social well-being over business viability. Thirdly, social enterprise and social entrepreneurship attract a lot of donor funding and venture capital; in the paucity of a comprehensive definitional guide, donors or investors may find assigning grants and investments difficult. It therefore becomes necessary to develop an exhaustive and objective definition of social enterprise and to examine whether the understandings of academicians and practitioners about social enterprise match. This paper develops a dictionary of words often associated with social enterprise or (and) social entrepreneurship. It further compares two lexicographic definitions of social enterprise imputed from the abstracts of academic journal papers and trade publications extracted from the EBSCO database, using the 'tm' package in R software.
Keywords: EBSCO database, lexicographic definition, social enterprise, text mining
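The paper builds its dictionary with R's 'tm' package; the snippet below is a hedged Python sketch of the equivalent corpus-to-term-frequency step (the two example abstracts are illustrative, not from EBSCO).

```python
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "Social enterprises blend commercial revenue with social well-being goals.",
    "Non-profit organizations pursue earned-income strategies for social impact.",
]

vec = CountVectorizer(stop_words="english")
tf = vec.fit_transform(abstracts)  # document-term matrix

# Rank terms by corpus frequency to seed a 'social enterprise' dictionary.
freqs = sorted(zip(vec.get_feature_names_out(), tf.sum(axis=0).A1),
               key=lambda term_count: -term_count[1])
print(freqs[:10])
```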
Procedia PDF Downloads 397
602 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant
Authors: Michael Smalenberger
Abstract:
Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or when used as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate the parameters necessary for the implementation of the artificial intelligence component and for the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and the probability of mastery, none directly and reliably asks the user to self-assess these. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage, while transaction-level data are scant. Once a user's transaction-level data become available after sufficient ITS usage, these can replace the self-assessments in order to eliminate the identifiability problem in BKT. We discuss how these findings are relevant to the number of exercises necessary to lead to mastery of a knowledge component, the associated implications for learning curves, and the relevance to instruction time.
Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation
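For readers unfamiliar with BKT, a single update step can be sketched as below: a Bayesian posterior over mastery given the response (with guess and slip probabilities), followed by the learning transition. The parameter values are illustrative defaults, not estimates from this study; the prior P(L0) is exactly the quantity a self-assessment could supply while transaction data are scant.

```python
def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_transit=0.15):
    """One Bayesian Knowledge Tracing step; returns the updated P(mastery)."""
    if correct:
        num = p_mastery * (1 - p_slip)               # mastered and didn't slip
        den = num + (1 - p_mastery) * p_guess        # ... or guessed correctly
    else:
        num = p_mastery * p_slip                     # mastered but slipped
        den = num + (1 - p_mastery) * (1 - p_guess)  # ... or truly didn't know
    posterior = num / den
    return posterior + (1 - posterior) * p_transit   # learning transition P(T)

p = 0.3  # prior P(L0), e.g., taken from the student's self-assessment
for obs in (True, False, True, True):
    p = bkt_update(p, obs)
    print(f"P(mastery) = {p:.3f}")
```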
Procedia PDF Downloads 172
601 Aerodynamic Analysis by Computational Fluids Dynamics in Building: Case Study
Authors: Javier Navarro Garcia, Narciso Vazquez Carretero
Abstract:
Eurocode 1, part 1-4, wind actions, includes in its article 1.5 the possibility of using numerical calculation methods to obtain information on the loads acting on a building. On the other hand, analysis using computational fluid dynamics (CFD) is already in widespread use in aerospace, aeronautical, and industrial applications. The application of techniques based on CFD analysis to the building, to study its aerodynamic behavior, now opens a whole alternative field of possibilities for civil engineering and architecture: optimization of the results with respect to those obtained by applying the regulations, the possibility of obtaining information on pressures and speeds at any point of the model at any moment, the analysis of turbulence, and the possibility of modeling any geometry or configuration. The present work compares the results obtained on a building, with respect to its aerodynamic behavior, from a mathematical model based on CFD analysis with the results obtained by applying Eurocode 1, part 1-4, wind actions. It is verified that the results obtained by CFD techniques represent an optimization of the wind action acting on the building with respect to the wind action obtained by applying Eurocode 1, part 1-4. In order to carry out this verification, a 45 m high square-base truncated-pyramid building has been taken. The mathematical CFD model, based on finite volumes, has been calculated using the FLUENT commercial computer application with a scale-resolving simulation (SRS) type large eddy simulation (LES) turbulence model for an atmospheric boundary layer wind with a turbulent component in the direction of the flow.
Keywords: aerodynamics, CFD, computational fluid dynamics, computational mechanics
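For comparison with the CFD results, the regulatory wind action can be sketched from the EN 1991-1-4 formulas: mean wind speed from the roughness profile, turbulence intensity, and peak velocity pressure. The basic wind speed and terrain category below are assumptions for illustration, not the case-study inputs.

```python
import math

def eurocode_peak_velocity_pressure(z, vb=26.0, z0=0.05, zmin=2.0, rho=1.25):
    """Peak velocity pressure qp(z) in Pa per EN 1991-1-4, terrain category II.

    vm(z) = cr(z) * vb, with cr(z) = kr * ln(z/z0) and kr = 0.19*(z0/0.05)^0.07;
    Iv(z) = 1 / ln(z/z0);  qp(z) = (1 + 7*Iv(z)) * 0.5 * rho * vm(z)^2.
    (Orography factor c0 and turbulence factor kl taken as 1.)
    """
    z = max(z, zmin)
    kr = 0.19 * (z0 / 0.05) ** 0.07
    vm = kr * math.log(z / z0) * vb   # mean wind speed at height z
    iv = 1.0 / math.log(z / z0)       # turbulence intensity
    return (1.0 + 7.0 * iv) * 0.5 * rho * vm**2

# Peak pressure at the 45 m roof height of the case-study building:
print(f"qp(45 m) = {eurocode_peak_velocity_pressure(45.0):.0f} Pa")
```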
Procedia PDF Downloads 137