Search results for: Computer Experiments
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2898

2508 The Effect of Curcumin on Cryopreserved Bovine Semen

Authors: Eva Tvrdá, Marek Halenár, Hana Greifová, Alica Mackovich, Faridullah Hashim, Norbert Lukáč

Abstract:

Oxidative stress associated with semen cryopreservation may result in lipid peroxidation (LPO), DNA damage and apoptosis, leading to decreased sperm motility and fertilization ability. Curcumin (CUR), a natural phenol isolated from Curcuma longa Linn., has been proposed as a supplement for more effective semen cryopreservation because of its antioxidant properties. This study evaluated the effects of CUR on selected oxidative stress parameters in cryopreserved bovine semen. Twenty bovine ejaculates were split into two aliquots and diluted with a commercial semen extender containing CUR (50 μmol/L) or no supplement (control), cooled to 4 °C, frozen and kept in liquid nitrogen. Frozen straws were thawed in a water bath for subsequent experiments. Computer-assisted semen analysis was used to evaluate spermatozoa motility, and reactive oxygen species (ROS) generation was quantified using luminometry. Superoxide generation was evaluated with the NBT test, and LPO was assessed via the TBARS assay. CUR supplementation significantly (P<0.001) increased spermatozoa motility and provided significantly higher protection against the ROS (P<0.001) and superoxide (P<0.01) overgeneration caused by semen freezing and thawing. Furthermore, CUR administration resulted in significantly (P<0.01) lower LPO in the experimental semen samples. In conclusion, CUR exhibits significant ROS-scavenging activity, which may prevent oxidative insults to cryopreserved spermatozoa and thus enhance the post-thaw functional activity of male gametes.

Keywords: Bulls, cryopreservation, curcumin, lipid peroxidation, reactive oxygen species, spermatozoa.

2507 Influence of Deep Cold Rolling and Low Plasticity Burnishing on Surface Hardness and Surface Roughness of AISI 4140 Steel

Authors: P. R. Prabhu, S. M. Kulkarni, S. S. Sharma

Abstract:

Deep cold rolling (DCR) and low plasticity burnishing (LPB) are cold-working processes that readily produce a smooth, work-hardened surface by plastic deformation of surface irregularities. The present study focuses on the surface roughness and surface hardness of AISI 4140 work material, using a fractional factorial design of experiments. The surface integrity aspects of the work material were assessed in order to identify the predominant factors among the selected parameters, rank them in order of significance, and set the factor levels for minimizing surface roughness and/or maximizing surface hardness. The influence of the main process parameters (force, feed rate, number of tool passes/overruns, initial roughness of the workpiece, ball material, ball diameter and lubricant) on the surface roughness and hardness of AISI 4140 steel was studied for both the LPB and DCR processes, and the results are compared. It was observed that the LPB process improved surface hardness by 167%, while the DCR process improved it by 442%. It was also found that force, ball diameter, number of tool passes and initial roughness of the workpiece are the most pronounced parameters, with a significant effect on the workpiece surface during deep cold rolling and low plasticity burnishing.

Keywords: Deep cold rolling, burnishing, surface roughness, surface hardness, design of experiments, AISI4140 steel.

2506 The Application of Six Sigma to Integration of Computer Based Systems

Authors: Zenon Chaczko, Essam Rahali, Rizwan Tariq

Abstract:

This paper introduces a process for the module level integration of computer based systems. It is based on the Six Sigma Process Improvement Model, where the goal of the process is to improve the overall quality of the system under development. We also present a conceptual framework that shows how this process can be implemented as an integration solution. Finally, we provide a partial implementation of key components in the conceptual framework.

Keywords: Software Quality, Six Sigma, System Integration, 3SI Process, 3SI Conceptual Framework.

2505 Effect of Injection Moulding Process Parameter on Tensile Strength Using Taguchi Method

Authors: Gurjeet Singh, M. K. Pradhan, Ajay Verma

Abstract:

The plastics industry plays a very important role in the economy of any country and generally accounts for a leading share of it. Because metals and their alloys are scarce, producing plastic products and components, which find application in many industrial as well as household consumer products, is beneficial; about 50% of plastic products are manufactured by the injection moulding process. To produce better quality products, the quality characteristics and performance of the product must be controlled. The process parameters play a significant role in plastics production, so their control is essential. This paper describes the effect of parameter selection on the injection moulding process, in order to define suitable parameters for producing a plastic product. Selecting process parameters by trial and error is neither desirable nor acceptable, as it tends to increase cost and time. Hence, optimization of the processing parameters of the injection moulding process is essential. The experiments were designed with Taguchi's orthogonal array to achieve the result with the least number of experiments, with polypropylene as the study material. Tensile strength tests of the injection-moulded specimens were conducted on a universal testing machine. Using the Taguchi technique with MiniTab-14 software, the best values of injection pressure, melt temperature, packing pressure and packing time were obtained. We found that packing pressure contributes most to producing plastic products of good tensile strength.
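For a concrete sense of the analysis behind such a study, the sketch below runs a larger-the-better Taguchi signal-to-noise (S/N) analysis on a standard L9 orthogonal array. The four factor names follow the abstract; the level assignments and tensile-strength values are invented placeholders, not the paper's data.

```python
# Hedged sketch: larger-the-better Taguchi S/N analysis on a standard L9 array.
# Factor names follow the abstract; levels and strengths are illustrative only.
import numpy as np

L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])
factors = ["injection pressure", "melt temperature",
           "packing pressure", "packing time"]
# Hypothetical tensile strengths (MPa), one per experimental run.
y = np.array([32.1, 33.4, 34.0, 33.0, 34.5, 32.8, 33.9, 32.5, 34.2])
sn = -10 * np.log10(1.0 / y**2)  # larger-the-better S/N ratio

for j, name in enumerate(factors):
    means = np.array([sn[L9[:, j] == lvl].mean() for lvl in (1, 2, 3)])
    delta = means.max() - means.min()  # larger delta => stronger effect
    print(f"{name:18s} S/N by level: {np.round(means, 2)}  delta={delta:.2f}")
```

Ranking the factors by their S/N delta is what produces the kind of "packing pressure contributes most" conclusion reported above.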

Keywords: Injection moulding, tensile strength, Taguchi method, poly-propylene.

2504 Estimation of Relative Permeabilities and Capillary Pressures in Shale Using Simulation Method

Authors: F. C. Amadi, G. C. Enyi, G. Nasr

Abstract:

Relative permeabilities are practical factors used to correct the single-phase Darcy's law for application to multiphase flow. Relative permeability and capillary pressures are used for effective characterisation of large-scale multiphase flow in hydrocarbon recovery, and these parameters are acquired via special core flooding experiments. The special core analysis (SCAL) module of reservoir simulation is applied by engineers to evaluate these parameters, but core flooding experiments on shale core samples are expensive and time consuming before the various flow assumptions, for instance Darcy's law, are satisfied. This makes it imperative to apply core flooding simulations, in which analyses of the relative permeabilities and capillary pressures of multiphase flow can be carried out efficiently, effectively and at a reasonable pace. This paper presents a Sendra software simulation of core flooding to obtain relative permeabilities and capillary pressures using different correlations. The approach used in this study comprised three steps. First, basic petrophysical parameters of the Marcellus shale sample, such as porosity, were determined using laboratory techniques. Second, core flooding was simulated for a particular injection scenario using different correlations. Third, the best-fit correlations for the estimation of relative permeability and capillary pressure were obtained. This approach saves cost and time and is very reliable for computing relative permeability and capillary pressures at steady or unsteady state, and in drainage or imbibition processes, in the oil and gas industry when compared to other methods.
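As an illustration of what such correlations look like, the sketch below evaluates Brooks-Corey style relative permeability curves, one of the common correlation families fitted by SCAL simulators such as Sendra. All endpoint saturations and exponents are assumed values, not results from this study.

```python
# Hedged sketch: Brooks-Corey style water/oil relative permeability curves,
# one common correlation family fitted by SCAL simulators such as Sendra.
# Endpoints and exponents are assumed values, not results from this study.
import numpy as np

def brooks_corey(sw, swc=0.2, sor=0.25, krw_max=0.3, kro_max=0.8,
                 nw=2.5, no=2.0):
    """Relative permeabilities as functions of water saturation sw."""
    swn = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)  # normalized Sw
    return krw_max * swn**nw, kro_max * (1.0 - swn)**no

for s in np.linspace(0.2, 0.75, 6):
    krw, kro = brooks_corey(s)
    print(f"Sw={s:.2f}  krw={krw:.4f}  kro={kro:.4f}")
```

Fitting a simulator to coreflood data amounts to tuning such endpoints and exponents until the predicted production and pressure history match the experiment.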

Keywords: Special core analysis (SCAL), relative permeability, capillary pressures, drainage, imbibition.

2503 Beam Coding with Orthogonal Complementary Golay Codes for Signal to Noise Ratio Improvement in Ultrasound Mammography

Authors: Y. Kumru, K. Enhos, H. Köymen

Abstract:

In this paper, we report experimental results on using complementary Golay coded signals at 7.5 MHz to detect breast microcalcifications of 50 µm size. Simulations using complementary Golay coded signals show perfect consistency with the experimental results, confirming the improved signal-to-noise ratio for complementary Golay coded signals. To improve the success of detecting the microcalcifications, orthogonal complementary Golay sequences, whose low cross-correlation minimizes interference, are used as coded signals and compared to tone burst pulses of equal energy in terms of resolution under weak signal conditions. The measurements were conducted using an experimental ultrasound research scanner, the Digital Phased Array System (DiPhAS), having 256 channels and a phased array transducer with 7.5 MHz center frequency, and the experimental results were validated with the Field-II simulation software. In addition, to investigate the superiority of coded signals in terms of resolution, a multipurpose tissue-equivalent phantom containing a series of monofilament nylon targets, 240 µm in diameter, and cyst-like objects with attenuation of 0.5 dB/[MHz x cm] was used in the experiments. We obtained ultrasound images of the monofilament nylon targets for the evaluation of resolution. Simulation and experimental results show that it is possible to differentiate closely positioned small targets with increased success by using coded excitation under very weak signal conditions.
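The defining property of a complementary Golay pair is that the range sidelobes of the two autocorrelations cancel exactly, which is what buys the SNR improvement in coded excitation. This is easy to check numerically; the sketch below builds a 16-chip pair with the standard doubling recursion, as a self-contained illustration rather than the transmit code actually used in the study.

```python
# Hedged sketch: build a complementary Golay pair by the standard doubling
# recursion and verify that the sum of the two aperiodic autocorrelations is
# an ideal delta, the property exploited for SNR improvement.
import numpy as np

def golay_pair(n_doublings):
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_doublings):          # code length doubles each step
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(4)                      # 16-chip codes
acf = lambda s: np.correlate(s, s, mode="full")
total = acf(a) + acf(b)
print(total.astype(int))                  # 2N at zero lag, 0 elsewhere (N=16)
```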

Keywords: Coded excitation, complementary Golay codes, DiPhAS, medical ultrasound.

2502 A Survey of 2nd Year Students’ Frequent English Writing Errors and the Effects of Participatory Error Correction Process

Authors: Chaiwat Tantarangsee

Abstract:

The purposes of this study are 1) to study the effects of a participatory error correction process and 2) to find out the students' satisfaction with such a process. This is quasi-experimental research with a single group, in which data were collected 5 times, preceding and following 4 experimental rounds of the participatory error correction process, which included providing coded indirect corrective feedback in the students' texts together with error treatment activities. The sample comprised 52 second-year English major students of the Faculty of Humanities and Social Sciences, Suan Sunandha Rajabhat University. The tool for the experimental study was the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tools for data collection were 5 writing tests of short texts and a questionnaire. Based on formative evaluation of the students' writing ability prior to and after each of the 4 experiments, the findings disclose higher student scores with statistical significance at the 0.00 level. Moreover, in terms of effect size, the means of the students' scores prior to and after the 4 experiments give d equal to 0.6801, 0.5093, 0.5071 and 0.5296, respectively. It can be concluded that the participatory error correction process enables all of the students to learn equally well and improves their ability to write short texts. Finally, the students' overall satisfaction with the participatory error correction process is at a high level (Mean = 4.39, S.D. = 0.76).
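For reference, effect sizes of this kind are conventionally Cohen's d, computed from the pre- and post-test means and a pooled standard deviation; the exact variant used in the study is not stated, so the standard form is shown:

```latex
d = \frac{\bar{X}_{\text{post}} - \bar{X}_{\text{pre}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{s_{\text{pre}}^{2} + s_{\text{post}}^{2}}{2}}
```

By Cohen's conventions, the reported values of roughly 0.51 to 0.68 correspond to medium effects.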

Keywords: Coded indirect corrective feedback, participatory error correction process, error treatment.

2501 Investigation of Rehabilitation Effects on Fire Damaged High Strength Concrete Beams

Authors: Eun Mi Ryu, Ah Young An, Ji Yeon Kang, Yeong Soo Shin, Hee Sun Kim

Abstract:

When high strength reinforced concrete is exposed to the high temperatures of a fire, deteriorations occur such as loss of strength and elastic modulus, cracking, and spalling of the concrete. It is therefore important to understand the risk to structural safety in building structures by studying the structural behaviors and rehabilitation of fire-damaged high strength concrete structures. This paper investigates the rehabilitation effects on fire-damaged high strength concrete beams using experimental and analytical methods. In the experiments, flexural specimens with high strength concrete were exposed to high temperatures according to the ISO 834 standard time-temperature curve. Four-point loading tests show that the maximum loads of the rehabilitated beams are similar to or higher than those of the non-fire-damaged RC beam. In addition, structural analyses were performed using ABAQUS 6.10-3 under the same conditions as the experiments to provide accurate predictions of the structural and mechanical behaviors of the rehabilitated RC beams; the parameters were the fire cover thickness and the strength of the repair mortar. The analytical results show good rehabilitation effects when the predictions from the rehabilitated models are compared to the structural behaviors of the non-damaged RC beams. In this study, the fire-damaged high strength concrete beams were rehabilitated using polymeric cement mortar. The predictions from the finite element (FE) models show good agreement with the experimental results, and the modeling approaches can be used to investigate the applicability of various rehabilitation methods in further studies.

Keywords: Fire, High strength concrete, Rehabilitation, Reinforced concrete beam.

2500 Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores

Authors: Ankit Sinha, Soham Banerjee, Pratik Chattopadhyay

Abstract:

Automated product recognition in retail stores is an important real-world application in the domain of computer vision and pattern recognition. In this paper, we consider the problem of automatically identifying the classes of products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon existing approaches in terms of effectiveness and memory requirements by developing a two-stage object detection and recognition pipeline comprising a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned using appropriate data sets for better prediction, and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. This encoder is trained using a triplet loss function following the strategy of online hard-negative mining for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product object placed in a rack image. Extensive experiments using the Grozi-32k and GP-180 data sets verify the effectiveness of the proposed model.
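A minimal sketch of the batch-hard flavour of online hard-negative mining with a triplet margin loss is shown below. The embeddings are random placeholders standing in for ResNet-18 encoder outputs, and the margin is an assumed value; the paper's exact mining variant may differ.

```python
# Hedged sketch: batch-hard online hard-negative mining with a triplet margin
# loss. Embeddings are random placeholders for ResNet-18 encoder outputs, and
# the margin is an assumed value; the paper's exact variant may differ.
import numpy as np

def batch_hard_triplet_loss(emb, labels, margin=0.2):
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)  # pairwise
    loss, count = 0.0, 0
    for i in range(len(emb)):
        same = (labels == labels[i]) & (np.arange(len(emb)) != i)
        diff = labels != labels[i]
        if not same.any() or not diff.any():
            continue
        hardest_pos = d[i][same].max()    # farthest same-class sample
        hardest_neg = d[i][diff].min()    # closest other-class sample
        loss += max(0.0, hardest_pos - hardest_neg + margin)
        count += 1
    return loss / max(count, 1)

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 128))           # batch of 8 embeddings, 128-d
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
print(batch_hard_triplet_loss(emb, labels))
```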

Keywords: Retail stores, Faster-RCNN, object localization, ResNet-18, triplet loss, data augmentation, product recognition.

2499 The Influence of the Intellectual Capital on the Firms’ Market Value: A Study of Listed Firms in the Tehran Stock Exchange (TSE)

Authors: Bita Mashayekhi, Seyed Meisam Tabatabaie Nasab

Abstract:

Intellectual capital is one of the most valuable and important parts of the intangible assets of enterprises, especially in knowledge-based enterprises. With respect to the increasing gap between the market value and the book value of companies, intellectual capital is one of the components that can be placed in this gap. This paper uses the value added efficiency of three components, capital employed, human capital and structural capital, to measure the intellectual capital efficiency of Iranian industry groups listed on the Tehran Stock Exchange (TSE), using an 8-year data set from 2005 to 2012. In order to analyze the effect of intellectual capital on the market-to-book value ratio of the companies, the data set was divided into 10 industries, Banking, Pharmaceutical, Metals & Mineral Nonmetallic, Food, Computer, Building, Investments, Chemical, Cement and Automotive, and the panel data method with pooled OLS estimation was applied. The results exhibit that the value added of capital employed has a significant positive relation with increasing market value in the Banking, Metals & Mineral Nonmetallic, Food, Computer, Chemical and Cement industries, and that the value added efficiency of structural capital has a significant positive relation with increasing market value in the Banking, Pharmaceutical and Computer industry groups. The value added results show a negative relation for the Banking and Pharmaceutical industry groups and a positive relation for the Computer and Automotive industry groups. Among the studied industries, the Computer industry shows the widest gap between market value and book value attributable to its intellectual capital.
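The three efficiency terms named here follow Pulic's VAIC decomposition. A toy computation with illustrative figures (not the study's data) would look like:

```python
# Hedged sketch: value-added efficiency terms in the spirit of Pulic's VAIC
# model, matching the three components named in the abstract. Figures are
# illustrative, not the study's data.
def value_added_efficiencies(value_added, capital_employed, human_capital):
    vaca = value_added / capital_employed                 # capital employed
    vahu = value_added / human_capital                    # human capital
    stva = (value_added - human_capital) / value_added    # structural capital
    return vaca, vahu, stva

vaca, vahu, stva = value_added_efficiencies(120.0, 400.0, 60.0)
print(f"VACA={vaca:.2f}  VAHU={vahu:.2f}  STVA={stva:.2f}  "
      f"VAIC={vaca + vahu + stva:.2f}")
```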

Keywords: Capital Employed, Human Capital, Intellectual Capital, Market-to-Book Value, Structural Capital, Value Added Efficiency.

2498 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
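A skeletal version of the select-then-label loop is sketched below; plain predictive entropy stands in for the paper's U-Net-style unseen-data score, and all numbers are mock placeholders.

```python
# Hedged sketch: the iterative select-then-label loop. Plain predictive
# entropy stands in for the paper's U-Net-style unseen-data score; all
# numbers are mock placeholders.
import numpy as np

rng = np.random.default_rng(7)
probs = rng.dirichlet(np.ones(5), size=100)   # mock per-image class probs
unlabeled, labeled = set(range(100)), set()

for _ in range(3):                            # three labeling rounds
    cand = np.array(sorted(unlabeled))
    entropy = -(probs[cand] * np.log(probs[cand])).sum(axis=1)
    pick = cand[np.argsort(-entropy)[:10]]    # 10 most "novel" images
    labeled |= set(pick.tolist())
    unlabeled -= set(pick.tolist())
    # ...a human labels `pick`, the detector retrains, `probs` is refreshed...

print(f"labeled {len(labeled)} of 100 images over 3 rounds")
```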

Keywords: Computer vision, deep learning, object detection, semiconductor.

2497 In-situ Chemical Oxidation of Residual TCE by Permanganate in Epikarst

Authors: Nihat Hakan Akyol, Irfan Yolcubal

Abstract:

In-situ chemical oxidation (ISCO) has been widely used for source zone remediation of Dense Nonaqueous Phase Liquids (DNAPLs) in subsurface environments. DNAPL source zones in karst aquifers are generally located in the epikarst, where the DNAPL mass is trapped either in karst soil or at the regolith contact with carbonate bedrock. This study investigates the performance of potassium permanganate oxidation of residual trichloroethylene (TCE) found in such environments. Batch and flow cell experiments were conducted to determine the kinetics and the mass removal rate of TCE. pH change, chloride production, and TCE and permanganate destruction were monitored routinely during the experiments. Nonreactive tracer tests were also conducted before and after the oxidation process to determine the influence of oxidation on flow conditions. The results show that the oxidant consumption rate of the calcareous epikarst soil was significant, with an oxidant demand of 20 g KMnO4/kg soil. The oxidation rate of residual TCE (1.26×10⁻³ s⁻¹) was faster than the oxidant consumption rate of the soil (2.54-2.92×10⁻⁴ s⁻¹) only at high oxidant concentrations (> 40 mM KMnO4). The half-life of TCE oxidation ranged from 7.9 to 10.7 min. Although a highly significant fraction of the residual TCE mass in the system was destroyed by permanganate oxidation, the TCE concentration in the effluent remained above its MCL. Flow interruption tests indicate that the efficiency of ISCO was limited by the rate of TCE dissolution and the rate-limited desorption of TCE. The residence time and the initial concentration of the oxidant in the source zone also controlled the efficiency of ISCO in the epikarst.
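As a consistency check, assuming first-order kinetics (implied by the s⁻¹ units of the rate constant), the reported rate constant gives a half-life inside the reported 7.9-10.7 min range:

```latex
t_{1/2} = \frac{\ln 2}{k}
        = \frac{0.693}{1.26 \times 10^{-3}\,\mathrm{s^{-1}}}
        \approx 550\,\mathrm{s} \approx 9.2\,\mathrm{min}
```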

Keywords: Epikarst, in-situ chemical oxidation, permanganate.

2496 High Wire Act: the Perils, Pitfalls and Possibilities of Online Discussions

Authors: Karen Armstrong

Abstract:

Online discussions are an important component of both blended and online courses. This paper examines the varieties of online discussions and the perils, pitfalls and possibilities of this rather new technological tool for enhanced learning. The discussion begins with possible perils and pitfalls inherent in this educational tool and moves to a consideration of the advantages of the varieties of online discussions feasible for use in teacher education programs.

Keywords: online discussions, computer-mediated communication (CMC), computer-supported collaborative learning (CSCL), e-learning, teacher education

2495 Surfactant Stabilized Nanoemulsion: Characterization and Application in Enhanced Oil Recovery

Authors: Ajay Mandal, Achinta Bera

Abstract:

Nanoemulsions are a class of emulsions with a droplet size in the range of 50-500 nm and have attracted a great deal of attention in recent years because of their unique characteristics. The physicochemical properties of nanoemulsions suggest that they can be successfully used to recover the residual oil trapped in the fine pores of reservoir rock by capillary forces after primary and secondary recovery. Oil-in-water nanoemulsions, which can be formed by high-energy emulsification techniques using specific surfactants, can reduce oil-water interfacial tension (IFT) by 3-4 orders of magnitude. The present work characterizes oil-in-water nanoemulsions in terms of their phase behavior, morphology, interfacial energy, and ability to reduce interfacial tension, and examines the mechanisms of mobilization and displacement of entrapped oil blobs by lowering interfacial tension at both the macroscopic and microscopic levels. In order to investigate the efficiency of oil-in-water nanoemulsions in enhanced oil recovery (EOR), experiments were performed to characterize the emulsions in terms of their physicochemical properties and the size distribution of the dispersed oil droplets in the water phase. Synthetic mineral oil and a series of surfactants were used to prepare the oil-in-water emulsions. Characterization shows that the emulsions follow pseudo-plastic behaviour and that the drop size of the dispersed oil phase follows a lognormal distribution. Flooding experiments were also carried out in a sandpack system to evaluate the effectiveness of the nanoemulsion as a displacing fluid for enhanced oil recovery. Substantial additional recoveries (more than 25% of the original oil in place) over conventional water flooding were obtained in the present investigation.

Keywords: Nanoemulsion, Characterization, Enhanced Oil Recovery, Particle Size Distribution

2494 Verification of Space System Dynamics Using the MATLAB Identification Toolbox in Space Qualification Test

Authors: Y. V. Kim

Abstract:

This article presents an approach to the functional testing of space systems (SS), where an SS may be a space vehicle (spacecraft, S/C) and/or its equipment and components, the S/C subsystems. This test should finalize the Space Qualification Tests (SQT) campaign. It can be considered a generic test, applicable to the wide class of SS that, from the point of view of system dynamics and control theory, can be described by ordinary differential equations. The suggested methodology is based on a semi-natural experiment laboratory stand that does not require complicated, precise and expensive technological control-verification equipment, yet allows testing of the totally assembled system, involving system hardware (HW) and software (SW), during Assembling, Integration and Testing (AIT) activities at the final phase of SQT. The test physically activates the system inputs (sensors) and outputs (actuators) and requires recording their outputs in real time. The data are then loaded into a laboratory computer, where they are post-processed by the MATLAB/Simulink Identification Toolbox. This allows the system dynamics to be estimated, in the form of estimated differential equation coefficients, through the verification experiment, and compared with the expected mathematical model that was previously verified by mathematical simulation during the design process. Mathematical simulation results presented in the article show that this approach is applicable and helpful in SQT practice. Further semi-natural experiments should specify detailed requirements for the test laboratory equipment and test procedures.
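In its simplest discrete-time form, the identification step reduces to least-squares estimation of difference equation coefficients from recorded input/output data. The sketch below illustrates the idea in plain Python standing in for the MATLAB toolbox; the second-order system and noise level are invented.

```python
# Hedged sketch: least-squares recovery of difference equation coefficients
# from recorded input/output data, standing in for the MATLAB Identification
# Toolbox step. The second-order system and noise level are invented.
import numpy as np

rng = np.random.default_rng(6)
a1, a2, b1 = 1.5, -0.7, 0.5            # "true" discrete-time dynamics
u = rng.normal(size=500)               # recorded actuator input
y = np.zeros(500)
for k in range(2, 500):                # y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]
    y[k] = a1 * y[k-1] + a2 * y[k-2] + b1 * u[k-1] + rng.normal(scale=0.01)

Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])   # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print(np.round(theta, 3))              # ~ [ 1.5  -0.7   0.5]
```

Comparing the recovered coefficients against the design model is exactly the verification step the SQT approach relies on.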

Keywords: system dynamics, space system ground tests, space qualification, system dynamics identification, satellite attitude control, assembling integration and testing

2493 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform

Authors: Omaima N. Ahmad AL-Allaf

Abstract:

Over communication networks, images can easily be copied and distributed illegally, so copyright protection for authors and owners is necessary, and digital watermarking techniques play an important role as a valid solution to these authorship problems. Digital image watermarking techniques hide watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks and maintain data quality. We therefore discuss in this paper two approaches for image watermarking: the first based on Particle Swarm Optimization (PSO) and the second based on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used with both approaches separately in the embedding process for the cover image transformation. Both PSO and GA use the correlation coefficient to detect the high-energy coefficients in the original image for hosting the watermark bits, and then hide the watermark in the original image. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. In the experiments, the PSO approach obtained better results, with PSNR equal to 53 and MSE equal to 0.0039, whereas the GA approach obtained PSNR equal to 50.5 and MSE equal to 0.0048, when using a population size of 100, 150 iterations and 3×3 blocks. According to the results, a small block size can affect the quality of PSO/GA-based image watermarking because a small block size increases the search area of the watermarking image. Better PSO results were obtained when using a swarm size of 100.
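The shared transform step can be sketched as follows: embed watermark bits into high-energy DWT detail coefficients and measure the resulting distortion. This uses the PyWavelets package; the fixed gain alpha stands in for the embedding strength that PSO or GA would tune, and the image and bits are random placeholders.

```python
# Hedged sketch: embedding watermark bits into high-energy DWT detail
# coefficients using PyWavelets. The gain `alpha` stands in for the embedding
# strength PSO/GA would tune; image and bits are random placeholders.
import numpy as np
import pywt

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, size=(64, 64))      # stand-in cover image
bits = rng.integers(0, 2, size=16) * 2 - 1    # watermark bits in {-1, +1}

cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
flat = cD.flatten()
idx = np.argsort(-np.abs(flat))[:bits.size]   # highest-energy coefficients
alpha = 2.0                                   # assumed embedding strength
flat[idx] += alpha * bits
marked = pywt.idwt2((cA, (cH, cV, flat.reshape(cD.shape))), "haar")

mse = np.mean((img - marked) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)        # peak value 255 assumed
print(f"MSE={mse:.4f}  PSNR={psnr:.1f} dB")
```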

Keywords: Image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform.

2492 Corporate Information System Educational Center

Authors: Alquliyev R.M., Kazimov T.H., Mahmudova Sh.C., Mahmudova R.Sh.

Abstract:

This work describes the Educational Center created and successfully maintained at the Institute of Information Technologies of the National Academy of Sciences (NAS) of Azerbaijan. On the basis of a decision of the board of the Supreme Certifying Commission under the President of the Azerbaijan Republic and the Presidium of the National Academy of Sciences of the Azerbaijan Republic, the Institute of Information Technologies was entrusted with organizing training courses in computer science for all post-graduate students and dissertators of the republic and with administering the candidate minimum examinations. Accordingly, the Educational Center teaches computer science to post-graduate students and dissertators, provides a scientific-methodological manual on the effective application of new information technologies in their research work, and conducts the candidate minimum examinations. Information and communication technologies offer new opportunities and prospects for teaching and training. The new level of literacy demands the creation of an essentially new technology for obtaining scientific knowledge. Methods of training and development, social and professional requirements, and the globalization of the communicative, economic and political projects connected with the construction of a new society all depend on the level of application of information and communication technologies in the educational process. Computer technologies develop the ideas of programmed training and open completely new, as yet uninvestigated, technological ways of training connected to the unique opportunities of modern computers and telecommunications. Computer technologies of training are processes of preparation and transfer of information to the trainee by means of a computer. Scientific and technical progress, as well as the global spread of the technologies created in the most developed countries of the world, is the main proof of the leading role of education in the 21st century. The information society needs individuals with modern knowledge. In practice, all technologies using special technical information means (computer, audio, video) are called information technologies of education.

Keywords: Educational Center, post-graduate, database.

2491 Design of a Computer Vision Based Exercise Video Game for Senior Citizens

Authors: June Tay, Ivy Chia

Abstract:

There are numerous changes, both mental and physical, that take place as people age. We need to understand the different aspects required for healthy living, including meeting nutritional needs, regular physical activity to maintain agility, sufficient rest and sleep for physical and mental well-being, social engagement to avoid the risk of social isolation and depression, and access to healthcare to detect and manage chronic conditions. Promoting physical activity in an ageing population is necessary, as many may have led sedentary lifestyles for some time. In our study, we evaluate the considerations involved in designing a computer vision exercise video game for the elderly. The game needs low-impact activities, such as stretching and gentle movements, because some elderly individuals may have joint pain or mobility issues. The exercise game should consist of simple movements that are easy to follow and remember, and it should be fun and enjoyable so that players are motivated to exercise. Social engagement can keep the elderly motivated and competitive, making them more willing to engage in game exercises; for example, they can compare their game scores and try to improve them. We propose a computer vision-based video game for the elderly that captures and tracks the movement of the player's hand pushing a ball on the screen into a circle. It can be easily set up using a PC laptop with a webcam. The game adheres to the design framework we employed, encompassing ease of use, a simple graphical interface, easy-to-play exercises, and fun gameplay.

Keywords: Computer vision, video games, gerontology technology, caregiving.

2490 AI Tutor: A Computer Science Domain Knowledge Graph-Based QA System on JADE platform

Authors: Yingqi Cui, Changran Huang, Raymond Lee

Abstract:

In this paper, we propose an AI Tutor that uses ontology and natural language processing techniques to generate a computer science domain knowledge graph and answer users' questions based on it. We define eight types of relations for extracting relationships between entities from computer science domain text. The AI Tutor is separated into two agents, a learning agent and a Question-Answer (QA) agent, and is developed on the JADE multi-agent platform. The learning agent is responsible for reading text, extracting information, and generating the corresponding knowledge graph using the defined patterns. The QA agent understands users' questions and answers them based on the knowledge graph generated by the learning agent.
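A drastically simplified stand-in for the learning agent / QA agent split is sketched below: one regex pattern extracts an is_a relation (each of the eight relation types would be handled by such a pattern) into a small graph, and a lookup answers a question. Sentences, pattern and entities are invented for illustration.

```python
# Hedged sketch: a pattern-based extractor feeding a tiny knowledge graph,
# then a lookup-style answer; a simplified stand-in for the learning/QA agent
# split. Sentences, pattern and entities are invented for illustration.
import re
import networkx as nx

text = ("A stack is a data structure. A queue is a data structure. "
        "A compiler translates source code.")

G = nx.DiGraph()
for subj, obj in re.findall(r"A (\w+) is a ([\w ]+?)\.", text):
    G.add_edge(subj, obj, relation="is_a")    # one of the eight relation types

def answer(question):
    m = re.match(r"What is a (\w+)\?", question)
    if m and m.group(1) in G:
        objs = [v for _, v, d in G.out_edges(m.group(1), data=True)
                if d["relation"] == "is_a"]
        if objs:
            return f"A {m.group(1)} is a {objs[0]}."
    return "I don't know."

print(answer("What is a stack?"))   # -> A stack is a data structure.
```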

Keywords: Artificial intelligence, natural language process, knowledge graph, agent, QA system.

2489 Public Economic Efficiency and Case-Based Reasoning: A Theoretical Framework to Police Performance

Authors: Javier Parra-Domínguez, Juan Manuel Corchado

Abstract:

At present, public efficiency is a concept that seeks to maximize the return on public investment by minimizing the use of resources and maximizing the outputs. The concept takes into account statistical criteria drawn up with techniques such as DEA (Data Envelopment Analysis). The purpose of the current work is to consider, more precisely, the theoretical application of CBR (Case-Based Reasoning), from economics and computer science, as a preliminary step to improving the efficiency of law enforcement agencies (the public sector). With the aim of increasing the efficiency of the public sector, we have entered a phase whose main objective is the implementation of new technologies. Our main conclusion is that the application of computer techniques such as CBR has become key to the efficiency of the public sector, which continues to require economic valuation based on methodologies such as DEA. As a theoretical result and conclusion, the incorporation of CBR systems will reduce the number of inputs and, theoretically, increase the number of outputs generated on the basis of previous computational knowledge.
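The retrieve step of a CBR cycle can be sketched as weighted nearest-neighbour search over past cases. Feature names, weights and values below are illustrative assumptions, not data from any police force.

```python
# Hedged sketch: the "retrieve" step of a CBR cycle as weighted
# nearest-neighbour search. Features, weights and values are illustrative
# assumptions, not real police data.
import numpy as np

# Past cases: [officers_deployed, incidents_handled, mean_response_min]
cases = np.array([[40.0, 120.0, 8.0],
                  [25.0,  60.0, 6.5],
                  [60.0, 200.0, 9.5]])
outcomes = ["efficient", "efficient", "inefficient"]
weights = np.array([0.3, 0.4, 0.3])       # assumed feature importances

def retrieve(query):
    mu, sd = cases.mean(axis=0), cases.std(axis=0)
    z, q = (cases - mu) / sd, (query - mu) / sd   # normalize features
    dist = np.sqrt((weights * (z - q) ** 2).sum(axis=1))
    return int(np.argmin(dist))           # index of the most similar case

i = retrieve(np.array([45.0, 130.0, 8.2]))
print(f"most similar case: {i} -> reuse outcome: {outcomes[i]}")
```

The retrieved case's outcome is then reused or adapted, which is the mechanism by which CBR can reduce inputs while reusing previously validated solutions.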

Keywords: Case-based reasoning, knowledge, police, public efficiency.

2488 Plant Layout Analysis by Computer Simulation for Electronic Manufacturing Service Plant

Authors: Visuwan D., Phruksaphanrat B.

Abstract:

In this research, computer simulation is used for Electronic Manufacturing Service (EMS) plant layout analysis. The current layout of this manufacturing plant is a process layout, which is not suitable due to the nature of an EMS plant: a high-volume, high-variety environment in which quick response and high flexibility are also needed. A cellular manufacturing layout was therefore designed for the selected group of products. Systematic layout planning (SLP) was used to analyze and design the possible cellular layouts for the factory, and the cellular layout was selected based on the main criteria of the plant. Computer simulation was used to analyze and compare the performance of the proposed cellular layout and the current layout. It was found that the proposed cellular layout can deliver better performance than the current layout.

Keywords: Layout, Electronic Manufacturing Service Plant (EMS), Computer Simulation, Cellular Manufacturing System (CMS).

2487 A New Center of Motion in Cabling Robots

Authors: A. Abbasi Moshaii, F. Najafi

Abstract:

In this paper a new model for creating a center of motion is proposed. The new method uses cables, which makes it very useful in robots because it is light and easy to assemble. The method is particularly valuable in robots that need to remain in contact with an object, as described in the following. The accuracy of the idea is proved by two experiments. This system can be used in robots that need a fixed contact point with an object while making a circular motion.

Keywords: Center of Motion, Robotic cables, permanent touching.

2486 Improving Quality of Business Networks for Information Systems

Authors: Hazem M. El-Bakry, Ahmed Atwan

Abstract:

Computer networks are an essential part of computer-based information systems, and their performance has a great influence on the whole information system. Measuring usability criteria and customer satisfaction on a small computer network is very important. In this article, an effective approach for measuring the usability of a business network in an information system is introduced. The usability process for networking provides a flexible and cost-effective way to assess the usability of a network and its products. In addition, the proposed approach can be used to certify network product usability late in the development cycle, to help develop usable interfaces very early in the cycle, and to provide a way to measure, track, and improve usability. Moreover, a new approach for fast information processing over computer networks is presented: the entire data set is collected into one long vector and then tested as a single input pattern. The proposed fast time delay neural networks (FTDNNs) use cross correlation in the frequency domain between the tested data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented time delay neural networks is less than that needed by conventional time delay neural networks (CTDNNs). Simulation results using MATLAB confirm the theoretical computations.
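The core speed-up claim, that cross correlation can be computed with one multiplication per frequency bin instead of sliding dot products in time, can be verified numerically:

```python
# Hedged sketch: circular cross correlation via the frequency domain, the
# mechanism behind the claimed FTDNN speed-up, checked against a direct
# time-domain computation. Signals are random stand-ins.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=256)   # collected data, gathered into one long vector
w = rng.normal(size=256)   # input weights of the network, same length

# One multiplication per frequency bin instead of O(N^2) sliding dot products.
freq = np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(w))))

# Direct circular cross correlation for comparison.
direct = np.array([np.dot(np.roll(x, -k), w) for k in range(len(x))])

print(np.allclose(freq, direct))   # True
```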

Keywords: Usability Criteria, Computer Networks, Fast Information Processing, Cross Correlation, Frequency Domain.

2485 Design of Wireless Readout System for Resonant Gas Sensors

Authors: S. Mohamed Rabeek, Mi Kyoung Park, M. Annamalai Arasu

Abstract:

This paper presents the design of a wireless readout system for tracking the frequency shift of a polymer-coated piezoelectric microelectromechanical resonator due to gas absorption. The measured frequency shift indicates the percentage of a particular gas the sensor is exposed to. It is measured using an oscillator and an FPGA-based frequency counter, with the resonator employed as the frequency-determining element of the oscillator. The system consists of a Gas Sensing Wireless Readout (GSWR) and a USB Wireless Transceiver (UWT). The GSWR consists of an oscillator based on a trans-impedance sustaining amplifier, an FPGA-based frequency readout, a sub-1 GHz wireless transceiver, and a microcontroller. The UWT can be plugged into a computer via the USB port and functions as a wireless module that transfers gas sensor data from the GSWR to the computer. A GUI program running on the computer periodically polls for sensor data through the UWT-GSWR wireless link; the response from the GSWR is logged in a file for post-processing as well as displayed on screen.

Keywords: Gas sensor, GSWR, micro-mechanical system, UWT, volatile emissions.

2484 Phytoremediation of Wastewater Using Some of Aquatic Macrophytes as Biological Purifiers for Irrigation Purposes

Authors: Dilshad G.A. Ganjo, Ahmed I. Khwakaram

Abstract:

An attempt was made to assess wastewater reuse/reclamation for irrigation purposes using phytoremediation, a low-cost, low-technology approach, with six local aquatic macrophytes (T. angustifolia, B. maritimus, Ph. australis, A. donax, A. plantago-aquatica and M. longifolia (Linn)) as biological waste purifiers. Outdoor experiments/designs were conducted from May 3, 2007 until October 15, 2008, close to one of the main sewage channels of Sulaimani City, Iraq. All processes were mainly based on conventional wastewater treatment processes, and two further modifications were tested: the first was sand filtration pots planted with individual species of the experimental macrophytes, and the second was constructed wetlands planted with all the experimental macrophytes together. Untreated and treated wastewater samples were analyzed for their key physico-chemical properties (only the heavy metals Fe, Mn, Zn and Cu, with particular reference to their removal efficiency by the experimental macrophytes, are highlighted in this paper). Vertical contents of heavy metals were also evaluated from both the pots and the cells of the constructed wetland. After 135 days, the macrophytes were harvested and heavy metals were analyzed in their biomass (roots/shoots) for removal efficiency assessment (i.e. uptake/bioaccumulation rate). Results showed that the removal efficiency of all studied heavy metals was much higher in T. angustifolia, followed by Ph. australis, B. maritimus and A. donax, in the triple-experiment sand pots. The constructed wetland experiments revealed that the more replicated the constructed wetland cells, the higher the heavy metal removal efficiency.
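Removal efficiency in studies of this kind is conventionally defined from influent and effluent concentrations; the paper does not spell out its formula, so the standard definition is shown:

```latex
RE\,(\%) = \frac{C_{\text{in}} - C_{\text{out}}}{C_{\text{in}}} \times 100
```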

Keywords: Aquatic Macrophytes, Heavy Metals (Fe, Mn, Zn and Cu), Phytoremediation and Removal Efficiency.

2483 Early Depression Detection for Young Adults with a Psychiatric and AI Interdisciplinary Multimodal Framework

Authors: Raymond Xu, Ashley Hua, Andrew Wang, Yuru Lin

Abstract:

During COVID-19, the depression rate increased dramatically, and young adults are the most vulnerable to the mental health effects of the pandemic. Lower-income families are diagnosed with depression at a higher rate than the general population but have less access to clinics. This research aims to achieve early depression detection at low cost, large scale, and high accuracy through an interdisciplinary approach that incorporates clinical practices defined by the American Psychiatric Association (APA) as well as a multimodal AI framework. The proposed approach detects the nine depression symptoms with natural language processing sentiment analysis and a symptom-based lexicon uniquely designed for young adults. The experiments were conducted on multimedia survey results from adolescents and young adults and on unbiased Twitter communications. The result was further aggregated with the facial emotional cues analyzed by a Convolutional Neural Network on the multimedia survey videos. Five experiments, each conducted on 10k data entries, reached consistent results with an average accuracy of 88.31%, higher than existing natural language analysis models. This approach can reach the 300+ million daily active Twitter users and is highly accessible to low-income populations, promoting early depression detection to raise awareness among adolescents and young adults and revealing complementary cues to assist clinical depression diagnosis.

Keywords: Artificial intelligence, depression detection, facial emotion recognition, natural language processing, mental disorder.

2482 A Case Study on Appearance Based Feature Extraction Techniques and Their Susceptibility to Image Degradations for the Task of Face Recognition

Authors: Vitomir Struc, Nikola Pavesic

Abstract:

Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private and the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition; nevertheless, a group of methods commonly referred to as appearance based have emerged as the dominant solution. Many comparative studies concerned with the performance of appearance based methods have already been presented in the literature, often with inconclusive and even contradictory results. No consensus has been reached within the scientific community regarding the relative ranking of the efficiency of appearance based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assesses the performance of the three dominant appearance based methods, principal component analysis, linear discriminant analysis and independent component analysis, and compares them on equal footing (i.e., with the same preprocessing procedure, with optimized parameters for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are also employed in the experiments to evaluate the susceptibility of the appearance based methods to various image degradations which can occur in "real-life" operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.
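The appearance-based pipeline itself is compact: project the image vectors with PCA or LDA, then match in the subspace with a nearest-neighbour rule. The sketch below runs that pipeline on synthetic class-structured vectors; the paper's experiments use XM2VTS face images, optimized parameters and verification-rate protocols instead.

```python
# Hedged sketch: the appearance-based pipeline (project, then nearest
# neighbour) on synthetic class-structured vectors, not real face images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
n_ids, per_id, dim = 10, 8, 400            # 10 subjects, 20x20 "images"
X = np.vstack([rng.normal(loc=3.0 * i, scale=2.0, size=(per_id, dim))
               for i in range(n_ids)])
y = np.repeat(np.arange(n_ids), per_id)

for name, proj in [("PCA", PCA(n_components=20)),
                   ("LDA", LinearDiscriminantAnalysis(n_components=9))]:
    Z = proj.fit_transform(X, y) if name == "LDA" else proj.fit_transform(X)
    acc = KNeighborsClassifier(1).fit(Z[::2], y[::2]).score(Z[1::2], y[1::2])
    print(f"{name}: nearest-neighbour accuracy = {acc:.2f}")
```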

Keywords: Biometrics, face recognition, appearance based methods, image degradations, the XM2VTS database.

2481 Parameter Optimization and Thermal Simulation in Laser Joining of Coach Peel Panels of Dissimilar Materials

Authors: Masoud Mohammadpour, Blair Carlson, Radovan Kovacevic

Abstract:

The quality of laser welded-brazed (LWB) joints was strongly dependent on the main process parameters; therefore, the effects of laser power (3.2-4 kW), welding speed (60-80 mm/s) and wire feed rate (70-90 mm/s) on mechanical strength and surface roughness were investigated in this study. A comprehensive optimization process by means of response surface methodology (RSM) and a desirability function was used for multi-criteria optimization. The experiments were planned based on a Box-Behnken design, implementing linear and quadratic polynomial equations for predicting the desired output properties. Finally, validation experiments were conducted at an optimized process condition, which exhibited good agreement between the predicted and experimental results. AlSi3Mn1 was selected as the filler material for joining aluminum alloy 6022 and hot-dip galvanized steel in a coach peel configuration. The high scanning speed could keep the thickness of the IMC layer as thin as 5 µm. Thermal simulations of the joining process were conducted by the Finite Element Method (FEM), and the results were validated against experimental data. The Fe/Al interfacial thermal history evidenced that the duration of the critical temperature range (700-900 °C) in this high-scanning-speed process was less than 1 s. This short interaction time leads to the formation of a reaction-controlled IMC layer instead of diffusion-controlled mechanisms.
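The RSM step amounts to fitting a full quadratic polynomial to responses measured at Box-Behnken design points. The sketch below builds the standard three-factor design in coded units and recovers the coefficients by least squares; the response values are synthetic stand-ins, not the weld-strength data.

```python
# Hedged sketch: fitting the quadratic response-surface model on the standard
# three-factor Box-Behnken design (coded units). The response values are
# synthetic stand-ins, not the weld-strength measurements.
import numpy as np
from itertools import combinations

runs = []                                  # 12 edge midpoints + 3 center runs
for i, j in combinations(range(3), 2):
    for a in (-1, 1):
        for b in (-1, 1):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            runs.append(row)
X = np.array(runs + [[0, 0, 0]] * 3, dtype=float)

def quad_features(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1**2, x2**2, x3**2])

rng = np.random.default_rng(5)
y = (100 + 5 * X[:, 0] - 3 * X[:, 1] + 2 * X[:, 2] - 4 * X[:, 0] ** 2
     + rng.normal(scale=0.5, size=len(X)))          # synthetic response
beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
print(np.round(beta, 2))   # intercept, linear, interaction, quadratic terms
```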

Keywords: Laser welding-brazing, finite element, response surface methodology, multi-response optimization, cross-beam laser.

2480 Performance Assessment of the Gold Coast Desalination Plant Offshore Multiport Brine Diffuser during ‘Hot Standby’ Operation

Authors: M. J. Baum, B. Gibbes, A. Grinham, S. Albert, D. Gale, P. Fisher

Abstract:

Alongside the rapid expansion of seawater reverse osmosis technologies there is a concurrent increase in the production of hypersaline brine by-products. To minimize environmental impact, these by-products are commonly disposed of in open-coastal environments via submerged diffuser systems as inclined dense jet outfalls. Despite the widespread implementation of this process, diffuser designs are typically based on small-scale laboratory experiments under idealized quiescent conditions, and studies of diffuser performance in the field are limited. A set of experiments was conducted to assess the near-field characteristics of brine disposal at the Gold Coast Desalination Plant offshore multiport diffuser. The aim of the field experiments was to determine the trajectory and dilution characteristics of the plume under various discharge configurations, with production ranging over 66-100% of plant operative capacity. The field monitoring system employed an unprecedented static array of temperature and electrical conductivity sensors in a three-dimensional grid surrounding a single diffuser port. Complementing these measurements, Acoustic Doppler Current Profilers were also deployed to record current variability over the depth of the water column, together with wave characteristics. The recorded data suggested the open-coastal environment was highly active over the experimental duration, with ambient velocities ranging over 0.0-0.5 m·s⁻¹ and considerable variability over the depth of the water column. Variations in background electrical conductivity corresponding to salinity fluctuations of ±1.7 g·kg⁻¹ were also observed. Increases in salinity were detected during plant operation and appeared to be most pronounced 10-30 m from the diffuser, consistent with the trajectory predictions described by the existing literature. Plume trajectories and the respective dilutions extrapolated from the salinity data are compared with empirical scaling arguments; the discharge properties were found to correlate adequately with the modelling projections. The temporal and spatial variation of background processes and their influence on discharge outcomes are discussed, with a view to incorporating the influence of waves and ambient currents into the design of future brine outfalls.

Keywords: Brine disposal, desalination, field study, inclined dense jets, negatively buoyant discharge.

2479 Applying Kinect on the Development of a Customized 3D Mannequin

Authors: Shih-Wen Hsiao, Rong-Qi Chen

Abstract:

In the field of fashion design, the 3D mannequin is an assisting tool that can rapidly realize design concepts. When the concept of the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. It is therefore critical to develop a 3D mannequin module that corresponds to the necessities of fashion design. This research proposes a concrete plan for developing and constructing a 3D mannequin system with Kinect. Ergonomic measurements of objective human features are attained in real time through the depth camera of the Kinect, and mesh morphing is then implemented by transforming the locations of the control points on the model according to those ergonomic data, yielding an exclusive 3D mannequin model. In the proposed methodology, after the points scanned by the Kinect are revised for accuracy and smoothed, a complete human feature is reconstructed by the ICP algorithm together with image processing methods. The objective human feature can then be recognized, analyzed, and measured. Furthermore, the ergonomic measurements can be applied to shape morphing of the 3D mannequin regions delimited by feature curves. Since a standardized and customer-oriented 3D mannequin can be generated by subdivision, the research can be applied to fashion design or to the presentation and display of 3D virtual clothes. In order to examine the practicality of the research structure, a 3D mannequin system was constructed with a JAVA program in this study, and its practicability was confirmed through iteratively revised experiments.
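The registration step can be sketched as a plain rigid ICP loop: match each source point to its nearest neighbour, then solve for the best rotation and translation with an SVD (Kabsch) update. The point sets below are synthetic; a real pipeline would feed in the revised Kinect scan points.

```python
# Hedged sketch: rigid ICP with nearest-neighbour matching and an SVD (Kabsch)
# update, the registration idea used when reconstructing a body model from
# scanned points. Point sets here are synthetic, not Kinect data.
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    """Return R, t such that src @ R.T + t approximately overlays dst."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)            # closest dst point per src point
        p = moved - moved.mean(axis=0)        # centered source
        q = dst[idx] - dst[idx].mean(axis=0)  # centered matched targets
        U, _, Vt = np.linalg.svd(p.T @ q)     # Kabsch alignment
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t_step = dst[idx].mean(axis=0) - moved.mean(axis=0) @ R_step.T
        R, t = R_step @ R, R_step @ t + t_step
    return R, t

rng = np.random.default_rng(4)
dst = rng.normal(size=(200, 3))               # reference cloud
theta = 0.1                                   # small rotation + shift
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
src = (dst - np.array([0.1, 0.2, 0.0])) @ R_true
R, t = icp(src, dst)
resid = np.linalg.norm(src @ R.T + t - dst, axis=1).mean()
print(f"mean residual after ICP: {resid:.2e}")
```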

Keywords: 3D Mannequin, Kinect scanner, iterative closest point (ICP), shape morphing, subdivision.
