Search results for: computer mediate communication.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2741

161 Dosimetric Analysis of Intensity Modulated Radiotherapy versus 3D Conformal Radiotherapy in Adult Primary Brain Tumors: Regional Cancer Centre, India

Authors: Ravi Kiran Pothamsetty, Radha Rani Ghosh, Baby Paul Thaliath

Abstract:

Radiation therapy has undergone many advancements and has evolved from 2D to 3D techniques. Recently, the rapid pace of drug discovery, cutting-edge technology, and clinical trials, together with innovative advancements in computer technology and treatment planning, has led to intensity modulated radiotherapy (IMRT), which modulates beam intensity to deliver a conformal dose to the tumor while sparing normal tissues. The present study was a hospital-based experience comparing two different conformal radiotherapy techniques for brain tumors. This analytical study was conducted at the Regional Cancer Centre, India, from January 2014 to January 2015. Ten patients were selected after applying inclusion and exclusion criteria. All patients were treated on a Siemens Artiste linear accelerator. The tolerance levels for maximum dose were 6.0 Gy for the lenses and 54.0 Gy for the brain stem, optic chiasm and optic nerves, as per RTOG criteria. Mean and standard deviation values of PTV 98%, PTV 95% and PTV 2% were 93.16±2.9, 95.01±3.4 and 103.1±1.1 respectively for IMRT, and 91.4±4.7, 94.17±2.6 and 102.7±0.39 respectively for 3D-CRT. The PTV maximum dose (%) was 104.7±0.96 for IMRT and 103.9±1.0 for 3D-CRT. The maximum dose to the tumor can be delivered with IMRT within acceptable toxicity limits. Variables such as expertise, tumor location, patient condition, and the treatment planning system (TPS) influence the outcome of the treatment.

Keywords: IMRT, 3D-CRT, brain tumors, OARs, RTOG.

160 The Role of Acoustical Design within Architectural Design in the Early Design Phase

Authors: O. Wright, N. Perkins, M. Donn, M. Halstead

Abstract:

This research responded to anecdotal evidence suggesting that inefficiencies in the architect-acoustician relationship may lead to ineffective acoustic design decisions. The acoustician interviewed believed that he was approached too late in the design phase. The architect approached valued acoustical qualities, yet struggled to interpret common measurement parameters. A preliminary investigation of these opinions indicated a gap in the current New Zealand architectural discourse and now informs a 2016 Master of Architecture (Prof) thesis. Little meaningful information about acoustic intervention in the early design phase could be found in past literature; in the sources that were found, authors focus on software as an incorporation tool without investigating why the flaws in the relationship exist in the first place. To explore this relationship further, a survey was designed. It underwent three phases to ensure its consistency and was delivered to a group of 51 acousticians from one international acoustics company. The results were then separated between New Zealand and offshore to identify trends. The survey results suggest that 75% of acousticians meet the architect fewer than five times per project. Instead of regular contact, a mediated method is adopted through a mix of telecommunication and written reports. Acousticians tend to be introduced later into New Zealand building projects than into corresponding offshore projects. This delay corresponds to an increase in remedial action for each of the building types in the survey except auditoria and office buildings. Thirty-one participants have had their specifications challenged by an architect. Furthermore, 71% of the acousticians believe that architects do not have the knowledge to understand why the acoustic specifications are in place. The issues raised in this investigation align with the colloquial evidence expressed by the two consultants. They identify a larger gap in the industry, where acoustics is treated remedially rather than identified as a possible design driver. Further research through design is suggested to understand the role of acoustics within architectural design and potential tools for its inclusion during, not after, the design process.

Keywords: Architectural acoustics, early-design, interdisciplinary communication, remedial response.

159 The Low-Cost Design and 3D Printing of Structural Knee Orthotics for Athletic Knee Injury Patients

Authors: Alexander Hendricks, Sean Nevin, Clayton Wikoff, Melissa Dougherty, Jacob Orlita, Rafiqul Noorani

Abstract:

Knee orthotics play an important role in the recovery of those with knee injuries, especially athletes. However, structural knee orthotics are often very expensive, ranging between $300 and $800. The primary reason for this project was to answer the question: can 3D-printed orthotics represent a viable and cost-effective alternative to present structural knee orthotics? The primary objective of this research was to design a knee orthotic for athletes with knee injuries at a cost under $100 and to evaluate its effectiveness. The initial design for the orthotic was done in SolidWorks, a computer-aided design (CAD) software package available at Loyola Marymount University. After this design was completed, finite element analysis (FEA) was used to understand how normal stresses placed upon the knee affected the orthotic. The knee orthotic was then adjusted and redesigned to meet a specified factor of safety of 3.25, based on the data gathered during FEA and on literature sources. Once the FEA was completed and the orthotic redesigned, the next step was to 3D-print the first design of the knee brace. Subsequently, physical therapy movement trials were used to evaluate physical performance. Using the data from these movement trials, the CAD design of the brace was refined to meet the design requirements. The final goal of this research is to explore the possibility of replacing high-cost, outsourced knee orthotics with a readily available low-cost alternative.

Keywords: Knee Orthotics, 3D printing, finite element analysis.

158 Mixed Convection in a Vertical Heated Channel: Influence of the Aspect Ratio

Authors: Ameni Mokni, Hatem Mhiri, Georges Le Palec, Philippe Bournot

Abstract:

In mechanical and environmental engineering, mixed convection is a frequently encountered thermal-fluid phenomenon, arising in the atmospheric environment, urban canopy flows, ocean currents, gas turbines, heat exchangers, computer chip cooling systems, and elsewhere. This paper deals with a numerical investigation of mixed convection in a vertical heated channel. The flow results from the mixing of the up-going fluid along the walls of the channel with the fluid issued from a flat nozzle located in its entry section. The fluid-dynamic and heat-transfer characteristics of vented vertical channels are investigated for constant heat-flux boundary conditions, a Rayleigh number of 2.57×10¹⁰, jet Reynolds numbers Re = 3×10³ and 2×10⁴, and aspect ratios in the 8-20 range. The system of governing equations is solved with a finite volume method and an implicit scheme. The results show that the turbulence and the jet-wall interaction enhance the heat transfer, as does the entrainment of ambient air by the jet. For the low Reynolds number Re = 3×10³, increasing the aspect ratio enhances the heat transfer by about 3%; for Re = 2×10⁴, the heat transfer enhancement is about 12%. The numerical velocity, pressure and temperature fields are post-processed to compute quantities of engineering interest, such as the induced mass flow rate and the average Nusselt number, which are presented in terms of the Rayleigh and Reynolds numbers and the dimensionless geometric parameters.

Keywords: Aspect Ratio, Channel, Jet, Mixed convection

157 The Effect of Curcumin on Cryopreserved Bovine Semen

Authors: Eva Tvrdá, Marek Halenár, Hana Greifová, Alica Mackovich, Faridullah Hashim, Norbert Lukáč

Abstract:

Oxidative stress associated with semen cryopreservation may result in lipid peroxidation (LPO), DNA damage and apoptosis, leading to decreased sperm motility and fertilization ability. Curcumin (CUR), a natural phenol isolated from Curcuma longa Linn., has been proposed as a supplement for more effective semen cryopreservation because of its antioxidant properties. This study focused on evaluating the effects of CUR on selected oxidative stress parameters in cryopreserved bovine semen. Twenty bovine ejaculates were split into two aliquots and diluted with a commercial semen extender containing either CUR (50 μmol/L) or no supplement (control), cooled to 4 °C, frozen and kept in liquid nitrogen. Frozen straws were thawed in a water bath for subsequent experiments. Computer-assisted semen analysis was used to evaluate spermatozoa motility, and reactive oxygen species (ROS) generation was quantified by luminometry. Superoxide generation was evaluated with the NBT test, and LPO was assessed via the TBARS assay. CUR supplementation significantly (P<0.001) increased spermatozoa motility and provided significantly higher protection against ROS (P<0.001) and superoxide (P<0.01) overgeneration caused by semen freezing and thawing. Furthermore, CUR administration resulted in significantly (P<0.01) lower LPO in the experimental semen samples. In conclusion, CUR exhibits significant ROS-scavenging activity, which may prevent oxidative insults to cryopreserved spermatozoa and thus enhance the post-thaw functional activity of male gametes.

Keywords: Bulls, cryopreservation, curcumin, lipid peroxidation, reactive oxygen species, spermatozoa.

156 The Computational Psycholinguistic Situational-Fuzzy Self-Controlled Brain and Mind System under Uncertainty

Authors: Ben Khayut, Lina Fabri, Maya Avikhana

Abstract:

Modern Artificial Narrow Intelligence (ANI) models cannot: a) function independently, situationally, and continuously without human intelligence, which is needed to retrain and reprogram them, or b) think, understand, be conscious, or cognize under uncertainty and under changes in environmental objects. To eliminate these shortcomings and build a new generation of Artificial Intelligence systems, this paper proposes a conception, model, and method for a Computational Psycholinguistic Cognitive Situational-Fuzzy Self-Controlled Brain and Mind System (CPCSFSCBMSUU). The system uses a neural network as its computational memory and activates functions for perception, identification of real objects, fuzzy situational control, and formation of images of these objects. These images and objects are used to model their psychological, linguistic, cognitive, and neural properties and features, whose meanings are identified, interpreted, generated, and formed with reference to the identified subject area, using the data, information, and knowledge accumulated in the memory. The CPCSFSCBMSUU operates through its subsystems for fuzzy situational control of all processes, computational perception, identification of reactions and actions, psycholinguistic cognitive fuzzy logical inference, decision making, reasoning, systems thinking, planning, awareness, consciousness, cognition, intuition, and wisdom. In doing so, it analyzes and processes psycholinguistic, subject, visual, signal, sound and other objects; accumulates and uses the data, information and knowledge in the memory; and communicates and interacts with other computing systems, robots and humans to solve joint tasks. To investigate the functional processes of the proposed system, the principles of situational control, fuzzy logic, psycholinguistics, informatics, and the modern possibilities of data science were applied. The proposed self-controlled brain and mind system is intended for use as a plug-in in multilingual subject applications.

Keywords: Computational psycholinguistic cognitive brain and mind system, situational fuzzy control, uncertainty, AI.

155 A State Aggregation Approach to Singularly Perturbed Markov Reward Processes

Authors: Dali Zhang, Baoqun Yin, Hongsheng Xi

Abstract:

In this paper, we propose a single-sample-path-based algorithm with state aggregation to optimize the average rewards of singularly perturbed Markov reward processes (SPMRPs) with large-scale state spaces. It is assumed that such a reward process depends on a set of parameters. Unlike other kinds of Markov chains, SPMRPs have their own hierarchical structure. Based on this special structure, our algorithm can alleviate the optimization load for performance. Moreover, our method can be applied online because it evolves with the simulated sample path. Compared with the original algorithms applied to general MRPs, a new gradient formula for the average reward performance metric in SPMRPs is introduced and proved in the Appendix. Based on these gradients, the schedule of the iterative algorithm, which relies on a single sample path, is presented. A special case in which the parameters only dominate the disturbance matrices is then analyzed, and a precise comparison is made between our algorithm and the older ones that aim to solve these problems for general Markov reward processes; when applied to SPMRPs, our method converges faster in these cases. Furthermore, to illustrate the practical value of SPMRPs, a simple example of multiprogramming in computer systems is presented and simulated, and, in relation to this practical model, the physical meaning of SPMRPs in networks of queues is clarified.

Keywords: Singularly perturbed Markov processes, gradient of average reward, differential reward, state aggregation, perturbed closed network.

154 Detection of Action Potentials in the Presence of Noise Using Phase-Space Techniques

Authors: Christopher Paterson, Richard Curry, Alan Purvis, Simon Johnson

Abstract:

Emerging bio-engineering fields such as brain-computer interfaces, neuroprosthesis devices, and the modeling and simulation of neural networks have led to increased research activity in algorithms for the detection, isolation and classification of action potentials (APs) from noisy data trains. Current techniques in the field of 'unsupervised, no-prior-knowledge' biosignal processing include energy operators, wavelet detection and adaptive thresholding. These tend to bias towards larger AP waveforms; APs may be missed due to deviations in spike shape and frequency, and correlated noise spectra can cause false detections. Such algorithms also tend to suffer from large computational expense. A new signal detection technique based upon the ideas of phase-space diagrams and trajectories is proposed, which uses a delayed copy of the AP to highlight discontinuities relative to the background noise. This idea has been used to create algorithms that are computationally inexpensive and address the above problems. Distinct APs have been picked out and manually classified from real physiological data recorded from a cockroach. To facilitate testing of the new technique, an autoregressive moving average (ARMA) noise model has been constructed based upon the background noise of the recordings. Together with the classified APs, this model enables the generation of realistic neuronal data sets at arbitrary signal-to-noise ratios (SNR).
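
To make the delayed-copy idea concrete, here is a minimal sketch (not the authors' implementation) that scores a signal against a delayed copy of itself; the 5-sample delay and the 5-sigma detection threshold are illustrative assumptions.

```python
import numpy as np

def phase_space_novelty(x, delay=5):
    """Score each sample by its distance from the phase-space diagonal.

    In a plot of x[n] against the delayed copy x[n - delay], slowly varying
    background noise hugs the diagonal, while a fast transient such as an
    action potential departs from it.
    """
    y, z = x[delay:], x[:-delay]           # current vs. delayed samples
    return np.abs(y - z) / np.sqrt(2.0)    # distance from the line y = z

# Toy usage: white noise with one AP-like bump at sample 500.
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 0.1, 1000)
sig[500:510] += 2.0 * np.hanning(10)
score = phase_space_novelty(sig, delay=5)
print(np.flatnonzero(score > 5 * score.std())[:5])   # indices near 500
```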

Keywords: Action potential detection, low SNR, phase-space diagrams/trajectories, unsupervised/no-prior knowledge.

153 Context Detection in Spreadsheets Based on Automatically Inferred Table Schema

Authors: Alexander Wachtel, Michael T. Franzen, Walter F. Tichy

Abstract:

Programming requires years of training. With natural language and end-user development methods, programming could become available to everyone: it enables end users to program their own devices and extend the functionality of an existing system without any knowledge of programming languages. In this paper, we describe an Interactive Spreadsheet Processing Module (ISPM), a natural language interface to spreadsheets that allows users to address ranges within the spreadsheet based on an inferred table schema. Using the ISPM, end users are able to search for values in the schema of the table and to address the data in spreadsheets implicitly. Furthermore, it enables them to select and sort the spreadsheet data using natural language. ISPM uses a machine learning technique to automatically infer areas within a spreadsheet, including different kinds of headers and data ranges. Since ranges can be identified from natural language queries, end users can query the data using natural language. During the evaluation, 12 undergraduate students were asked to perform operations (sum, sort, group and select) using the system and also in Excel without the ISPM interface, and the time taken for task completion was compared across the two systems. Only for the selection task did users take less time in Excel (since they directly selected the cells using the mouse) than in ISPM. By using natural language for end-user software engineering, the approach helps overcome the present bottleneck of professional developers.
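
As an illustration of addressing spreadsheet data implicitly through an inferred schema, here is a hedged sketch (not the ISPM code); the toy table, the column names, and the fuzzy matching via difflib are assumptions.

```python
import pandas as pd
from difflib import get_close_matches

# Toy spreadsheet whose header row stands in for the inferred table schema.
df = pd.DataFrame({"Product": ["A", "B", "C"],
                   "Sales": [120, 80, 200],
                   "Region": ["N", "S", "N"]})

def resolve_column(frame, word):
    """Map a word from a natural language query onto the closest header."""
    match = get_close_matches(word.capitalize(), list(frame.columns), n=1)
    if not match:
        raise KeyError(f"no column resembling {word!r}")
    return match[0]

# "sum the sales" -> resolve 'sales' against the schema, then aggregate.
col = resolve_column(df, "sales")
print(col, df[col].sum())   # Sales 400
```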

Keywords: Natural language processing, end user development, natural language interfaces, human computer interaction, data recognition, dialog systems, spreadsheet.

152 Current Status and Future Trends of Mechanized Fruit Thinning Devices and Sensor Technology

Authors: Marco Lopes, Pedro D. Gaspar, Maria P. Simões

Abstract:

This paper reviews the different concepts that have been investigated concerning the mechanization of fruit thinning, as well as multiple working principles and solutions that have been developed for feature extraction of horticultural products, both in the field and in industrial environments. Research should be directed towards selective methods, which inevitably need to incorporate some kind of sensor technology. Computer vision often comes out as an obvious solution for unstructured detection problems, although fruits are frequently occluded by leaves regardless of the chosen point of view. Further research on non-traditional sensors that are capable of object differentiation is needed. Ultrasonic and near-infrared (NIR) technologies have been investigated for applications related to horticultural produce and show the potential to satisfy this need while simultaneously providing spatial information, as time-of-flight sensors do. Light Detection and Ranging (LIDAR) technology also shows huge potential, but it implies much greater costs, and the related equipment is usually much larger, making it less suitable for portable devices, which may serve a purpose in smaller unstructured orchards. Concerning sensor-based on-tree fruit detection, the major challenge is to overcome the occlusion of fruits by leaves and branches; hence, non-traditional sensors capable of providing some type of differentiation should be investigated.

Keywords: Fruit thinning, horticultural field, portable devices, sensor technologies.

151 Region Segmentation Based on Gaussian Dirichlet Process Mixture Model and Its Application to 3D Geometric Structure Detection

Authors: Jonghyun Park, Soonyoung Park, Sanggyun Kim, Wanhyun Cho, Sunworl Kim

Abstract:

In general, image-based 3D scenes can now be found in many popular vision systems, computer games and virtual reality tours, so it is important to segment the ROI (region of interest) from input scenes as a preprocessing step for geometric structure detection in a 3D scene. In this paper, we propose a method for segmenting the ROI based on tensor voting and a Dirichlet process mixture model. In particular, to estimate geometric structure information for a 3D scene from a single outdoor image, we apply tensor voting and the Dirichlet process mixture model to image segmentation. Tensor voting is used based on the fact that homogeneous regions in an image usually lie close together in a smooth region, and therefore the tokens corresponding to the centers of these regions have high saliency values. The proposed approach is a novel nonparametric Bayesian segmentation method using a Gaussian Dirichlet process mixture model to automatically segment various natural scenes. Finally, our method can label regions of the input image into coarse categories: "ground", "sky", and "vertical", for 3D applications. The experimental results show that our method successfully segments coarse regions in many complex natural scene images for 3D.
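
For readers who want to experiment with the mixture-model side of the pipeline, the sketch below fits a truncated Dirichlet process Gaussian mixture to per-pixel features using scikit-learn; the synthetic two-region "image" and the (color, position) feature choice are illustrative assumptions, and the tensor voting stage is omitted.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic "image": two color regions plus noise, flattened to per-pixel
# feature vectors (r, g, b, x, y).
h, w = 40, 60
img = np.zeros((h, w, 3))
img[:20] = [0.2, 0.4, 0.9]      # "sky"-like region
img[20:] = [0.3, 0.6, 0.2]      # "ground"-like region
img += np.random.default_rng(1).normal(0, 0.05, img.shape)

ys, xs = np.mgrid[0:h, 0:w]
feats = np.column_stack([img.reshape(-1, 3), xs.ravel() / w, ys.ravel() / h])

# Truncated Dirichlet process mixture: the model switches off surplus
# components, so n_components is only an upper bound on the region count.
dpmm = BayesianGaussianMixture(
    n_components=8, weight_concentration_prior_type="dirichlet_process",
    max_iter=200, random_state=0).fit(feats)
labels = dpmm.predict(feats).reshape(h, w)
print(np.unique(labels))        # the few components actually used
```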

Keywords: Region segmentation, tensor voting, image-based 3D, geometric structure, Gaussian Dirichlet process mixture model

150 Fast Factored DCT-LMS Speech Enhancement for Performance Enhancement of Digital Hearing Aid

Authors: Sunitha S. L., V. Udayashankara

Abstract:

Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for patients with sensorineural loss. Several investigations on speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal-hearing subjects. This paper describes a Discrete Cosine Transform Power-Normalized Least Mean Square (DCT-LMS) algorithm to improve the SNR and the convergence rate of the LMS for sensorineural loss patients. Since it requires only real arithmetic, it establishes a faster convergence rate compared to the time-domain LMS, and the transformation improves the eigenvalue distribution of the input autocorrelation matrix of the LMS filter. The DCT has good ortho-normality, separability, and energy compaction properties. Although the DCT does not separate frequencies, it is a powerful signal decorrelator. It is a real-valued function and thus can be used effectively in real-time operation. The advantages of DCT-LMS compared to the standard LMS algorithm are shown via SNR and eigenvalue ratio computations. Exploiting the symmetry of the basis functions, the DCT transform matrix [AN] can be factored into a series of ±1 butterflies and rotation angles. This factorization results in one of the fastest DCT implementations; there are different ways to obtain such factorizations, and this work uses the fast factored DCT algorithm developed by Chen and colleagues. The computer simulation results show the superior convergence characteristics of the proposed algorithm: the SNR is improved by at least 10 dB for input SNRs less than or equal to 0 dB, with faster convergence speed and better time and frequency characteristics.
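
A minimal sketch of a transform-domain, power-normalized LMS update follows, to illustrate the idea; the filter length, step size and power-smoothing constant are illustrative, and an ordinary orthonormal DCT is used rather than the fast factored DCT of Chen et al.

```python
import numpy as np
from scipy.fft import dct

def dct_lms(x, d, N=16, mu=0.05, eps=1e-6):
    """Transform-domain LMS: each tap vector is decorrelated by an
    orthonormal DCT, and every bin's step size is normalized by a running
    estimate of that bin's power."""
    w = np.zeros(N)                 # adaptive weights in the DCT domain
    p = np.full(N, eps)             # per-bin power estimates
    y = np.zeros(len(x))
    for n in range(N, len(x)):
        u = dct(x[n - N:n][::-1], norm='ortho')   # transformed tap vector
        p = 0.9 * p + 0.1 * u ** 2                # track bin powers
        y[n] = w @ u
        e = d[n] - y[n]
        w += mu * e * u / p                       # power-normalized update
    return y

# Toy usage: adapt toward a known clean reference (identification-style demo).
rng = np.random.default_rng(0)
t = np.arange(4000)
clean = np.sin(2 * np.pi * 0.03 * t)
noisy = clean + rng.normal(0, 0.5, t.size)
est = dct_lms(noisy, clean)
```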

Keywords: Hearing Impairment, DCT Adaptive filter, Sensorineural loss patients, Convergence rate.

149 Studying the Effect of Climate Change on the Conditions of Isfahan Province Tourism

Authors: A. Gandomkar, F. Khorasanizadeh

Abstract:

Tourism is a phenomenon that human communities have valued since long ago. It has been evolving continually based on a variety of social and economic needs, and, with the increasing development of communication, the considerable increase in tourist numbers and the resulting exchange income have yielded many benefits, such as employment, for the communities. For the purpose of tourism development in this zone, suitable times and locations need to be specified for tourists' attendance. One of the most important needs of tourists is knowledge of the climate conditions and suitable times for sightseeing. In this survey, the trend of climate conditions relevant to tourist attendance in Isfahan province has been identified using the modified Tourism Climate Index (TCI), along with the SPSS, GIS, Excel and Surfer software packages. This index systematically evaluates the climate conditions for tourism affairs and activities using the monthly parameters of maximum daily temperature, mean daily temperature, minimum relative humidity, mean daily relative humidity, precipitation (mm), total sunny hours, wind speed and dust. The results obtained using Kendall's correlation test show that all twelve months, January through December, are significant and have an increasing trend, which indicates improving conditions for tourist attendance. Sunshine (S), precipitation (P), mean temperature (T mean), maximum temperature (T max) and dust were estimated for 1976-2005, and Kendall's correlation test was run again to see which parameters have been effective. Based on this test, we observed that the rate of dust in February, March, April, May, June, July, August, October and November is decreasing, that precipitation in September and January is increasing, and that the radiation rate in May and August is increasing, all of which indicate improving conditions of convenience. The maximum temperature in June is also decreasing. Isfahan province has two peaks, in spring and fall, and the best places for tourism are in the northern and western areas.
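
For orientation, the widely cited Mieczkowski weighting behind a TCI-style score can be sketched as follows; the sub-index ratings in the example are invented, and the paper's modified TCI may differ in detail.

```python
def tourism_climate_index(cid, cia, p, s, w):
    """Mieczkowski-style TCI with sub-indices rated on a 0-5 scale:
    cid = daytime comfort, cia = daily comfort, p = precipitation,
    s = sunshine, w = wind. The maximum score is 100."""
    return 2 * (4 * cid + cia + 2 * p + 2 * s + w)

# A month with near-ideal daytime comfort and sunshine (invented ratings):
print(tourism_climate_index(4.5, 4.0, 4.0, 5.0, 4.0))   # 88.0
```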

Keywords: Climate, Tourism, Correlation Test, Tourism Climate Index, Isfahan Province

148 Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores

Authors: Ankit Sinha, Soham Banerjee, Pratik Chattopadhyay

Abstract:

Automated product recognition in retail stores is an important real-world application in the domain of computer vision and pattern recognition. In this paper, we consider the problem of automatically identifying the classes of products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon existing approaches in terms of effectiveness and memory requirements by developing a two-stage object detection and recognition pipeline comprising a Faster-RCNN-based object localizer that detects object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned using appropriate data sets for better prediction, and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. This encoder is trained using a triplet loss function following the strategy of online hard negative mining for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product object placed in a rack image. Extensive experiments using the Grozi-32k and GP-180 data sets verify the effectiveness of the proposed model.
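
The embedding-training step can be illustrated with a short PyTorch sketch: a ResNet-18 trunk with its classification head removed, trained with a triplet margin loss. The margin value, the batch shapes, and the random tensors standing in for product crops are assumptions; a recent torchvision is assumed for the weights argument.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-18 trunk as an image encoder: drop the classification head so the
# network emits a 512-dim embedding per product crop.
encoder = models.resnet18(weights=None)
encoder.fc = nn.Identity()

triplet = nn.TripletMarginLoss(margin=0.2)

# One toy batch: anchor/positive crops of the same product, a negative crop
# of another product (random tensors stand in for real rack crops).
anchor, positive, negative = (torch.randn(8, 3, 224, 224) for _ in range(3))
loss = triplet(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()   # gradients for one embedding-learning step
```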

Keywords: Retail stores, Faster-RCNN, object localization, ResNet-18, triplet loss, data augmentation, product recognition.

147 Driving What’s Next: The De La Salle Lipa Social Innovation in Quality Education Initiatives

Authors: Dante Jose R. Amisola, Glenford M. Prospero

Abstract:

'Driving What’s Next' is a strong campaign of the new administration of De La Salle Lipa to promote social innovation in quality education. The new leadership embeds social innovation in quality education in the institutional directions and initiatives to address real-world challenges with real-world solutions. This study aims to qualify the commitment of the institution to extend Lasallian quality human and Christian education to all, as expressed in the institution’s new mission-vision statement. The Classic Grounded Theory methodology is employed in the process of generating concepts with reference to the documents, a series of meetings, focus group discussions and other related activities that account for the conceptualization and formulation of the new mission-vision along with the new education innovation framework. Notably, Driving What’s Next is the emergent theory that encapsulates the commitment to giving quality human and Christian education to all, and it directs the new leadership in driving social innovation in quality education initiatives. Correspondingly, Driving What’s Next is continually resolved through four interrelated strategies, also termed the institution's four strategic directions, namely: (1) driving social innovation in quality education, (2) embracing our shared humanity and championing social inclusion and justice initiatives, (3) creating sustainable futures and (4) engaging diverse stakeholders in our shared mission. Significantly, the four strategic directions capture and integrate the 17 UN Sustainable Development Goals, making the innovative curriculum locally and globally relevant. To conclude, the main concern of the new administration, and the way it is continually resolved, is to provide meaningful and fun learning experiences and to promote a new way of learning in light of the 21st-century skills among the members of the academic community, including stakeholders and extended communities at large. These skills are defined as: learning together and by association (collaboration), learning through engagement (communication), learning by design (creativity) and learning with social impact (critical thinking).

Keywords: De La Salle Lipa, Driving What’s Next, social innovation in quality education, DLSL mission-vision, strategic directions.

146 An Extensible Software Infrastructure for Computer Aided Custom Monitoring of Patients in Smart Homes

Authors: Ritwik Dutta, Marilyn Wolf

Abstract:

This paper describes the tradeoffs and the from-scratch design of a self-contained, easy-to-use health dashboard software system that provides customizable data tracking for patients in smart homes. The system is made up of different software modules and comprises a front-end and a back-end component. Built with HTML, CSS, and JavaScript, the front-end allows adding users, logging into the system, selecting metrics, and specifying health goals. The back-end consists of a NoSQL Mongo database, a Python script, and a SimpleHTTPServer written in Python. The database stores user profiles and health data in JSON format. The Python script makes use of the PyMongo driver library to query the database and displays formatted data as a daily snapshot of user health metrics against target goals. Any number of standard and custom metrics can be added to the system, and the corresponding health data can be fed automatically via sensor APIs, or manually as text or picture data files. A real-time METAR request API permits correlating weather data with patient health, and an advanced query system is implemented to allow trend analysis of selected health metrics over custom time intervals. Available on the GitHub repository system, the project is free to use for academic purposes of learning and experimenting, or for practical purposes by building on it.
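
A minimal sketch of the kind of back-end interaction described above, using the PyMongo driver; the database, collection and field names are illustrative assumptions, and a local mongod instance is assumed to be running.

```python
from pymongo import MongoClient

# Assumes a local mongod; database and collection names are illustrative.
client = MongoClient("mongodb://localhost:27017/")
db = client["health_dashboard"]

# Store a user profile with a health goal, then one day's metric reading.
db.users.insert_one({"user": "alice", "goals": {"steps": 8000}})
db.metrics.insert_one({"user": "alice", "date": "2015-06-01",
                       "metric": "steps", "value": 9100})

# Daily snapshot: latest reading for a metric against the target goal.
goal = db.users.find_one({"user": "alice"})["goals"]["steps"]
reading = db.metrics.find_one({"user": "alice", "metric": "steps"},
                              sort=[("date", -1)])
print(f"steps: {reading['value']} / goal {goal}")
```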

Keywords: Flask, Java, JavaScript, health monitoring, long term care, Mongo, Python, smart home, software engineering, webserver.

145 Developing a Research Culture in the Faculty of Engineering and Information Technology at the Central University of Technology, Free State: Implications for Knowledge Management

Authors: Mpho A. Mbeo, Patient Rambe

Abstract:

The 13th year of the Central University of Technology, Free State’s (CUT) transition from a vocational and professional training institution (i.e., a technikon) into a university with a strong research focus has been neither a smooth nor an easy one. At the heart of this transition was the need to transform the mindset of an academic and research staff complement accustomed to training graduates for industrial placement. The lack of a research culture that fully embraces a solid ethos of conducting cutting-edge research needs to be addressed. The induction and socialisation of academic staff into the development and execution of cutting-edge research also required the provision of research support and the creation of an academic environment conducive to research, for both emerging and non-research-active academics. Drawing on ten cases, consisting of four heads of departments, three seasoned researchers, and three novice researchers, this study explores the challenges faced in establishing a strong research culture at the university. Furthermore, it gives an account of the extent to which the current research interventions have addressed the perceived “missing research culture”, and the implications of these interventions for knowledge management. Evidence suggests that an ideal institutional research environment, comprising mentorship of novice researchers by seasoned researchers and balanced effort across teaching and research responsibilities, should be supported by strong research-oriented leadership. Furthermore, the recruitment of research-passionate staff and the adoption of a salary structure that encourages the retention of excellent scholars should be matched by a coherent research incentive culture to grow research publication outputs. This is critical for building new knowledge and entrenching knowledge management founded on communities of practice and scholarly networking, through the documentation and communication of research findings. The study concludes that the multiple policy documents set for the different domains of research may be creating pressure on researchers to engage in research activities and increase output at the expense of research quality.

Keywords: Central University of Technology, performance, publication, research culture, university.

144 Image Magnification Using Adaptive Interpolation by Pixel Level Data-Dependent Geometrical Shapes

Authors: Muhammad Sajjad, Naveed Khattak, Noman Jafri

Abstract:

The world has entered the 21st century. The technology of computer graphics and digital cameras is prevalent, and high-resolution displays and printers are available; high-resolution images are therefore needed in order to produce high-quality displayed images and high-quality prints. However, since high-resolution images are not usually available, there is a need to magnify the original images. One common difficulty in previous magnification techniques is preserving details, i.e., edges, while at the same time smoothing the data so as not to introduce spurious artefacts; a definitive solution to this is still an open issue. In this paper, an image magnification method using adaptive interpolation by pixel-level data-dependent geometrical shapes is proposed that takes into account information about the edges (sharp luminance variations) and the smoothness of the image. It calculates a threshold, classifies the interpolation region in the form of geometrical shapes, and then assigns suitable values to the undefined pixels inside the interpolation region, preserving the sharp luminance variations and the smoothness at the same time. The results of the proposed technique have been compared qualitatively and quantitatively with five other techniques. The qualitative results show that the proposed method clearly beats nearest-neighbour (NN), bilinear (BL) and bicubic (BC) interpolation, while the quantitative results are competitive and consistent with NN, BL, BC and the others.
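
To make the threshold-and-classify idea concrete, here is a hedged toy sketch of 2x magnification that interpolates bilinearly in smooth areas but replicates the nearer pixel across strong luminance steps; the threshold value and the simplified two-pass scheme are illustrative, not the paper's geometrical-shape classification.

```python
import numpy as np

def magnify_2x(img, thresh=30.0):
    """Toy 2x magnification: averaging in smooth areas, nearest-neighbour
    replication across strong luminance steps so edges stay sharp."""
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=float)
    out[::2, ::2] = img
    # Horizontal in-between pixels.
    left, right = img[:, :-1], img[:, 1:]
    edge = np.abs(right - left) > thresh
    out[::2, 1::2] = np.where(edge, left, (left + right) / 2.0)
    # Vertical in-between pixels (on the already-filled even rows).
    top, bot = out[:-2:2, :], out[2::2, :]
    edge = np.abs(bot - top) > thresh
    out[1::2, :] = np.where(edge, top, (top + bot) / 2.0)
    return out

block = np.kron([[50.0, 200.0]], np.ones((4, 4)))   # hard vertical edge
print(magnify_2x(block).shape)                      # (7, 15)
```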

Keywords: Adaptive, digital image processing, image magnification, interpolation, geometrical shapes, qualitative & quantitative analysis.

143 Numerical Investigation of Pressure Drop and Erosion Wear by Computational Fluid Dynamics Simulation

Authors: Praveen Kumar, Nitin Kumar, Hemant Kumar

Abstract:

The modernization of computer technology and commercial computational fluid dynamics (CFD) simulation has yielded more detailed results than experimental investigation techniques. CFD techniques are widely used in different fields due to their flexibility and performance. Evaluation of pipeline erosion is a complex phenomenon to solve by numerical arithmetic techniques, whereas CFD simulation is an easy tool for resolving that type of problem. Erosion wear behaviour due to a solid-liquid mixture in a slurry pipeline has been investigated using the commercial CFD code FLUENT. A multi-phase Euler-Lagrange model was adopted to predict solid particle erosion wear in a 22.5° pipe bend for the flow of a bottom ash-water suspension. The present study addresses erosion prediction in a three-dimensional 22.5° pipe bend for two-phase (solid and liquid) flow using the finite volume method with the standard k-ε turbulence model and a discrete phase model, and evaluates the erosion wear rate for velocities varying from 2 to 4 m/s. The results show that the velocity of the solid-liquid mixture is a highly dominant parameter compared to solid concentration, density, and particle size. At low velocity, settling takes place in the pipe bend due to the low inertia and the gravitational effect on the solid particulates, which leads to high erosion at the bottom side of the pipeline.

Keywords: Computational fluid dynamics, erosion, slurry transportation, k-ε Model.

142 Stability of Concrete Moment Resisting Frames in View of Current Codes Requirements

Authors: Mahmoud A. Mahmoud, Ashraf Osman

Abstract:

In this study, the different approaches currently followed by design codes to assess the stability of buildings that utilize concrete moment-resisting frames as the structural system are evaluated. For this purpose, a parametric study was performed. It involved analyzing a group of concrete moment-resisting frames having different slenderness ratios (height/width ratios), designed for different lateral-to-vertical load ratios, and constructed using either ordinary reinforced concrete or high-strength concrete, checking stability and overall buckling using both code approaches and computer buckling analysis. The objectives were to examine the influence of these parameters, which are directly linked to the frames’ lateral stiffness, on building stability, and to evaluate the code approaches in view of the buckling analysis results. Based on this study, it was concluded that the buildings most susceptible to instability and to magnification of second-order effects are those having high aspect ratios (height/width), low lateral-to-vertical load ratios, and construction materials of high strength. In addition, the study showed that the instability limits imposed by codes are mainly mathematical limits intended to ensure reliable analysis rather than physical ones, and that they are in general conservative. It was also shown that the upper limit set by one of the codes, that the second-order moment for structural elements should be limited to 1.4 times the first-order moment, is not justified; instead, the overall story check is more reliable.
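
The 1.4 limit discussed above can be checked against the classical moment magnification factor; a minimal sketch with invented member forces follows (the factor 1/(1 - Pu/Pcr) is the textbook amplification, not any specific code's formula).

```python
def second_order_moment(m1, pu, pcr):
    """Amplify a first-order moment by the classical magnification factor
    1 / (1 - Pu/Pcr)."""
    if pu >= pcr:
        raise ValueError("axial load at or beyond the elastic buckling load")
    return m1 / (1.0 - pu / pcr)

# Illustrative member forces: kN*m for the moment, kN for the axial loads.
m1, pu, pcr = 120.0, 2000.0, 9000.0
m2 = second_order_moment(m1, pu, pcr)
print(m2 / m1, m2 / m1 <= 1.4)   # ~1.286, within the 1.4 limit cited above
```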

Keywords: Buckling, lateral stability, p-delta, second order.

141 Automatic Classification of Lung Diseases from CT Images

Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari

Abstract:

Pneumonia is a kind of lung disease that creates congestion in the chest. Such pneumonic conditions can lead to loss of life due to severe congestion. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. Early prediction and classification of such lung diseases help to reduce the mortality rate. In this paper, we propose an automatic Computer-Aided Diagnosis (CAD) system using a deep learning approach. The proposed CAD system takes as input raw computerized tomography (CT) scans of the patient's chest and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are pre-processed first to enhance their quality for further analysis. We then apply a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract automatic features from the pre-processed CT images. This CNN model assures feature learning with extremely effective 1D feature extraction for each input CT image. The outcome of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. Simulation outcomes using a publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
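
A hedged sketch of the extract-normalize-classify pipeline follows; the network depth, feature width, the SVC classifier choice, and the random tensors standing in for pre-processed CT slices are all illustrative assumptions, not the paper's HDLA.

```python
import torch
import torch.nn as nn
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Small 2D CNN mapping a pre-processed CT slice to a 1D feature vector,
# mirroring the two-stage "extract then classify" pipeline described above.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten())                                # -> 32-dim features

with torch.no_grad():                            # feature extraction only
    scans = torch.randn(20, 1, 128, 128)         # stand-in CT slices
    feats = cnn(scans).numpy()

labels = [0, 1, 2, 3] * 5                        # 4 toy disease classes
feats = MinMaxScaler().fit_transform(feats)      # the Min-Max step
clf = SVC().fit(feats, labels)                   # one of several classifiers
print(clf.predict(feats[:4]))
```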

Keywords: CT scans, COVID-19, deep learning, image processing, pneumonia, lung disease.

140 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation, where a person proceeds sequentially through a list of images, labeling a sufficiently high total number of examples. Instead, the method presented here involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for image selection is a deep learning algorithm, based on a U-shaped architecture, which quantifies the presence of unseen data in each image in order to find the images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data curated by this algorithm resulted in a model with better performance than a model produced by sequentially labeling the same amount of data. Similar performance is also achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
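
The selection loop reduces to ranking images by a novelty score and handing the top of the list to the labeler; a minimal sketch follows, where the random scores are stand-ins for the U-shaped network's per-image output.

```python
import numpy as np

def rank_for_labeling(scores, k=5):
    """Given a per-image novelty score (e.g. the mean of a model's
    'unseen data' map), return the k images whose content the current
    model knows least, so the labeler sees them next."""
    return np.argsort(scores)[::-1][:k]

# One toy iteration: 100 candidate images, pretend scores from the scorer.
scores = np.random.default_rng(0).random(100)
next_batch = rank_for_labeling(scores, k=5)
print(next_batch)   # indices handed to the human labeler this round
```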

Keywords: Computer vision, deep learning, object detection, semiconductor.

139 A Commercial Building Plug Load Management System That Uses Internet of Things Technology to Automatically Identify Plugged-In Devices and Their Locations

Authors: Amy LeBar, Kim L. Trenbath, Bennett Doherty, William Livingood

Abstract:

Plug and process loads (PPLs) account for a large portion of U.S. commercial building energy use, and there is huge potential to reduce whole-building consumption by targeting PPLs with energy-saving measures or some form of plug load management (PLM). Despite this potential, no commercial PLM technology has yet been widely adopted. This paper describes the Automatic Type and Location Identification System (ATLIS), a PLM system framework with automatic and dynamic load detection (ADLD). ADLD gives PLM systems the ability to automatically identify devices as they are plugged into the outlets of a building. The ATLIS framework takes advantage of smart, connected devices to identify device locations in a building, meter and control their power, and communicate this information to a central database. ATLIS includes five primary capabilities: location identification, communication, control, energy metering, and data storage. A laboratory proof of concept (PoC) demonstrated all but the energy metering capability, and these capabilities were validated using a series of system tests. The PoC was able to identify when a device was plugged into an outlet and the location of the device in the building. When a device was moved, the PoC’s dashboard and database were automatically updated with the new location. The PoC implemented device control from the system dashboard so that devices maintained correct schedules regardless of where they were plugged in within the building. ATLIS’s primary technology application is improved PLM, but other applications include asset management, energy audits, and interoperability for grid-interactive efficient buildings. An ATLIS-based system could also be used to direct power to critical devices, such as ventilators, during a brownout or blackout. Such a framework is an opportunity to make PLM more widespread and to reduce the amount of energy consumed by PPLs in current and future commercial buildings.

Keywords: Commercial buildings, grid-interactive efficient buildings, miscellaneous electric loads, plug loads, plug load management.

138 The Algorithm to Solve the Extended General Malfatti’s Problem in a Convex Circular Triangle

Authors: Ching-Shoei Chiang

Abstract:

Malfatti’s problem fits three circles into a right triangle such that the three circles are tangent to each other and each circle is also tangent to a pair of the triangle’s sides. This problem has been extended to any triangle (called the general Malfatti’s problem). Furthermore, the problem has been extended to have 1 + 2 + … + n circles inside the triangle with special tangency properties among the circles and the triangle sides; this is called the extended general Malfatti’s problem. In the extended general Malfatti’s problem, call it Tri(Tn), where Tn is the triangle number, there are closed-form solutions for the Tri(T₁) (inscribed circle) problem and the Tri(T₂) (3 Malfatti circles) problem. These problems become more complex when n is greater than 2, and algorithms have been proposed to solve the Tri(Tn) problem, n > 2, numerically. With a similar idea, this paper proposes an algorithm to find the radii of circles with the same tangency properties when the boundary of the triangle is not a straight line: we use convex circular arcs as the boundary and try to find Tn circles inside this convex circular triangle with the same tangency properties among circles and boundary as in the Tri(Tn) problems. We call these the Carc(Tn) problems. The algorithm is an mO(Tn) algorithm, where m is the number of iterations in the loop. It takes fewer than 1000 iterations and less than 1 second for the Carc(T₁₆) problem, which finds 136 circles inside a convex circular triangle with the specified tangency properties. The algorithm gives a solution to the circle packing problem inside a convex circular triangle with arbitrarily sized circles. Many applications concerning circle packing, such as logo design and architectural design, may build on the result of the algorithm.
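
The closed-form Tri(T₁) case mentioned above is just the inscribed circle, whose radius follows from Heron's formula; a short sketch for a straight-sided triangle (the circular-arc boundary handled by the paper's algorithm has no such closed form).

```python
import math

def incircle_radius(a, b, c):
    """Closed-form Tri(T1) case: radius of the circle inscribed in a
    straight-sided triangle with side lengths a, b, c."""
    s = (a + b + c) / 2.0                               # semi-perimeter
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    return area / s                                     # r = Area / s

print(incircle_radius(3.0, 4.0, 5.0))   # 1.0 for the 3-4-5 right triangle
```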

Keywords: Circle packing, computer-aided geometric design, geometric constraint solver, Malfatti’s problem.

137 Hierarchies Based On the Number of Cooperating Systems of Finite Automata on Four-Dimensional Input Tapes

Authors: Makoto Sakamoto, Yasuo Uchida, Makoto Nagatomo, Takao Ito, Tsunehiro Yoshinaga, Satoshi Ikeda, Masahiro Yokomichi, Hiroshi Furutani

Abstract:

In theoretical computer science, the Turing machine has played a number of important roles in understanding and exploiting basic concepts and mechanisms in computing and information processing [20]; it is a simple mathematical model of computers [9]. Later, M. Blum and C. Hewitt first proposed two-dimensional automata as a computational model of two-dimensional pattern processing and investigated their pattern recognition abilities in 1967 [7]. Since then, many researchers in this field have investigated the properties of automata on two- or three-dimensional tapes. On the other hand, the question of whether processing four-dimensional digital patterns is much more difficult than processing two- or three-dimensional ones is of great interest from both the theoretical and practical standpoints. Thus, the study of four-dimensional automata as a computational model of four-dimensional pattern processing has been meaningful [8]-[19], [21]. This paper introduces a cooperating system of four-dimensional finite automata as one model of four-dimensional automata. Such a system consists of a finite number of four-dimensional finite automata and a four-dimensional input tape on which these finite automata work independently (in parallel). Finite automata whose input heads scan the same cell of the input tape can communicate with each other; that is, every finite automaton is allowed to know the internal states of the other finite automata on the cell it is scanning at the moment. In this paper, we mainly investigate the accepting powers of cooperating systems of eight-way and seven-way four-dimensional finite automata. A seven-way four-dimensional finite automaton is an eight-way four-dimensional finite automaton whose input head can move east, west, south, north, up, down, or in the future, but not in the past, on a four-dimensional input tape.

Keywords: computational complexity, cooperating system, finite automaton, four-dimension, hierarchy, multihead.

136 Memristor: A Promising Candidate for Neural Circuits in Neuromorphic Computing Systems

Authors: Juhi Faridi, Mohd. Ajmal Kafeel

Abstract:

Advancements in the field of Artificial Intelligence (AI) and technology have led to the evolution of an intelligent era. Neural networks, which have computational power and learning ability similar to the brain, are one of the key AI technologies. A neuromorphic computing system (NCS) consists of synaptic devices, neuronal circuits, and a neuromorphic architecture. Memristors are a promising candidate for neuromorphic computing systems, but when it comes to neuromorphic computing, the conductance behavior of synaptic or neuronal memristors needs to be studied thoroughly in order to connect the neuroscience with the computer science. Furthermore, more simulation work is needed to utilize existing device properties and to provide guidance for the development of future devices with different performance requirements. Hence, the development of an NCS needs more simulation work that makes use of existing device properties. This work aims to provide insight into building neuronal circuits using memristors to achieve a memristor-based NCS. We review the research conducted on memristor-based analog and digital circuits in order to motivate research in the field of NCS through memristor-based neural circuits for advanced AI applications. This literature review is a step in that direction: it describes the key findings about memristors and the analog and digital circuits implemented over the years, which can be further utilized in implementing the neuronal circuits of an NCS, and it aims to help electronic circuit designers understand how research on memristors has progressed and how these findings can be used in implementing the neuronal circuits needed for recent progress in the NCS.
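
As a starting point for the simulation work called for above, here is a hedged sketch of the linear ion-drift memristor model in the style of Strukov et al., integrated with forward Euler; all device parameters and the drive signal are illustrative.

```python
import numpy as np

# Linear ion-drift memristor model, forward-Euler integration.
Ron, Roff, D, mu = 100.0, 16e3, 10e-9, 1e-14   # ohms, ohms, m, m^2/(V*s)
dt = 1e-4
t = np.arange(0, 0.1, dt)
v = np.sin(2 * np.pi * 10 * t)                  # sinusoidal drive voltage

w = 0.1 * D                                     # initial doped-region width
i_hist = []
for vk in v:
    R = Ron * (w / D) + Roff * (1 - w / D)      # state-dependent resistance
    i = vk / R
    w = np.clip(w + mu * Ron / D * i * dt, 0.0, D)   # ion-drift state update
    i_hist.append(i)

# Plotting i_hist against v traces the pinched hysteresis loop that makes
# the memristor attractive as a synaptic element.
```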

Keywords: Analog circuits, digital circuits, memristors, neuromorphic computing systems.

135 Design and Construction Validation of Pile Performance through High Strain Pile Dynamic Tests for both Contiguous Flight Auger and Drilled Displacement Piles

Authors: S. Pirrello

Abstract:

Sydney’s booming real estate market has pushed property developers to invest in historically “no-go” areas, which were previously too expensive to develop. These areas are usually near rivers, where the sites are underlain by deep alluvial and estuarine sediments. In these ground conditions, conventional bored pile techniques are often not competitive. Contiguous Flight Auger (CFA) and Drilled Displacement (DD) pile techniques, on the other hand, are suitable for these ground conditions. This paper deals with the design and construction challenges encountered with these piling techniques for a series of high-rise towers in Sydney’s west. The advantages of DD over CFA piles, such as reduced overall spoil with substantial cost savings and achievable rock sockets in medium-strength bedrock, are discussed. Design performance was assessed with PIGLET. Pile performance is validated in two stages: during construction, with the interpretation of real-time data from the piling rigs’ on-board computers, and after construction, with analyses of results from high-strain pile dynamic testing (PDA). Results are then presented and discussed. High-strain testing data are presented as Case Pile Wave Analysis Program (CAPWAP) analyses.

Keywords: Contiguous flight auger, case pile wave analysis, high strain pile, drilled displacement, pile performance.

134 Verification of Space System Dynamics Using the MATLAB Identification Toolbox in Space Qualification Test

Authors: Y. V. Kim

Abstract:

This article presents an approach to the functional testing of a space system (SS), which could be a space vehicle (spacecraft, S/C) and/or its equipment and components (S/C subsystems). This test should finalize the Space Qualification Test (SQT) campaign. It can be considered a generic test, usable for a wide class of SS that, from the point of view of system dynamics and control theory, can be described by ordinary differential equations. The suggested methodology is based on a semi-natural experimental laboratory stand that does not require complicated, precise and expensive technological control-verification equipment. However, it allows for testing the totally assembled system, involving the system hardware (HW) and software (SW), during Assembly, Integration and Testing (AIT) activities at the final phase of the SQT. The test physically activates the system inputs (sensors) and outputs (actuators) and requires recording their outputs in real time. The data are then loaded into a laboratory computer, where they are post-processed by the MATLAB/Simulink Identification Toolbox. This allows the system dynamics to be estimated, in the form of estimates of its differential equation coefficients, from the verification test, and compared with the expected mathematical model that was verified earlier by mathematical simulation during the design process. The mathematical simulation results presented in the article show that this approach is applicable and helpful in SQT practice. Further semi-natural experiments should specify the detailed requirements for the test laboratory equipment and test procedures.
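
The principle behind the post-processing step can be sketched in a few lines (in Python rather than the MATLAB Identification Toolbox the paper uses): fit a discrete ARX model to recorded input/output data by least squares and compare the identified coefficients with the design model. The plant coefficients and the noise level below are invented for illustration.

```python
import numpy as np

# Fit y[n] = a1*y[n-1] + a2*y[n-2] + b*u[n-1] to recorded data.
rng = np.random.default_rng(0)
n = 500
u = rng.normal(size=n)                      # recorded actuator input
y = np.zeros(n)
for k in range(2, n):                       # "true" plant generating data
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1]
y += rng.normal(0, 0.01, n)                 # measurement noise

phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])       # regressors
theta, *_ = np.linalg.lstsq(phi, y[2:], rcond=None)
print(theta)   # ~ [1.5, -0.7, 0.5]: identified model coefficients to be
               # compared against the prematurely verified design model
```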

Keywords: System dynamics, space system ground tests, space qualification, system dynamics identification, satellite attitude control, assembling integration and testing.

133 Classification of Acoustic Emission Based Partial Discharge in Oil Pressboard Insulation System Using Wavelet Analysis

Authors: Prasanta Kundu, N.K. Kishore, A.K. Sinha

Abstract:

The insulation used in transformers is mostly oil-pressboard insulation, and insulation failure is one of the major causes of catastrophic transformer failure. It is established that partial discharges (PD) cause insulation degradation and premature insulation failure, and online monitoring of PDs can reduce the risk of catastrophic failure of transformers. There are different techniques for partial discharge measurement: electrical, optical, acoustic, opto-acoustic and ultra-high frequency (UHF). Being non-invasive and immune to interference, the acoustic emission technique is advantageous for online PD measurement. Acoustic detection of PD is based on the retrieval and analysis of the mechanical or pressure signals produced by partial discharges. Partial discharges are classified according to the origin of the discharges, and their effects on insulation deterioration differ by type. This paper reports experimental results and analysis for the classification of partial discharges using the acoustic emission signals of laboratory-simulated partial discharges in an oil-pressboard insulation system with three different electrode systems. The acoustic emission signals produced by PD are detected by sensors mounted on the experimental tank surface, stored on an oscilloscope and fed to a computer for further analysis. The measured AE signals are analyzed using discrete wavelet transform (DWT) analysis and wavelet packet analysis, and the energy distribution in the different frequency bands of the decomposed signals is calculated. These analyses show distinct features useful for PD classification, and wavelet packet analysis can sort out misclassifications arising from the DWT in most cases.
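
The band-energy feature used for classification can be reproduced with the PyWavelets package; a minimal sketch, where the db4 wavelet, the 5 decomposition levels, and the synthetic damped-oscillation "PD burst" are illustrative assumptions.

```python
import numpy as np
import pywt

def band_energies(signal, wavelet="db4", level=5):
    """Energy distribution across the frequency bands of a discrete
    wavelet decomposition - the PD-classification feature described above."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()          # normalized per-band energy

# Toy AE record: a damped oscillation on noise, standing in for a PD pulse.
t = np.linspace(0, 1e-3, 2048)
burst = np.exp(-t * 8e3) * np.sin(2 * np.pi * 150e3 * t)
sig = burst + np.random.default_rng(0).normal(0, 0.02, t.size)
print(band_energies(sig))   # feature vector fed to the classifier
```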

Keywords: Acoustic emission, discrete wavelet transform, partial discharge, wavelet packet analysis.

132 Simulation Aided Life Cycle Sustainability Assessment Framework for Manufacturing Design and Management

Authors: Mijoh A. Gbededo, Kapila Liyanage, Ilias Oraifige

Abstract:

Decision making for sustainable manufacturing design and management requires critical consideration due to the complexity and partly conflicting issues of economic, social and environmental factors. Although there are tools capable of assessing a combination of one or two of the sustainability factors, existing frameworks have not adequately integrated all three. Case studies and a review of existing simulation applications also show that the approach lacks integration of the sustainability factors. In this paper, we discuss the development of a simulation-based framework to support a holistic assessment of sustainable manufacturing design and management. To achieve this, a strategic approach is introduced to investigate the strengths and weaknesses of the existing decision-support tools. The investigation reveals that Discrete Event Simulation (DES) can serve as a foundation for other life cycle analysis frameworks: the Simio DES application optimizes systems for both economic and competitive advantage; Granta CES EduPack and SimaPro collate data for material flow analysis and environmental life cycle assessment; and social and stakeholder analysis is supported by the Analytical Hierarchy Process (AHP), a multi-criteria decision analysis method. Such a common, integrated framework creates a platform on which companies can build a computer simulation model of a real system and assess the impact of alternative solutions before implementing a chosen one.
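
The AHP step reduces to extracting priority weights from a pairwise comparison matrix via its principal eigenvector; a minimal sketch follows, where the example matrix comparing economic, social and environmental criteria is purely illustrative.

```python
import numpy as np

# Pairwise comparison matrix: economic vs. social vs. environmental
# (Saaty-scale judgments, invented for illustration).
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                  # normalized stakeholder priority weights
print(w.round(3))                # approximately [0.648, 0.230, 0.122]
```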

Keywords: Discrete event simulation, life cycle sustainability analysis, manufacturing, sustainability.
