Search results for: brain machine interface (BMI)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5058

1008 Organ Dose Calculator for Fetus Undergoing Computed Tomography

Authors: Choonsik Lee, Les Folio

Abstract:

Pregnant patients may undergo CT in emergencies unrelated to pregnancy, and the potential risk to the developing fetus is of concern. It is therefore critical to accurately estimate fetal organ doses in CT scans. We developed a fetal organ dose calculation tool using pregnancy-specific computational phantoms combined with Monte Carlo radiation transport techniques. We adopted a series of pregnancy computational phantoms developed at the University of Florida at gestational ages of 8, 10, 15, 20, 25, 30, 35, and 38 weeks (Maynard et al. 2011). More than 30 organs and tissues and 20 skeletal sites are defined in each fetus model. We calculated fetal organ doses normalized by CTDIvol to derive organ dose conversion coefficients (mGy/mGy) for the eight fetuses at consecutive slice locations, ranging from the top to the bottom of the pregnancy phantoms with 1 cm slice thickness. Organ dose from helical scans was approximated by the summation of doses from the multiple axial slices included in the given scan range of interest. We then compared dose conversion coefficients for major fetal organs in abdominal-pelvic CT scans of the pregnancy phantoms with the uterine dose of a non-pregnant adult female computational phantom. A comprehensive library of organ dose conversion coefficients was established for the eight developing fetuses undergoing CT. The coefficients were implemented into an in-house graphical user interface-based computer program for convenient estimation of fetal organ doses from the CT technical parameters and the gestational age of the fetus. We found that the esophagus received the lowest dose, whereas the kidneys received the highest dose, in all fetuses in abdominal-pelvic scans of the pregnancy phantoms. We also found that when the uterine dose of a non-pregnant adult female phantom is used as a surrogate for fetal organ doses, the root-mean-square error ranged from 0.08 mGy (8 weeks) to 0.38 mGy (38 weeks). The uterine dose was up to 1.7-fold greater than the esophagus dose of the 38-week fetus model. The calculation tool should be useful in cases requiring fetal organ dose estimates for emergency CT scans, as well as for patient dose monitoring.
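The CTDIvol-normalized dose-conversion step described above can be sketched in a few lines; the conversion coefficients and surrogate uterine dose below are illustrative placeholders, not values from the paper's phantom library.

```python
# Sketch of the CTDIvol-normalized organ-dose approach described above.
# The conversion coefficients are hypothetical, not the paper's values.

def fetal_organ_dose(ctdi_vol_mgy, coefficients):
    """Scale CTDIvol (mGy) by per-organ dose conversion coefficients (mGy/mGy)."""
    return {organ: c * ctdi_vol_mgy for organ, c in coefficients.items()}

def rmse(organ_doses, surrogate):
    """Root-mean-square error of a single surrogate dose vs. the organ doses."""
    errors = [(d - surrogate) ** 2 for d in organ_doses.values()]
    return (sum(errors) / len(errors)) ** 0.5

# Hypothetical coefficients for one fetus model.
coeffs = {"brain": 0.75, "kidneys": 1.25, "esophagus": 0.5}
doses = fetal_organ_dose(10.0, coeffs)   # a 10 mGy CTDIvol scan
print(doses["kidneys"])                  # 12.5 mGy
print(round(rmse(doses, 9.5), 3))        # error when a 9.5 mGy uterine dose is the surrogate
```

The RMSE line mirrors the paper's surrogate comparison: one uterine dose stands in for every fetal organ dose, and the error grows as the organ doses spread apart.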

Keywords: computed tomography, fetal dose, pregnant women, radiation dose

Procedia PDF Downloads 122
1007 The Material-Process Perspective: Design and Engineering

Authors: Lars Andersen

Abstract:

The development of design and engineering in large construction projects is characterized by an increasing flattening of formal structures, extended use of parallel and integrated processes (‘Integrated Concurrent Engineering’), and a growing number of expert disciplines. The integration process is based on ongoing collaboration, dialogue, intercommunication, and comments on each other’s work (iterations). This process of reciprocal communication between actors and disciplines triggers value creation. However, communication between equals is not in itself sufficient to create effective decision making. The complexity of the process and time pressure contribute to an increased risk of a decision deficit and loss of process control. The paper refers to a study that aims at developing a resilient decision-making system that does not conflict with communication processes based on equality between the disciplines in the process. The study follows the construction of a hospital through the design, engineering, and physical building phases. The research method is a combination of formative process research, process tracking, and phenomenological analysis. The study traced challenges and problems in the building process back to the projection substrates (drawings and models) and further to the organization of the engineering and design phase. A comparative analysis of traditional and new ways of organizing the design work made it possible to uncover an implicit material order, or structure, in the process. This uncovering implied the development of a material-process perspective. According to this perspective, the complexity of the process is rooted in material-functional differentiation. This differentiation presupposes a structuring material (the skeleton of the building) that coordinates the other types of material. Each expert discipline's competence is related to one or a set of materials. The architect, the consulting structural engineer, and so on have their competencies related to the structuring material and, inherent in this, coordination competence. When dialogues between the disciplines concerning the coordination between them do not result in agreement, the disciplines responsible for the structuring material decide the interface issues. Based on these premises, this paper develops a self-organized, expert-driven interdisciplinary decision-making system.

Keywords: collaboration, complexity, design, engineering, materiality

Procedia PDF Downloads 204
1006 Analyzing the Causes of Amblyopia among Patients in Tertiary Care Center: Retrospective Study in King Faisal Specialist Hospital and Research Center

Authors: Hebah M. Musalem, Jeylan El-Mansoury, Lin M. Tuleimat, Selwa Alhazza, Abdul-Aziz A. Al Zoba

Abstract:

Background: Amblyopia is a condition affecting the visual system that causes a decrease in visual acuity without a known underlying pathology. It is due to abnormal visual development in childhood or infancy. Most importantly, the vision loss is preventable or reversible with the right intervention in most cases. Strabismus, sensory defects, and anisometropia are all well-known causes of amblyopia; the ocular misalignment of strabismus is considered the most common form worldwide. The risk of developing amblyopia increases in children who are premature, developmentally delayed, or have brain lesions affecting the visual pathway. According to the literature, the prevalence of amblyopia worldwide varies between 2 and 5%. Objective: To determine the different causes of amblyopia in pediatric patients seen in the ophthalmology clinic of a tertiary care center, King Faisal Specialist Hospital and Research Center (KFSH&RC). Methods: This is a hospital-based retrospective study based on a review of patient files in the Ophthalmology Department of KFSH&RC in Riyadh, Kingdom of Saudi Arabia. Inclusion criteria: amblyopic pediatric patients between 6 months and 18 years old who attended the clinic from 2015 to 2016. Exclusion criteria: patients above 18 years of age and any patient too uncooperative to obtain an accurate vision measurement or a proper refraction. Detailed ocular and medical histories were recorded. The examination protocol included a full ocular exam, full cycloplegic refraction, visual acuity measurement, and ocular motility and strabismus evaluation. All data were organized in tables and graphs and analyzed by a statistician. Results: Our preliminary results will be presented by the corresponding author. Conclusions: In this study, we focused on utilizing various examination techniques, which enhanced our results and highlighted a clear correlation between amblyopia and its causes. The paper's recommendations emphasize critical testing protocols to be followed for amblyopic patients, especially in tertiary care centers.

Keywords: amblyopia, amblyopia causes, amblyopia diagnostic criterion, amblyopia prevalence, Saudi Arabia

Procedia PDF Downloads 142
1005 Effects of a Head Mounted Display Adaptation on Reaching Behaviour: Implications for a Therapeutic Approach in Unilateral Neglect

Authors: Taku Numao, Kazu Amimoto, Tomoko Shimada, Kyohei Ichikawa

Abstract:

Background: Unilateral spatial neglect (USN) is a common syndrome following damage to one hemisphere of the brain (usually the right side), in which a patient fails to report or respond to stimulation from the contralesional side. These symptoms are not due to primary sensory or motor deficits but instead reflect an inability to process input from that side of the environment. Prism adaptation (PA) is a therapeutic treatment for USN in which a patient’s visual field is artificially shifted laterally, resulting in sensory-motor adaptation. However, patients with USN also tend to perceive a left-leaning subjective vertical in the frontal plane. Traditional PA cannot be used to correct a tilt in the subjective vertical, because a prism can only displace, not twist, the surroundings. This can, however, be accomplished using a head mounted display (HMD) and a web camera. Therefore, this study investigated whether an HMD system could be used to correct the spatial perception of USN patients in the frontal as well as the horizontal plane. We recruited healthy subjects in order to collect data for the refinement of USN patient therapy. Methods: Eight healthy subjects sat on a chair wearing an HMD (Oculus Rift DK2) with a web camera (Ovrvision) displaying a 10-degree leftward rotation and a 10-degree counter-clockwise rotation in the frontal plane. Subjects attempted to point a finger at one of four targets, assigned randomly, a total of 48 times. Before and after the intervention, each subject’s body-centre judgment (BCJ) was tested by asking them to point a finger at a touch panel straight in front of their xiphisternum, 10 times, without visual feedback. Results: The intervention caused the location pointed to during the BCJ to shift 35 ± 17 mm (mean ± SD) leftward in the horizontal plane and 46 ± 29 mm downward in the frontal plane. The results in both planes were significant by paired t-test (p < .01). Conclusions: The results in the horizontal plane are consistent with those observed following PA. Furthermore, the HMD and web camera were able to elicit 3D effects in both the horizontal and frontal planes. Future work will focus on applying this method to patients with and without USN and investigating whether subject posture is also affected by the HMD system.
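The pre/post body-centre-judgment comparison above rests on a paired t-test; a minimal sketch follows, with hypothetical pointing data rather than the study's measurements.

```python
# Minimal paired t-test sketch for the pre/post BCJ comparison above.
# The horizontal pointing positions (mm) below are hypothetical.
import math

def paired_t(pre, post):
    """Return the paired t statistic and degrees of freedom."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

# Hypothetical positions for 8 subjects (mm, positive = rightward).
pre  = [2, -1, 0, 3, 1, -2, 0, 1]
post = [-30, -38, -35, -28, -33, -40, -36, -34]
t, df = paired_t(pre, post)
print(round(t, 2), df)  # a large negative t indicates a leftward shift
```

The t statistic would then be compared against the t distribution with n − 1 degrees of freedom to obtain the p-value reported in the abstract.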

Keywords: head mounted display, posture, prism adaptation, unilateral spatial neglect

Procedia PDF Downloads 263
1004 Smart Help at the Workplace for Persons with Disabilities (SHW-PWD)

Authors: Ghassan Kbar, Shady Aly, Ibrahim Alsharawy, Akshay Bhatia, Nur Alhasan, Ronaldo Enriquez

Abstract:

Smart Help for persons with disability (PWD) is part of the SMARTDISABLE project, which aims to develop solutions that provide an adequate workplace environment for PWD. It supports PWD needs through a smart help facility, which allows them to access relevant information and communicate with others effectively and flexibly, and a smart editor, which assists them in their daily work. It supports PWD in knowledge processing and creation and enables them to be productive at the workplace. The technical work of the project involves the design of a technological scenario for Ambient Intelligence (AmI) based assistive technologies at the workplace: an integrated universal smart solution that suits many different impairment conditions and is designed to empower physically disabled persons (PDP) to access and effectively utilize ICTs in order to execute knowledge-rich working tasks with minimum effort and a sufficient comfort level. The proposed solution supports voice recognition, along with a normal keyboard and mouse, to control the smart help and smart editor through a dynamic auto-display interface that satisfies the requirements of different PWD groups. In addition, the smart help provides intelligent intervention based on the behavior of the PWD, guiding them and warning them about possible mistakes. PWD can communicate with others using Voice over IP controlled by voice recognition. Moreover, an Auto Emergency Help Response is supported to assist PWD in case of emergency. The proposed solution is intended to make PWD effective and flexible in the work environment, using voice to conduct their tasks. It aims to provide favorable outcomes for PWD at the workplace, along with the opportunity to participate in the PWD assistive technology innovation market, which is still small but growing rapidly, and to upgrade their quality of life at the workplace to match that of their colleagues. Finally, the proposed smart help solution is applicable in all workplace settings, including offices, manufacturing, and hospitals.

Keywords: ambient intelligence, ICT, persons with disability PWD, smart application, SHW

Procedia PDF Downloads 406
1003 Mechanical Characterization of Porcine Skin with the Finite Element Method Based Inverse Optimization Approach

Authors: Djamel Remache, Serge Dos Santos, Michael Cliez, Michel Gratton, Patrick Chabrand, Jean-Marie Rossi, Jean-Louis Milan

Abstract:

Skin tissue is an inhomogeneous and anisotropic material. Uniaxial tensile testing is one of the primary techniques for the mechanical characterization of skin at large scales. Direct or inverse analytical approaches are often used to predict the mechanical behavior of materials. However, for an inhomogeneous and anisotropic material such as skin tissue, analytical approaches cannot provide solutions, and numerical simulation is thus necessary. In this work, uniaxial tensile testing and an FEM (finite element method) based inverse method were used to identify the anisotropic mechanical properties of porcine skin tissue. The uniaxial tensile experiments were performed using an Instron 8800® tensile machine. The uniaxial tensile test was simulated with FEM, and the inverse optimization approach (or inverse calibration) was then used to identify the mechanical properties of the samples. Experimental results were compared to the finite element solutions. The comparison showed that the finite element model predictions of the mechanical behavior of the tested skin samples correlated well with the experimental measurements.
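The inverse-calibration loop described above can be sketched as follows. In a real pipeline the candidate evaluation would call the finite element solver; here a toy closed-form linear stress model stands in so the loop itself is runnable.

```python
# Sketch of an inverse-calibration loop. A real pipeline would run the
# FEM simulation at each candidate; a toy linear model stands in here.

def simulated_stress(strain, stiffness):
    """Stand-in for the FEM prediction: stress = stiffness * strain."""
    return [stiffness * e for e in strain]

def calibrate(strain, measured, candidates):
    """Pick the stiffness minimizing squared error vs. the tensile test."""
    def sse(k):
        sim = simulated_stress(strain, k)
        return sum((s - m) ** 2 for s, m in zip(sim, measured))
    return min(candidates, key=sse)

strain = [0.01, 0.02, 0.03, 0.04]
measured = [0.5, 1.0, 1.5, 2.0]           # MPa, hypothetical test data
best = calibrate(strain, measured, [25, 50, 75, 100])
print(best)  # the 50 MPa candidate reproduces the measured curve
```

For an anisotropic material, the same loop would search over a vector of parameters (e.g., directional moduli) with a gradient-free or gradient-based optimizer instead of a candidate grid.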

Keywords: mechanical skin tissue behavior, uniaxial tensile test, finite element analysis, inverse optimization approach

Procedia PDF Downloads 392
1002 Theoretical Performance of a Sustainable Clean Energy On-Site Generation Device to Convert Consumers into Producers and Its Possible Impact on Electrical National Grids

Authors: Eudes Vera

Abstract:

In this paper, a theoretical evaluation is carried out of the performance of a forthcoming fuel-less clean energy generation device, the Air Motor. The underlying physical principles that support this technology are succinctly described. Examples of the machine and theoretical values of input and output powers are also given. In addition, its main features, such as portability, on-site energy generation and delivery, miniaturization of generation plants, efficiency, and scaling down of the whole electric infrastructure, are discussed. The main component of the Air Motor, the Thermal Air Turbine, generates useful power by converting into mechanical energy part of the thermal energy contained in a fan-produced airflow while leaving its kinetic energy intact. Due to this fact, an Air Motor can contain a long succession of identical air turbines, and the total power generated from a single airflow can be very large, as can its mechanical efficiency. Using the corresponding formulae, it is found that the mechanical efficiency of this device can be much greater than 100%, while its thermal efficiency is always less than 100%. On account of its multiple advantages, the Air Motor seems to be a suitable device for converting energy consumers into energy producers worldwide. If so, current national electrical grids might no longer be necessary, because it does not seem practical or economical to bring energy from far away when it can be generated and consumed locally at the consumer's premises using just the thermal energy contained in the ambient air.

Keywords: electrical grid, clean energy, renewable energy, in situ generation and delivery, generation efficiency

Procedia PDF Downloads 166
1001 Designing and Implementing a Tourist-Guide Web Service Based on Volunteer Geographic Information Using Open-Source Technologies

Authors: Javad Sadidi, Ehsan Babaei, Hani Rezayan

Abstract:

The advent of Web 2.0 makes it possible to scale down the costs of data collection and mapping, particularly when the process is done by volunteers. Every volunteer can be thought of as a free and ubiquitous sensor collecting spatial, descriptive, and multimedia data for tourist services. The lack of large-scale information, such as real-time climate and weather conditions, population density, and other related data, is one of the important challenges for tourists in developing countries in making the best decision about the time and place of travel. The current research aims to design and implement a spatiotemporal web map service using volunteer-submitted data. The service acts as a tourist guide in which tourists can search places of interest for their requested travel time. The service is designed with a three-tier architecture comprising data, logical processing, and presentation tiers. For implementation, open-source software, client- and server-side programming languages and libraries (such as OpenLayers2, AJAX, and PHP), Geoserver as a map server, and the Web Feature Service (WFS) standard have been used. The result is two distinct browser-based services: one for submitting spatial, descriptive, and multimedia volunteer data, and another for tourists and local officials. Local officials confirm the veracity of the volunteer-submitted information. In the tourist interface, a spatiotemporal search engine enables tourists to find a tourist place by province, city, and location at a specific time of interest. Implementing the tourist-guide service with this methodology has the following effects: current tourists participate in a free data collection and sharing process for future tourists; data are shared and accessed in real time by all; blind selection of travel destinations is avoided; and, significantly, the cost of providing such services decreases.
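The spatiotemporal search described above can be sketched as a simple filter over volunteer-submitted records; the field names and records here are assumptions for illustration, while the real service issues WFS queries against Geoserver.

```python
# Sketch of the spatiotemporal search over volunteer records described
# above. Field names and records are hypothetical illustrations.
from datetime import date

records = [
    {"name": "waterfall", "province": "Tehran", "open_from": date(2023, 4, 1),
     "open_to": date(2023, 10, 1), "confirmed": True},
    {"name": "museum", "province": "Tehran", "open_from": date(2023, 1, 1),
     "open_to": date(2023, 12, 31), "confirmed": False},  # awaiting official review
]

def search(records, province, travel_date):
    """Return officially confirmed places open on the requested date."""
    return [r["name"] for r in records
            if r["confirmed"]
            and r["province"] == province
            and r["open_from"] <= travel_date <= r["open_to"]]

print(search(records, "Tehran", date(2023, 7, 15)))  # ['waterfall']
```

In the deployed service the same predicate (place plus time window, restricted to official-confirmed records) would be expressed as a WFS filter evaluated by the map server rather than in client code.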

Keywords: VGI, tourism, spatiotemporal, browser-based, web mapping

Procedia PDF Downloads 77
1000 Morphology Evolution in Titanium Dioxide Nanotubes Arrays Prepared by Electrochemical Anodization

Authors: J. Tirano, H. Zea, C. Luhrs

Abstract:

Photocatalysis has become established as a viable option in the development of processes for the treatment of pollutants and for clean energy production. It is based on the ability of semiconductors to generate an electron flow through interaction with solar radiation. Owing to its electronic structure, TiO₂ is the semiconductor most frequently used in photocatalysis, although it suffers from high recombination of photogenerated charges and low solar energy absorption. An alternative for reducing these limitations is the use of nanostructured morphologies, which can be produced during the synthesis of TiO₂ nanotubes (TNTs). If vertically oriented nanostructures can be produced, a greater contact area with the electrolyte and better charge transfer become possible. At present, however, the development of these innovative structures still presents an important challenge for competitive photoelectrochemical devices. This research focuses on establishing correlations between synthesis variables and 1D nanostructure morphology, which has a direct effect on photocatalytic performance. TNTs with controlled morphology were synthesized by two-step potentiostatic anodization of titanium foil. The anodization was carried out at room temperature in an electrolyte composed of ammonium fluoride, deionized water, and ethylene glycol. Subsequent thermal annealing of the as-prepared TNTs was conducted in air between 450 °C and 550 °C. The morphology and crystalline phase of the TNTs were characterized by SEM, EDS, and XRD analysis. As a result, synthesis conditions were established to produce nanostructures with specific morphological characteristics, and anatase was the predominant phase of the TNTs after thermal treatment. Nanotubes 10 μm in length, with a 40 nm pore diameter and a surface-to-volume ratio of 50, are important in TiO₂-based photoelectrochemical applications due to their 1D characteristics, high surface-to-volume ratio, reduced radial dimensions, and large oxide/electrolyte interface. Finally, this knowledge can be used to improve the photocatalytic activity of TNTs through additional surface modifications with dopants that improve their efficiency.

Keywords: electrochemical anodization, morphology, self-organized nanotubes, TiO₂ nanotubes

Procedia PDF Downloads 138
999 Organic Substance Removal from Pla-Som Family Industrial Wastewater through APCW System

Authors: W. Wararam, K. Angchanpen, T. Pattamapitoon, K. Chunkao, O. Phewnil, M. Srichomphu, T. Jinjaruk

Abstract:

The research focused on the efficiency of treating high-organic-load wastewater from the pla-som production process with anaerobic tanks, oxidation ponds, and constructed wetland treatment systems (APCW). The combined system consisted of a 50-mm plastic screen, five 5.8 m3 oil-grease trap tanks (2-day hydraulic retention time; HRT), four 4.3 m3 anaerobic tanks (1-day HRT), a 16.7 m3 oxidation pond no. 1 (7-day HRT), a 12.0 m3 oxidation pond no. 2 (3-day HRT), and an 8.2 m3 constructed wetland plot (1-day HRT). Fresh raw fish were washed, sliced into small pieces, and ground into fish meat in a blender. The fish meat was rinsed in 8 rounds, rounds 1, 2, 3, 5, 6, and 7 with tap water and rounds 4 and 8 with rice-wash water, before being mixed with salt, garlic, steamed rice, and monosodium glutamate and wrapped in plastic for 72 hours until edible. During pla-som production, the rinse wastewater, about 5 m3/day, was fed to the treatment system and held stagnant in its components. The results showed that: 1) the treatment efficiencies for BOD, COD, TDS, and SS were 93%, 95%, 32%, and 98%, respectively; 2) the treatment was conducted with 500 kg of raw fish using the full set of high-organic-load wastewater treatment components; 3) the treatment efficiency and quantity followed a similar trend across all indicators; and 4) the small pieces of fish meat and fish blood required more than a 3-day HRT in the anaerobic digestion process.
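The percentage efficiencies reported above follow the usual influent/effluent removal definition, which can be sketched as follows; the concentrations used are hypothetical.

```python
# The treatment efficiencies above follow the standard removal formula:
# percent removal = 100 * (influent - effluent) / influent.

def removal_efficiency(influent, effluent):
    """Percent removal of a pollutant indicator (e.g. BOD, COD, TDS, SS)."""
    return 100.0 * (influent - effluent) / influent

# Hypothetical BOD concentrations (mg/L) entering and leaving the APCW
# system; chosen to illustrate a 93 % removal like the one reported.
print(round(removal_efficiency(1500.0, 105.0)))
```

The same formula applied per indicator (BOD, COD, TDS, SS) yields the four percentages reported in the results.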

Keywords: organic substance, Pla-Som family industry, wastewater, APCW system

Procedia PDF Downloads 341
998 Improvement of Microstructure, Wear and Mechanical Properties of Modified G38NiCrMo8-4-4 Steel Used in Mining Industry

Authors: Mustafa Col, Funda Gul Koc, Merve Yangaz, Eylem Subasi, Can Akbasoglu

Abstract:

G38NiCrMo8-4-4 steel is widely used in the mining industry, machine parts, and gears due to its high strength and toughness. In this study, the microstructure, wear, and mechanical properties of boron-modified G38NiCrMo8-4-4 steel used in the mining industry were investigated. For this purpose, cast materials were alloyed by melting in an induction furnace to include boron at levels of 0 ppm, 15 ppm, and 50 ppm (wt.) and were formed into blocks of 150x200x150 mm by casting into a sand mould. A homogenization heat treatment was applied to the specimens at 1150˚C for 7 hours. All specimens were then austenitized at 930˚C for 1 hour, quenched in a polymer solution, and tempered at 650˚C for 1 hour. The microstructures of the specimens were investigated using light microscopy and SEM to determine the effects of boron content and heat treatment conditions. Changes in microstructure and material hardness with increasing boron content and heat treatment conditions were identified from the microstructure investigations and hardness tests. Wear tests were carried out using a pin-on-disc tribometer under dry sliding conditions. The Charpy V-notch impact test was performed to determine the toughness of the specimens. Fracture and worn surfaces were investigated with a scanning electron microscope (SEM). The results show that boron has a positive effect on the hardness and wear properties of G38NiCrMo8-4-4 steel.

Keywords: G38NiCrMo8-4-4 steel, boron, heat treatment, microstructure, wear, mechanical properties

Procedia PDF Downloads 182
997 The Impact of Artificial Intelligence in the Development of Textile and Fashion Industry

Authors: Basem Kamal Abasakhiroun Farag

Abstract:

Fashion, like many other areas of design, has undergone numerous developments over the centuries. The aim of this article is to recognize and evaluate the importance of advanced technologies in fashion design and to examine how they are transforming the role of contemporary fashion designers by transforming the creative process. It also discusses how contemporary culture is involved in such developments and how it influences fashion design in terms of conceptualization and production. The methodology is based on examining various examples of the use of technology in fashion design and drawing parallels between what was feasible then and what is feasible today. By comparing case studies, examples of existing fashion designs, and experience with craft methods, we observe patterns that help us predict the direction of future developments in this area. Discussing the technological elements in fashion design helps us understand the driving force behind the trend. The research presented in the article shows a trend of significantly increasing interest and progress in the field of fashion technology, leading to the emergence of hybrid artisanal methods. In summary, as fashion technologies advance, their role in clothing production is becoming increasingly important, extending far beyond the humble sewing machine.

Keywords: fashion design, identity, textiles, ambient intelligence, proximity sensors, shape memory materials, sound-sensing garments, wearable technology, bio textiles, fashion trends, nano textiles, new materials, smart textiles, techno textiles, functional aesthetics, 3D printing

Procedia PDF Downloads 38
996 Peruvian Diagnostic Reference Levels for Patients Undergoing Different X-Rays Procedures

Authors: Andres Portocarrero Bonifaz, Caterina Sandra Camarena Rodriguez, Ricardo Palma Esparza, Nicolas Antonio Romero Carlos

Abstract:

Reference levels for common X-ray procedures have been set in many protocols. In Peru, during quality control tests, the dose tolerance is set by these international recommendations. Nevertheless, further studies can be made to assess the national situation and relate dose levels to different parameters such as kV, mA/mAs, exposure time, and type of processing (digital, digitized, or conventional). In this paper, three radiologic procedures were studied: general X-rays (fixed and mobile), intraoral X-rays (fixed, mobile, and portable), and mammography. For this purpose, an Unfors Xi detector was used; the dose was measured at a focus-detector distance that varied with the procedure and was corrected afterward to obtain the entrance surface dose. The data used in this paper were gathered over a period of more than 3 years (2015-2018). In addition, each X-ray machine was taken into consideration only once. The goal is to establish a new standard that reflects local practice and addresses the issues of the ‘Bonn Call for Action’ in Peru. For this purpose, the 75th percentile of the dose for each radiologic procedure was calculated. In future quality control services, facilities whose machines show dose values higher than the selected threshold should be informed that they surpass the reference dose levels established in comparison with other radiological centers in the country.
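The 75th-percentile computation underlying the proposed reference levels can be sketched as follows; the surveyed dose values below are hypothetical, not the study's data.

```python
# Sketch of the 75th-percentile diagnostic reference level computation
# described above. The surveyed dose values are hypothetical.

def percentile_75(values):
    """75th percentile by linear interpolation between sorted samples."""
    s = sorted(values)
    rank = 0.75 * (len(s) - 1)
    lo = int(rank)
    frac = rank - lo
    if lo + 1 >= len(s):
        return s[lo]
    return s[lo] + frac * (s[lo + 1] - s[lo])

# Hypothetical entrance surface doses (mGy) from surveyed X-ray units,
# one measurement per machine.
doses = [1.2, 0.8, 2.5, 1.9, 3.1, 1.4, 2.2, 0.9, 1.7]
drl = percentile_75(doses)
print(round(drl, 2))  # machines above this value exceed the proposed DRL
```

One value per machine, as in the survey design above, prevents heavily used units from dominating the distribution.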

Keywords: general X-rays, intraoral X-rays, mammography, reference dose levels

Procedia PDF Downloads 139
995 Impact of the Non-Energy Sectors Diversification on the Energy Dependency Mitigation: Visualization by the “IntelSymb” Software Application

Authors: Ilaha Rzayeva, Emin Alasgarov, Orkhan Karim-Zada

Abstract:

This study considers the linkage between management and computer science in order to develop software named “IntelSymb”, a demo application demonstrating that data analysis of non-energy fields’ diversification can positively influence the mitigation of countries’ energy dependency. We analyzed 18 years of development across five economic sectors in 13 countries, identifying which patterns mostly prevailed and which may be dominant in the near future. To make our analysis solid and plausible, as future work we suggest developing a gateway or interface connected to all available online databases (WB, UN, OECD, U.S. EIA) for country-level analysis by field. The sample data consist of energy statistics (TPES and energy import indicators) and non-energy industry statistics (Main Science and Technology Indicators, Internet user index, and sales and production indicators) from 13 OECD countries over 18 years (1995-2012). Our results show that the diversification of non-energy industries can help decelerate energy-sector dependency (energy consumption and import dependence on crude oil). These results can provide empirical and practical support for energy and non-energy industry diversification policies, such as promoting the efficiency and management of Information and Communication Technologies (ICTs), services, and innovative technologies, in other OECD and non-OECD member states with similar energy utilization patterns and policies. Industries, including the ICT sector, generate around 4 percent of total GHG emissions, but this figure is much higher, around 14 percent, if indirect energy use is included. The ICT sector itself (excluding broadcasting) contributes approximately 2 percent of global GHG emissions, at just under 1 gigatonne of carbon dioxide equivalent (GtCO2eq). This can therefore serve as an example and a lesson for both energy-dependent and energy-independent countries, mainly emerging oil-based economies, and can motivate non-energy industry diversification so that they are prepared for an energy crisis and able to face any economic crisis as well.

Keywords: energy policy, energy diversification, “IntelSymb” software, renewable energy

Procedia PDF Downloads 212
994 Transient Response of Elastic Structures Subjected to a Fluid Medium

Authors: Helnaz Soltani, J. N. Reddy

Abstract:

The presence of a fluid medium interacting with a structure can lead to failure of the structure. Since the development of efficient computational models for fluid-structure interaction (FSI) problems has a broad impact on realistic problems encountered in the aerospace, ship, and oil and gas industries, among others, there is an increasing need for methods to investigate the effect of the fluid domain on the structural response. A coupled finite element formulation of problems involving FSI is an accurate way to predict the response of structures in contact with a fluid medium. This study proposes a finite element approach for studying the transient response of structures interacting with a fluid medium. Since beams and plates are considered the fundamental elements of almost any structure, the developed method is applied to beam and plate benchmark problems in order to demonstrate its efficiency. The formulation combines various structural theories with the solid-fluid interface boundary condition, which represents the interaction between the solid and fluid regimes. Here, three different beam theories as well as three different plate theories are considered to model the solid medium, and the Navier-Stokes equation is used as the governing equation of the fluid domain. For each theory, a coupled set of equations is derived, where the element matrices of both regimes are calculated by Gaussian quadrature integration. The main feature of the proposed methodology is to model the fluid domain as an added mass, i.e., an external distributed force due to the presence of the fluid. We validate the accuracy of the formulation by means of several numerical examples. Since the formulation presented in this study covers several theories from the literature, the applicability of our proposed approach is independent of the structure geometry.
The effect of varying parameters such as the structure thickness ratio, fluid density, and immersion depth is studied using numerical simulations. The results indicate that the maximum vertical deflection of the structure is affected considerably by the presence of a fluid medium.
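
The element matrices above are computed by Gaussian quadrature integration. As a minimal illustration (not the authors' actual element routine), a two-point Gauss-Legendre rule integrates polynomials up to degree three exactly, which is why low-order element matrices need only a few evaluation points:

```python
import math

def gauss_legendre_2pt(f):
    """Integrate f over [-1, 1] with the 2-point Gauss-Legendre rule.

    Nodes are +/- 1/sqrt(3) with unit weights; the rule is exact for
    polynomials up to degree 3.
    """
    x = 1.0 / math.sqrt(3.0)
    return f(-x) + f(x)

# Example: the integral of x^2 over [-1, 1] is exactly 2/3.
print(gauss_legendre_2pt(lambda x: x * x))
```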

Keywords: beam and plate, finite element analysis, fluid-structure interaction, transient response

Procedia PDF Downloads 552
993 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using Six Sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the Six Sigma methodology, namely the define, measure, analyze, improve, and control (DMAIC) approach, was implemented in this study. The Six Sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors are the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability indices improved dramatically. The purpose of this study is to show that the Six Sigma and Taguchi methodologies can be used efficiently to determine the important factors that improve the process capability index of the injection molding process.
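
Taguchi analysis of an L9 array typically ranks parameter combinations by a signal-to-noise ratio; for a smaller-the-better response such as shrinkage deviation, a common form is SN = -10 log10(mean(y^2)). A small sketch with hypothetical run data (not the study's measurements):

```python
import math

def sn_smaller_the_better(values):
    """Taguchi signal-to-noise ratio for a smaller-the-better response,
    e.g. shrinkage deviation from target: SN = -10 * log10(mean(y^2)).
    Higher SN means a smaller, more consistent response."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical shrinkage deviations (%) for one L9 run measured under
# the two material brands (the noise factor):
run_results = [0.12, 0.18]
print(round(sn_smaller_the_better(run_results), 2))
```

In a full analysis, this ratio would be computed for each of the nine runs and averaged per factor level to pick the optimal settings.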

Keywords: injection molding, shrinkage, six sigma, Taguchi parameter design

Procedia PDF Downloads 159
992 The Use of Layered Neural Networks for Classifying Hierarchical Scientific Fields of Study

Authors: Colin Smith, Linsey S Passarella

Abstract:

Due to the proliferation and decentralized nature of academic publication, to the authors' best knowledge no widely accepted scheme exists for organizing papers by their scientific field of study (FoS). While many academic journals require author-provided keywords for papers, these keywords vary wildly in scope and are not consistent across papers, journals, or field domains, necessitating alternative approaches to paper classification. Past attempts to perform FoS classification on scientific texts have largely used non-hierarchical FoS schemas or ignored the schema's inherently hierarchical structure, e.g., by compressing the structure into a single layer for multi-label classification. In this paper, we introduce an application of a Layered Neural Network (LNN) to the problem of performing supervised hierarchical classification of scientific fields of study on research papers. In this approach, paper embeddings from a pretrained language model are fed into a top-down LNN. Beginning with a single neural network (NN) for the highest layer of the class hierarchy, each node uses a separate local NN to classify the subsequent subfield child node(s) for an input embedding of concatenated paper titles and abstracts. We compare our LNN-FOS method to other recent machine learning methods using the Microsoft Academic Graph (MAG) FoS hierarchy and find that LNN-FOS offers increased classification accuracy at each FoS hierarchical level.
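
The top-down routing described above can be sketched as follows; the hierarchy, labels, and the trivial "local classifier" are hypothetical stand-ins for the trained per-node networks:

```python
# Hypothetical two-level FoS hierarchy; each internal node owns a
# "local classifier" that routes an input embedding to one child.
hierarchy = {
    "science": ["biology", "physics"],
    "biology": [],
    "physics": [],
}

def local_classifier(node, embedding):
    """Stand-in for a trained per-node NN: it simply picks a child
    based on the sign of the first embedding component."""
    children = hierarchy[node]
    return children[0] if embedding[0] < 0 else children[-1]

def classify_top_down(embedding, root="science"):
    """Route an embedding from the root down to a leaf, applying one
    local classifier per hierarchy level, as in the LNN-FOS approach."""
    path = [root]
    node = root
    while hierarchy[node]:
        node = local_classifier(node, embedding)
        path.append(node)
    return path

print(classify_top_down([-0.3, 0.8]))  # ['science', 'biology']
```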

Keywords: hierarchical classification, layered neural network, scientific field of study, scientific taxonomy

Procedia PDF Downloads 118
991 A Review: Detection and Classification Defects on Banana and Apples by Computer Vision

Authors: Zahow Muoftah

Abstract:

Traditional manual visual grading of fruits has been one of the agricultural industry's major challenges due to its laborious nature as well as inconsistency in the inspection and classification process. The main requirements for computer vision and visual processing are effective techniques for identifying defects and estimating defect areas. Automated defect detection using computer vision and machine learning has emerged as a promising area of research with a high and direct impact on the visual inspection domain. Grading, sorting, and disease detection are important factors in determining the quality of fruits after harvest. Many studies have used computer vision to evaluate the quality level of fruits during post-harvest, and many have been conducted to identify diseases and pests that affect the fruits of agricultural crops. However, most previous studies concentrated solely on the diagnosis of a single lesion or disease. This article therefore provides a comprehensive review of computer-vision-based detection and classification of defects, pests, and diseases in apple and banana fruits, drawing on research from these domains. Finally, various pattern recognition techniques for detecting apple and banana defects are discussed.

Keywords: computer vision, banana, apple, detection, classification

Procedia PDF Downloads 87
990 Procedure for Impact Testing of Fused Recycled Glass

Authors: David Halley, Tyra Oseng-Rees, Luca Pagano, Juan A Ferriz-Papi

Abstract:

Recycled glass material is made from 100% recycled bottle glass and consumes less energy than re-melt technology. It also uses no additives in the manufacturing process, allowing the recycled glass material, in principle, to go back into the recycling stream after end-of-use, contributing to the circular economy with a low ecological impact. The aim of this paper is to investigate a procedure for testing the recycled glass material for impact resistance, so it can be applied to pavements and other surfaces that are at risk of impact during service. A review of different impact test procedures for construction materials was undertaken, comparing methodologies and international standards applied to other materials such as natural stone, ceramics, and glass. A drop-weight impact testing machine was designed and manufactured in-house to perform these tests. As a case study, samples of the recycled glass material were manufactured with two different thicknesses and tested. The impact energy was calculated theoretically, yielding values of 5 and 10 J. The results on the material were subsequently discussed. The procedure can be improved by using high-speed video technology to calculate the velocity just before and immediately after the impact, in order to determine the absorbed energy. The initial results obtained with this procedure were positive, although repeatability needs to be developed to obtain a correlation of results and finally validate the procedure. The experiment with samples showed the practicality of this procedure and its application to impact testing of the recycled glass material, although further research is needed.
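
The theoretical impact energy of a drop-weight test follows E = mgh. The masses and drop heights below are hypothetical values chosen only to reproduce the 5 J and 10 J energies mentioned in the abstract:

```python
def impact_energy(mass_kg, drop_height_m, g=9.81):
    """Theoretical impact energy (J) of a drop-weight test: E = m*g*h,
    i.e. the potential energy released by the falling mass."""
    return mass_kg * g * drop_height_m

# Hypothetical set-ups giving roughly the reported 5 J and 10 J:
print(round(impact_energy(1.0, 0.51), 1))  # ~5.0 J
print(round(impact_energy(1.0, 1.02), 1))  # ~10.0 J
```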

Keywords: construction materials, drop weight impact, impact testing, recycled glass

Procedia PDF Downloads 284
989 Ion Beam Induced 2D Mesophase Patterning of Nanocrystallites in Polymer

Authors: Srutirekha Giri, Manoranjan Sahoo, Anuradha Das, Pravanjan Mallick, Biswajit Mallick

Abstract:

The ion beam (IB) technique is a very powerful experimental technique for both material synthesis and material modification. In this work, a 3 MeV proton beam was generated using the 3 MV Tandem machine of the Institute of Physics, Bhubaneswar, and extracted into air for irradiation-induced modification [1]. A polymeric material can be modeled as a three-phase system: crystalline (I), amorphous (II), and mesomorphic (III). To the best of our knowledge, only a few techniques have been reported for the synthesis of this third phase (III) of polymers; the IB-induced technique is one of them and has been reported very recently [2-4]. It was observed that irradiating polyethylene terephthalate (PET) fiber at very low proton fluence, 10¹⁰ - 10¹² p/s, produces a 2D mesophase structure. This was confirmed using the X-ray diffraction technique. A low-intensity broad peak was observed at a small angle of about 2θ = 6º when the fiber axis was mounted parallel to the X-ray direction. This peak vanished from the diffraction spectrum when the fiber axis was mounted perpendicular to the beam direction. The appearance of this extra peak in a particular orientation confirms that the phase is 2-dimensionally oriented (mesophase). It is well known that the mesophase is 2-dimensionally ordered but 3-dimensionally disordered. The crystallite size corresponding to the mesophase peak was measured to be about 3 nm. The MeV proton-induced 2D mesophase patterning of nanocrystallites (3 nm) in PET was observed within the above low fluence range but not at high proton fluence, mainly due to the breaking of crystallites, radiation-induced thermal degradation, etc.
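
A crystallite size of about 3 nm is typically extracted from XRD peak broadening via the Scherrer equation, D = Kλ / (β cos θ). The wavelength and peak width below are assumed for illustration and are not values reported in the paper:

```python
import math

def scherrer_size(wavelength_nm, fwhm_rad, two_theta_deg, k=0.9):
    """Crystallite size (nm) from the Scherrer equation
    D = K * lambda / (beta * cos(theta)), with beta the peak FWHM in
    radians and theta half the diffraction angle."""
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (fwhm_rad * math.cos(theta))

# Hypothetical numbers: Cu K-alpha radiation and a broad peak near
# 2*theta = 6 deg with ~2.65 deg FWHM give a ~3 nm crystallite,
# consistent in scale with the size reported for the mesophase peak.
print(round(scherrer_size(0.15406, math.radians(2.65), 6.0), 1))
```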

Keywords: ion irradiation, mesophase, nanocrystallites, polymer

Procedia PDF Downloads 183
988 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems

Authors: Rodolfo Lorbieski, Silvia Modesto Nassar

Abstract:

Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition, and medical diagnostics. The objective of this article is to analyze an adapted Stacking method for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training procedure similar to Stacking was used, but with three levels, where the final decision-maker (level 2) performs its training by combining outputs from the tree-based pair of meta-classifiers (level 1) from Bayesian families. These are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) datasets, (b) experiments, and (c) levels. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as with the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
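
The three-level arrangement can be sketched schematically. All "classifiers" here are hypothetical threshold rules standing in for the trained level-0 base models, level-1 meta-classifiers, and the level-2 decision-maker:

```python
# Minimal three-level stacking sketch; each "classifier" is a
# hypothetical threshold rule on one feature, not a trained model.

def make_threshold_clf(feature_idx, threshold):
    return lambda x: 1 if x[feature_idx] > threshold else 0

def majority(votes):
    return 1 if sum(votes) * 2 >= len(votes) else 0

# Level 0: pairs of base classifiers of the same "family".
level0_pairs = [
    [make_threshold_clf(0, 0.5), make_threshold_clf(0, 0.7)],
    [make_threshold_clf(1, 0.4), make_threshold_clf(1, 0.6)],
]

def level1_meta(pair, x):
    """Each level-1 meta-classifier combines one level-0 pair."""
    return majority([clf(x) for clf in pair])

def level2_decision(x):
    """The final decision-maker combines the level-1 outputs."""
    return majority([level1_meta(pair, x) for pair in level0_pairs])

print(level2_decision([0.9, 0.9]))  # 1
print(level2_decision([0.1, 0.1]))  # 0
```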

Keywords: stacking, multi-layers, ensemble, multi-class

Procedia PDF Downloads 257
987 The Use of Ultrasound as a Safe and Cost-Efficient Technique to Assess Visceral Fat in Children with Obesity

Authors: Bassma A. Abdel Haleem, Ehab K. Emam, George E. Yacoub, Ashraf M. Salem

Abstract:

Background: Obesity is an increasingly common problem in childhood and is considered the main risk factor for the development of metabolic syndrome (MetS) (type 2 diabetes, dyslipidemia, and hypertension). Recent studies estimate that 30-60% of children with obesity will develop MetS. Visceral fat thickness is a valuable predictor of the development of MetS. Computed tomography and dual-energy X-ray absorptiometry are the main techniques for assessing visceral fat; however, they carry the risk of radiation exposure and are expensive, so they are seldom used to assess visceral fat in children. Some studies have explored the potential of ultrasound as a substitute for assessing visceral fat in the elderly and found promising results. Given the vulnerability of children to radiation exposure, we sought to evaluate ultrasound as a safer and more cost-efficient alternative for measuring visceral fat in obese children. Additionally, we assessed the correlation between visceral fat and obesity indicators such as insulin resistance. Methods: A cross-sectional study was conducted on 46 children with obesity (aged 6-16 years). Their visceral fat was evaluated by ultrasound. Subcutaneous fat thickness (SFT), i.e., the measurement from the skin-fat interface to the linea alba, and visceral fat thickness (VFT), i.e., the thickness from the linea alba to the aorta, were measured and correlated with anthropometric measures, fasting lipid profile, homeostatic model assessment for insulin resistance (HOMA-IR), and liver enzymes (ALT). Results: VFT assessed via ultrasound was found to correlate strongly with BMI and HOMA-IR, with an AUC for VFT as a predictor of insulin resistance of 0.858 and a cut-off point of >2.98. VFT also correlated positively with serum triglycerides and serum ALT, and negatively with HDL.
Conclusions: Ultrasound, a safe and cost-efficient technique, could be a useful tool for measuring abdominal fat thickness in children with obesity. Ultrasound-measured VFT could be an appropriate prognostic factor for insulin resistance, hypertriglyceridemia, and elevated liver enzymes in obese children.
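
HOMA-IR, used here to characterize insulin resistance, is conventionally computed from fasting values as glucose (mg/dL) × insulin (µU/mL) / 405. The inputs below are hypothetical, for illustration only:

```python
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """Homeostatic model assessment for insulin resistance in the
    conventional-unit form: glucose (mg/dL) * insulin (uU/mL) / 405."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

# Hypothetical fasting measurements for one child:
score = homa_ir(95.0, 15.0)
print(round(score, 2))  # 3.52
```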

Keywords: metabolic syndrome, pediatric obesity, sonography, visceral fat

Procedia PDF Downloads 108
986 Detection and Classification Strabismus Using Convolutional Neural Network and Spatial Image Processing

Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson

Abstract:

Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using a Haar cascade, facial landmark estimation, face alignment, aligned-face landmark detection, segmentation of the eye region, and detection of strabismus using a VGG-16 convolutional neural network. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using facial landmarks, the eye region is segmented from the aligned face and fed into the VGG-16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies its type (exotropia, esotropia, or vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using a Mask R-CNN deep neural network. Then, the distance between the pupil coordinates and the eye landmarks is calculated, along with the angles that the pupil coordinates make with the horizontal and vertical axes. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6%, respectively. The classification stage produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviation, respectively, and an FPR of 5.26%, 5.55%, and 0%, respectively. The addition of a feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).
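
The stage-2 distance and angle features can be computed directly from coordinates; the pixel values below are hypothetical:

```python
import math

def misalignment_metrics(pupil, landmark):
    """Distance and angle (degrees from the horizontal axis) between a
    pupil centre and an eye landmark, of the kind used to characterise
    the degree and direction of strabismic misalignment."""
    dx = pupil[0] - landmark[0]
    dy = pupil[1] - landmark[1]
    distance = math.hypot(dx, dy)
    angle_deg = math.degrees(math.atan2(dy, dx))
    return distance, angle_deg

# Hypothetical pixel coordinates: pupil 20 px lateral to a landmark.
d, a = misalignment_metrics((120.0, 80.0), (100.0, 80.0))
print(round(d, 1), round(a, 1))  # 20.0 0.0
```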

Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation

Procedia PDF Downloads 64
985 Application of KL Divergence for Estimation of Each Metabolic Pathway Genes

Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani

Abstract:

The development of methods to annotate unknown gene functions is an important task in bioinformatics. One approach to this annotation is the identification of the metabolic pathway that genes are involved in. Gene expression data have been utilized for this identification, since they reflect various intracellular phenomena. However, it has been difficult to estimate gene function with high accuracy. This low accuracy is considered to be caused by the difficulty of accurately measuring gene expression: even when measured under the same condition, gene expression values usually vary. In this study, we propose a feature extraction method focusing on the variability of gene expression to estimate genes' metabolic pathways accurately. First, we estimated the distribution of each gene's expression from replicate data. Next, we calculated the similarity between all gene pairs by KL divergence, a measure of the similarity between distributions. Finally, we used the similarity vectors as feature vectors and trained a multiclass SVM to identify the genes' metabolic pathways. To evaluate the developed method, we applied it to budding yeast and trained the multiclass SVM to identify seven metabolic pathways. As a result, the accuracy achieved by our method was higher than that obtained from the raw gene expression data. Thus, our method, combined with KL divergence, is useful for identifying genes' metabolic pathways.
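
For discrete (binned) expression distributions, the KL divergence has a direct implementation. The distributions below are hypothetical; in the paper, a similarity vector over all gene pairs built this way is fed to the multiclass SVM:

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(P || Q) for two discrete
    distributions given as probability lists; q must be positive
    wherever p is. D is zero iff P and Q are identical."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical binned expression distributions for two genes:
p = [0.1, 0.4, 0.5]
q = [0.2, 0.3, 0.5]
print(round(kl_divergence(p, q), 4))
```

Note that KL divergence is not symmetric, so D(P || Q) and D(Q || P) generally differ; a symmetrized variant is sometimes used when a true similarity is needed.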

Keywords: metabolic pathways, gene expression data, microarray, Kullback-Leibler (KL) divergence, support vector machine (SVM), machine learning

Procedia PDF Downloads 386
984 Oxygen Transport in Blood Flows Pasts Staggered Fiber Arrays: A Computational Fluid Dynamics Study of an Oxygenator in Artificial Lung

Authors: Yu-Chen Hsu, Kuang C. Lin

Abstract:

The artificial lung, or extracorporeal membrane oxygenation (ECMO), is an important medical machine that supports patients whose heart and lungs are failing. Previously, the investigation of steady deoxygenated blood flow passing through hollow fibers for oxygen transport was carried out experimentally and computationally. The present study computationally analyzes the effect of biological pulsatile flow on oxygen transport in blood. A 2-D model with a pulsatile flow condition is employed. The power-law model is used to describe the non-Newtonian flow, and the Hill equation is utilized to simulate the oxygen saturation of hemoglobin. The dimensionless parameters for the physical model include the Reynolds number (Re), Womersley parameter (α), pulsation amplitude (A), Sherwood number (Sh), and Schmidt number (Sc). The present model with steady-state flow conditions is well validated against previous experiments and simulations. It is observed that the pulsating flow amplitude significantly influences the velocity profile, partial pressure of oxygen (PO2), oxygen saturation (SO2), and oxygen mass transfer rate (ṁ_O2). Comparing steady-state and pulsating flows, our findings suggest that pulsating flow needs to be considered in the computational model when Re is raised from 2 to 10, a typical range for flow in an artificial lung.
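
The Hill equation mentioned above maps oxygen partial pressure to hemoglobin saturation. The P50 and Hill coefficient below are typical literature values for human blood, assumed here for illustration rather than taken from the paper:

```python
def hill_saturation(po2_mmhg, p50=26.8, n=2.7):
    """Hill equation for haemoglobin oxygen saturation:
    SO2 = PO2^n / (P50^n + PO2^n). P50 (the PO2 at 50% saturation) and
    the Hill coefficient n are assumed typical values."""
    return po2_mmhg ** n / (p50 ** n + po2_mmhg ** n)

print(round(hill_saturation(26.8), 2))   # 0.5 by definition of P50
print(round(hill_saturation(100.0), 2))  # near-full arterial saturation
```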

Keywords: artificial lung, oxygen transport, non-Newtonian flows, pulsating flows

Procedia PDF Downloads 300
983 Logistic Regression Based Model for Predicting Students’ Academic Performance in Higher Institutions

Authors: Emmanuel Osaze Oshoiribhor, Adetokunbo MacGregor John-Otumu

Abstract:

In recent years, there has been a desire to forecast student academic achievement prior to graduation, to help students improve their grades, particularly those with poor performance. The goal of this study is to employ supervised learning techniques to construct a predictive model for student academic achievement. Many academics have already constructed models that predict student academic achievement based on factors such as smoking, demography, culture, social media, parents' educational background, parents' finances, and family background, to name a few. Such features and models may not have correctly classified students in terms of their academic performance. The model in this study is built using a logistic regression classifier with basic features such as the previous semester's course score, class attendance, class participation, and the total number of course materials or resources the student is able to cover per semester, in order to predict whether the student will perform well in related courses in the future. The model outperformed other classifiers such as Naive Bayes, support vector machine (SVM), decision tree, random forest, and AdaBoost, returning 96.7% accuracy. It is available as a desktop application, allowing both instructors and students to benefit from user-friendly interfaces for predicting student academic achievement. As a result, it is recommended that both students and professors use this tool to better forecast outcomes.
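
A logistic regression classifier over such features can be sketched without any library. The training rows below are hypothetical, with the four features (previous course score, attendance, participation, materials covered) assumed scaled to [0, 1]; this is an illustrative sketch, not the study's trained model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(features, labels, lr=0.5, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression:
    one weight per feature plus a bias term."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Hypothetical rows: [prev score, attendance, participation, materials].
X = [[0.9, 0.8, 0.7, 0.9], [0.3, 0.2, 0.1, 0.2],
     [0.8, 0.9, 0.6, 0.8], [0.2, 0.4, 0.2, 0.3]]
y = [1, 0, 1, 0]
w, b = train_logistic(X, y)
strong = sigmoid(sum(wi * xi for wi, xi in zip(w, [0.85, 0.9, 0.8, 0.9])) + b)
print(strong > 0.5)  # True: a strong student is predicted to perform well
```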

Keywords: artificial intelligence, machine learning, logistic regression, performance, prediction

Procedia PDF Downloads 82
982 Development of a General Purpose Computer Programme Based on Differential Evolution Algorithm: An Application towards Predicting Elastic Properties of Pavement

Authors: Sai Sankalp Vemavarapu

Abstract:

This paper discusses the application of machine learning in the field of transportation engineering for predicting engineering properties of pavement more accurately and efficiently. Predicting the elastic properties aids in assessing current road conditions and taking appropriate measures to avoid inconvenience to commuters. This improves the longevity and sustainability of the pavement layer while reducing its overall life-cycle cost. As an example, we have implemented differential evolution (DE) in the back-calculation of the elastic moduli of multi-layered pavement. The proposed DE global-optimization back-calculation approach is integrated with a forward response model. It treats back-calculation as a global optimization problem where the cost function to be minimized is defined as the root mean square error between measured and computed deflections. The optimal solution, in this case the elastic moduli, is searched for in the solution space by the DE algorithm. The best DE parameter combinations and the optimal values are identified so that the results are reproducible whenever the need arises. The algorithm's performance in varied scenarios was analyzed by changing the input parameters. The predictions were well within the permissible error, establishing the effectiveness of DE.
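
A minimal DE/rand/1/bin optimizer captures the approach: mutate with a scaled difference of two random population vectors, cross over with the target, and keep the better of trial and target. The one-parameter "forward model" below is a hypothetical stand-in for the pavement response model, not the authors' actual formulation:

```python
import random

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin: for each target vector, build a trial
    from a base vector plus a scaled difference of two others, cross
    over per dimension with probability CR, clip to bounds, and keep
    whichever of trial/target has lower cost."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = []
            for d in range(dim):
                if rng.random() < CR:
                    v = a[d] + F * (b[d] - c[d])
                else:
                    v = pop[i][d]
                lo, hi = bounds[d]
                trial.append(min(max(v, lo), hi))
            if cost(trial) <= cost(pop[i]):
                pop[i] = trial
    return min(pop, key=cost)

# Toy "back-calculation": recover the modulus minimising the error
# between a measured deflection and a hypothetical one-parameter
# forward model, deflection = load / modulus.
measured = 0.002  # m, for a 10 kN load
cost = lambda x: abs(10000.0 / x[0] - measured)
best = differential_evolution(cost, bounds=[(1e6, 1e8)])
print(round(best[0] / 1e6, 1))  # ~5.0, since 10000 / 0.002 = 5e6
```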

Keywords: cost function, differential evolution, falling weight deflectometer, genetic algorithm, global optimization, metaheuristic algorithm, multilayered pavement, pavement condition assessment, pavement layer moduli back calculation

Procedia PDF Downloads 150
981 Safe and Efficient Deep Reinforcement Learning Control Model: A Hydroponics Case Study

Authors: Almutasim Billa A. Alanazi, Hal S. Tharp

Abstract:

Safe performance and efficient energy consumption are essential factors in designing a control system. This paper presents a reinforcement learning (RL) model that can be applied to control applications to improve safety and reduce energy consumption. As hardware constraints and environmental disturbances are imprecise and unpredictable, conventional control methods may not always be effective in optimizing control designs. However, RL has demonstrated its value in several artificial intelligence (AI) applications, especially in the field of control systems. The proposed model intelligently monitors a system's success by observing the rewards from the environment, with positive rewards counting as success when the controlled reference is within the desired operating zone. Thus, the model can determine whether the system is safe to continue operating based on the designer's or user's specifications, which can be adjusted as needed. Additionally, the controller tracks energy consumption to improve energy efficiency by enabling an idle mode when the controlled reference is within the desired operating zone, reducing the system's energy consumption during the control operation. Water temperature control for a hydroponic system is taken as a case study for the RL model, with the variance of disturbances adjusted to show the model's robustness and efficiency. On average, the model showed a safety improvement of up to 15% and energy efficiency improvements of 35%-40% compared to a traditional RL model.
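
The idle-mode logic can be sketched as a single control step: inside the desired operating zone the controller idles (no actuation, positive reward); outside it, it actuates and the reward penalizes the error. The zone width, gain, and reward values below are hypothetical, for illustration only:

```python
def control_step(reference, measurement, zone=1.0, gain=2.0):
    """One energy-aware control step. Inside the +/- zone around the
    reference, the controller enters idle mode (zero actuation) and
    earns a positive reward; outside it, it applies a proportional
    action and the reward penalises the tracking error."""
    error = reference - measurement
    if abs(error) <= zone:
        return {"action": 0.0, "idle": True, "reward": 1.0}
    return {"action": gain * error, "idle": False, "reward": -abs(error)}

print(control_step(25.0, 24.5))  # idle inside the zone
print(control_step(25.0, 22.0))  # actuates, negative reward
```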

Keywords: control system, hydroponics, machine learning, reinforcement learning

Procedia PDF Downloads 159
980 Study of Morning-Glory Spillway Structure in Hydraulic Characteristics by CFD Model

Authors: Mostafa Zandi, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at times of flood. The morning-glory spillway is one of the common spillways for discharging overflow water from behind dams; these spillways are constructed at dams with small reservoirs. In this research, the hydraulic flow characteristics of a morning-glory spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the finite volume method. The PISO scheme was applied for velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for discretization of the momentum, k, and ε equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. The results show that a fine computational grid, a velocity-inlet condition at the flow inlet boundary, and a pressure-outlet condition at the boundaries in contact with air provide the best possible results. The standard wall function was chosen for near-wall treatment, and the standard k-ε turbulence model gave the results most consistent with the experimental ones. As the jet approaches the end of the basin, the difference between the computational and experimental results increases. The lower profile of the water jet is less sensitive than the upper profile. In the pressure test, it was also found that the numerical pressure values in the lower landing region differ greatly from the experimental results. The characteristics of the complex flow over a morning-glory spillway were studied numerically using a RANS solver. A grid study showed that the numerical results of a 57,512-node grid had the best agreement with the experimental values.
The preferred downstream channel length was 1.5 m, and the standard k-ε turbulence model produced the best results for the morning-glory spillway. The numerical free-surface profiles followed the theoretical equations very well.

Keywords: morning-glory spillway, CFD model, hydraulic characteristics, wall function

Procedia PDF Downloads 59
979 A Study of Topical and Similarity of Sebum Layer Using Interactive Technology in Image Narratives

Authors: Chao Wang

Abstract:

Under the rapid innovation of information technology, the media plays a very important role in the dissemination of information, and each generation faces a totally different media analogy. The involvement of narrative images, however, provides more possibilities for narrative text. "Images" are manufactured through the processes of aperture, camera shutter, and photosensitive development, recorded and stamped on paper or displayed on a computer screen, and thus concretely saved. They exist in different forms of files, data, or evidence as the ultimate appearance of events. Through the interface of media and network platforms, and the special visual field of the viewer, a class of body space exists and extends outward, as thin as a sebum layer, extremely soft and delicate yet under real, full tension. The physical space of the sebum layer confuses the fact that physical objects exist and needs to be established under a perceived consensus. As at the scene, the existing concepts and boundaries of physical perception are blurred. The physical simulation of the sebum layer shapes a "topical-similarity" immersion, leading contemporary social practice communities, groups, and network users into a kind of illusion without presence, i.e., a non-real illusion. From the investigation and discussion of the literature, the variability characteristics of time in digital movie editing and production (for example, slices, rupture, set, and reset) are analyzed. The interactive eBook has a unique interaction of "waiting-greeting" and "expectation-response" that makes the operation of the image narrative structure more open to interpretation. Works of digital editing and interactive technology are combined, and their concepts and results are further analyzed. Through the digitization of interventional imaging and interactive technology, real events remain linked, and media handling cannot sever this relationship, as shown through movies, interactive art, and practical case discussion and analysis.
The audience needs more rational thinking about the authenticity of the text carried by images.

Keywords: sebum layer, topical and similarity, interactive technology, image narrative

Procedia PDF Downloads 378