Search results for: image comparison
5904 Cinema and the Documentation of Mass Killings in Third World Countries: A Study of Selected African Films
Authors: Chijindu D. Mgbemere
Abstract:
Mass killing, also known as genocide, is the systematic killing of people from a national, ethnic, or religious group, or an attempt to do so. The act predates 1948, when it was officially recognized for what it is. Since then, the world has continued to witness genocide in diverse forms, negating the various measures taken by the United Nations and its agencies to curb it. So far, studies and documentation on this subject have been biased in favor of radio and print. This paper therefore extended the interrogation of genocide, highlighting its devastating effects, through the film medium; in doing so, it devised an innovative and pragmatic approach to genocide scholarship. It further centered attention on the factors and impacts of genocide, with a view to determining how effective film can be in such a study. The study is anchored on Bateson's Framing Theory. Four films, Hotel Rwanda, Half of a Yellow Sun, Attack on Darfur, and Sarafina, were analyzed via content analysis with respect to the background, factors/causes, impacts, and development of genocide. The study found that, as other continents strive towards peace, acts of genocide are on the increase in Africa. Bloodletting stereotypes give Africa a negative image in global society. Difficult political frameworks and the trauma of the postcolonial state, aggravated by ethnic and religious intolerance and limited access to resources, are responsible for the high incidence of genocide in Africa. The media, international communities, and peace agencies often abet rather than prevent genocide or mass killings in Africa. High human casualties and displacement, child soldiering, looting, hunger, rape, sex slavery and abuse, and mental and psychosomatic stress disorders are some of the impacts of genocide. Genocidaires are either condemned or killed. Grievances can be vented through civil resistance, negotiation, adjudication, arbitration, and mediation. The cinema is an effective means of studying and documenting genocide.
Africans must take the image laundering of their continent into consideration. Punishing genocidaires without an attempt to de-radicalize them is counterproductive.
Keywords: African film, genocide, framing theory, mass murder
Procedia PDF Downloads 117
5903 Using the Dokeos Platform for Industrial E-Learning Solution
Authors: Kherafa Abdennasser
Abstract:
The application of Information and Communication Technologies (ICT) to the training field has led to the creation of a new reality called e-learning, described as the marriage of multimedia (sound, image, and text) and the Internet (online delivery, interactivity). Distance learning has become an important mode of training, and it relies in particular on the setup of a distance learning platform. In our work, we use an open-source platform named Dokeos for the management of a distance training course on GPS, called e-GPS. The learner is followed throughout the training. In this system, trainers and learners communicate individually or in groups, while the administrator sets up the system and ensures its maintenance.
Keywords: ICT, e-learning, learning platform, Dokeos, GPS
Procedia PDF Downloads 475
5902 Evaluation of Easy-to-Use Energy Building Design Tools for Solar Access Analysis in Urban Contexts: Comparison of Friendly Simulation Design Tools for Architectural Practice in the Early Design Stage
Abstract:
The current building sector focuses on reducing energy requirements, generating renewable energy, and regenerating existing urban areas. These targets need to be addressed with a systemic approach that considers several aspects simultaneously, such as climate conditions, lighting conditions, solar radiation, and PV potential. Solar access analysis is a well-known method for analyzing solar potential, and in recent years simulation tools have provided more effective opportunities to perform this type of analysis, particularly in the early design stage. Nowadays, the study of solar access depends on how easily simulation tools can be used, rapidly and simply, during the design process. This study presents a comparison of three simulation tools from the user's point of view, with the aim of highlighting differences in their ease of use. Using a real urban context as a case study, three tools, Ecotect, Townscope, and Heliodon, are tested by building models, running simulations, and examining the capabilities and output results of solar access analysis. The evaluation of the ease of use of these tools is based on selected parameters and features, such as the types of simulation, input data requirements, and types of results. As a result, a framework is provided in which the features and capabilities of each tool are shown. This framework highlights the differences among these tools in functions, features, and capabilities. The aim of this study is to support users and to improve the integration of solar access simulation tools into the design process.
Keywords: energy building design tools, solar access analysis, solar potential, urban planning
Procedia PDF Downloads 339
5901 A Three-Modal Authentication Method for Industrial Robots
Authors: Luo Jiaoyang, Yu Hongyang
Abstract:
In this paper, we explore a method that can be used in the working environment of intelligent industrial robots to confirm the identity of operators, ensuring that the robot executes instructions only in a sufficiently safe setting. The approach uses three information modalities: visible light, depth, and sound. We explored a variety of fusion modes for the three modalities and ultimately adopted a joint feature learning method, which improves the performance of the model under noise compared with the single-modal case; even at the maximum noise level in the experiment, it maintains an accuracy rate of more than 90%.
Keywords: multimodal, Kinect, machine learning, distance image
Procedia PDF Downloads 78
5900 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution, achieving up to 1 Mframe/s, and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity: the sensor only captures pixels that undergo an intensity change. In other words, no signal is produced in areas without intensity change. This makes the sensor more energy efficient than conventional sensors such as RGB cameras, because redundant data can be removed. On the other hand, the data are difficult to handle because the format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal consists of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp; it does not include intensity such as RGB values. Therefore, since existing algorithms cannot be applied straightforwardly, a new processing algorithm has to be designed to cope with DVS data. To overcome the differences in data format, most prior work builds frame data and feeds it to deep learning models such as convolutional neural networks (CNNs) for object detection and recognition. However, even with framed data, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as a substitute for intensity, polarity information is clearly not rich enough. In this context, we propose to use the timestamp information as the data representation fed to deep learning.
Concretely, we first build frame data divided by a certain time period and then assign an intensity value according to the timestamp within each frame; for example, a high value is given to a recent signal. We expect this data representation to capture features, especially of moving objects, because the timestamp encodes movement direction and speed. Using the proposed method, we built our own dataset with a DVS mounted on a parked car, in order to develop a surveillance application that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a largely static scene. For comparison, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
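The timestamp-based frame representation described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the window bounds, the linear normalization, and the event tuple layout (x, y, polarity, timestamp) are assumptions based on the abstract.

```python
def events_to_time_surface(events, width, height, t_start, t_end):
    """Build one frame for the window [t_start, t_end): each pixel holds a
    normalized timestamp, so recently fired pixels get values near 1.0."""
    frame = [[0.0] * width for _ in range(height)]
    for x, y, polarity, t in events:
        if t_start <= t < t_end:
            # Later events overwrite earlier ones at the same pixel, leaving
            # a trace that encodes movement direction and speed.
            frame[y][x] = (t - t_start) / (t_end - t_start)
    return frame

# Three events on a 4x4 sensor over a 1000-microsecond window
events = [(0, 0, +1, 100), (0, 0, -1, 500), (2, 1, -1, 900)]
surface = events_to_time_surface(events, 4, 4, 0, 1000)
```

Feeding such frames to a CNN in place of polarity frames is the essence of the comparison the abstract reports.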
Procedia PDF Downloads 97
5899 Digital Twin Smart Hospital: A Guide for Implementation and Improvements
Authors: Enido Fabiano de Ramos, Ieda Kanashiro Makiya, Francisco I. Giocondo Cesar
Abstract:
This study investigates the application of Digital Twins (DT) in Smart Hospital Environments (SHE) through a bibliometric study and literature review, including a comparison with the principles of Industry 4.0. It aims to analyze the current state of digital twin implementation in clinical and non-clinical healthcare operations, identifying trends and challenges and comparing these practices with Industry 4.0 concepts and technologies, in order to present a basic framework with stages and maturity levels. The bibliometric methodology maps the existing scientific production on the theme, while the literature review synthesizes and critically analyzes the relevant studies, highlighting pertinent methodologies and results. In addition, the comparison with Industry 4.0 provides insights into how the principles of automation, interconnectivity, and digitalization can be applied in healthcare environments and operations, aiming at improvements in operational efficiency and quality of care. The results of this study contribute to a deeper understanding of the potential of Digital Twins in Smart Hospitals, as well as of the future potential of effectively integrating Industry 4.0 concepts in this specific environment, presented through the practical framework. The urgent need for the changes addressed in this article is undeniable, as is their contribution to human sustainability as envisioned in SDG 3, Health and Well-being: ensuring that all citizens enjoy a healthy life and well-being, at all ages and in all situations. We know that the validity of these relationships will be constantly discussed, and technology can always change the rules of the game.
Keywords: digital twin, smart hospital, healthcare operations, Industry 4.0, SDG3, technology
Procedia PDF Downloads 51
5898 Milling Process of Rigid Flex Printed Circuit Board to Which Polyimide Covers the Whole Surface
Authors: Daniela Evtimovska, Ivana Srbinovska, Padraig O’Rourke
Abstract:
Kostal Macedonia faced the challenge of milling a rigid-flex printed circuit board (PCB). The PCB elaborated in this paper is made of FR4 material covered with polyimide over the whole surface on one side, including the tabs where the PCBs need to be separated. After milling only 1.44 meters, the updraft routing tool is no longer effective and leaves polyimide debris on all PCB cuts if milling continues with the same tool. The updraft routing tool is used for all other products in Kostal Macedonia and is normally changed after milling 60 meters. Changing the tool adds 80 seconds to the cycle time. One solution is a laser-cutting machine, but buying a laser-cutting machine for only one product does not make financial sense. The focus was therefore on finding an internal solution, among the options under review, to the polyimide debris issue. The paper describes the design of the rigid-flex panel in depth and evaluates the downdraft routing tool as a possible solution for this specific product. A comparison between updraft and downdraft routing tools is made from both a technical and a financial point of view, taking into consideration the customer requirements for the rigid-flex PCB. The results show that using the downdraft routing tool is the best solution in this case. This tool is 0.62 euros per piece more expensive than the updraft tool, but it needs to be changed only after milling 43.44 meters, compared with the updraft tool, which needs to be changed after milling only 1.44 meters. An analysis is also presented of the actions to be taken for further improvement and for maximizing the service life of the downdraft routing tool.
Keywords: Kostal Macedonia, rigid flex PCB, polyimide, debris, milling process, up/down draft routing tool
Procedia PDF Downloads 190
5897 A Literature Review of Precision Agriculture: Applications of Diagnostic Diseases in Corn, Potato, and Rice Based on Artificial Intelligence
Authors: Carolina Zambrana, Grover Zurita
Abstract:
Food losses caused by deficient agricultural production are one of the major problems worldwide, putting the population's food security and the efficiency of farming investments at risk. Food security is expected to be achieved through each country's own efficient production, which will affect the well-being of its population and thus also its food sovereignty. Production losses in quantity and quality occur due to the lack of efficient detection of diseases at an early stage. Improving agricultural efficiency with traditional methods is very difficult, since detection of the main diseases is imprecise and time-consuming, especially when the production areas are extensive. Therefore, the main objective of this research is to perform a systematic literature review, covering the last five years, of Precision Agriculture (PA), in order to understand the state of the art of the new technologies, procedures, and optimization processes based on Artificial Intelligence (AI). This study focuses on diagnostic diseases of corn, potato, and rice. The literature review covers the Elsevier, Scopus, and IEEE databases. In addition, this research focuses on advanced digital image processing and the development of software and hardware for PA. Convolutional neural networks receive special attention due to their outstanding diagnostic results. Moreover, the studied data will be combined with artificial intelligence algorithms for the automatic diagnosis of crop quality. Finally, precision agriculture, with technology applied to the agricultural sector, allows the land to be exploited efficiently; such a system requires sensors, drones, data acquisition cards, and global positioning systems.
This research seeks to merge different areas of science, control engineering, electronics, digital image processing, and artificial intelligence for the development, in the near future, of a low-cost image measurement system that allows the optimization of crops with AI.
Keywords: precision agriculture, convolutional neural network, deep learning, artificial intelligence
Procedia PDF Downloads 77
5896 Morphological Comparison of the Total Skeleton of the Common Bottlenose Dolphin (Tursiops truncatus) and the Harbour Porpoise (Phocoena phocoena)
Authors: Onur Yaşar, Okan Bilge, Ortaç Onmuş
Abstract:
The aim of this study is to investigate and compare the locomotion structures, especially the bone structures, of two cetacean species, the common bottlenose dolphin Tursiops truncatus and the harbour porpoise Phocoena phocoena, and to provide a detailed, descriptive comparison. To compare the bone structures of the two species, the spinous process (SP), inferior articular process (IAP), laminae vertebrae (LA), foramen vertebrae (FV), corpus vertebrae (CV), and transverse process (TP) were first identified; then the length of the spinous process (LSP), length of the foramen vertebrae (LFV), area of the corpus vertebrae (ACV), and length of the transverse process (LTP) were measured from the caudal view. The spine consists of a total of 61 vertebrae (7 cervical, 13 thoracic, 14 lumbar, and 27 caudal) in the common bottlenose dolphin, while the harbour porpoise has 63 vertebrae (7 cervical, 12 thoracic, 14 lumbar, and 30 caudal). In the common bottlenose dolphin, epiphyseal ossification was observed between the 21st and the 27th caudal vertebrae, while in the harbour porpoise it was observed in all vertebrae. Ankylosing spondylitis was observed in the C1 and C2 vertebrae in the common bottlenose dolphin and in all cervical vertebrae between C1 and C6 in the harbour porpoise. We argue that this difference in fused cervical vertebrae between the two species may be due to the neck movements of the harbour porpoise being more limited in the vertical and horizontal axes than those of the common bottlenose dolphin. We also suggest that as the number of fused cervical vertebrae increases, underwater maneuvers are performed at a wider angle; to test this idea, however, different dolphin species should be compared and different age groups investigated.
Keywords: anatomy, morphometry, vertebrae, common bottlenose dolphin, Tursiops truncatus, harbour porpoise, Phocoena phocoena
Procedia PDF Downloads 47
5895 Computational Study of Composite Films
Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova
Abstract:
Composite and nanocomposite films represent a class of promising materials and are frequent objects of study due to their mechanical, electrical, and other properties. The most interesting are probably the composite metal/dielectric structures, consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of the metal component, described by the filling factor. For small filling factors, the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have essentially dielectric properties. The conductivity of the films increases with increasing filling factor until, finally, a transition into a metallic state occurs. The behaviour of composite films near the percolation threshold, where the charge transport mechanism changes from thermally activated tunnelling between individual metal objects to ohmic conductivity, is especially important. The physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which are influenced by the technology used. In our contribution, composite structures were studied with the methods of computational physics. The study consists of two parts:
-Generation of simulated composite and nanocomposite films, using techniques based on hard-sphere or soft-sphere models as well as on atomistic modelling. The prepared composite structures are then characterized by image analysis of their sections or projections. However, the various morphological methods must themselves be analyzed, as the standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films.
-Study of the charge transport in the composites by the kinetic Monte Carlo method, since there is a close connection between the structural and electric properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnelling current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structure of the conducting paths in them, in dependence on the technology of composite film preparation.
Keywords: composite films, computer modelling, image analysis, nanocomposite films
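As a hedged illustration of the hard-sphere generation step mentioned above, the following sketch places non-overlapping metal particles in a 2D square matrix by rejection sampling and computes the resulting filling factor. The particle count, radius, and sampling scheme are arbitrary choices for illustration, not the authors' parameters.

```python
import math
import random

def generate_hard_sphere_film(n_target, radius, size=1.0, max_tries=100000, seed=42):
    """Rejection sampling: propose random centres and keep only those at
    least one particle diameter away from every accepted particle."""
    rng = random.Random(seed)
    centres = []
    tries = 0
    while len(centres) < n_target and tries < max_tries:
        tries += 1
        x = rng.uniform(radius, size - radius)
        y = rng.uniform(radius, size - radius)
        if all((x - cx) ** 2 + (y - cy) ** 2 >= (2 * radius) ** 2
               for cx, cy in centres):
            centres.append((x, y))
    return centres

radius = 0.03
centres = generate_hard_sphere_film(50, radius)
# Filling factor: area fraction occupied by the metal component
filling_factor = len(centres) * math.pi * radius ** 2
```

Sections or projections of such simulated structures are what the image-analysis step of the study would then characterize.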
Procedia PDF Downloads 392
5894 Computational Investigation of Secondary Flow Losses in Linear Turbine Cascade by Modified Leading Edge Fence
Authors: K. N. Kiran, S. Anish
Abstract:
It is well known that secondary flow losses account for about one third of the total loss in any axial turbine. Modern gas turbine blades are shorter and have longer chord lengths, which can increase secondary flow. In order to improve the efficiency of the turbine, it is important to understand the behavior of secondary flow and to devise mechanisms to curtail these losses. The objective of the present work is to understand the effect of a streamwise end-wall fence on the aerodynamics of a linear turbine cascade. The study is carried out computationally using the commercial software ANSYS CFX. The effects of the end-wall fence on the flow field are calculated with RANS simulations using the SST transition turbulence model. The Durham cascade, which is similar to a high-pressure axial flow turbine, is used for the simulations. The aim of fencing the blade passage is to obtain the maximum benefit from flow deviation and to destroy the passage vortex, in terms of loss reduction. It is observed that, in the present analysis, a fence in the blade passage helps reduce the strength of the horseshoe vortex and is capable of restraining the flow along the blade passage. The fence in the blade passage reduces the underturning by 7° in comparison with the base case. A fence on the end-wall is effective in preventing the movement of the pressure-side leg of the horseshoe vortex and helps break up the passage vortex. Computations are carried out for different fence heights whose curvature differs from the blade camber. The optimum fence geometry and location reduce the loss coefficient by 15.6% in comparison with the base case.
Keywords: boundary layer fence, horseshoe vortex, linear cascade, passage vortex, secondary flow
Procedia PDF Downloads 348
5893 Embedded Visual Perception for Autonomous Agricultural Machines Using Lightweight Convolutional Neural Networks
Authors: René A. Sørensen, Søren Skovsen, Peter Christiansen, Henrik Karstoft
Abstract:
Autonomous agricultural machines act in stochastic surroundings and therefore must be able to perceive their surroundings in real time. This perception can be achieved using image sensors combined with advanced machine learning, in particular deep learning. Deep convolutional neural networks excel at labeling and perceiving color images, and since the cost of high-quality RGB cameras is low, the hardware cost of good perception depends heavily on memory and computation power. This paper investigates the possibility of designing lightweight convolutional neural networks for semantic segmentation (pixel-wise classification) with reduced hardware requirements, to allow for embedded use in autonomous agricultural machines. Using compression techniques, a lightweight convolutional neural network is designed to perform real-time semantic segmentation on an embedded platform. The network is trained on two large datasets, ImageNet and Pascal Context, to recognize up to 400 individual classes. The 400 classes are remapped into agricultural superclasses (e.g., human, animal, sky, road, field, shelterbelt, and obstacle), and the ability to provide accurate real-time perception of agricultural surroundings is studied. The network is applied to the case of autonomous grass mowing using the NVIDIA Tegra X1 embedded platform. Feeding case-specific images to the network results in a fully segmented map of the superclasses in the image. As the network is still being designed and optimized, only a qualitative analysis of the method was complete at the abstract submission deadline. Following this deadline, the finalized design is quantitatively evaluated on 20 annotated grass mowing images. Lightweight convolutional neural networks for semantic segmentation can be implemented on an embedded platform and show competitive performance with regard to accuracy and speed.
It is feasible to provide cost-efficient perceptive capabilities related to semantic segmentation for autonomous agricultural machines.
Keywords: autonomous agricultural machines, deep learning, safety, visual perception
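The remapping of 400 fine-grained classes into agricultural superclasses can be sketched as a simple lookup applied per pixel label. The individual class names below are hypothetical examples; only the superclass names come from the abstract.

```python
# Hypothetical fine-grained -> superclass lookup; unknown classes default
# to "obstacle", the conservative choice for a mowing robot.
SUPERCLASS_MAP = {
    "person": "human", "child": "human",
    "dog": "animal", "cow": "animal",
    "sky": "sky", "road": "road",
    "grass": "field", "wheat": "field",
    "hedge": "shelterbelt", "tree row": "shelterbelt",
}

def remap(label, default="obstacle"):
    """Map a fine-grained class label to its agricultural superclass."""
    return SUPERCLASS_MAP.get(label, default)

# Remapping a (here, flattened) label map pixel by pixel
labels = ["person", "grass", "rock"]
superclasses = [remap(l) for l in labels]
```

In practice the same lookup would be applied to every pixel of the segmented output map.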
Procedia PDF Downloads 394
5892 Fatigue Life Prediction under Variable Loading Based on a Non-Linear Energy Model
Authors: Aid Abdelkrim
Abstract:
A method of fatigue damage accumulation based on the energy parameters of the fatigue process is proposed. The model is simple to use: it has no free parameters to be determined and requires only knowledge of the W–N curve (W: strain energy density; N: number of cycles to failure) determined from the experimental Wöhler curve. To examine the performance of the proposed nonlinear model in estimating fatigue damage and fatigue life of components under random loading, a batch of specimens made of 6082-T6 aluminium alloy was studied, and some of the results are reported in the present paper. The paper describes an algorithm and suggests a fatigue cumulative damage model, in particular for random loading. This work contains the results of uniaxial random-load fatigue tests with different mean and amplitude values performed on 6082-T6 aluminium alloy specimens. The proposed model has been formulated to take into account the damage evolution at different load levels, and it allows the effect of the loading sequence to be included by means of a recurrence formula derived for multilevel loading, considering complex load sequences. It is concluded that the 'damaged stress interaction damage rule' proposed here allows better fatigue damage prediction than the widely used Palmgren-Miner rule, and the formula derived for random fatigue can be used to predict fatigue damage and fatigue lifetime very easily. The results obtained by the model are compared with the experimental results and with those calculated by the most commonly used fatigue damage model (Miner's rule). The comparison shows that the proposed model gives a good estimation of the experimental results; moreover, the error is smaller than that of Miner's rule.
Keywords: damage accumulation, energy model, damage indicator, variable loading, random loading
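Since the abstract does not give the recurrence formula itself, the sketch below contrasts the Palmgren-Miner linear rule with a generic sequence-dependent accumulation of the damage-curve type. The level-dependent exponents and the equivalent-cycle carry-over are illustrative assumptions, not the paper's model.

```python
def miner_damage(blocks, N_of):
    """Palmgren-Miner linear rule: D = sum(n_i / N_i), sequence-independent."""
    return sum(n / N_of[level] for level, n in blocks)

def nonlinear_damage(blocks, N_of, q_of):
    """Sequence-dependent accumulation: before a block's cycles are added,
    the damage reached so far is converted into an equivalent cycle count
    at the new load level, so the loading order matters."""
    D = 0.0
    for level, n in blocks:
        Ni, q = N_of[level], q_of[level]
        n_eq = Ni * D ** (1.0 / q)   # equivalent prior cycles at this level
        D = ((n_eq + n) / Ni) ** q
    return D

# Assumed cycles-to-failure and damage exponents per load level
N_of = {"high": 1.0e4, "low": 1.0e6}
q_of = {"high": 1.2, "low": 2.0}
blocks = [("high", 5000), ("low", 200000)]   # high-to-low block sequence

D_miner = miner_damage(blocks, N_of)                   # 0.5 + 0.2 = 0.7
D_nl = nonlinear_damage(blocks, N_of, q_of)
D_nl_rev = nonlinear_damage(blocks[::-1], N_of, q_of)  # reversed order differs
```

The point of the comparison is the last line: Miner's sum is the same for any ordering of the blocks, while the nonlinear accumulation gives a different damage value when the sequence is reversed.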
Procedia PDF Downloads 395
5891 Assessing the Utility of Unmanned Aerial Vehicle-Borne Hyperspectral Image and Photogrammetry Derived 3D Data for Wetland Species Distribution Quick Mapping
Authors: Qiaosi Li, Frankie Kwan Kit Wong, Tung Fung
Abstract:
A lightweight unmanned aerial vehicle (UAV) carrying novel sensors offers a low-cost approach to data acquisition in complex environments. This study established a framework for applying a UAV system to rapid mapping of a complex environment and assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for the classification of 13 species in a wetland area, the Mai Po Inner Deep Bay Ramsar Site, Hong Kong. The study area is part of a shallow bay with flat terrain; the major species include reedbed and four mangroves: Kandelia obovata, Aegiceras corniculatum, Acrostichum aureum, and Acanthus ilicifolius. Other species comprise various graminaceous plants, arbor, shrub, and the invasive species Mikania micrantha. In particular, the invasive species climbs up to the mangrove canopy, causing damage and morphological change, which may make the species more difficult to distinguish. Hyperspectral images were acquired by a Headwall Nano sensor with a spectral range from 400 nm to 1000 nm at 0.06 m spatial resolution. A sequence of multi-view RGB images was captured at 0.02 m spatial resolution with 75% overlap. The hyperspectral imagery was corrected for radiometric and geometric distortion, while the high-resolution RGB images were matched to generate maximally dense point clouds. A 5 cm grid digital surface model (DSM) was then derived from the dense point clouds. Multiple feature reduction methods were compared to identify the most efficient method and to explore the spectral bands most significant for distinguishing the species. The examined methods include stepwise discriminant analysis (DA), support vector machine (SVM), and minimum noise fraction (MNF) transformation.
Subsequently, spectral subsets composed of the 20 most important bands extracted by SVM, DA, and MNF, as well as multi-source subsets adding the DSM to the 20 spectral bands, served as input to a maximum likelihood classifier (MLC) and an SVM classifier for comparison of the classification results. The results showed that the feature reduction methods, from best to worst, are MNF transformation, DA, and SVM; the MNF transformation accuracy was even higher than that of the all-bands input. The selected bands frequently lay along the green peak, the red edge, and the near infrared. Additionally, DA found that the chlorophyll-absorption red band and the yellow band were also important for species classification. In terms of 3D data, the DSM enhanced the discriminant capacity among low plants, arbor, and mangrove. Meanwhile, the DSM largely reduced misclassification caused by shadow effects and inter-species morphological variation. With respect to the classifier, the nonparametric SVM outperformed MLC for the high-dimensional, multi-source data in this study. The SVM classifier tended to produce higher overall accuracy and fewer scattered patches, although it costs more time than MLC. The best result was obtained by combining the MNF components and the DSM in the SVM classifier. This study offers a precise species distribution survey solution for inaccessible wetland areas at a low cost in time and labour. In addition, the findings on the positive effect of the DSM, as well as on spectral feature identification, indicate that the utility of UAV-borne hyperspectral and photogrammetry-derived 3D data is promising for further research on wetland species, such as bio-parameter modelling and biological invasion monitoring.
Keywords: digital surface model (DSM), feature reduction, hyperspectral, photogrammetric point cloud, species mapping, unmanned aerial vehicle (UAV)
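As a toy illustration of the band-importance step, the sketch below ranks spectral bands by a two-class Fisher ratio and keeps the top k. This simple score is only a stand-in for the DA/SVM/MNF feature reduction actually compared in the abstract, and the spectra are fabricated.

```python
def fisher_score(values_a, values_b):
    """Two-class separability of one band: squared mean difference over
    the sum of within-class variances."""
    mean = lambda v: sum(v) / len(v)
    var = lambda v, m: sum((x - m) ** 2 for x in v) / len(v)
    ma, mb = mean(values_a), mean(values_b)
    return (ma - mb) ** 2 / (var(values_a, ma) + var(values_b, mb) + 1e-12)

def top_k_bands(samples_a, samples_b, k):
    """Rank every band of two species' spectra and keep the k most separable."""
    n_bands = len(samples_a[0])
    scores = [fisher_score([s[b] for s in samples_a],
                           [s[b] for s in samples_b])
              for b in range(n_bands)]
    return sorted(range(n_bands), key=lambda b: scores[b], reverse=True)[:k]

# Toy two-band spectra: band 1 separates the two species, band 0 does not
species_a = [[0.5, 0.9], [0.5, 0.8]]
species_b = [[0.5, 0.1], [0.5, 0.2]]
best = top_k_bands(species_a, species_b, 1)
```

The selected band indices, optionally concatenated with the DSM height as an extra feature, form the kind of multi-source subset the abstract feeds to the MLC and SVM classifiers.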
Procedia PDF Downloads 255
5890 Replacement of the Distorted Dentition of the Cone Beam Computed Tomography Scan Models for Orthognathic Surgery Planning
Authors: T. Almutairi, K. Naudi, N. Nairn, X. Ju, B. Eng, J. Whitters, A. Ayoub
Abstract:
Purpose: At present Cone Beam Computed Tomography (CBCT) imaging does not record dental morphology accurately due to the scattering produced by metallic restorations and the reported magnification. The aim of this pilot study is the development and validation of a new method for the replacement of the distorted dentition of CBCT scans with the dental image captured by the digital intraoral camera. Materials and Method: Six dried skulls with orthodontics brackets on the teeth were used in this study. Three intra-oral markers made of dental stone were constructed which were attached to orthodontics brackets. The skulls were CBCT scanned, and occlusal surface was captured using TRIOS® 3D intraoral scanner. Marker based and surface based registrations were performed to fuse the digital intra-oral scan(IOS) into the CBCT models. This produced a new composite digital model of the skull and dentition. The skulls were scanned again using the commercially accurate Laser Faro® arm to produce the 'gold standard' model for the assessment of the accuracy of the developed method. The accuracy of the method was assessed by measuring the distance between the occlusal surfaces of the new composite model and the 'gold standard' 3D model of the skull and teeth. The procedure was repeated a week apart to measure the reproducibility of the method. Results: The results showed no statistically significant difference between the measurements on the first and second occasions. The absolute mean distance between the new composite model and the laser model ranged between 0.11 mm to 0.20 mm. Conclusion: The dentition of the CBCT can be accurately replaced with the dental image captured by the intra-oral scanner to create a composite model. 
This method will improve the accuracy of orthognathic surgical prediction planning, with the final goal of fabricating a physical occlusal wafer to guide orthognathic surgery and eliminate the need for dental impressions.
Keywords: orthognathic surgery, superimposition, models, cone beam computed tomography
Procedia PDF Downloads 195
5889 Comparison of Various Policies under Different Maintenance Strategies on a Multi-Component System
Authors: Demet Ozgur-Unluakin, Busenur Turkali, Ayse Karacaorenli
Abstract:
Maintenance strategies can be classified into two types, reactive and proactive, with respect to the timing of failure and maintenance. If the maintenance activity is done after a breakdown, it is called reactive maintenance. On the other hand, proactive maintenance, which is further divided into preventive and predictive, focuses on maintaining components before a failure occurs, to prevent expensive halts. Recently, the number of interacting components in a system has increased rapidly and, therefore, the structure of such systems has become more complex. This has made it difficult to make the right maintenance decisions, so determining effective policies plays a significant role. In multi-component systems, many methodologies and strategies can be applied when a component or a system has already broken down, or when it is desired to identify proactively and avoid defects that could lead to future failure. This study focuses on the comparison of various maintenance strategies on a multi-component dynamic system. Components in the system deteriorate in time and are hidden, although the decision maker has partial observability. Several predefined policies under corrective, preventive and predictive maintenance strategies are considered to minimize the total maintenance cost over a planning horizon. The policies are simulated via Dynamic Bayesian Networks on a multi-component system with different policy parameters and cost scenarios, and their performances are evaluated. Results show that when the difference between the corrective and proactive maintenance costs is low, none of the proactive maintenance policies is significantly better than corrective maintenance.
However, when the difference is increased, at least one policy parameter for each proactive maintenance strategy gives a significantly lower cost than corrective maintenance.
Keywords: decision making, dynamic Bayesian networks, maintenance, multi-component systems, reliability
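The corrective-versus-preventive cost comparison described above can be illustrated, in a much simplified form, with an age-replacement Monte Carlo simulation. This is a hedged sketch, not the authors' Dynamic Bayesian Network model: the hazard function, costs, and replacement age are invented for illustration.

```python
import random

def simulate(policy_age, horizon, c_fail, c_prev, seed=0):
    """Average cost per period of a single-component age-replacement policy.
    policy_age=None means purely corrective: replace only on failure."""
    rng = random.Random(seed)
    age, cost = 0, 0.0
    for _ in range(horizon):
        age += 1
        hazard = min(0.02 * age, 1.0)   # component deteriorates with age
        if rng.random() < hazard:       # breakdown: corrective replacement
            cost += c_fail
            age = 0
        elif policy_age is not None and age >= policy_age:
            cost += c_prev              # planned preventive replacement
            age = 0
    return cost / horizon

# Failure is ten times as expensive as planned replacement.
corrective = simulate(None, 100_000, c_fail=10.0, c_prev=1.0)
preventive = simulate(5, 100_000, c_fail=10.0, c_prev=1.0)
```

When the failure cost greatly exceeds the preventive cost, the preventive policy wins; when the two costs are close, it does not, mirroring the qualitative result reported above.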
Procedia PDF Downloads 128
5888 Turkish Airlines' 85th Anniversary Commercial: An Analysis of the Institutional Identity of a Brand in Terms of Glocalization
Authors: Samil Ozcan
Abstract:
Airline companies target different customer segments in consideration of pricing, service quality, flight network, etc., and their brand positioning accords with the marketing strategies developed in the same direction. The object of this study, Turkish Airlines, has many peculiarities in its brand positioning compared to its rivals in the sector. In the first place, it appeals to a global customer group because of its Star Alliance membership and its broad flight network, with 315 destination points. The second group in its customer segmentation comprises domestic customers. For this group, the company follows a marketing strategy that plays to local culture and accentuates the image of Turkishness as an emotional allurement. The advertisements and publicity projects designed in this regard put little emphasis on the service quality the company offers its clients; they address the emotions of consumers rather than individual benefits, and rely on the historical memory of the nation and shared cultural values. This study examines the publicity work aimed at this second customer segment, focusing on Turkish Airlines' 85th Anniversary Commercial through a symbolic meaning analysis approach. The commercial presents six stories with undertones of nationalism in its theme. Nationalism is not just the product of collective interests based on reason, but a result of patriotism in the sense of loyalty to state and nation and love of ethnic belonging. While nationalism refers to concrete notions such as blood ties, common ancestors, and shared history, it draws its real strength not from the actuality of these notions but from the emotions invested in them. The myths of origin, the idea of a common homeland, boundary definitions, and symbolic acculturation have instrumental importance in the development of these commonalities. The commercial offers concrete examples for an analysis of Connor's definition of nationalism based on emotions.
Turning points in the history of the Turkish Republic, and the historical mission Turkish Airlines undertook in those moments, are narrated in six stories in the commercial with a highly emotional theme. These emotions, in general, depend on the collective memory generated by national consciousness. Collective memory is not simply remembering the past; it is constructed through the reconstruction and reinterpretation of the past in the present moment. This study inquires into the motivations behind the nationalist emotions generated within collective memory by engaging with the commercial released for the 85th anniversary of Turkish Airlines as the object of analysis. Symbols and myths can be read as key concepts that reveal the relation between 'identity and memory', because myths and symbols do not merely reflect collective memory; they reconstruct it as well. In this sense, the theme of the commercial defines the image of Turkishness with virtues such as self-sacrifice, helpfulness, humanity, and courage through a process of meaning creation based on symbolic mythologizations such as flag and homeland. These virtues go beyond describing the image of Turkishness and become an instrument that defines and gives meaning to Turkish identity.
Keywords: collective memory, emotions, identity, nationalism
Procedia PDF Downloads 152
5887 3D Estimation of Synaptic Vesicle Distributions in Serial Section Transmission Electron Microscopy
Authors: Mahdieh Khanmohammadi, Sune Darkner, Nicoletta Nava, Jens Randel Nyengaard, Jon Sporring
Abstract:
We study the effect of stress on the nervous system using two experimental groups of rats: sham rats and rats subjected to acute foot-shock stress. We investigate the synaptic vesicle density as a function of distance to the active zone in serial section transmission electron microscope images, in two and three dimensions. By estimating the density in 2D and 3D, we compare the two groups of rats.
Keywords: stress, 3-dimensional synaptic vesicle density, image registration, bioinformatics
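The density-versus-distance estimate described above can be sketched in its simplest form, assuming vesicle-to-active-zone distances have already been measured: bin the distances and divide counts by bin width. The numbers below are illustrative; this is not the authors' 2D/3D estimation pipeline.

```python
def density_profile(distances_nm, bin_width_nm, max_nm):
    """Vesicle count per unit distance in bins of width bin_width_nm,
    from the active zone (distance 0) out to max_nm."""
    n_bins = int(max_nm / bin_width_nm)
    counts = [0] * n_bins
    for d in distances_nm:
        b = int(d / bin_width_nm)
        if b < n_bins:
            counts[b] += 1
    return [c / bin_width_nm for c in counts]  # vesicles per nm
```

Comparing such profiles between the sham and stressed groups is the 1D analogue of the 2D/3D comparison performed in the study.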
Procedia PDF Downloads 277
5886 Comparison of Different Machine Learning Algorithms for Solubility Prediction
Authors: Muhammet Baldan, Emel Timuçin
Abstract:
Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and materials science. In this study, we compare the performance of five machine learning algorithms for predicting molecular solubility using the AqSolDB dataset: linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks. The dataset consists of 9,981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted as features for every SMILES representation in the dataset, giving a total of 189 features per molecule for training and testing. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
Keywords: random forest, machine learning, comparison, feature extraction
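The evaluation step of such a comparison can be sketched generically as an accuracy-based ranking over held-out predictions. The model names and predictions below are placeholders, not the study's trained models:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions matching the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def rank_models(y_true, model_preds):
    """model_preds: dict mapping model name -> list of predictions.
    Returns the best model name and the full score table."""
    scores = {name: accuracy(y_true, p) for name, p in model_preds.items()}
    best = max(scores, key=scores.get)
    return best, scores
```

In practice one would also record wall-clock training and inference time per model, as the study does, before choosing between models of similar accuracy.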
Procedia PDF Downloads 39
5885 An Automatic Large Classroom Attendance Conceptual Model Using Face Counting
Authors: Sirajdin Olagoke Adeshina, Haidi Ibrahim, Akeem Salawu
Abstract:
Large lecture theatres cannot be covered by a single camera but rather require a multicamera setup because of their size, shape, and seating arrangements, although an ordinary classroom can be captured with a single camera. Therefore, the design and implementation of a multicamera setup for a large lecture hall were considered. Researchers have emphasized the impact of class attendance on the academic performance of students. However, the traditional method of taking attendance is below standard, especially for large lecture theatres, because of the student population, the time required, its sophistication and exhaustiveness, and its openness to manipulation. An automated large-classroom attendance system is, therefore, imperative. The common approach in such systems is face detection and recognition, where known student faces are captured and stored for recognition purposes. This approach requires constant face database updates due to constant changes in facial features. Alternatively, face counting can be performed by cropping the localized faces in the video or image into a folder and then counting them. This research aims to develop a face localization-based approach to detect student faces in classroom images captured using a multicamera setup. A selected Haar-like feature cascade face detector, trained with an asymmetric goal of minimizing the False Rejection Rate (FRR) relative to the False Acceptance Rate (FAR), was applied on a Raspberry Pi 4B. A relationship between the two factors (FRR and FAR) was established using a constant (λ) as a trade-off between them for automatic adjustment during training. An evaluation of the proposed approach and the conventional AdaBoost on classroom datasets shows an improvement of 8% in TPR (a result of the low FRR) and a 7% reduction of the FRR. The average learning speed of the proposed approach was also improved, with an execution time of 1.19 s per image compared to 2.38 s for the improved AdaBoost.
Consequently, the proposed approach achieved 97% TPR with an overhead constraint time of 22.9 s, compared to 46.7 s for the improved AdaBoost, when evaluated on images obtained from a large lecture hall (DK5) at USM.
Keywords: automatic attendance, face detection, Haar-like cascade, manual attendance
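The asymmetric FRR/FAR trade-off governed by the constant λ can be sketched as a threshold search minimizing FRR + λ·FAR. The detector scores and λ below are invented, and the actual work tunes a Haar cascade during training rather than a fixed score threshold; this sketch only shows the trade-off itself.

```python
def frr_far(threshold, face_scores, nonface_scores):
    """FRR: faces rejected (score below threshold).
    FAR: non-faces accepted (score at or above threshold)."""
    frr = sum(s < threshold for s in face_scores) / len(face_scores)
    far = sum(s >= threshold for s in nonface_scores) / len(nonface_scores)
    return frr, far

def best_threshold(face_scores, nonface_scores, lam, candidates):
    """Pick the candidate threshold minimizing FRR + lam * FAR."""
    def loss(t):
        frr, far = frr_far(t, face_scores, nonface_scores)
        return frr + lam * far
    return min(candidates, key=loss)
```

Choosing λ < 1 penalizes false rejections relative to false acceptances, which pushes the selected threshold down and keeps the FRR low, the asymmetric goal described above.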
Procedia PDF Downloads 69
5884 Effects of Allowance for Corporate Equity on the Financing Choices of Belgian Small and Medium-Sized Enterprises in a Crisis Context
Authors: O. Colot, M. Croquet, L. Cultrera, Y. Fandja Collince
Abstract:
The objective of our research is to evaluate the impact of the allowance for corporate equity (ACE) on the financial structure of Belgian SMEs, in order to highlight the potential existence of a fiscal leverage. To limit the biases linked to the rationing of capital following the financial crisis, we first compare the dynamic evolution of the financial structure of Belgian firms over the period 2006-2015, focusing on three sub-periods: 2006-2008, 2009-2012 and 2013-2015. We then give an international dimension to this comparison by including SMEs from countries adjoining Belgium (France, Germany, the Netherlands and the United Kingdom), in which there is no ACE. This comparison allows a better understanding of the fiscal advantage linked to the ACE for firms evolving in a relatively unstable economic environment following the financial crisis of 2008. This research is relevant given the economic and political context in which Belgium operates and the very uncertain future of the Belgian ACE. The originality of this research is twofold: the long study period and the consideration of the effects of the financial and economic crisis on the financing structure of Belgian SMEs. The results, even though they confirm the existence of a positive fiscal leverage of the deduction for risk capital on the financing structure of Belgian SMEs, do not allow the extent of this leverage to be clearly quantified. The comparative evolution of the financing structures of Belgian, French, German, Dutch and English SMEs over the period 2006-2015 shows a strong similarity in their overall evolution.
Keywords: allowance for corporate equity, Belgium, financial structure, small and medium sized firms
Procedia PDF Downloads 201
5883 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube
Authors: Dan Kanmegne
Abstract:
Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.), and is acknowledged to have great potential for carbon sequestration; therefore, it can be integrated into carbon emission reduction mechanisms. Particularly in sub-Saharan Africa, the constraint lies in the lack of information about both the areas under agroforestry and the characterization (composition, structure, and management) of each agroforestry system at the country level. This study describes and quantifies 'what is where?' as a prerequisite to the quantification of carbon stocks in different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic technology as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost. RS data fulfil the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) for use in carbon estimation. Satellite data are becoming more and more accessible, and the archives are growing exponentially. To retrieve useful information that supports decision-making out of this large amount of data, satellite data need to be organized so as to ensure fast processing, quick accessibility, and ease of use. A new solution is a data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels used for efficient access and analysis.
A data cube for Burkina Faso has been set up through the cooperation project between the international service provider WASCAL and Germany, which provides an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach, in its initial stage, is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018 to stratify the country based on vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part of the country will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season in February-March 2020. The field campaigns will consist of identifying and describing the different agroforestry systems, together with qualitative interviews. A multi-temporal supervised image classification will then be done with a random forest algorithm, with the field data used both for training the algorithm and for accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics, (ii) characteristics of the different systems (main species, management, area, etc.), and (iii) an assessment report on the Burkina Faso data cube.
Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification
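The initial stratification step, unsupervised clustering of per-pixel NDVI time series, can be sketched in miniature. The study derives fifteen strata from a nine-year series; this hypothetical example uses two clusters, two dates, and invented reflectances, with a deliberately naive k-means initialization.

```python
def ndvi(red, nir):
    """Normalized difference vegetation index from red/NIR reflectance."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def kmeans(series, k, iters=20):
    """Minimal k-means on equal-length time series (Euclidean distance).
    Naive init: the first k series act as initial cluster centers."""
    centers = series[:k]
    labels = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        labels = []
        for s in series:
            d = [sum((a - b) ** 2 for a, b in zip(s, c)) for c in centers]
            j = d.index(min(d))
            groups[j].append(s)
            labels.append(j)
        centers = [
            [sum(col) / len(g) for col in zip(*g)] if g else centers[j]
            for j, g in enumerate(groups)
        ]
    return labels
```

Pixels with persistently high NDVI (dense vegetation) separate from persistently low-NDVI pixels (bare soil), which is the essence of stratifying the country by vegetation before fieldwork.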
Procedia PDF Downloads 144
5882 Exploring the Intrinsic Ecology and Suitable Density of Historic Districts Through a Comparative Analysis of Ancient and Modern Ecological Smart Practices
Authors: HU Changjuan, Gong Cong, Long Hao
Abstract:
Although urban ecological policies and the public's aspiration for livable environments have expedited the pace of ecological revitalization, historic districts that have evolved through natural ecological processes often become obsolete and less habitable amid rapid urbanization. This raises a critical question: are historic districts inherently incapable of being ecological and livable? The thriving concept of 'intrinsic ecology', characterized by its ability to transform city-district systems into healthy ecosystems with diverse environments, stable functions, and rapid restoration capabilities, holds potential for guiding the integration of ancient and modern ecological wisdom while supporting the dynamic involvement of cultures. This study explores the intrinsic ecology of historic districts from three aspects: 1) Population Density: by comparing the population density before urban population expansion with that of the present day, we determine a reasonable population density for historic districts. 2) Building Density: using the 'Space-mate' tool for comparative analysis, we form a spatial matrix to explore the intrinsic ecology of building density in Chinese historic districts. 3) Green Capacity Ratio: using ecological districts as control samples, we conduct dual comparative analyses (related comparison and upgraded comparison) to determine the intrinsic ecological advantages of the two-dimensional and three-dimensional green volume of historic districts. The study informs a density optimization strategy that supports cultural, social, natural, and economic ecology, contributing to the creation of eco-historic districts.
Keywords: eco-historic districts, intrinsic ecology, suitable density, green capacity ratio
Procedia PDF Downloads 21
5881 Nonlinear Finite Element Modeling of Deep Beam Resting on Linear and Nonlinear Random Soil
Authors: M. Seguini, D. Nedjar
Abstract:
An accurate nonlinear analysis of a deep beam resting on an elastic perfectly plastic soil is carried out in this study. In fact, a nonlinear finite element model for large deflections and moderate rotations of an Euler-Bernoulli beam resting on linear and nonlinear random soil is investigated. The geometric nonlinear analysis of the beam is based on the theory of von Kármán, where the Newton-Raphson incremental iteration method is implemented in a Matlab code to solve the nonlinear equations of the soil-beam interaction system. Two analyses (deterministic and probabilistic) are proposed to verify the accuracy and efficiency of the proposed model, where the theory of the local average based on the Monte Carlo approach is used to analyze the effect of the spatial variability of the soil properties on the nonlinear beam response. The effects of six main parameters are investigated: the external load, the length of the beam, the coefficient of subgrade reaction of the soil, the Young's modulus of the beam, and the coefficient of variation and the correlation length of the soil's coefficient of subgrade reaction. A comparison between the beam resting on the linear and nonlinear soil models is presented for different beam lengths and external loads. Numerical results have been obtained for the combination of the geometric nonlinearity of the beam and the material nonlinearity of the random soil. This comparison highlighted the need to include the material nonlinearity and spatial variability of the soil in the geometric nonlinear analysis when the beam undergoes large deflections.
Keywords: finite element method, geometric nonlinearity, material nonlinearity, soil-structure interaction, spatial variability
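The Newton-Raphson incremental iteration at the heart of such a solver can be illustrated on a single scalar residual, here a hardening "soil spring" k·u + c·u³ = f. This is a stand-in for the full finite element system, with invented coefficients, not the study's Matlab implementation.

```python
def newton_raphson(residual, tangent, x0, tol=1e-10, max_iter=50):
    """Solve residual(x) = 0 by successive linearized corrections."""
    x = x0
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            return x
        x -= r / tangent(x)   # correction step using the tangent stiffness
    return x

# Cubic hardening spring: k*u + c*u**3 = f
k, c, f = 2.0, 1.0, 10.0
u = newton_raphson(lambda x: k * x + c * x**3 - f,
                   lambda x: k + 3 * c * x**2, x0=1.0)
```

In the finite element setting, x becomes the nodal displacement vector, the residual the out-of-balance force vector, and the tangent the consistent stiffness matrix assembled at each iteration.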
Procedia PDF Downloads 412
5880 Lexical Collocations in Medical Articles of Non-Native vs Native English-Speaking Researchers
Authors: Waleed Mandour
Abstract:
This study presents a multidimensional scrutiny of Benson et al.'s seven-category taxonomy of lexical collocations as used by Egyptian medical authors and their native English-speaking peers. It investigates 212 medical papers, all published over a span of six years (2013 to 2018). The comparison is made against medical research articles by native speakers of English (25,238 articles in total, with over 103 million words) derived from the Directory of Open Access Journals (a 2.7 billion-word corpus). The corpus compiled from non-native speakers was annotated and marked up manually by the researcher according to the standards of Weisser. For the statistical comparisons, conventional frequency-based analysis was deployed alongside the relevant criteria, such as association measures (AMs), for which logDice was used as per the recommendation of Kilgarriff et al. when comparing large corpora. Despite the terminological convergence in the subject corpora, the comparison results confirm the previous literature: the non-native speakers' compositions reveal limited ranges of lexical collocations in terms of their distribution. There is, however, a ubiquitous tendency to overuse the multi-words that are highly frequent among native speakers, across all lexical categories investigated. Furthermore, Egyptian authors, in contrast to their English-speaking peers, tend to embrace more collocations denoting quantitative rather than qualitative analyses in their papers. This empirical work contributes to English for Academic Purposes (EAP) and English as a Lingua Franca in Academic settings (ELFA). In addition, there are pedagogical implications that would promote a better quality of medical research papers published in Egyptian universities.
Keywords: corpus linguistics, EAP, ELFA, lexical collocations, medical discourse
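For reference, the logDice association measure mentioned above is commonly given as 14 + log₂(2·f(x,y) / (f(x) + f(y))), where f(x,y) is the collocation frequency and f(x), f(y) are the frequencies of the individual words. A minimal sketch with invented counts:

```python
import math

def log_dice(f_xy, f_x, f_y):
    """logDice association score; theoretical maximum is 14,
    reached when the two words only ever occur together."""
    return 14 + math.log2(2 * f_xy / (f_x + f_y))
```

Because logDice depends only on relative frequencies, scores remain comparable across corpora of very different sizes, which is why it suits a comparison between a 212-paper learner corpus and a 2.7 billion-word reference corpus.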
Procedia PDF Downloads 129
5879 An Assessment on the Effect of Participation of Rural Woman on Sustainable Rural Water Supply in Yemen
Authors: Afrah Saad Mohsen Al-Mahfadi
Abstract:
In rural areas of developing countries, participation of all stakeholders in water supply projects is an important step towards further development. As most of the beneficiaries are women, it is important that they be involved to achieve successful and sustainable water supply projects. Women are responsible for the management of water both inside and outside the home, and often spend more than six hours a day fetching drinking water from distant sources. The problem is that rural women play a role of little importance in the phases of water supply projects in rural Yemen. Therefore, this research aimed at analyzing the different reasons for their lack of participation in projects, and in what way full participation, if achieved, could contribute to sustainable water supply projects in the rural mountainous areas of Yemen. Four water supply projects were selected as a case study in the Al-Della'a Alaala sub-district in the Al-Mahweet governorate; two of them were implemented by the Social Fund for Development (SFD), while the others were implemented by the General Authority for Rural Water Supply Projects (GARWSSP). Furthermore, the successful Al-Galba project, located in the Badan district of the Ibb governorate, was selected for comparison. Rural women's active participation in water projects has potential benefits including improved continuity and maintenance, equipment security, and improvement in the overall health and education status of these areas. The majority of respondents taking part in GARWSSP projects estimated that there is no reason to involve women in project activities. In the comparison project, in which a woman worked as a supervisor and implemented the project, all respondents indicated that the participation of women is vital for sustainability.
Therefore, the results of this research are intended to stimulate rural women's participation in the mountainous areas of Yemen.
Keywords: assessment, rural woman, sustainability, water management
Procedia PDF Downloads 688
5878 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture
Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko
Abstract:
Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracic-lumbar spine from 15 healthy patients and 15 patients with confirmed osteoporosis were used for the analysis. As a result, 120 samples with dimensions of 50x50 pixels were obtained. The set of features was derived from the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet. As a result of the image analysis, 290 textural feature descriptors were obtained. The dimension of the feature space was reduced by the use of three selection methods: the Fisher coefficient (FC), mutual information (MI), and the combined minimization of the classification error probability and average correlation coefficients between the chosen features (POE + ACC). Each of them returned the ten features occupying the initial places in the ranking devised according to its own coefficient. The Fisher coefficient and mutual information selections returned the same features, arranged in a different order. In both rankings, the 50th percentile (Perc.50%) was found in the first place. The next selected features come from the co-occurrence matrix. The sets of features selected in the selection process were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT) and reduced error pruning tree (REPT).
In order to assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV) and negative predictive value (NPV). Taking into account the classification results, the best results were obtained for the Hoeffding tree and logistic model trees classifiers, using the set of features selected by the POE + ACC method. For the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3% and PPV = 93.3%. Additionally, the values of the other two parameters, i.e., TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. For the logistic model trees classifier, the same ACC value was obtained, ACC = 90%, along with the highest values of TNR = 88.3% and NPV = 88.3%. The values of the other two parameters remained close to the highest: TPR = 91.7% and PPV = 91.6%. The results obtained in the experiment show that the use of classification trees is an effective method for the classification of texture features. This allows the condition of the spongy tissue to be identified for healthy cases and those with osteoporosis.
Keywords: classification, feature selection, texture analysis, tree algorithms
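The five evaluation parameters above all reduce to the four counts of a binary confusion matrix. A minimal sketch, with invented labels (1 = pathological, 0 = healthy):

```python
def confusion_metrics(y_true, y_pred):
    """ACC, TPR (sensitivity), TNR (specificity), PPV and NPV
    from binary labels; assumes every denominator is non-zero."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "ACC": (tp + tn) / len(y_true),
        "TPR": tp / (tp + fn),
        "TNR": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }
```

Reporting all five jointly, as the study does, guards against a classifier that scores well on accuracy alone by favoring the majority class.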
Procedia PDF Downloads 176
5877 Numerical Simulation of a Combined Impact of Cooling and Ventilation on the Indoor Environmental Quality
Authors: Matjaz Prek
Abstract:
The impact of three different combinations of cooling and ventilation systems on indoor environmental quality (IEQ) has been studied. A comparison of chilled ceiling cooling combined with displacement ventilation, cooling with a fan coil unit, and cooling with flat wall displacement outlets was performed. All three combinations were evaluated from the standpoint of whole-body and local thermal comfort criteria as well as from the standpoint of ventilation effectiveness. The comparison was made on the basis of numerical simulation with DesignBuilder and Fluent, carried out in two steps. First, the DesignBuilder software environment was used to model the building's thermal performance and evaluate the interaction between the environment and the building. The heat gains of the building and of the individual space, as well as the heat losses on the boundary surfaces of the room, were calculated. In the second step, the Fluent software environment was used to simulate the response of the indoor environment, evaluating the interaction between building and occupant, using the simulation results obtained in the first step. Among the systems presented, the ceiling cooling system in combination with displacement ventilation was found to be the most suitable, as it offers a high level of thermal comfort with adequate ventilation effectiveness. Fan coil cooling proved inadequate from the standpoint of thermal comfort, whereas flat wall displacement outlets were inadequate from the standpoint of ventilation effectiveness. The study showed the need to evaluate the indoor environment not solely from the energy use point of view, but from the point of view of indoor environmental quality as well.
Keywords: cooling, ventilation, thermal comfort, ventilation effectiveness, indoor environmental quality, IEQ, computational fluid dynamics
Procedia PDF Downloads 186
5876 Streamwise Vorticity in the Wake of a Sliding Bubble
Authors: R. O’Reilly Meehan, D. B. Murray
Abstract:
In many practical situations, bubbles are dispersed in a liquid phase. Understanding these complex bubbly flows is therefore a key issue for applications such as shell and tube heat exchangers, mineral flotation, and oxidation in water treatment. Although a large body of work exists on bubbles rising in an unbounded medium, bubbles rising in constricted geometries have received less attention. The particular case of a bubble sliding underneath an inclined surface is common to two-phase flow systems. The current study intends to expand this knowledge by performing experiments to quantify the streamwise flow structures associated with a single air bubble sliding under an inclined surface in quiescent water. This is achieved by means of two-dimensional, two-component particle image velocimetry (PIV), performed with a continuous wave laser and a high-speed camera. PIV vorticity fields obtained in a plane perpendicular to the sliding surface show that there is significant bulk fluid motion away from the surface. The associated momentum of the bubble means that this wake motion persists for a significant time before viscous dissipation. The magnitude and direction of the flow structures in the streamwise measurement plane are found to depend on the point on its path through which the bubble enters the plane. This entry point, represented by a phase angle, affects the nature and strength of the vortical structures. This study reconstructs the vorticity field in the wake of the bubble, converting the fields at different instants in time into slices of a large-scale wake structure. This is, in essence, Taylor's 'frozen turbulence' hypothesis. Applying it to the vorticity fields provides a pseudo three-dimensional representation from 2D data, allowing for a more intuitive understanding of the bubble wake.
This study provides insights into the complex dynamics of a situation common to many engineering applications, particularly shell and tube heat exchangers in the nucleate boiling regime.
Keywords: bubbly flow, particle image velocimetry, two-phase flow, wake structures
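The "frozen turbulence" mapping used for the pseudo-3D reconstruction amounts to assigning each time-stamped PIV slice a streamwise position x = U·t, where U is the convection (sliding) speed. A hedged sketch with placeholder fields and invented values, not the authors' processing code:

```python
def reconstruct_wake(slices, dt, convection_speed):
    """Map successive 2D vorticity fields (one per PIV frame, spaced dt
    apart) to streamwise positions x = U * t, yielding an ordered stack
    of (x_position, field) pairs: the pseudo-3D wake volume."""
    return [(i * dt * convection_speed, field)
            for i, field in enumerate(slices)]
```

The mapping is only as good as the frozen-field assumption: it holds while the wake structures convect past the measurement plane faster than they evolve, which is why it suits the persistent, momentum-carrying wake described above.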
Procedia PDF Downloads 374
5875 Hepatoprotective Assessment of L-Ascorbate 1-(2-Hydroxyethyl)-4,6-Dimethyl-1,2-Dihydropyrimidine-2-One on Exposure to Carbon Tetrachloride
Authors: Nail Nazarov, Alexandra Vyshtakalyuk, Vyacheslav Semenov, Irina Galyametdinova, Vladimir Zobov, Vladimir Reznik
Abstract:
Among hepatoprotectors, pyrimidine derivatives are used as a means of stimulating protein synthesis and the recovery of liver cells damaged by toxic and infectious causes. In experimental toxic hepatitis, hepatoprotective activity has been detected for several pyrimidine derivatives. Literature data report a hepatoprotective effect for oxymethyluracil. For analogues of pyrimidine nucleobases, the drugs Methyluracil and Pentoxyl, the hepatoprotective effect is weakly expressed. According to American researchers, 2,4-dioxo-5-arylideneamino uracils possess a broad spectrum of biological activity, including hepatoprotective properties. Under the influence of the medicinal preparation Xymedon (1-(beta-hydroxyethyl)-4,6-dimethyl-1,2-dihydro-2-oxopyrimidine), developed as a means of stimulating tissue regeneration, increased activity of human liver microsomal oxidases was revealed. Studies on a model of toxic liver damage in rats have shown a hepatoprotective effect of Xymedon and its stimulating impact on the recovery of liver tissue. The hepatoprotective properties of a new compound in the series of pyrimidine derivatives, L-ascorbate 1-(2-hydroxyethyl)-4,6-dimethyl-1,2-dihydropyrimidine-2-one, synthesized on the basis of the Xymedon preparation, were investigated for the first time in rats under the action of carbon tetrachloride. It was shown that, under the influence of the compound injected before carbon tetrachloride exposure, the deviations of biochemical parameters from reference values and the severity of structural-morphological liver damage decreased in comparison with the control group. The hepatoprotective properties of the investigated compound were more pronounced than those of Xymedon.
Keywords: hepatoprotectors, pyrimidine derivatives, toxic liver damage, xymedon
Procedia PDF Downloads 423