Search results for: digital transformation
115 MHD Stagnation Point Flow towards a Shrinking Sheet with Suction in an Upper-Convected Maxwell (UCM) Fluid
Authors: K. Jafar, R. Nazar, A. Ishak, I. Pop
Abstract:
The present analysis considers the steady stagnation point flow and heat transfer towards a permeable shrinking sheet in an upper-convected Maxwell (UCM) electrically conducting fluid, with a constant magnetic field applied transverse to the flow and local heat generation within the boundary layer, the heat generation rate being proportional to (T − T∞)^p. Using a similarity transformation, the governing system of partial differential equations is first transformed into a system of ordinary differential equations, which is then solved numerically using a finite-difference scheme known as the Keller-box method. Numerical results are obtained for the flow and thermal fields for various values of the stretching/shrinking parameter λ, the magnetic parameter M, the elastic parameter K, the Prandtl number Pr, the suction parameter s, the heat generation parameter Q, and the exponent p. The results indicate the existence of dual solutions for the shrinking sheet up to a critical value λc that depends on the values of M, K, and s. In the presence of internal heat absorption (Q < 0), the surface heat transfer rate decreases with increasing p but increases with the parameters Q and s, whether the sheet is stretched or shrunk.
Keywords: Magnetohydrodynamic (MHD), boundary layer flow, UCM fluid, stagnation point, shrinking sheet.
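To make the similarity-reduced boundary value problem above concrete, the sketch below solves a reduced form of the momentum equation in the Newtonian limit (elastic parameter K = 0) with SciPy's collocation BVP solver rather than the Keller-box scheme used by the authors; the parameter values and the reduced equation are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch (not the authors' Keller-box code): solve the reduced
# similarity equation for MHD stagnation-point flow toward a stretching/
# shrinking sheet in the Newtonian limit (elastic parameter K = 0),
#   f''' + f f'' + 1 - (f')^2 + M (1 - f') = 0,
#   f(0) = s,  f'(0) = lambda,  f'(inf) = 1,
# with SciPy's collocation BVP solver. Parameter values are assumed.
import numpy as np
from scipy.integrate import solve_bvp

M, s, lam = 0.1, 2.0, -0.5      # magnetic, suction and stretching/shrinking parameters
eta_inf = 10.0                  # numerical stand-in for eta -> infinity

def odes(eta, y):
    # y[0] = f, y[1] = f', y[2] = f''
    f, fp, fpp = y
    return np.vstack((fp, fpp, -(f * fpp + 1.0 - fp**2 + M * (1.0 - fp))))

def bc(y0, yinf):
    return np.array([y0[0] - s, y0[1] - lam, yinf[1] - 1.0])

eta = np.linspace(0.0, eta_inf, 200)
y_guess = np.zeros((3, eta.size))
y_guess[1] = 1.0 - np.exp(-eta)   # smooth guess that already satisfies the far-field condition
sol = solve_bvp(odes, bc, eta, y_guess)
print("converged:", sol.success, " f''(0) =", float(sol.sol(0.0)[2]))
```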
114 Application of RS and GIS Technique for Identifying Groundwater Potential Zone in Gomukhi Nadhi Sub Basin, South India
Authors: Punitha Periyasamy, Mahalingam Sudalaimuthu, Sachikanta Nanda, Arasu Sundaram
Abstract:
India holds 17.5% of the world’s population but only 2% of its geographical area, and 27.35% of that area is categorized as wasteland owing to scarce or absent groundwater, so there is heavy demand on groundwater for agricultural and non-agricultural activities to sustain the country's growth. With this in mind, an attempt is made to identify the groundwater potential zones in the Gomukhi Nadhi sub basin of the Vellar River basin, Tamil Nadu, India, covering an area of 1146.6 sq. km and consisting of 9 blocks from Peddanaickanpalayam to Virudhachalam. Thematic maps of geology, geomorphology, lineaments, land use and land cover, and drainage are prepared for the study area using IRS P6 data. Collateral data, including rainfall, water level and soil maps, are collected for analysis and inference. The digital elevation model (DEM) is generated from Shuttle Radar Topography Mission (SRTM) data and the slope of the study area is derived from it. ArcGIS 10.1 serves as a powerful spatial analysis tool for delineating the groundwater potential zones in the study area by means of weighted overlay analysis. Each parameter of the thematic maps is ranked and weighted in accordance with its influence on groundwater recharge. The potential zones in the study area are classified as Very Good, Good, Moderate and Poor, with areal extents of 15.67, 381.06, 575.38 and 174.49 sq. km, respectively.
Keywords: ArcGIS, DEM, Groundwater, Recharge, Weighted Overlay.
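As a minimal illustration of the weighted overlay step described in the abstract, the sketch below combines reclassified thematic rasters with influence weights and bins the result into potential classes; the ranks, weights and class breaks are assumptions for demonstration, not the values used in the study.

```python
# Minimal weighted-overlay sketch with NumPy arrays standing in for the
# reclassified thematic rasters. Ranks (1-4) and weights are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)
layers = {name: rng.integers(1, 5, size=shape)        # rank 1 (poor) .. 4 (very good)
          for name in ["geology", "geomorphology", "lineament",
                       "landuse", "drainage", "slope"]}
weights = {"geology": 0.20, "geomorphology": 0.25, "lineament": 0.15,
           "landuse": 0.10, "drainage": 0.15, "slope": 0.15}   # must sum to 1

score = sum(weights[name] * layers[name].astype(float) for name in layers)

# Bin the weighted score into the four potential classes used in the paper.
classes = np.digitize(score, bins=[1.75, 2.5, 3.25])   # 0=Poor .. 3=Very Good
labels = ["Poor", "Moderate", "Good", "Very Good"]
for i, lab in enumerate(labels):
    print(lab, "cells:", int((classes == i).sum()))
```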
113 Trade Policy Incentives and Economic Growth in Nigeria
Authors: Emmanuel Dele Balogun
Abstract:
This paper analyzes, using descriptive statistics and econometrics, data spanning the period 1981 to 2014 to gauge the effects of trade policy incentives on economic growth in Nigeria. It argues that the incentives provided penalized economic growth during the pre-trade-liberalization era but stimulated a rapid increase in total factor productivity during the post-liberalization period of 2000 to 2014. The trend analysis shows that Nigeria maintained high tariff walls during the era of economic regulation, which were lowered in the post-liberalization era. The protections favored infant industries, which were mainly appendages of multinationals, while discouraging imports of competing food and finished consumer products. The trade openness index confirms the undue exposure of Nigeria’s economy to the vagaries of international market shocks, while banking sector recapitalization and new listings of telecommunications companies deepened the financial markets in the post-liberalization era. The structure of economic incentives was biased in favor of construction, trade and services, but against the real sector despite protectionist policies. Total factor productivity (TFP) estimates show that the Nigerian economy suffered stagnation in the pre-liberalization era but experienced rapid growth rates in the post-liberalization era. The regression relating trade policy incentives to the TFP growth rate yielded a significant but negative intercept, suggesting that a non-interventionist policy could be detrimental to economic progress, while protective tariffs that limit imports of competing products could spur productivity gains in domestic import substitutes beyond factor growth under market liberalization. The main constraint on the effectiveness of trade policy incentives is the failure of benefiting industries to leverage the nation's domestic factor endowments. The paper concludes that the current economic transformation strategies need urgent review in order to give policymakers a better understanding of the most viable options for rapid success.
Keywords: Trade Policies, macroeconomic incentives, total factor productivity and economic growth.
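A back-of-the-envelope version of the growth regression described above can be run with ordinary least squares; the series below are synthetic placeholders, since the paper's 1981-2014 data are not reproduced here.

```python
# Illustrative OLS of TFP growth on trade-policy proxies (average tariff and
# trade openness). The data are randomly generated stand-ins for 1981-2014.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1981, 2015)
tariff = rng.uniform(5, 60, size=years.size)       # average tariff rate, %
openness = rng.uniform(0.2, 0.8, size=years.size)  # (exports + imports) / GDP
tfp_growth = 0.5 + 0.03 * tariff - 2.0 * openness + rng.normal(0, 1, years.size)

X = np.column_stack([np.ones_like(tariff), tariff, openness])
beta, *_ = np.linalg.lstsq(X, tfp_growth, rcond=None)
print("intercept, tariff coef, openness coef:", np.round(beta, 3))
```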
112 Numerical Solution of Steady Magnetohydrodynamic Boundary Layer Flow Due to Gyrotactic Microorganism for Williamson Nanofluid over Stretched Surface in the Presence of Exponential Internal Heat Generation
Authors: M. A. Talha, M. Osman Gani, M. Ferdows
Abstract:
This paper studies the two-dimensional magnetohydrodynamic (MHD) steady incompressible viscous flow of a Williamson nanofluid with exponential internal heat generation, containing gyrotactic microorganisms, over a stretching sheet. The governing equations and auxiliary conditions are reduced to a set of non-linear coupled differential equations with appropriate boundary conditions using a similarity transformation. The transformed equations are solved numerically through the spectral relaxation method. The influences of various parameters such as the Williamson parameter γ, the power constant λ, the Prandtl number Pr, the magnetic field parameter M, the Peclet number Pe, the Lewis number Le, the bioconvection Lewis number Lb, the Brownian motion parameter Nb, the thermophoresis parameter Nt, and the bioconvection constant σ are studied to obtain the momentum, heat, mass and microorganism distributions. Momentum, heat, mass and gyrotactic microorganism profiles are explored through graphs and tables. We compute the heat transfer rate, the mass flux rate and the density number of motile microorganisms near the surface. Our numerical results are in good agreement with existing calculations. The residual error of the obtained solutions is determined in order to examine the convergence rate against iteration; faster convergence is achieved when internal heat generation is absent. Increasing the magnetic parameter M decreases the momentum boundary layer thickness but increases the thermal boundary layer thickness. The bioconvection Lewis number and the bioconvection parameter have a pronounced effect on the microorganism boundary layer. Increasing the Brownian motion parameter and the Lewis number thins the thermal boundary layer. Furthermore, the magnetic field parameter and the thermophoresis parameter have a noticeable effect on the concentration profiles.
Keywords: Convection flow, internal heat generation, similarity, spectral method, numerical analysis, Williamson nanofluid.
111 The Effect of Motor Learning Based Computer-Assisted Practice for Children with Handwriting Deficit – Comparing with the Effect of Traditional Sensorimotor Approach
Authors: Shao-Hsia Chang, Nan-Ying Yu
Abstract:
The objective of this study was to test how advanced digital technology enables more effective handwriting training for children with handwriting deficits. The study implemented graphomotor apparatuses in a computer-assisted instruction system, and experiments verifying the intervention effect were conducted as a randomized controlled trial. Forty-two children with handwriting deficits were assigned to a computer-assisted instruction group, a sensorimotor training group or a control (no intervention) group. Handwriting performance was measured using an elementary reading/writing test and a computerized handwriting evaluation before and after 6 weeks of intervention. Analyses of variance of the change scores were conducted to test for statistically significant differences across the three groups. Significant differences were found among the three groups, with the computer group differing significantly from the other two. Significant effects were observed in the near-point copy, far-point copy, dictation, and writing-from-phonetic-symbols tests. Writing speed and mean stroke velocity in near-point, far-point and short-paragraph copying also differed significantly among the three groups, with the computer group showing significant improvement over the other groups. For clinicians and school teachers, the results of this study provide a motor-control-based insight into the improvement of handwriting difficulties.
Keywords: Dysgraphia, computerized handwriting evaluation, sensorimotor program, computer assisted program.
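The between-group comparison described above reduces to a one-way ANOVA on change scores; a minimal sketch follows, with fabricated scores used purely to show the computation.

```python
# One-way ANOVA on handwriting change scores (post - pre) for the three groups.
# The score arrays are invented placeholders, not the study's data.
import numpy as np
from scipy import stats

computer     = np.array([8, 10, 7, 12, 9, 11, 10, 8, 9, 13, 10, 9, 11, 12])
sensorimotor = np.array([5,  6, 4,  7, 5,  6,  5, 4, 6,  7,  5, 6,  4,  5])
control      = np.array([1,  2, 0,  3, 1,  2,  1, 0, 2,  1,  2, 1,  0,  2])

f_stat, p_value = stats.f_oneway(computer, sensorimotor, control)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> groups differ
```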
110 Developing Electronic Medical Record System to Enhance the Satisfaction of Patients and Service Providers
Authors: Siham Jemal Kedir
Abstract:
Information and communication technology is dramatically transforming the health sector, especially in developing countries with few resources and burgeoning access to internet connections. Processes such as record keeping, administration and human resources have been vastly simplified, allowing hospitals to focus on delivering urgent medical care. This paper explores the impact of IT through a study of the electronic medical record system at the Mekelle City Health Center in the Tigray Region, Ethiopia. The paper has four specific objectives: 1. developing artifacts for the electronic medical record system, 2. preparing a diagram for the step-by-step development of electronic medical records, 3. creating a draft website with the proposed electronic medical record system, and 4. testing and evaluating the performance and user acceptance of the system. The research was conducted qualitatively, employing interviews and in-person observation. It found the following major results: first, a medical record system has been difficult to implement; second, the Mekelle Health Center uses a manual recording system which is time-consuming and inefficient, and this old system leads to dissatisfaction among patients as well as the service provider staff. As a result, to transform the manual recording system into a digital one, an electronic medical recording system was developed. The developed system has been tested for implementation and found successful. Consequently, the administrator of the health center is ready to implement and use the developed software to introduce a medical recording system at the Mekelle Health Center.
Keywords: Electronic Health Record Implementation, EMR System Development, Medical Record.
109 Automated Textile Defect Recognition System Using Computer Vision and Artificial Neural Networks
Authors: Atiqul Islam, Shamim Akhter, Tumnun E. Mursalin
Abstract:
Least Developed Countries (LDCs) such as Bangladesh, which earns 25% of its revenue from textile exports, need to produce less defective textile in order to minimize production cost and time. Inspection in these industries is mostly manual and time-consuming, and reducing errors in identifying fabric defects requires a more automated and accurate inspection process. Addressing this gap, this research implements a textile defect recognizer which uses computer vision methodology combined with multi-layer neural networks to identify four classes of textile defects. The recognizer, suitable for LDCs, identifies fabric defects at economical cost and provides a less error-prone inspection system in real time. To generate the input set for the neural network, the recognizer first captures digital fabric images with an image acquisition device and converts the RGB images into binary images through a restoration process and local threshold techniques. The outputs of the processed image (the area of the faulty portion, the number of objects in the image, and the sharpness factor of the image) are then fed as the input layer to the neural network, which uses the back-propagation algorithm to compute the weight factors and generates the desired defect classifications as output.
Keywords: Computer vision, image acquisition device, machine vision, multi-layer neural networks.
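A rough sketch of the feature-extraction and neural-network stages described above is given below; the threshold, the sharpness proxy and the classifier settings are assumptions rather than the authors' exact choices, and the training data are fabricated.

```python
# Sketch of the feature-extraction + neural-network stage. Image loading and
# the restoration step are omitted; `img` is a grayscale fabric image in [0, 1].
import numpy as np
from scipy import ndimage
from sklearn.neural_network import MLPClassifier

def extract_features(img, threshold=0.5):
    binary = img < threshold                      # local thresholding simplified to a global one
    area = int(binary.sum())                      # area of the faulty portion (pixels)
    _, n_objects = ndimage.label(binary)          # number of connected defect objects
    sharpness = float(ndimage.laplace(img).var()) # crude "sharp factor" proxy
    return [area, n_objects, sharpness]

rng = np.random.default_rng(2)
img = rng.random((64, 64))
print("features of a synthetic patch:", extract_features(img))

# Fabricated training set: rows of features, labels = 4 defect classes (0..3).
X_train = rng.random((80, 3))
y_train = rng.integers(0, 4, size=80)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)                         # back-propagation training
print(clf.predict(X_train[:5]))
```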
108 Historical Development of Bagh-e Dasht in Herat, Afghanistan: A Comprehensive Field Survey of Physical and Social Aspects
Authors: Khojesta Kawish, Tetsuya Ando, Sayed Abdul Basir Samimi
Abstract:
Bagh-e Dasht area is situated in the northern part of Herat, an old city in western Afghanistan located on the Silk Road which has received a strong influence from Persian culture. Initially, the Bagh-e Dasht area was developed for gardens and palaces near Joy-e Injil canal during the Timurid Empire in the 15th century. It is assumed Bagh-e Dasht became a settlement in the 16th century during the Safavid Empire. The oldest area is the southern part around the canal bank which is characterized by Dalans, sun-dried brick arcades above which houses are often constructed. Traditional houses in this area are built with domical vault roofs constructed with sun-dried bricks. Bagh-e Dasht is one of the best-preserved settlements of traditional houses in Herat. This study examines the transformation of the Bagh-e Dasht area with a focus on Dalans, where traditional houses with domical vault roofs have been well-preserved until today. The aim of the study is to examine the extent of physical changes to the area as well as changes to houses and the community. This research paper contains original results which have previously not been published in architectural history. The roof types of houses in the area are investigated through examining high resolution satellite images. The boundary of each building and space is determined by both a field survey and aerial photographs of the study area. A comprehensive field survey was then conducted to examine each space and building in the area. In addition, a questionnaire was distributed to the residents of the Dalan houses and interviews were conducted with the Wakil (Chief) of the area, a local historian, residents and traditional builders. The study finds that the oldest part of Bagh-e Dasht area, the south, contains both Dalans and domical vault roof houses. The next oldest part, which is the north, only has domical vault roof houses. The rest of the area only has houses with modernized flat roofs. This observation provides an insight into the process of historical development in the Bagh-e Dasht area.
Keywords: Afghanistan, Bagh-e Dasht, Dalan, Domical vault, Herat, over path house, traditional house.
107 Relevance for Traditional Medicine in South Africa: Experiences of Urban Traditional Healers, Izinyanga
Authors: Ntokozo Mthembu
Abstract:
Access to relevant health care indicates people's likelihood of survival, and this includes the craft of indigenous healing and its practitioners, izinyanga. The emergence of the dreaded novel coronavirus, COVID-19, which has engulfed almost the whole world, has made it necessary to revisit the state of traditional healers in South Africa. This circumstance has exposed the reality of social settings in various social structures and related policies, including the way coloniality reveals itself in the differing treatment of Western and African-based therapeutic practices in the country. In attempting to gain a better understanding of such experiences, primary and secondary sources were consulted when collecting data, including a review of the literature and face-to-face interviews with traditional healers working on the streets of the Tshwane Municipality in South Africa. Preliminary findings revealed that the emergence of this deadly virus coincided with the moment when the government agenda was focused on fulfilling its promise to address past inequitable practices, including the transformation of the medical sector. This can be witnessed in the manner in which the government and related agencies such as the health department keep undermining indigenous healing practice, irrespective of its historical record in the healing profession and in fighting various diseases since before the time of the father of medicine, Imhotep. Based on these preliminary findings, it is recommended that the government hasten the incorporation of African knowledge systems, especially medicine, to offer alternatives and to assess the underutilised indigenous African therapeutic approaches and relevant skills that could be useful in combating ailments such as COVID-19. Plural medical systems should be recognized and related policies formulated to guarantee mutual respect among citizens and the incorporation of indigenous healing practices into the health sector of South Africa, Africa and the broader global community.
Keywords: Indigenous healing practice, inyanga, COVID-19, therapeutic, urban, experience.
106 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics
Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo
Abstract:
Communication signal modulation recognition is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods fall into two major categories: maximum likelihood hypothesis testing based on decision theory, and statistical pattern recognition based on feature extraction. The statistical pattern recognition approach, which comprises feature extraction and classifier design, is now the most commonly used. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on an improved Holder cloud feature, and an extreme learning machine (ELM), chosen to meet the real-time requirements of modern warfare, is used to classify the extracted features. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, which improves performance. It addresses the difficulty that a simple feature extraction algorithm based on the Holder coefficient has in recognizing signals at low SNR, and it achieves better recognition accuracy. Simulation results show that the approach still classifies well at low SNR; even at an SNR of -15 dB, the recognition accuracy reaches 76%.
Keywords: Communication signal, feature extraction, Holder coefficient, improved cloud model.
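A stripped-down version of the two stages described above, a Holder-coefficient feature followed by an extreme learning machine classifier, might look like the following; the Holder exponent, reference signals, network size and labels are assumptions, and the cloud-model improvement is not reproduced.

```python
# Sketch: Holder-coefficient feature plus a minimal extreme learning machine.
import numpy as np

def holder_coefficient(f, g, p=2.0):
    # Holder-inequality-based similarity; q is the conjugate exponent of p.
    q = p / (p - 1.0)
    f, g = np.abs(f), np.abs(g)
    return float(np.sum(f * g) /
                 (np.sum(f ** p) ** (1 / p) * np.sum(g ** q) ** (1 / q)))

def elm_train(X, y, n_hidden=50, n_classes=4, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # random hidden layer
    T = np.eye(n_classes)[y]                      # one-hot targets
    beta = np.linalg.pinv(H) @ T                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)

# Toy usage: features = Holder coefficients of each signal against two references.
rng = np.random.default_rng(3)
signals = rng.normal(size=(200, 128))             # stand-ins for noisy modulated signals
t = np.linspace(0, 8 * np.pi, 128)
ref_sin, ref_sq = np.sin(t), np.sign(np.sin(t))
X = np.array([[holder_coefficient(s, ref_sin), holder_coefficient(s, ref_sq)]
              for s in signals])
y = rng.integers(0, 4, size=200)                  # fabricated class labels
W, b, beta = elm_train(X, y)
print(elm_predict(X[:5], W, b, beta))
```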
105 A Pilot Study of Robot Reminiscence in Dementia Care
Authors: Ryuji Yamazaki, Masahiro Kochi, Weiran Zhu, Hiroko Kase
Abstract:
In care for older adults, behavioral and psychological symptoms of dementia (BPSD) such as agitation and aggression are distressing for patients and their caretakers, often resulting in premature institutionalization and increased costs of care. To improve mood and mitigate symptoms, emotion-oriented therapy such as reminiscence work is adopted in face-to-face communication as a non-pharmaceutical approach. Robotic media are expected to provide telecommunication support, bridging the digital divide for those with dementia and facilitating social interaction both verbally and nonverbally. The purpose of this case study is to explore the conditions under which robotic media can effectively attract the attention of older adults with dementia and promote their well-being. As a pilot study, we introduced the pillow-phone Hugvie®, a huggable, humanly shaped communication medium, to five residents with dementia at a care facility, to investigate how the following conditions work for the elderly when they use the medium: 1) no sound, 2) radio (non-interactive), 3) daily conversation, and 4) reminiscence work. Under condition 4, reminiscence work, the five participants maintained their attention on the medium for a longer duration than under the other conditions, and they also produced a larger number of utterances. These results indicate that providing topics related to personal histories through robotic media could affect communication positively and should therefore be investigated further. In addition, the ethical implications of using persuasive technology that affects the emotions and behaviors of older adults are also discussed.
Keywords: BPSD, reminiscence, tactile telecommunication, utterances.
104 Comparison of Central Light Reflex Width-to-Retinal Vessel Diameter Ratio between Glaucoma and Normal Eyes by Using Edge Detection Technique
Authors: P. Siriarchawatana, K. Leungchavaphongse, N. Covavisaruch, K. Rojananuangnit, P. Boondaeng, N. Panyayingyong
Abstract:
Glaucoma is a disease that causes visual loss in adults. Glaucoma causes damage to the optic nerve and its overall pathophysiology is still not fully understood. Vasculopathy may be one of the possible causes of nerve damage. Photographic imaging of retinal vessels by fundus camera during eye examination may complement clinical management. This paper presents an innovation for measuring central light reflex width-to-retinal vessel diameter ratio (CRR) from digital retinal photographs. Using our edge detection technique, CRRs from glaucoma and normal eyes were compared to examine differences and associations. CRRs were evaluated on fundus photographs of participants from Mettapracharak (Wat Raikhing) Hospital in Nakhon Pathom, Thailand. Fifty-five photographs from normal eyes and twenty-one photographs from glaucoma eyes were included. Participants with hypertension were excluded. In each photograph, CRRs from four retinal vessels, including arteries and veins in the inferotemporal and superotemporal regions, were quantified using edge detection technique. From our finding, mean CRRs of all four retinal arteries and veins were significantly higher in persons with glaucoma than in those without glaucoma (0.34 vs. 0.32, p < 0.05 for inferotemporal vein, 0.33 vs. 0.30, p < 0.01 for inferotemporal artery, 0.34 vs. 0.31, p < 0.01 for superotemporal vein, and 0.33 vs. 0.30, p < 0.05 for superotemporal artery). From these results, an increase in CRRs of retinal vessels, as quantitatively measured from fundus photographs, could be associated with glaucoma.
Keywords: Glaucoma, retinal vessel, central light reflex, image processing, fundus photograph, edge detection.
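The CRR measurement reduces to locating, along an intensity profile drawn across a vessel, the outer vessel edges and the bright central reflex, then taking the width ratio; the simplified one-dimensional sketch below uses a synthetic profile, whereas real measurements would apply edge detection to the fundus photograph itself.

```python
# Simplified 1-D sketch of the CRR computation on a synthetic cross-vessel
# intensity profile: a dark vessel with a bright central light reflex.
import numpy as np

x = np.linspace(-1.0, 1.0, 401)                       # position across the vessel (a.u.)
background, vessel, reflex = 0.8, -0.5, 0.3
profile = background + vessel * (np.abs(x) < 0.4) + reflex * (np.abs(x) < 0.06)
profile += np.random.default_rng(4).normal(0, 0.01, x.size)

grad = np.gradient(profile, x)
mid = x.size // 2
left_edge = x[np.argmin(grad[:mid])]                  # steepest falling edge (vessel wall)
right_edge = x[mid + np.argmax(grad[mid:])]           # steepest rising edge (vessel wall)
vessel_diameter = right_edge - left_edge

# Central reflex: bright region around the profile maximum inside the vessel.
inside = (x > left_edge) & (x < right_edge)
thresh = profile[inside].mean() + 0.5 * profile[inside].std()
bright = x[inside][profile[inside] > thresh]
reflex_width = float(np.ptp(bright)) if bright.size else 0.0

print("CRR =", round(reflex_width / vessel_diameter, 3))
```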
103 Co-Administration Effects of Conjugated Linoleic Acid and L-Carnitine on Weight Gain and Biochemical Profile in Diet Induced Obese Rats
Authors: Maryam Nazari, Majid Karandish, Alihossein Saberi
Abstract:
Obesity, as a global health challenge, motivates pharmaceutical industries to produce anti-obesity drugs; however, the effectiveness of these agents remains unclear. Because of the popularity of dietary supplements, the aim of this study was to investigate the effects of conjugated linoleic acid (CLA) and L-carnitine (LC) on serum glucose, triglyceride, cholesterol and weight changes in diet-induced obese rats. Forty-eight male Wistar rats were randomly divided into two groups: normal-fat diet (n=8) and high-fat diet (HFD) (n=32). After eight weeks, the second group, which was maintained on the HFD until the end of the study, was subdivided into four categories: a) 500 mg corn oil (control), b) 500 mg CLA, c) 200 mg LC, d) 500 mg CLA + 200 mg LC. All doses were given per kg body weight and administered by oral gavage for four weeks. Body weights were measured and recorded weekly with a digital scale. At the end of the study, blood samples were collected for the measurement of biochemical markers. SPSS version 16 was used for statistical analysis. At the end of the 8th week, a significant difference in weight was observed between the HFD and NFD groups. After 12 weeks, LC significantly reduced weight gain by 4.2%, while the trend of weight gain in the CLA and CLA+LC groups was slowed, though not significantly. CLA+LC reduced the triglyceride level significantly, but only CLA had a significant influence on total cholesterol and an insignificant decreasing effect on FBS. Our results show that an obesogenic diet over a relatively short time leads to obesity and dyslipidemia, which can be modified to some extent by LC and CLA.
Keywords: Conjugated linoleic acid, high fat diet, L-carnitine, obesity.
102 Generation of 3D Models Obtained with Low-Cost RGB and Thermal Sensors Mounted on Drones
Authors: Julio Manuel de Luis Ruiz, Javier Sedano Cibrián, Rubén Pérez Álvarez, Raúl Pereda García, Felipe Piña García
Abstract:
Nowadays it is common to resort to aerial photography for the prospection and/or exploration of archaeological sites. In recent years, Unmanned Aerial Vehicles (UAVs) have been used as the vehicles that carry the sensor. This offers certain advantages, such as the possibility of using low-cost sensors, given that these vehicles can carry the sensor at relatively low altitudes. For this reason, low-cost dual sensors have recently begun to be used. This new equipment can complement classic Digital Elevation Models (DEMs) in the exploration of archaeological sites, but it requires a methodological framework to optimize the acquisition, processing and exploitation of the information provided by low-cost dual sensors. This research focuses on the design of an appropriate workflow to obtain 3D models with low-cost sensors carried on UAVs, in both the RGB and thermal domains. The workflow has been applied to the archaeological site of Juliobriga, located in Cantabria (Spain), where a flight with this type of sensors was planned, carried out and analyzed. The results show a strong dependence of the thermal sensor on the GSD and the capability of this technique to interpret underground materials. This research allows us to state that the thermal data alone do not provide the main information about the site, but combined with other types of information, such as the DEM and the typology of materials, they can produce very positive results with respect to the exploration and knowledge of the site.
Keywords: Process optimization, RGB models, thermal models, UAV, workflow.
101 The Impact of Changing Political and Economic Conditions on International Production Cooperation with a Focus on Multinational Corporations and Transnational Corporations
Authors: Tomiris Tussupova
Abstract:
The research highlights the influence of political conditions on the operations, investment decisions, and international production networks of Multinational Corporations (MNCs) and Transnational Corporations (TNCs). It investigates how factors such as political instability, protectionist policies, and regulatory changes impact the structure and functioning of International Production Cooperation (IPC). Furthermore, the analysis identifies gaps in the literature and formulates pertinent research questions to address in the paper. The study explores MNCs and TNCs' responses to changing political and economic conditions, emphasizing their strategies for adaptation. Additionally, it delves into the specific mechanisms employed by these corporations to mitigate risks and challenges arising from evolving political and economic landscapes. The research provides policy recommendations for governments, international organizations, and industry associations. These recommendations focus on enhancing policy stability, promoting regional integration, supporting digital technology adoption, and encouraging responsible and sustainable practices in IPC. By incorporating these suggestions, policymakers and practitioners can foster an enabling environment for MNCs and TNCs, thereby facilitating stable and efficient international production networks. Overall, this research contributes to a deeper understanding of the role of MNCs and TNCs in IPC under changing political and economic conditions. The insights garnered from this study can guide future research and inform policy decisions to promote sustainable and resilient international production cooperation.
Keywords: International cooperation, Multinational Corporations, Transnational Corporations, international production networks, Global Value Chains.
100 Hierarchies Based on the Number of Cooperating Systems of Finite Automata on Four-Dimensional Input Tapes
Authors: Makoto Sakamoto, Yasuo Uchida, Makoto Nagatomo, Takao Ito, Tsunehiro Yoshinaga, Satoshi Ikeda, Masahiro Yokomichi, Hiroshi Furutani
Abstract:
In theoretical computer science, the Turing machine has played a number of important roles in understanding and exploiting basic concepts and mechanisms in computing and information processing [20]. It is a simple mathematical model of computers [9]. M. Blum and C. Hewitt first proposed two-dimensional automata as a computational model of two-dimensional pattern processing and investigated their pattern recognition abilities in 1967 [7]. Since then, many researchers in this field have investigated properties of automata on two- or three-dimensional tapes. On the other hand, the question of whether processing four-dimensional digital patterns is much more difficult than processing two- or three-dimensional ones is of great interest from both the theoretical and practical standpoints. Thus, the study of four-dimensional automata as a computational model of four-dimensional pattern processing is meaningful [8]-[19],[21]. This paper introduces a cooperating system of four-dimensional finite automata as one model of four-dimensional automata. A cooperating system of four-dimensional finite automata consists of a finite number of four-dimensional finite automata and a four-dimensional input tape on which these finite automata work independently (in parallel). Finite automata whose input heads scan the same cell of the input tape can communicate with each other; that is, every finite automaton is allowed to know the internal states of the other finite automata on the cell it is currently scanning. In this paper, we mainly investigate the accepting powers of cooperating systems of eight- or seven-way four-dimensional finite automata. A seven-way four-dimensional finite automaton is an eight-way four-dimensional finite automaton whose input head can move east, west, south, north, up, down, or in the future, but not in the past, on a four-dimensional input tape.
Keywords: computational complexity, cooperating system, finite automaton, four-dimension, hierarchy, multihead.
99 FACTS Based Stabilization for Smart Grid Applications
Authors: Adel M. Sharaf, Foad H. Gandoman
Abstract:
Nowadays, photovoltaic (PV) farms/parks and large PV smart grid interface schemes are emerging and commonly utilized in renewable-energy distributed generation. However, PV hybrid DC-AC schemes using interface power electronic converters usually have a negative impact on the power quality and stabilization of modern electrical networks under load excursions and network fault conditions in the smart grid. Consequently, robust FACTS-based interface schemes are required to ensure efficient energy utilization and stabilization of bus voltages as well as to limit switching/fault inrush current conditions. FACTS devices are also used in smart grid battery interface and storage schemes with PV-battery storage hybrid systems as an elegant alternative for renewable energy utilization, with backup battery storage for electric utility energy and demand-side management to provide the needed energy and power capacity under heavy load conditions. The paper presents a robust PV-Li-ion battery storage interface scheme for low-voltage distribution/utilization using FACTS stabilization enhancement and dynamic maximum PV power tracking controllers. Digital simulation and validation of the proposed scheme are carried out in the MATLAB/Simulink software environment for a low-voltage distribution/utilization system feeding hybrid linear, motorized-inrush and nonlinear loads from a DC-AC interface VSC 6-pulse inverter fed from the PV park/farm with a backup Li-ion storage battery.
Keywords: AC FACTS, Smart grid, Stabilization, PV-Battery Storage, Switched Filter-Compensation (SFC).
98 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning
Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
This study proposes a method for estimating the stress distribution in beam structures based on TLS (Terrestrial Laser Scanning). The main components of the method are the creation of lattices of averaged raw TLS data that satisfy a suitable condition, and the application of CSSI (Cubic Smoothing Spline Interpolation) for estimating the stress distribution. Estimating the stress distribution of a structural member, or of a whole structure, is one of the important factors in the safety evaluation of the structure. Existing sensors such as the ESG (electric strain gauge) and LVDT (Linear Variable Differential Transformer) are contact-type sensors which must be installed on the structural members, and they have various limitations such as the need for separate space for the network cables and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, TLS, a LiDAR (light detection and ranging) system which can measure the displacement of a target over a long range without being influenced by the surrounding environment and can also capture the whole shape of a structure, has been applied to the field of structural health monitoring. An important characteristic of TLS measurement is the formation of point clouds, containing many points with local coordinates. Point clouds are not linearly distributed but dispersed; thus, interpolation is essential for their analysis. Through the formation of averaged lattices and the application of CSSI to the raw data, a method that can estimate the displacement of a simple beam was developed. The method can be extended to calculate the strain and, finally, to estimate the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured with TLS. Comparison of the estimated stress with the reference stress confirms the validity of the method.
Keywords: Structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation.
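The core numerical step, fitting a cubic smoothing spline to the gridded TLS deflection points and converting its curvature to bending stress via sigma = E * c * w''(x), can be sketched as follows; the beam properties, load and noise level are assumed, and SciPy's smoothing spline stands in for the CSSI implementation used in the paper.

```python
# Sketch: estimate the bending-stress distribution of a simply supported beam
# from noisy TLS-derived deflection points using a cubic smoothing spline.
# Beam properties, load and noise level are illustrative assumptions.
import numpy as np
from scipy.interpolate import UnivariateSpline

E, c, L = 200e9, 0.15, 6.0          # Young's modulus (Pa), half-depth (m), span (m)
I = 8.0e-5                          # second moment of area (m^4)
P = 50e3                            # midspan point load (N)

x = np.linspace(0.0, L, 60)         # lattice of averaged TLS points along the beam
# Analytical deflection of a simply supported beam with a midspan point load,
# used here only to generate synthetic "measured" data.
a = np.minimum(x, L - x)
w_true = P * a * (3 * L**2 - 4 * a**2) / (48 * E * I)
w_meas = w_true + np.random.default_rng(5).normal(0, 2e-4, x.size)   # add TLS noise

spline = UnivariateSpline(x, w_meas, k=3, s=len(x) * (2e-4) ** 2)    # cubic smoothing spline
curvature = spline.derivative(2)(x)                                   # w''(x)
sigma = E * c * np.abs(curvature)                                     # extreme-fibre stress magnitude (Pa)
print("max bending stress ~", round(sigma.max() / 1e6, 1), "MPa")
```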
97 A Novel Multiple Valued Logic OHRNS Modulo rn Adder Circuit
Authors: Mehdi Hosseinzadeh, Somayyeh Jafarali Jassbi, Keivan Navi
Abstract:
The Residue Number System (RNS) is a modular representation that has proved to be an instrumental tool in many digital signal processing (DSP) applications requiring high-speed computation. RNS is an integer, non-weighted number system; it supports parallel, carry-free, high-speed and low-power arithmetic. A very interesting correspondence exists between the concepts of Multiple Valued Logic (MVL) and residue number arithmetic. If the number of levels used to represent MVL signals is chosen to be consistent with the moduli which create the finite rings in the RNS, MVL becomes a very natural representation for the RNS. There are two concerns in applying this number system: achieving the highest possible speed and the largest dynamic range. These goals conflict, since enlarging the dynamic range reduces the speed at the same time. To achieve the best performance, a method named the "One-Hot Residue Number System" (OHRNS) is considered, in which the propagation delay equals only one transistor delay. The problem with this method is the huge increase in the number of transistors, which grows on the order of m²; in real applications this is practically impossible. In this paper, by combining Multiple Valued Logic with the One-Hot Residue Number System, we present a new method that resolves both of these problems. We present a novel design of an OHRNS-based adder circuit. The circuit is usable with Multiple Valued Logic moduli and, in comparison to other RNS designs, considerably reduces the number of transistors and the power consumption.
Keywords: Computer Arithmetic, Residue Number System, Multiple Valued Logic, One-Hot, VLSI.
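The one-hot encoding makes a modulo-r addition nothing more than a cyclic rotation of one operand's one-hot vector by the other operand's value, which is why the hardware propagation delay is a single switching stage; the small software model below illustrates that idea and is not a transistor-level design.

```python
# Software model of a one-hot RNS modulo-r adder: each residue is a one-hot
# vector of length r, and addition is a cyclic rotation (barrel shift).
import numpy as np

def to_one_hot(value, r):
    v = np.zeros(r, dtype=int)
    v[value % r] = 1
    return v

def one_hot_add(a_hot, b_value, r):
    # Rotating the one-hot vector of `a` by `b` positions realises (a + b) mod r,
    # mirroring the single-stage switching network of the OHRNS adder.
    return np.roll(a_hot, b_value % r)

r = 7                      # modulus of one ring of the RNS (e.g. r_n = 7)
a, b = 5, 4
result_hot = one_hot_add(to_one_hot(a, r), b, r)
print(result_hot, "->", int(np.argmax(result_hot)), "== (5 + 4) mod 7 =", (a + b) % r)
```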
96 Web-Based Cognitive Writing Instruction (WeCWI): A Theoretical-and-Pedagogical e-Framework for Language Development
Authors: Boon Yih Mah
Abstract:
Web-based Cognitive Writing Instruction (WeCWI)’s contribution towards language development can be divided into linguistic and non-linguistic perspectives. In the linguistic perspective, WeCWI focuses on literacy and language discoveries, while cognitive and psychological discoveries are the hubs of the non-linguistic perspective. In the linguistic perspective, WeCWI draws attention to free reading and enterprises, which are supported by language acquisition theories. Besides, the adoption of the process genre approach as a hybrid guided writing approach fosters literacy development. Literacy and language development are interconnected in the communication process; hence, WeCWI encourages meaningful discussion based on the interactionist theory that involves input, negotiation, output, and interactional feedback. Rooted in the e-learning interaction-based model, WeCWI promotes online discussion via synchronous and asynchronous communication, which allows interaction among the learners, the instructor, and the digital content. In the non-linguistic perspective, WeCWI highlights the contribution of reading, discussion, and writing to cognitive development. Based on inquiry models, learners’ critical thinking is fostered during the information exploration process through interaction and questioning. Lastly, to lower writing anxiety, WeCWI provides an instructional tool with supportive features to facilitate the writing process. To bring a positive user experience to the learner, WeCWI aims to offer the instructional tool with different interface designs based on two different types of perceptual learning styles.
Keywords: WeCWI, literacy discovery, language discovery, cognitive discovery, psychological discovery.
95 Beam Coding with Orthogonal Complementary Golay Codes for Signal to Noise Ratio Improvement in Ultrasound Mammography
Authors: Y. Kumru, K. Enhos, H. Köymen
Abstract:
In this paper, we report experimental results on using complementary Golay coded signals at 7.5 MHz to detect breast microcalcifications of 50 µm size. Simulations using complementary Golay coded signals show perfect consistency with the experimental results, confirming the improved signal-to-noise ratio for complementary Golay coded signals. To improve the detection of microcalcifications, orthogonal complementary Golay sequences, whose cross-correlation is minimized to reduce interference, are used as coded signals and compared with a tone burst pulse of equal energy in terms of resolution under weak signal conditions. The measurements are conducted using an experimental ultrasound research scanner, the Digital Phased Array System (DiPhAS), which has 256 channels, and a phased array transducer with 7.5 MHz center frequency; the experimental results are validated with the Field-II simulation software. In addition, to investigate the superiority of coded signals in terms of resolution, a multipurpose tissue-equivalent phantom containing a series of monofilament nylon targets, 240 µm in diameter, and cyst-like objects with attenuation of 0.5 dB/(MHz·cm) is used in the experiments. We obtained ultrasound images of the monofilament nylon targets for the evaluation of resolution. Simulation and experimental results show that it is possible to differentiate closely positioned small targets with increased success by using coded excitation under very weak signal conditions.
Keywords: Coded excitation, complementary Golay codes, DiPhAS, medical ultrasound.
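The key property exploited here, that the autocorrelations of a complementary Golay pair sum to a delta function so range sidelobes cancel once the two receptions are combined, is easy to verify numerically; the sketch below builds a length-16 pair by the standard recursive construction and is illustrative only, not the 7.5 MHz experiment.

```python
# Verify the complementary-autocorrelation property of a Golay pair.
import numpy as np

def golay_pair(n_steps):
    # Recursive construction: (a, b) -> (a|b, a|-b); length doubles each step.
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_steps):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(4)                      # length-16 complementary pair
acf = lambda s: np.correlate(s, s, mode="full")
combined = acf(a) + acf(b)

print("peak:", combined[len(a) - 1])      # equals 2 * code length = 32
print("max sidelobe magnitude:", np.abs(np.delete(combined, len(a) - 1)).max())  # 0
```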
94 Arriving at an Optimum Value of Tolerance Factor for Compressing Medical Images
Authors: Sumathi Poobal, G. Ravindran
Abstract:
Medical imaging takes advantage of digital technology in imaging and teleradiology. In teleradiology systems, a large amount of data is acquired, stored and transmitted. A major technology that may help to solve the problems associated with massive data storage and data transfer capacity is data compression and decompression. Many image compression methods are available; they are classified as lossless and lossy. In lossy methods the decompressed image contains some distortion. Fractal image compression (FIC) is a lossy method in which an image is coded as a set of contractive transformations in a complete metric space; the set of contractive transformations is guaranteed to produce an approximation to the original image. In this paper, FIC is achieved by partitioned iterated function systems (PIFS) using quadtree partitioning. PIFS is applied to different modalities, including ultrasound, CT scan, angiogram, X-ray, and mammograms. For each modality approximately twenty images are considered and the average compression ratio and PSNR values are computed. In this method of fractal encoding, the tolerance factor Tmax is varied from 1 to 10 while the other standard parameters are kept constant. For all image modalities the compression ratio and peak signal-to-noise ratio (PSNR) are computed and studied; the quality of the decompressed image is assessed by its PSNR value. The results show that the compression ratio increases with the tolerance factor and that mammograms have the highest compression ratio. Because of the properties of fractal compression, the image quality is not degraded up to an optimum tolerance factor value of Tmax = 8.
Keywords: Fractal image compression, IFS, PIFS, PSNR, quadtree partitioning.
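To illustrate how the tolerance factor controls the quadtree partitioning, and hence the trade-off between compression ratio and PSNR, here is a toy partitioner that keeps splitting a block while its approximation error exceeds Tmax, together with the PSNR computation; it is a didactic stand-in, not the PIFS encoder used in the study.

```python
# Toy quadtree partitioning driven by a tolerance factor Tmax, plus PSNR.
import numpy as np

def quadtree_blocks(img, x, y, size, t_max, min_size=4):
    """Return (x, y, size) leaf blocks; split while the block RMS error > t_max."""
    block = img[y:y + size, x:x + size]
    rms = np.sqrt(np.mean((block - block.mean()) ** 2))
    if rms <= t_max or size <= min_size:
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            leaves += quadtree_blocks(img, x + dx, y + dy, half, t_max, min_size)
    return leaves

def psnr(original, decoded, peak=255.0):
    mse = np.mean((original - decoded) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

yy, xx = np.mgrid[0:128, 0:128]
img = 128 + 60 * np.sin(xx / 20.0) * np.cos(yy / 25.0)   # smooth synthetic test image

for t_max in (1, 4, 8, 10):          # larger tolerance -> fewer blocks -> higher compression
    leaves = quadtree_blocks(img, 0, 0, 128, t_max)
    decoded = np.empty_like(img)
    for (x, y, size) in leaves:      # crude "decoding": replace each block by its mean
        decoded[y:y + size, x:x + size] = img[y:y + size, x:x + size].mean()
    print(f"Tmax={t_max}: {len(leaves)} blocks, PSNR={psnr(img, decoded):.1f} dB")
```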
93 Mapping of Alteration Zones in Mineral Rich Belt of South-East Rajasthan Using Remote Sensing Techniques
Authors: Mrinmoy Dhara, Vivek K. Sengar, Shovan L. Chattoraj, Soumiya Bhattacharjee
Abstract:
Remote sensing techniques have emerged as an asset for various geological studies. Satellite images obtained by different sensors contain plenty of information related to the terrain, and digital image processing further helps in customized ways for the prospecting of minerals. In this study, an attempt has been made to map the hydrothermally altered zones using multispectral and hyperspectral datasets of southeast Rajasthan. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion (Level 1R) datasets have been processed to generate different Band Ratio Composites (BRCs). For this study, ASTER-derived BRCs were generated to delineate the alteration zones, gossans, abundant clays and host rocks. ASTER and Hyperion images were further processed to extract mineral end members, and classified mineral maps were produced using the Spectral Angle Mapper (SAM) method. The results were validated with the geological map of the area, which shows positive agreement with the image processing outputs. Thus, this study concludes that band ratios and image processing in combination play a significant role in the demarcation of alteration zones, which may provide pathfinders for mineral prospecting studies.
Keywords: Advanced space-borne thermal emission and reflection radiometer, ASTER, Hyperion, Band ratios, Alteration zones, spectral angle mapper.
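The Spectral Angle Mapper step behind the classified mineral maps compares each pixel spectrum with reference endmember spectra by the angle between them; a compact sketch with made-up endmembers and a random image cube is given below.

```python
# Spectral Angle Mapper (SAM) sketch: classify each pixel by the smallest
# spectral angle to a set of endmember spectra. Endmembers are fabricated.
import numpy as np

def spectral_angles(cube, endmembers):
    # cube: (rows, cols, bands); endmembers: (n_end, bands)
    pix = cube.reshape(-1, cube.shape[-1]).astype(float)
    dots = pix @ endmembers.T
    norms = np.linalg.norm(pix, axis=1, keepdims=True) * np.linalg.norm(endmembers, axis=1)
    angles = np.arccos(np.clip(dots / norms, -1.0, 1.0))
    return angles.reshape(cube.shape[0], cube.shape[1], -1)

rng = np.random.default_rng(7)
cube = rng.random((50, 50, 9))          # stand-in for 9 ASTER VNIR/SWIR bands
endmembers = rng.random((3, 9))         # e.g. alunite, kaolinite, host rock (illustrative)

angles = spectral_angles(cube, endmembers)
class_map = np.argmin(angles, axis=-1)            # per-pixel class = closest endmember
class_map[angles.min(axis=-1) > 0.1] = -1         # reject pixels above an angle threshold (rad)
print(np.unique(class_map, return_counts=True))
```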
92 Performance Analysis of Chrominance Red and Chrominance Blue in JPEG
Authors: Mamta Garg
Abstract:
While compressing text files is useful, compressing still image files is almost a necessity. A typical image takes up much more storage than a typical text message, and without compression images would be extremely clumsy to store and distribute. The amount of information required to store pictures on modern computers is quite large in relation to the amount of bandwidth commonly available to transmit them over the Internet and between applications. Image compression addresses the problem of reducing the amount of data required to represent a digital image. The performance of any image compression method can be evaluated by measuring the root mean square error and the peak signal-to-noise ratio. The method analyzed in this paper is based on the lossy JPEG image compression technique, the most popular compression technique for color images. JPEG compression is able to greatly reduce file size with minimal image degradation by throwing away the least "important" information. In JPEG, both color components are normally downsampled simultaneously, but in this paper we compare the results when the compression is done by downsampling a single chroma component. We demonstrate that a higher compression ratio is achieved when the chrominance blue is downsampled rather than the chrominance red in JPEG compression, whereas the peak signal-to-noise ratio is higher when the chrominance red is downsampled rather than the chrominance blue. In particular, we use the hats.jpg image as a demonstration of JPEG compression using a low-pass filter and show that the image is compressed with barely any visual difference with either method.
Keywords: JPEG, Discrete Cosine Transform, quantization, color space conversion, image compression, peak signal-to-noise ratio, compression ratio.
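The comparison described above, downsampling only Cb versus only Cr, reconstructing and comparing PSNR, can be prototyped in a few lines; the conversion matrix is the standard full-range BT.601 one and the test image is synthetic, so the numbers will differ from the hats.jpg results.

```python
# Compare PSNR after downsampling only Cb versus only Cr (JPEG-style 2x2
# averaging on a single chroma channel). Synthetic RGB image; BT.601 conversion.
import numpy as np

def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255)

def down_up(chan):                      # 2x2 average then nearest-neighbour upsample
    small = chan.reshape(chan.shape[0] // 2, 2, chan.shape[1] // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(8)
img = rng.integers(0, 256, (256, 256, 3)).astype(float)
y, cb, cr = rgb_to_ycbcr(img)

rgb_cb_down = ycbcr_to_rgb(y, down_up(cb), cr)   # downsample chrominance blue only
rgb_cr_down = ycbcr_to_rgb(y, cb, down_up(cr))   # downsample chrominance red only
print("PSNR (Cb downsampled):", round(psnr(img, rgb_cb_down), 2), "dB")
print("PSNR (Cr downsampled):", round(psnr(img, rgb_cr_down), 2), "dB")
```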
91 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators
Authors: Wei Zhang
Abstract:
With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hot spot in the past few years. However, the networks are becoming increasingly large in scale due to the demands of practical applications, which poses a significant challenge for constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and low power consumption of the hardware devices. Therefore, it is particularly important to choose a suitable computing platform for the hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of accelerators based on FPGAs under different devices and network models are reviewed, and they are compared against Graphics Processing Units (GPUs), Application Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs) to present our own critical analysis and comments. Finally, we discuss different perspectives on these acceleration and optimization methods on FPGA platforms to further explore the opportunities and challenges for future research, and give an outlook on the future development of FPGA-based accelerators.
Keywords: Deep learning, field programmable gate array, FPGA, hardware acceleration, convolutional neural networks, CNN.
90 Rice Area Determination Using Landsat-Based Indices and Land Surface Temperature Values
Authors: Burçin Saltık, Levent Genç
Abstract:
This study aimed to establish a procedure for identifying rice cultivation areas within the Thrace and Marmara regions of Turkey using remote sensing and GIS. Landsat 8 (OLI-TIRS) imagery acquired during the 2013 production season, with Path/Row number 181/32, was used. Four different seasonal image sets were generated using the original bands and different transformation techniques. All images were classified individually using supervised classification techniques, and Land Use Land Cover (LULC) maps were generated with 8 classes. The area (ha, %) of each class was calculated. In addition, district-based rice distribution maps were developed and compared with the actual rice cultivation area records of the Turkish Statistical Institute (TurkSTAT; TSI). Accuracy assessments were conducted, and the most accurate map was selected based on the accuracy assessment and its coherence with the TSI results. Additionally, rice areas on slopes above 4° were considered mis-classified pixels and were eliminated using a slope map and GIS tools. Finally, randomized rice zones were selected to obtain the minimum-maximum value ranges of the NDVI, LSWI, and LST images for each date (May, June, July, August and September separately), in order to test whether these ranges can be used for rice area determination via the raster calculator tool of ArcGIS. The most accurate classification for rice determination was obtained from the seasonal LSWI LULC map; considering the TSI data and the accuracy assessment results, mis-classified pixels were eliminated from this map. According to the results, 83151.5 ha of rice area exists within the study area; however, this is higher than the TSI records, with a difference of 12702.3 ha. The use of the minimum-maximum ranges of rice-area NDVI, LSWI, and LST was tested in the Meric district. Using the value ranges obtained from the July imagery gave the closest results to the TSI records, with a difference of only 206.4 ha. This difference is expected given the relatively low resolution of the images; thus, employing images with higher spectral, spatial, temporal and radiometric resolutions may provide more reliable results.
Keywords: Landsat 8 (OLI-TIRS), LULC, spectral indices, rice.
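The final masking step, flagging a pixel as rice when its July NDVI, LSWI and LST all fall inside the value ranges sampled from known rice zones, is essentially a per-band range test; a minimal sketch with stand-in Landsat 8 band arrays and assumed thresholds follows.

```python
# Rice masking sketch: NDVI and LSWI from Landsat 8 reflectance bands plus an
# LST layer, masked by min-max ranges sampled from known rice zones. The band
# arrays and threshold ranges below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(9)
red, nir, swir1 = (rng.uniform(0.02, 0.5, (300, 300)) for _ in range(3))  # B4, B5, B6 reflectance
lst = rng.uniform(290, 320, (300, 300))                                   # land surface temperature (K)
slope = rng.uniform(0, 10, (300, 300))                                    # degrees, from the DEM

ndvi = (nir - red) / (nir + red)
lswi = (nir - swir1) / (nir + swir1)

# Assumed July value ranges for rice pixels (min, max) -- not the study's values.
ranges = {"ndvi": (0.45, 0.85), "lswi": (0.10, 0.45), "lst": (295.0, 308.0)}

rice = ((ndvi >= ranges["ndvi"][0]) & (ndvi <= ranges["ndvi"][1]) &
        (lswi >= ranges["lswi"][0]) & (lswi <= ranges["lswi"][1]) &
        (lst  >= ranges["lst"][0])  & (lst  <= ranges["lst"][1])  &
        (slope <= 4.0))                                  # drop pixels steeper than 4 degrees
pixel_area_ha = 30 * 30 / 10_000                         # one Landsat pixel = 0.09 ha
print("estimated rice area:", round(rice.sum() * pixel_area_ha, 1), "ha")
```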
89 Multiaxial Fatigue Analysis of a High Performance Nickel-Based Superalloy
Authors: P. Selva, B. Lorrain, J. Alexis, A. Seror, A. Longuet, C. Mary, F. Denard
Abstract:
Over the past four decades, the fatigue behavior of nickel-based alloys has been widely studied. However, in recent years, significant advances in the fabrication process leading to grain size reduction have been made in order to improve fatigue properties of aircraft turbine discs. Indeed, a change in particle size affects the initiation mode of fatigue cracks as well as the fatigue life of the material. The present study aims to investigate the fatigue behavior of a newly developed nickel-based superalloy under biaxial-planar loading. Low Cycle Fatigue (LCF) tests are performed at different stress ratios so as to study the influence of the multiaxial stress state on the fatigue life of the material. Full-field displacement and strain measurements as well as crack initiation detection are obtained using Digital Image Correlation (DIC) techniques. The aim of this presentation is first to provide an in-depth description of both the experimental set-up and protocol: the multiaxial testing machine, the specific design of the cruciform specimen and performances of the DIC code are introduced. Second, results for sixteen specimens related to different load ratios are presented. Crack detection, strain amplitude and number of cycles to crack initiation vs. triaxial stress ratio for each loading case are given. Third, from fractographic investigations by scanning electron microscopy it is found that the mechanism of fatigue crack initiation does not depend on the triaxial stress ratio and that most fatigue cracks initiate from subsurface carbides.
Keywords: Cruciform specimen, multiaxial fatigue, Nickel-based superalloy.
88 The Effect of CPU Location in Total Immersion of Microelectronics
Authors: A. Almaneea, N. Kapur, J. L. Summers, H. M. Thompson
Abstract:
Meeting the growth in demand for digital services such as social media, telecommunications, and business and cloud services requires large-scale data centres, which has led to an increase in their end-use energy demand. Generally, over 30% of data centre power is consumed by the necessary cooling overhead; thus energy use can be reduced by improving the cooling efficiency. Air and liquid can both be used as cooling media for the data centre. Traditional data centre cooling systems use air; however, liquid is recognised as a promising method that can handle the more densely packed data centres. Liquid cooling can be classified into three methods: rack heat exchanger, on-chip heat exchanger and full immersion of the microelectronics. This study quantifies the improvements in heat transfer for the case of immersed microelectronics by varying the CPU and heat sink location. Immersion of the server is achieved by filling the gap between the microelectronics and a water jacket with a dielectric liquid, which convects the heat from the CPU to the water jacket on the opposite side. Heat transfer is governed by two physical mechanisms: natural convection in the fixed enclosure filled with dielectric liquid, and forced convection in the water that is pumped through the water jacket. The model in this study is validated against published numerical and experimental work and shows good agreement with previous work. The results show that the heat transfer performance and Nusselt number (Nu) are improved by 89% by placing the CPU and heat sink at the bottom of the microelectronics enclosure.
Keywords: CPU location, data centre cooling, heat sink in enclosures, Immersed microelectronics, turbulent natural convection in enclosures.
87 Detecting Fake News: A Natural Language Processing, Reinforcement Learning, and Blockchain Approach
Authors: Ashly Joseph, Jithu Paulose
Abstract:
In an era where misleading information may quickly circulate on digital news channels, it is crucial to have efficient and trustworthy methods to detect and reduce the impact of misinformation. This research proposes an innovative framework that combines Natural Language Processing (NLP), Reinforcement Learning (RL), and Blockchain technologies to precisely detect and minimize the spread of false information in news articles on social media. The framework starts by gathering a variety of news items from different social media sites and performing preprocessing on the data to ensure its quality and uniformity. NLP methods are utilized to extract complete linguistic and semantic characteristics, effectively capturing the subtleties and contextual aspects of the language used. These features are utilized as input for a RL model. This model acquires the most effective tactics for detecting and mitigating the impact of false material by modeling the intricate dynamics of user engagements and incentives on social media platforms. The integration of blockchain technology establishes a decentralized and transparent method for storing and verifying the accuracy of information. The Blockchain component guarantees the unchangeability and safety of verified news records, while encouraging user engagement for detecting and fighting false information through an incentive system based on tokens. The suggested framework seeks to provide a thorough and resilient solution to the problems presented by misinformation in social media articles.
Keywords: Natural Language Processing, Reinforcement Learning, Blockchain, fake news mitigation, misinformation detection.
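As a baseline sketch of the NLP front end only (the reinforcement learning policy and the blockchain verification layer are not modelled, and the articles and labels are fabricated), a TF-IDF representation with a simple linear classifier might look like this.

```python
# Baseline sketch of the NLP stage: TF-IDF features + a simple classifier.
# This is a stand-in for the paper's NLP front end, not its RL/blockchain framework.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

articles = [
    "Government confirms new vaccination schedule after peer-reviewed trial",
    "Miracle cure hidden by doctors, share before it is deleted",
    "Central bank publishes quarterly inflation report",
    "Celebrity secretly replaced by clone, insiders claim",
]
labels = [0, 1, 0, 1]          # 0 = legitimate, 1 = misinformation (toy labels)

vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=1)
X = vectorizer.fit_transform(articles)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

test = ["Doctors hide miracle cure, share now"]
print(clf.predict(vectorizer.transform(test)))   # predicted label for the unseen headline
```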
86 Assessment of Breeding Soundness by Comparative Radiography and Ultrasonography of Rabbit Testes
Authors: Adenike O. Olatunji-Akioye, Emmanual B Farayola
Abstract:
In order to improve the recommended daily animal protein intake of Nigerians, there is an upsurge in the breeding of hitherto shunned food animals, one of which is the rabbit. Radiography and ultrasonography are tools for diagnosing disease and evaluating the anatomical architecture of parts of the body non-invasively. As the rabbit becomes a more important food animal, improved breeding will depend on selecting the best of the species as breeding stock on the basis of breeding soundness, which may be evaluated by assessing the male reproductive organs with these tools. Four intact male rabbits weighing between 1.2 and 1.5 kg were acquired and acclimatized for 2 weeks. Dorsoventral views of the testes were acquired using a digital radiographic machine, and a 5 MHz portable ultrasound scanner was used to acquire images of the testes in longitudinal, sagittal and transverse planes. The radiographic images revealed soft tissue images of the testes in all rabbits. The testes lie in individual scrotal sacs on both sides of the midline at the level of the caudal vertebrae and are thus superimposed by the caudal vertebrae and the caudal limits of the pelvic girdle. The ultrasonographic images revealed mostly homogeneously hypoechogenic testes and a hyperechogenic mediastinum testis. The dorsal and ventral poles of the testes were heterogeneously hypoechogenic and correspond to the epididymis and spermatic cord. The rabbit is unique in its ability to retract the testes, particularly when stressed, so careful and stress-free handling during the procedures is of paramount importance. Imaging of rabbit testes can be safely done using both methods, but ultrasonography is the better method for assessing and evaluating soundness for breeding.
Keywords: Breeding soundness, rabbits, radiography, ultrasonography.