Search results for: edge computing
801 New Approaches to the Determination of the Time Costs of Movements
Authors: Dana Kristalova
Abstract:
This article deals with geographical conditions in terrain and their effect on the movement of people and vehicles, in particular on the speed and safety of that movement. Finding optimal routes outside the road network (communications) is studied mainly in the military environment, but the problem also arises in civilian settings, primarily in crisis situations or when providing assistance after natural disasters such as floods, fires, or storms. These movements require route optimization that takes the effects of geographical factors into account. The most important factor is the surface of the terrain, which is governed by several geographical factors such as slope, soil conditions, micro-relief, type of surface, and meteorological conditions. Their combined impact is expressed by a coefficient of deceleration, which can support the commander's decision-making. New approaches and methods of terrain testing, mathematical computing, mathematical statistics, and cartometric investigation are necessary parts of this evaluation. Keywords: surface of a terrain, movement of vehicles, geographical factor, optimization of routes
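A minimal sketch of how deceleration coefficients of this kind could weight an off-road routing graph; the coefficient values, speeds, and waypoints are illustrative assumptions, not the study's measured data.

```python
import networkx as nx

# Illustrative deceleration coefficients (1.0 = no slowdown); these values
# are assumptions for the sketch, not the coefficients determined in the paper.
DECELERATION = {"road": 1.0, "meadow": 0.7, "forest": 0.45, "mud": 0.25}
BASE_SPEED_KMH = 40.0  # nominal vehicle speed on a firm, flat surface

def travel_time_h(length_km: float, surface: str) -> float:
    """Time cost of one graph edge: length divided by the decelerated speed."""
    return length_km / (BASE_SPEED_KMH * DECELERATION[surface])

# Toy terrain graph: nodes are waypoints, edges carry length and surface type.
edges = [
    ("A", "B", 2.0, "road"), ("B", "D", 3.0, "road"),
    ("A", "C", 1.5, "meadow"), ("C", "D", 1.2, "mud"),
]
G = nx.Graph()
for u, v, length, surface in edges:
    G.add_edge(u, v, weight=travel_time_h(length, surface))

route = nx.shortest_path(G, "A", "D", weight="weight")
cost = nx.shortest_path_length(G, "A", "D", weight="weight")
print(route, f"{cost:.2f} h")
```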
Procedia PDF Downloads 465
800 The Variation of the Inferior Gluteal Artery Origin
Authors: Waseem Al Talalwah, Shorok Al Dorazi, Roger Soames
Abstract:
The inferior gluteal artery is a prominent branch of the anterior trunk of the internal iliac artery. It leaves the pelvic cavity through the greater sciatic foramen below the lower edge of piriformis. In the gluteal region it provides several muscular branches to gluteus maximus, an articular branch to the hip joint, and a sciatic branch to the sciatic nerve. The current study investigates the origin of the inferior gluteal artery in 41 cadavers at the Centre for Anatomy and Human Identification, University of Dundee, UK. It arose from the anterior trunk independently in 37.5% of cases and dependently, in a common trunk with the internal pudendal artery, in 45.7%; it therefore arose from the anterior trunk in 83.2% overall. It was found to be a branch of the posterior trunk of the internal iliac artery in 7.7%, either as a direct branch (6.2%) or as an indirect branch (1.5%). Besides arising with the internal pudendal artery from a gluteopudendal trunk (GPT) of the anterior division in 45.7%, it arose from a GPT originating at the bifurcation site of the internal iliac artery in 1.5%. Further, the inferior gluteal artery arose from a common trunk with the internal pudendal and obturator arteries, referred to as an obturatogluteopudendal trunk, in 1.5%. Occasionally it arose from the sciatic artery (1.5%). In a few cases (4.6%) the inferior gluteal artery was congenitally absent and was compensated by a persistent sciatic artery. Radiologists therefore have to be aware of the variability in the origin of the inferior gluteal artery to alert surgeons. Knowing the origin of the inferior gluteal artery may help surgeons avoid iatrogenic sciatic neuropathy in pelvic procedures such as prostate or uterine fibroid removal. It may also prevent avascular necrosis of the femoral neck as well as gluteal claudication. Keywords: inferior gluteal artery, internal iliac artery, sciatic neuropathy, gluteal claudication
Procedia PDF Downloads 353
799 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement
Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti
Abstract:
Data centers have been growing in size and demand continuously over the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning. Data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted, with little thought, is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories. We then establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%. Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing
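A small sketch of quantile-based demand forecasting of the kind the abstract describes: provisioning to a high quantile keeps the SLA while avoiding blanket over-provisioning. The demand trace, features, and model settings are illustrative assumptions, not the authors' models or data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic hourly CPU-demand history (illustrative stand-in for real traces).
hours = np.arange(24 * 60)
demand = 50 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)
X = np.column_stack([hours % 24, (hours // 24) % 7])  # hour-of-day, day-of-week
y = demand

# Predict the 95th percentile of demand: provisioning to this quantile keeps
# the SLA violation probability near 5% while avoiding full over-provisioning.
q95 = GradientBoostingRegressor(loss="quantile", alpha=0.95, n_estimators=200)
q95.fit(X[:-24], y[:-24])

provision = q95.predict(X[-24:])   # capacity to allocate for the next day
headroom = provision - y[-24:]     # provisioned head-room vs. actual demand
print(f"mean provisioned head-room: {headroom.mean():.1f} CPU units")
```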
Procedia PDF Downloads 109
798 Principal Component Analysis on Colon Cancer Detection
Authors: N. K. Caecar Pratiwi, Yunendah Nur Fuadah, Rita Magdalena, R. D. Atmaja, Sofia Saidah, Ocky Tiaramukti
Abstract:
Colon cancer, or colorectal cancer, is a type of cancer that attacks the last part of the human digestive system. Lymphoma and carcinoma are the types of cancer that attack the human colon. Colon cancer causes about half a million deaths every year. In Indonesia, colon cancer is the third most common cancer in women and the second in men. Unhealthy lifestyles, such as low fiber consumption, rarely exercising, and a lack of awareness of early detection, are factors behind the high number of colon cancer cases. The aim of this project is to produce a system that can detect and classify images into the colon cancer types lymphoma and carcinoma, or normal tissue. The designed system used 198 colon cancer tissue pathology images: 66 images of lymphoma, 66 of carcinoma, and 66 of normal/healthy colon tissue. The system classifies colon cancer starting from image preprocessing, through feature extraction using Principal Component Analysis (PCA), to classification using the K-Nearest Neighbor (K-NN) method. The preprocessing stages are resizing, RGB-to-grayscale conversion, edge detection, and finally histogram equalization. Tests are done by trying several K-NN input parameter settings. The result of this project is an image processing system that can detect and classify the type of colon cancer with high accuracy and low computation time. Keywords: carcinoma, colorectal cancer, k-nearest neighbor, lymphoma, principal component analysis
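A minimal sketch of a PCA + K-NN pipeline of the kind described above; the random arrays stand in for the 198 preprocessed pathology images, and the component count and k are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative stand-in for the preprocessed images: each row is a flattened
# grayscale image; labels 0/1/2 mean lymphoma, carcinoma, and normal tissue.
rng = np.random.default_rng(42)
X = rng.random((198, 64 * 64))
y = np.repeat([0, 1, 2], 66)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# PCA reduces the flattened images to a small set of principal components,
# then K-NN classifies in that reduced space; k is a tunable input parameter.
model = make_pipeline(StandardScaler(), PCA(n_components=20),
                      KNeighborsClassifier(n_neighbors=5))
model.fit(X_train, y_train)
print(f"accuracy: {model.score(X_test, y_test):.2f}")
```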
Procedia PDF Downloads 207
797 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform
Authors: David Jurado, Carlos Ávila
Abstract:
Detection of early signs of breast cancer development is crucial to quickly diagnose the disease and to define an adequate treatment that increases the survival probability of the patient. Computer Aided Detection systems (CADs), along with modern data techniques such as Machine Learning (ML) and Neural Networks (NN), have shown an overall improvement in digital mammography cancer diagnosis, reducing the false positive and false negative rates and becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML and NN-based algorithms rely on datasets that might bring issues to the segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described. This algorithm uses image processing techniques along with the Hough transform to automatically identify microcalcifications that are highly correlated with breast cancer development in the early stages. Along with image processing, automatic segmentation of high-contrast objects is done using edge extraction and the circle Hough transform. This provides the geometrical features needed for an automatic mask design which extracts statistical features of the regions of interest. The results shown in this study prove the potential of this tool for further diagnostics and classification of mammographic images due to the low sensitivity to noisy images and low-contrast mammograms. Keywords: breast cancer, segmentation, X-ray imaging, hough transform, image analysis
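A minimal sketch of the contrast-enhancement and circle-Hough steps outlined above; the file name and every parameter value are illustrative assumptions, not the study's settings.

```python
import cv2
import numpy as np

img = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)  # assumed input file
img = cv2.equalizeHist(img)                               # boost local contrast

# HoughCircles performs Canny edge extraction internally (param1 is the upper
# Canny threshold) and then votes for small circular, high-contrast objects,
# i.e. microcalcification candidates.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=10,
                           param1=160, param2=18, minRadius=2, maxRadius=12)

mask = np.zeros_like(img)                                  # automatic mask design
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(mask, (x, y), r, 255, thickness=-1)

# Statistical features of the regions of interest, e.g. mean intensity.
roi_mean = float(img[mask > 0].mean()) if mask.any() else 0.0
n_found = 0 if circles is None else len(circles[0])
print(f"candidate regions: {n_found}, mean ROI intensity: {roi_mean:.1f}")
```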
Procedia PDF Downloads 84
796 Adolescent and Adult Hip Dysplasia on Plain Radiographs. Analysis of Measurements and Attempt for Optimization of Diagnostic and Performance Approaches for Patients with Periacetabular Osteotomy (PAO).
Authors: Naum Simanovsky MD, Michael Zaidman MD, Vladimir Goldman MD.
Abstract:
A total of 105 plain AP radiographs of normal adult pelvises (210 hips) were evaluated, and different measurements of normal and dysplastic hip joints in 45 patients were analyzed. An attempt was made to establish a reproducible approach, easily applicable in practice, for the evaluation and follow-up of patients with hip dysplasia. The youngest of our patients was 11 years old and the oldest was 47 years old. Only one of our patients needed conversion to total hip replacement (THR) during ten years of follow-up. The selected set of measurements was built to serve, in particular, patients who are scheduled for or have undergone PAO. This approach is based on the concept of the acetabulum-femoral head complex and on the importance of reliable reference points for measurements. A comparative analysis of the measured parameters between normal and dysplastic hips was performed. Among the 10 selected parameters, we use well-established ones such as the lateral center edge angle and the head extrusion index, but to serve the specific group of patients with PAO, new parameters were considered, such as complex lateralization and complex proximal migration. In our opinion, the proposed approach is easily applicable in busy clinical practice, satisfactorily delineates hip pathology, and gives the surgeon who is going to perform PAO guidelines in a condensed form. It is also a useful tool for postoperative follow-up after PAO. Keywords: periacetabular osteotomy, plain radiograph measurements, adolescents, adult
Procedia PDF Downloads 68
795 Study and GIS Development of Geothermal Potential in South Algeria (Adrar Region)
Authors: A. Benatiallah, D. Benatiallah, F. Abaidi, B. Nasri, A. Harrouz, S. Mansouri
Abstract:
The region of Adrar is located in south-western Algeria and covers a total area of 443,782 km², occupied by a population of 432,193 inhabitants. The main activity of the population is agriculture, mainly based on date palm cultivation, which occupies a total area of 23,532 ha. The climate of the Adrar region is continental desert, characterized by a high variation in temperature between the hottest months (July, August), when it exceeds 48°C, and the coldest months (December, January), at around 16°C. Rainfall is very limited in frequency and volume, with an aridity index of 4.6 to 5, which corresponds to an arid climate type. Geologically, the Adrar region is located on the north-west edge and is characterized by a Precambrian basement covered by a transgressive sedimentary deposit of Phanerozoic age. The depression at the Touat site is filled by Paleozoic deposits (Cambrian to Namurian) of a vast sedimentary basin of secondary age extending from the Saharan Atlas in the north to the hamada of Tinhirt and the Tademaït plateau in the south, and from Touat and Gourara in the west to the Gulf of Gabes in the north-east. In this work we have studied the geothermal potential of the Adrar region from borehole data available at various sites across an area of 400,000 km²; from these data we developed a GIS (Adrar_GIS) that plots data on the various points and boreholes in the region, specifying information on the geothermal potential available at variable depths. Keywords: GIS, geothermal, potential, temperature
Procedia PDF Downloads 465
794 The Influence of Structural Disorder and Phonon on Metal-To-Insulator Transition of VO₂
Authors: Sang-Wook Han, In-Hui Hwang, Zhenlan Jin, Chang-In Park
Abstract:
We used temperature-dependent X-Ray absorption fine structure (XAFS) measurements to examine the local structural properties around vanadium atoms at the V K edge from VO₂ films. A direct comparison of simultaneously-measured resistance and XAFS from the VO₂ films showed that the thermally-driven structural phase transition (SPT) occurred prior to the metal-insulator transition (MIT) during heating, whereas these changed simultaneously during cooling. XAFS revealed a significant increase in the Debye-Waller factors of the V-O and V-V pairs in the {111} direction of the R-phase VO₂ due to the phonons of the V-V arrays along the direction in a metallic phase. A substantial amount of structural disorder existing on the V-V pairs along the c-axis in both M₁ and R phases indicates the structural instability of V-V arrays in the axis. The anomalous structural disorder observed on all atomic sites at the SPT prevents the migration of the V 3d¹ electrons, resulting in a Mott insulator in the M₂-phase VO₂. The anomalous structural disorder, particularly, at vanadium sites, effectively affects the migration of metallic electrons, resulting in the Mott insulating properties in M₂ phase and a non-congruence of the SPT, MIT, and local density of state. The thermally-induced phonons in the {111} direction assist the delocalization of the V 3d¹ electrons in the R phase VO₂ and the electrons likely migrate via the V-V array in the {111} direction as well as the V-V dimerization along the c-axis. This study clarifies that the tetragonal symmetry is essentially important for the metallic phase in VO₂.Keywords: metal-insulator transition, XAFS, VO₂, structural-phase transition
Procedia PDF Downloads 272
793 Layer by Layer Coating of Zinc Oxide/Metal Organic Framework Nanocomposite on Ceramic Support for Solvent/Solvent Separation Using Pervaporation Method
Authors: S. A. A. Nabeela Nasreen, S. Sundarrajan, S. A. Syed Nizar, Seeram Ramakrishna
Abstract:
Metal-organic frameworks (MOFs) have attracted considerable interest due to their diverse pore size tunability, fascinating topologies, and extensive uses in fields such as catalysis, membrane separation, and chemical sensing. Zeolitic imidazolate frameworks (ZIFs) are a class of MOF with porous crystals containing extended three-dimensional structures of tetrahedral metal ions (e.g., Zn) bridged by imidazolate (Im). Selected ZIFs are used to separate solvent/solvent mixtures. A layer-by-layer formation of a nanocomposite of zinc oxide (ZnO) and ZIF on a ceramic support using a solvothermal method was employed and tested for target solvent/solvent separation. The metal oxide layer was characterized by XRD, SEM, and TEM to confirm a smooth and continuous coating for the separation process. The chemical composition of the ZIF films was studied using X-ray absorption near-edge structure (XANES) spectroscopy. The obtained ceramic tube with the metal oxide and ZIF layer coating was tested for its packing density, thickness, distribution of seed layers, and variation of the permeation rate of the solvent mixture (isopropyl alcohol (IPA)/methyl isobutyl ketone (MIBK)). The pervaporation technique was used for the separation to achieve a high permeation rate with a separation ratio of > 99.5% for the solvent mixture. Keywords: metal oxide, membrane, pervaporation, solvothermal, ZIF
Procedia PDF Downloads 197
792 Computational Analysis on Thermal Performance of Chip Package in Electro-Optical Device
Authors: Long Kim Vu
Abstract:
The central processing unit in electro-optical devices is a field-programmable gate array (FPGA) chip package, which allows flexible, reconfigurable computing but at the cost of energy consumption. Because the chip package is placed in an isolated device built to the IP67 waterproof standard, there is no air circulation, and heat dissipation is a challenge. In this paper, the author successfully modeled a chip package with various interposer materials such as silicon, glass, and organics. Computational fluid dynamics (CFD) was utilized to analyze the thermal performance of the chip package while considering the comprehensive heat transfer modes of conduction, convection, and radiation, which yields an equivalent heat dissipation. The logic chip temperature as a function of time is compared between the simulation and experiment results, showing excellent correlation and demonstrating that the chip modeling and simulation method is reasonable. Keywords: CFD, FPGA, heat transfer, thermal analysis
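A lumped-capacitance sketch of transient chip heating of the kind a full CFD run would resolve in detail; it is only a sanity-check model, and all parameter values below are assumptions, not the package data from the paper.

```python
import numpy as np

P = 6.0          # FPGA power dissipation, W (assumed)
R_th = 8.0       # junction-to-ambient thermal resistance, K/W, no airflow (assumed)
C_th = 40.0      # lumped thermal capacitance, J/K (assumed)
T_amb = 25.0     # ambient temperature inside the sealed enclosure, deg C

dt, t_end = 1.0, 1800.0
T = T_amb
history = []
for _ in np.arange(0.0, t_end, dt):
    # dT/dt = (P - (T - T_amb)/R_th) / C_th, advanced with an explicit Euler step
    T += dt * (P - (T - T_amb) / R_th) / C_th
    history.append(T)

print(f"steady-state estimate: {T_amb + P * R_th:.1f} C, "
      f"after {t_end:.0f} s: {history[-1]:.1f} C")
```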
Procedia PDF Downloads 184
791 Optical and Luminescence Studies on Dy³+ Singly Doped and Dy³+/Ce³+ Co-doped Alumina Borosilicate Glasses for Photonics Device Application
Authors: M. Monisha, Sudha D. Kamath
Abstract:
We investigate the optical and photoluminescence properties of Dy³⁺ singly doped and Dy³⁺/Ce³⁺ co-doped alumino borosilicate glasses prepared using the high-temperature melt-quenching technique. The glass composition formula is 25SiO₂-(40-x-y)B₂O₃-10Al₂O₃-15NaF-10ZnO-xDy₂O₃-yCe₂O₃, where x = 0.5 mol% and y = 0, 0.1, and 0.5 mol%. The XRD study reveals the amorphous nature of both the singly doped and co-doped glasses. The absorption study of the Dy³⁺ singly doped glass shows nearly twelve absorption peaks arising from the ground level of the Dy³⁺ ions (⁶H₁₅/₂) to various upper levels; for the Dy³⁺/Ce³⁺ co-doped glasses, a few of the transitions in the visible region are suppressed. The absorption band edge shifts towards the longer-wavelength region with increasing Ce³⁺ concentration. A decrease in the indirect energy bandgap and an increase in the Urbach energy of the prepared glasses are observed on co-doping with Ce³⁺ ions. The photoluminescence study of the singly doped glass under 350 nm excitation showed three peaks in the blue (482 nm), yellow (575 nm), and red (663 nm) regions. For the co-doped glasses, an emission peak at 403 nm arises due to the 5d to 4f transition of Ce³⁺ ions. The lifetime values (ms) of the co-doped glasses are found to be higher than those of the singly doped glass. Under 350 nm excitation, the CIE coordinates of the co-doped glasses move towards the bright white light region. The correlated color temperature (CCT) values were obtained in the range 4500-4700 K. Thus, the prepared glasses can be used for photonics device applications. Keywords: absorption spectra, borosilicate glasses, Ce³⁺, Dy³⁺, photoluminescence
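A short sketch of the standard Urbach-energy extraction implied above: near the band edge the absorption coefficient follows alpha = alpha0 · exp(E/E_u), so ln(alpha) is linear in photon energy E and the inverse slope gives E_u. The spectrum below is synthetic and only illustrates the fitting step, not the measured glasses.

```python
import numpy as np

rng = np.random.default_rng(1)
E = np.linspace(3.0, 3.6, 60)                          # photon energy, eV
E_u_true = 0.25                                        # assumed Urbach energy, eV
alpha = 1e3 * np.exp(E / E_u_true) * (1 + 0.02 * rng.normal(size=E.size))

# Linear fit of ln(alpha) vs E; the Urbach energy is the inverse of the slope.
slope, _ = np.polyfit(E, np.log(alpha), 1)
print(f"fitted Urbach energy: {1.0 / slope * 1000:.0f} meV")
```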
Procedia PDF Downloads 150
790 Influence of Strong Optical Feedback on Frequency Chirp and Lineshape Broadening in High-Speed Semiconductor Laser
Authors: Moustafa Ahmed, Fumio Koyama
Abstract:
Directly modulated semiconductor lasers, including edge-emitting and vertical-cavity surface-emitting lasers, have recently received considerable interest for use in data transmitters in cost-effective high-speed data center, metro, and access networks. Optical feedback has proved to be an efficient technique to boost the modulation bandwidth and enhance the speed of the semiconductor laser. However, both the laser linewidth and the frequency chirping in directly modulated lasers are sensitive to intensity modulation as well as to optical feedback. These effects, along with fiber dispersion, affect the transmission bit rate and distance in single-mode fiber links. In this work, we continue our recent research on directly modulated semiconductor lasers with modulation bandwidths in the millimeter-wave band by modeling and simulating both the frequency chirping and the lineshape broadening simultaneously. The lasers operate under strong optical feedback. The model takes into account the multiple reflections of the laser radiation in the external cavity. The analyses are given in terms of the chirp-to-modulated-power ratio, and the results are shown for the possible dynamic states of continuous wave, period-1 oscillation, and chaos. Keywords: chirp, linewidth, optical feedback, semiconductor laser
Procedia PDF Downloads 482
789 Learning Grammars for Detection of Disaster-Related Micro Events
Authors: Josef Steinberger, Vanni Zavarella, Hristo Tanev
Abstract:
Natural disasters cause tens of thousands of victims and massive material damage. We refer to all those events caused by natural disasters, such as harm to people and damage to infrastructure, vehicles, services, and resource supply, as micro events. This paper addresses the problem of micro-event detection in online media sources. We present a natural language grammar learning algorithm and apply it to online news. The algorithm in question is based on distributional clustering and the detection of word collocations. We also explore the extraction of micro-events from social media and describe a Twitter mining robot, which uses combinations of keywords to detect tweets that talk about the effects of disasters. Keywords: online news, natural language processing, machine learning, event extraction, crisis computing, disaster effects, Twitter
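A minimal sketch of the keyword-combination matching a Twitter mining robot of this kind could apply; the keyword sets and sample tweets are illustrative assumptions, not the authors' learned grammars.

```python
import re

# Illustrative keyword sets: a disaster trigger plus an effect word together
# flag a candidate micro-event tweet.
TRIGGERS = {"flood", "fire", "storm", "earthquake"}
EFFECTS = {"injured", "killed", "destroyed", "collapsed", "evacuated", "blocked"}

def is_micro_event(tweet: str) -> bool:
    """Flag a tweet when it combines a disaster trigger with an effect word."""
    tokens = set(re.findall(r"[a-z]+", tweet.lower()))
    return bool(tokens & TRIGGERS) and bool(tokens & EFFECTS)

sample = [
    "Flood waters have blocked the main road near the hospital",
    "Beautiful storm clouds over the bay tonight",
    "Two people injured after a wall collapsed in the earthquake",
]
for t in sample:
    print(is_micro_event(t), "-", t)
```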
Procedia PDF Downloads 480
788 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction
Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini
Abstract:
Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health of the fetus. Despite this potential, the use of fECG in clinical practice has so far been quite limited because of the difficulties in measuring it. Recovering the fECG from signals acquired non-invasively with electrodes placed on the maternal abdomen is a challenging task, because abdominal signals are a mixture of several components and the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings which exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and of finding the linear combinations of preprocessed abdominal signals that maximize this fQI (quality index optimization, QIO). The aim is to improve on the most commonly adopted methods for fECG extraction, which are usually based on estimating and canceling the maternal ECG (mECG). The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; mECG extraction and maternal QRS detection; mECG component approximation and canceling by weighted principal component analysis; and fECG extraction by fQI maximization followed by fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing the mECG estimated by principal component analysis (PCA) from the abdominal signals and applying Independent Component Analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute-long abdominal measurements with fetal QRS annotations from dataset A of the PhysioNet/Computing in Cardiology Challenge 2013. The QIO-based and ICA-based methods were then compared on two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains fetal QRS annotations and thus allows a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain fetal QRS annotations, so the comparison between the two methods can only be qualitative. On the annotated database ADdb, the QIO method provided the performance indexes Sens = 0.9988, PPA = 0.9991, F1 = 0.9989, overcoming the ICA-based one, which provided Sens = 0.9966, PPA = 0.9972, F1 = 0.9969. The comparison on NIdb was performed by defining an index of quality for the fetal RR series; this index was higher for the QIO-based method than for the ICA-based one in 35 records out of 55 cases of the NIdb. The QIO-based method gave very high performance on both databases. The results of this study foresee the application of the algorithm in a fully unsupervised way for implementation in wearable devices for self-monitoring of fetal health. Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable
Procedia PDF Downloads 282
787 MXene Quantum Dots Decorated Double-Shelled CeO₂ Hollow Spheres for Efficient Electrocatalytic Nitrogen Oxidation
Authors: Quan Li, Dongcai Shen, Zhengting Xiao, Xin Liu, Mingrui Wu, Licheng Liu, Qin Li, Xianguo Li, Wentai Wang
Abstract:
Direct electrocatalytic nitrogen oxidation (NOR) provides a promising alternative strategy for synthesizing high-value-added nitric acid from widespread N₂, which overcomes the disadvantages of the Haber-Bosch-Ostwald process. However, the NOR process suffers from the high N≡N bond energy (941 kJ mol⁻¹), sluggish kinetics, and low efficiency and yield. It is therefore a prerequisite to develop more efficient electrocatalysts for NOR. Herein, we synthesized double-shelled CeO₂ hollow spheres (D-CeO₂) and further modified them with Ti₃C₂ MXene quantum dots (MQDs) for electrocatalytic N₂ oxidation, which exhibited a NO₃⁻ yield of 71.25 μg h⁻¹ mgcat⁻¹ and a Faradaic efficiency (FE) of 31.80% at 1.7 V. The unique quantum size effect and abundant edge active sites lead to more effective capture of nitrogen. Moreover, the double-shelled hollow structure is favorable for N₂ fixation and gathers intermediate products in the interlayer of the core-shell. In-situ Fourier transform infrared spectroscopy confirmed the formation of *NO and NO₃⁻ species during the NOR reaction, and the kinetics and possible pathways of NOR were calculated by density functional theory (DFT). In addition, a Zn-N₂ reaction device was assembled with D-CeO₂/MQDs as the anode and a Zn plate as the cathode, obtaining an extremely high NO₃⁻ yield of 104.57 μg h⁻¹ mgcat⁻¹ at 1 mA cm⁻². Keywords: electrocatalytic N₂ oxidation, nitrate production, CeO₂, MXene quantum dots, double-shelled hollow spheres
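A worked example of how the NO₃⁻ yield and Faradaic efficiency reported above are typically computed from an electrolysis run (5 electrons per NO₃⁻, since N goes from 0 to +5); the charge, time, catalyst loading, and nitrate mass below are illustrative assumptions, not the paper's raw data.

```python
F = 96485.0          # Faraday constant, C mol^-1
N_ELECTRONS = 5      # electrons transferred per NO3- formed
M_NO3 = 62.0         # molar mass of NO3-, g mol^-1

charge_C = 3.6       # total charge passed during the run, C (assumed)
t_h = 1.0            # electrolysis time, h (assumed)
m_cat_mg = 1.0       # catalyst loading, mg (assumed)
m_no3_ug = 75.0      # nitrate detected in the electrolyte, ug (assumed)

mol_no3 = m_no3_ug * 1e-6 / M_NO3
fe = N_ELECTRONS * F * mol_no3 / charge_C * 100.0
yield_rate = m_no3_ug / (t_h * m_cat_mg)

print(f"NO3- yield: {yield_rate:.2f} ug h^-1 mgcat^-1, FE: {fe:.1f} %")
```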
Procedia PDF Downloads 71
786 Thick Disc Molecular Gas Fraction in NGC 6946
Authors: Narendra Nath Patra
Abstract:
Several recent studies reinforce the existence of a thick molecular disc in galaxies along with the dynamically cold thin disc. Assuming a two-component molecular disc, we model the disc of NGC 6946 as a four-component system consisting of stars, HI, thin-disc molecular gas, and thick-disc molecular gas in vertical hydrostatic equilibrium. We then set up the joint Poisson-Boltzmann equation of hydrostatic equilibrium and solve it numerically to obtain the three-dimensional density distribution of the different baryonic components. Using the density solutions and the observed rotation curve, we further build a three-dimensional dynamical model of the molecular disc and from it produce a simulated CO spectral cube and spectral width profile. We find that the simulated spectral width profiles differ distinguishably for different assumed thick-disc molecular gas fractions. Several CO spectral width profiles are then produced for different assumed thick-disc molecular gas fractions and compared with the observed one to obtain the best-fit thick-disc molecular gas fraction profile. We find that the thick-disc molecular gas fraction in NGC 6946 remains largely constant across its molecular disc, with a mean value of 0.70 +/- 0.09. We also estimate the amount of extra-planar molecular gas in NGC 6946: 60% of the total molecular gas is extra-planar in the central region, whereas this fraction reduces to ~35% at the edge of the molecular disc. With our method, for the first time, we estimate the thick-disc molecular gas fraction as a function of radius in an external galaxy with sub-kpc resolution. Keywords: galaxies: kinematics and dynamics, galaxies: spiral, galaxies: structure, ISM: molecules, molecular data
Procedia PDF Downloads 144
785 Tradition and Modernity in Translation Studies: The Case of Undergraduate and Graduate Programs at Unicamp, Brazil
Authors: Erica Lima
Abstract:
In Brazil, considering the young age of translation studies, it can be argued that the University of Campinas is traditionally an important place for graduate studies in translation. The story begins with the accreditation of the Masters program in 1987 and of the Doctoral program in 1993, within the Graduate Program in Applied Linguistics. Since the beginning, the program has boasted cutting-edge research, with theoretical reflections on various aspects and with different methodological trends. However, while graduate studies have grown continuously, the same cannot be observed in the undergraduate degree program. Currently, there are only a few courses in Translation Theory and Practice, which does not seem to respond to student aspirations. The objective of this paper is to present the characteristics of the university's graduate program as something profitable, considering the concern with relating research to the historical moment in which we are living, with research conducted in a socially engaged environment and committed to the ethical and social impact it will cause, as well as to question the paths of the undergraduate program. The objective is also to discuss and propose changes, considering the limited scope currently achieved. In light of the information age, in which we face an avalanche of information, we believe that the training of translators at the undergraduate level should be reviewed, with the goal of retracing current paths and following others that are consistent with our historical period, marked by the virtual and the real, by the shuffling of borders and languages, and by the need for new language policies, greater inclusion, and more acceptance of others. We conclude that we need new proposals for the development of the translator in the undergraduate program, and we also present suggestions to be implemented in the graduate program. Keywords: graduate Brazilian program, undergraduate Brazilian program, translator's education, Unicamp
Procedia PDF Downloads 336
784 Modal FDTD Method for Wave Propagation Modeling Customized for Parallel Computing
Authors: H. Samadiyeh, R. Khajavi
Abstract:
A new FD-based procedure, the modal finite difference method (MFDM), is proposed for seismic wave propagation modeling, in which the simulation is carried out in the modal space. The method employs the eigenvalues of a characteristic matrix formed by appropriate time-space FD stencils. Since the MFD runs for different modes are totally independent of each other, MFDM can easily be parallelized, while considerable simplicity of the parallel algorithm is also achieved. There is no need for any domain-decomposition procedure or inter-core data exchange. More importantly, it is possible to skip the processing of less significant modes, which allows the procedure to be adjusted to the level of accuracy needed. Thus, in addition to the considerable ease of parallel programming, computation and storage costs are significantly reduced. The efficiency of the method is demonstrated by several numerical examples. Keywords: finite difference method, graphics processing unit (GPU), message passing interface (MPI), modal, wave propagation
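A simplified illustration of the modal-space idea for the 1-D wave equation u_tt = c²u_xx: the FD Laplacian is diagonalized once, every modal coordinate then evolves independently (and can be assigned to a separate core or skipped when insignificant). This sketch uses exact modal time evolution rather than the paper's time-space stencils; the grid size, wave speed, and initial pulse are illustrative assumptions.

```python
import numpy as np

n, dx, c, t = 200, 1.0, 2.0, 40.0

# Second-order FD Laplacian with fixed ends (symmetric, negative-definite).
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * (c / dx) ** 2
lam, V = np.linalg.eigh(A)                 # A = V diag(lam) V^T, lam < 0
omega = np.sqrt(-lam)                      # modal angular frequencies

x = np.arange(n) * dx
u0 = np.exp(-0.05 * (x - n * dx / 2) ** 2) # initial displacement, zero velocity
q0 = V.T @ u0                              # modal initial conditions

# Keep only the most energetic modes; the rest are skipped, trading accuracy
# for cost exactly as described above. Each kept mode is advanced on its own.
keep = np.argsort(np.abs(q0))[-40:]
q_t = q0[keep] * np.cos(omega[keep] * t)
u_t = V[:, keep] @ q_t                     # back to physical space

print(f"max |u| at t={t}: {np.abs(u_t).max():.3f} using {keep.size}/{n} modes")
```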
Procedia PDF Downloads 297
783 Autonomic Threat Avoidance and Self-Healing in Database Management System
Authors: Wajahat Munir, Muhammad Haseeb, Adeel Anjum, Basit Raza, Ahmad Kamran Malik
Abstract:
Databases are key components of software systems. Due to the exponential growth of data, there is concern that the data should be accurate and available. The data in databases are vulnerable to internal and external threats, especially when they contain sensitive data, as in medical or military applications. Whenever data are changed with malicious intent, data analysis results may lead to disastrous decisions. Autonomic self-healing in computer systems is modeled on the autonomic system of the human body. In order to guarantee the accuracy and availability of data, we propose a technique which, on a priority basis, tries to prevent any malicious transaction from executing and, in case a malicious transaction does affect the system, heals the system in an isolated mode in such a way that the availability of the system is not compromised. Using this autonomic system, the management cost and time of DBAs can be minimized. In the end, we test our model and present the findings. Keywords: autonomic computing, self-healing, threat avoidance, security
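A toy sketch of the two ideas above: screening transactions before execution and rolling back (healing) in isolation when a bad transaction slips through. The screening rule and data model are illustrative assumptions, not the authors' detection mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class AutonomicDB:
    data: dict = field(default_factory=dict)
    undo_log: list = field(default_factory=list)

    def screen(self, txn: dict) -> bool:
        """Threat avoidance: reject transactions that touch too many rows."""
        return len(txn["writes"]) <= 3

    def execute(self, txn: dict) -> bool:
        if not self.screen(txn):
            return False                       # avoided before execution
        for key, value in txn["writes"].items():
            self.undo_log.append((key, self.data.get(key)))
            self.data[key] = value
        return True

    def heal(self, n_ops: int) -> None:
        """Self-healing: undo the last n_ops writes in an isolated pass."""
        for key, old in reversed(self.undo_log[-n_ops:]):
            if old is None:
                self.data.pop(key, None)
            else:
                self.data[key] = old
        del self.undo_log[-n_ops:]

db = AutonomicDB()
print(db.execute({"writes": {"patient:1:dose": "5 mg"}}))        # accepted
print(db.execute({"writes": {f"row{i}": 0 for i in range(50)}}))  # avoided
db.heal(1)                                                        # roll back
print(db.data)
```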
Procedia PDF Downloads 505
782 Automatic Queuing Model Applications
Authors: Fahad Suleiman
Abstract:
Queuing, in a medical system, is the process of moving patients in a specific sequence to a specific service according to the nature of their illness. The term scheduling stands for the process of computing a schedule, which may be done by a queuing-based scheduler. This paper focuses on the medical consultancy system, the different queuing algorithms that are used in healthcare systems to serve patients, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing the medical queue that can analyse the queue status and decide which patient to serve. The new queuing architecture model can switch between different scheduling algorithms according to the testing results and the average waiting time factor. The main innovation of this work is that the modeling of the average waiting time is taken into the processing, together with the process of switching to the scheduling algorithm that gives the best average waiting time. Keywords: queuing systems, queuing system models, scheduling algorithms, patients
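A minimal sketch of switching between scheduling policies based on the average waiting time they produce; the arrival/service data and the two policies (FIFO vs. shortest-service-first) are illustrative assumptions, not the paper's architecture.

```python
from statistics import mean

patients = [  # (arrival_time, expected_service_time) in minutes, illustrative
    (0, 30), (2, 5), (4, 20), (5, 5), (6, 10),
]

def avg_wait(order):
    """Average waiting time when patients are served in the given order."""
    clock, waits = 0, []
    for arrival, service in order:
        clock = max(clock, arrival)
        waits.append(clock - arrival)
        clock += service
    return mean(waits)

policies = {
    "fifo": sorted(patients, key=lambda p: p[0]),
    "shortest_service_first": sorted(patients, key=lambda p: (p[1], p[0])),
}
scores = {name: avg_wait(order) for name, order in policies.items()}
best = min(scores, key=scores.get)
print(scores, "-> switch to", best)
```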
Procedia PDF Downloads 355
781 Risks beyond Cyber in IoT Infrastructure and Services
Authors: Mattias Bergstrom
Abstract:
Significance of the Study: This research will provide new insights into the risks with digital embedded infrastructure. Through this research, we will analyze each risk and its potential negation strategies, especially for AI and autonomous automation. Moreover, the analysis that is presented in this paper will convey valuable information for future research that can create more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks with hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions have then been evaluated on an open source IoT hardware setup. This list shows the identified passive and active risks evaluated in the research. Passive Risks: (1) Hardware failures- Critical Systems relying on high rate data and data quality are growing; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivers erroneous data- Sensors break, and when they do so, they don’t always go silent; they can keep going, just that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad Hardware injection- Erroneous generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity- The weight of the data collected will affect Data-Mobility. (5) Cost inhibitors- Running services that need huge centralized computing is cost inhibiting. Large complex AI can be extremely expensive to run. Active Risks: Denial of Service- It is one of the most simple attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise. Malware- Malware can be anything from simple viruses to complex botnets created with specific goals, where the creator is stealing computer power and bandwidth from you to attack someone else. Ransomware- It is a kind of malware, but it is so different in its implementation that it is worth its own mention. The goal with these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing- By spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion or to corrupt and re-inject into a running system creating a data echo noise loop. After testing multiple potential solutions. We found that the most prominent solution to these risks was to use a Peer 2 Peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. By the devices autonomously policing themselves for deviant behavior, all risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution to any future autonomous IoT deployments. As it provides separation from the open Internet, at the same time, it is accessible over the blockchain keys.Keywords: IoT, security, infrastructure, SCADA, blockchain, AI
Procedia PDF Downloads 107
780 Structural Testing and the Finite Element Modelling of Anchors Loaded Against Partially Confined Surfaces
Authors: Ali Karrech, Alberto Puccini, Ben Galvin, Davide Galli
Abstract:
This paper summarises the laboratory tests, numerical models, and statistical approach developed to investigate the behaviour of concrete blocks loaded in shear through metallic anchors. The research is intended to bridge a gap in the state of the art and practice related to anchors loaded against partially confined concrete surfaces. Eight concrete blocks (420 mm x 500 mm x 1000 mm) with 150 mm and/or 250 mm deep anchors were tested. The stainless-steel anchors, 16 mm in diameter, were bonded with HIT-RE 500 V4 injection epoxy resin and were subjected to shear loading against partially supported edges. In addition, finite element models were constructed to validate the laboratory tests and to explore the influence of key parameters such as anchor depth, anchor distance from the edge, and compressive strength on the stability of the block. After experimental validation, the numerical results were used to populate, develop, and interpret a systematic parametric study based on the Design of Experiments approach, using a Box-Behnken design and Response Surface Methodology. An empirical model has been derived from this approach, which predicts the load capacity within the desired confidence intervals. Keywords: finite element modelling, design of experiment, response surface methodology, Box-Behnken design, empirical model, interval of confidence, load capacity
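A sketch of fitting a quadratic response surface, the usual RSM model behind a Box-Behnken design, to predict load capacity from three factors. The factor levels and capacities below are synthetic placeholders, not the paper's experimental or FE results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(7)
depth = rng.choice([150.0, 200.0, 250.0], 15)        # anchor depth, mm
edge = rng.choice([50.0, 100.0, 150.0], 15)          # edge distance, mm
fc = rng.choice([25.0, 32.5, 40.0], 15)              # concrete strength, MPa
X = np.column_stack([depth, edge, fc])
load = 0.15 * depth + 0.3 * edge + 1.2 * fc + rng.normal(0, 2, 15)  # kN, synthetic

# Full quadratic response surface: linear, interaction, and squared terms.
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                    LinearRegression())
rsm.fit(X, load)

new_point = np.array([[200.0, 120.0, 30.0]])
print(f"predicted load capacity: {rsm.predict(new_point)[0]:.1f} kN")
```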
Procedia PDF Downloads 27
779 Prediction of Flow Around a NACA 0015 Profile
Authors: Boukhadia Karima
Abstract:
Fluid mechanics is the study of the laws of fluid motion and of the interaction of fluids with solid bodies. This project illustrates that interaction through in-depth studies validated by experiments in the TE44 wind tunnel, ensuring the efficiency, accuracy, and reliability of these tests on a NACA 0015 profile. A symmetric NACA 0015 airfoil was placed in a subsonic wind tunnel, and measurements were made of the pressure on the upper and lower surfaces of the wing and of the velocity across the vortex trailing downstream from the tip of the wing. The aim of this work is to investigate experimentally the pressure distribution over this profile in a free airflow and the aerodynamic forces acting on it. The addition of a rounded lateral edge to the wing tip was found to eliminate the secondary vortex near the wing tip, but it had little effect on the downstream characteristics of the trailing vortex. The increase in wing lift near the tip due to the presence of the trailing vortex was evident in the surface pressure, but was not captured by circulation-box measurements. The circumferential velocity within the vortex was found to reach free-stream values and produce core rotational speeds. Near the wing, the trailing vortex is asymmetric and contains definite zones where the streamwise velocity both exceeds and falls behind the free-stream value. When referenced to the free-stream velocity, the maximum vertical velocity of the vortex is directly dependent on α and is independent of Re. A numerical study was conducted with the CFD code FLUENT 6.0, and the results are compared with the experiments. Keywords: CFD code, NACA profile, detachment, angle of incidence, wind tunnel
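A worked sketch of how surface-pressure measurements of this kind are usually reduced to a lift coefficient: integrate the pressure-coefficient difference between lower and upper surfaces along the chord (normal-force approximation, valid at small angles of attack). The Cp distributions below are synthetic placeholders, not the TE44 measurements.

```python
import numpy as np

x_c = np.linspace(0.0, 1.0, 50)                 # chordwise tap positions x/c
cp_upper = -1.2 * (1 - x_c) * np.exp(-3 * x_c)  # suction side (illustrative)
cp_lower = 0.4 * (1 - x_c) * np.exp(-3 * x_c)   # pressure side (illustrative)

cn = np.trapz(cp_lower - cp_upper, x_c)         # normal-force coefficient
alpha = np.deg2rad(6.0)                         # assumed angle of incidence
cl = cn * np.cos(alpha)                         # small-angle lift estimate

print(f"Cn = {cn:.3f}, Cl ~ {cl:.3f} at alpha = 6 deg")
```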
Procedia PDF Downloads 411
778 Managing Change in the Academic Libraries in the Perspective of Web 2.0
Authors: Raj Kumar, Navjyoti Dhingra
Abstract:
Academic libraries are hubs in which knowledge is a major resource, and their performance, in terms of adding and delivering value to their users, depends upon their ability and effectiveness in generating, arranging, managing, and using this knowledge. With developments in Information and Communication Technologies (ICT), libraries have been brought to the electronic edge to facilitate a rapid transfer of information on a global scale. Web 2.0 refers to the development of online services that encourage collaboration, communication, and information sharing; it reflects changes in how one can use the web rather than describing any technical or structural change. Libraries provide manifold channels of information access to their e-users. The rapid expansion of tools, formats, services, and technologies has presented many options for opening up the library collection. Academic libraries must develop ways and means to meet their users' expectations and remain viable, and Web 2.0 tools are the first step on that journey. Web 2.0 has been widely used by libraries to promote functional services, such as access to the catalogue, and external activities, such as sharing information or photographs of library events, enhancing the usage of library resources, and bringing users closer to the library. The purpose of this paper is to provide a reconnaissance of Web 2.0 tools for enhancing library services in India. The study shows that many user-friendly tools can be adopted by information professionals to effectively cater to the information needs of their users. The authors have suggested a roadmap towards a revitalized future for providing various information opportunities to techno-savvy users. Keywords: academic libraries, change management, social media, Web 2.0
Procedia PDF Downloads 212
777 Decoding the Structure of Multi-Agent System Communication: A Comparative Analysis of Protocols and Paradigms
Authors: Gulshad Azatova, Aleksandr Kapitonov, Natig Aminov
Abstract:
Multiagent systems have gained significant attention in various fields, such as robotics, autonomous vehicles, and distributed computing, where multiple agents cooperate and communicate to achieve complex tasks. Efficient communication among agents is a crucial aspect of these systems, as it directly impacts their overall performance and scalability. This scholarly work provides an exploration of essential communication elements and conducts a comparative assessment of diverse protocols utilized in multiagent systems. The emphasis lies in scrutinizing the strengths, weaknesses, and applicability of these protocols across various scenarios. The research also sheds light on emerging trends within communication protocols for multiagent systems, including the incorporation of machine learning methods and the adoption of blockchain-based solutions to ensure secure communication. These trends provide valuable insights into the evolving landscape of multiagent systems and their communication protocols.Keywords: communication, multi-agent systems, protocols, consensus
Procedia PDF Downloads 76
776 Using the Cluster Computing to Improve the Computational Speed of the Modular Exponentiation in RSA Cryptography System
Authors: Te-Jen Chang, Ping-Sheng Huang, Shan-Ten Cheng, Chih-Lin Lin, I-Hui Pan, Tsung-Hsien Lin
Abstract:
The RSA system is a great contribution to encryption and decryption. It is based on modular exponentiation, which involves calculations on very large numbers, and operating on such numbers is a very heavy burden for the CPU. To increase the computational speed, in addition to improving the algorithms themselves, such as the binary method, the sliding window method, the addition chain method, and so on, a cluster of computers can be used. The cluster system is composed of computers in our laboratory on which MPICH2 is installed. The parallel procedures of the modular exponentiation can be processed by combining the sliding window method with the addition chain method. This significantly reduces the computational time of modular exponentiation whose operands are longer than 512 bits, and even longer than 1024 bits. Keywords: cluster system, modular exponentiation, sliding window, addition chain
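A single-node sketch of the sliding window method named above (left-to-right, odd-power precomputation); the combination with addition chains and the MPICH2 distribution of the work across cluster nodes are not shown.

```python
def sliding_window_pow(base: int, exponent: int, modulus: int, w: int = 4) -> int:
    """Left-to-right sliding-window modular exponentiation."""
    if exponent == 0:
        return 1 % modulus
    base %= modulus
    # Precompute the odd powers base^1, base^3, ..., base^(2^w - 1) mod modulus.
    base_sq = base * base % modulus
    odd_pow = {1: base}
    for k in range(3, 1 << w, 2):
        odd_pow[k] = odd_pow[k - 2] * base_sq % modulus
    bits = bin(exponent)[2:]
    result, i, n = 1, 0, len(bits)
    while i < n:
        if bits[i] == "0":
            result = result * result % modulus       # single squaring
            i += 1
        else:
            j = min(i + w, n)                         # longest window of at most
            while bits[j - 1] == "0":                 # w bits that ends in a set
                j -= 1                                # bit (so its value is odd)
            for _ in range(j - i):
                result = result * result % modulus
            result = result * odd_pow[int(bits[i:j], 2)] % modulus
            i = j
    return result

# Quick check against Python's built-in three-argument pow.
assert sliding_window_pow(7, 65537, 2**521 - 1) == pow(7, 65537, 2**521 - 1)
```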
Procedia PDF Downloads 525
775 Evaluation of Ceres Wheat and Rice Model for Climatic Conditions in Haryana, India
Authors: Mamta Rana, K. K. Singh, Nisha Kumari
Abstract:
Simulation models, with their interacting soil-weather-plant-atmosphere system, are important tools for assessing crops under changing climate conditions. CERES-Wheat and CERES-Rice v4.6 in DSSAT were calibrated and evaluated for Haryana, India, one of the major wheat- and rice-producing states. The simulation runs were made under irrigated conditions and three N-P-K fertilizer application doses to estimate crop yield and other growth parameters, along with the phenological development of the crop. The genetic coefficients were derived by iteratively manipulating the relevant coefficients that characterize the phenological processes of the wheat and rice cultivars until the best match was obtained between the simulated and observed anthesis, physiological maturity, and final grain yield. The model was validated by plotting the simulated LAI against the LAI derived from remote sensing; the remote sensing LAI product provides a spatial, timely, and accurate assessment of the crop. For validating the yield and yield components, the error percentage between the observed and simulated data was calculated. The analysis shows that the model can be used to simulate crop yield and yield components for wheat and rice cultivars under different management practices. During the validation, the error percentage was less than 10%, indicating the utility of the calibrated model for climate risk assessment in the selected region. Keywords: simulation model, CERES-wheat and rice model, crop yield, genetic coefficient
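A small worked example of the validation metric used above, the percentage error between observed and simulated values; the yield and phenology numbers are illustrative placeholders, not the Haryana field data.

```python
# Percentage error between observed and simulated values for each output.
observed = {"grain_yield_kg_ha": 4650.0, "anthesis_days": 92, "maturity_days": 135}
simulated = {"grain_yield_kg_ha": 4410.0, "anthesis_days": 95, "maturity_days": 131}

for name in observed:
    err = abs(observed[name] - simulated[name]) / observed[name] * 100.0
    flag = "OK" if err < 10.0 else "re-calibrate"
    print(f"{name}: error = {err:.1f}% ({flag})")
```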
Procedia PDF Downloads 305
774 Analyses of Uniaxial and Biaxial Flexure Tests Used in Ceramic Materials
Authors: Barry Hojjatie
Abstract:
Uniaxial (e.g., three-point bending) and biaxial flexure tests are used frequently for determining the strength of ceramics. It is generally believed that the biaxial test has an advantage over the uniaxial test because it produces a state of pure tension on the lower surface of the specimen, and the maximum tensile stress, which is usually responsible for crack initiation and failure, is unaffected by the edge condition. However, inconsistent strength values have been reported for the same material and testing conditions. The objective of this study was to analyze the strength of dental porcelain materials using the two different test methods, to evaluate the main contributions to variability in biaxial testing, and to analyze the relative influence of variables such as specimen geometry and loading conditions on the calculated strength of porcelain subjected to biaxial testing. Porcelain disks (16 mm diameter x 2 mm thick) were subjected to biaxial flexure (pin-on-three-ball), and flexure strength values were calculated. A 3-D finite element model was developed to simulate various biaxial flexure test conditions. Stresses were analyzed for ceramic thicknesses in the range of 1.0-3.0 mm. For a 2-mm-thick disk subjected to a point load of 200 N, the maximum tensile stress at the lower surface was 180 MPa. This stress decreased to 95, 77, 68, and 59 MPa for load radius values of 0.15, 0.3, 0.6, and 1.0 mm, respectively. Tensile stresses that developed at the top surface near the site of loading were small for load radii ≥ 0.6 mm. Keywords: ceramics, biaxial, flexure test, uniaxial
Procedia PDF Downloads 156
773 Efficient Heuristic Algorithm to Speed Up GraphCut in GPU for Image Stitching
Authors: Tai Nguyen, Minh Bui, Huong Ninh, Tu Nguyen, Hai Tran
Abstract:
The GraphCut algorithm has been widely utilized to solve various types of computer vision problems. Its expensive computational cost has encouraged many researchers to improve the speed of the algorithm. Recent works proposed schemes that work on parallel computing platforms such as CUDA. However, the problem of low convergence speed prevents the usage of GraphCut in real-time applications. In this paper, we propose a global suppression heuristic to boost the convergence process of the algorithm. A parallel implementation of the GraphCut algorithm on CUDA, designed for the image stitching problem, is introduced. Our method achieves up to a 3× time boost on a graph of size 80 × 480 compared to the best sequential GraphCut algorithm, while achieving satisfactory stitched images suitable for panorama applications. Our source code will soon be available for further research. Keywords: CUDA, graph cut, image stitching, texture synthesis, maxflow/mincut algorithm
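A minimal sketch of graph-cut seam finding for stitching two overlapping image strips: overlap pixels become nodes, edge capacities encode colour disagreement, and the s-t minimum cut gives the stitching seam. The data and capacity formula are illustrative assumptions; the CUDA push-relabel solver and the global suppression heuristic proposed in the paper are not reproduced here.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
h, w = 6, 8                               # tiny overlap region
left, right = rng.random((h, w)), rng.random((h, w))
diff = np.abs(left - right)               # disagreement between the two images

G = nx.DiGraph()
for y in range(h):
    for x in range(w):
        for dy, dx in ((0, 1), (1, 0)):   # 4-connected neighbours
            yy, xx = y + dy, x + dx
            if yy < h and xx < w:
                cap = diff[y, x] + diff[yy, xx] + 1e-6
                G.add_edge((y, x), (yy, xx), capacity=cap)
                G.add_edge((yy, xx), (y, x), capacity=cap)

# Terminals: the left column must come from the left image, the right column
# from the right image (large capacities keep these attachments uncut).
for y in range(h):
    G.add_edge("SRC", (y, 0), capacity=1e9)
    G.add_edge((y, w - 1), "SINK", capacity=1e9)

cut_value, (src_side, _) = nx.minimum_cut(G, "SRC", "SINK")
label = np.array([[0 if (y, x) in src_side else 1 for x in range(w)]
                  for y in range(h)])     # 0 = take left pixel, 1 = take right
print("cut cost:", round(cut_value, 3))
print(label)
```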
Procedia PDF Downloads 132
772 Multimodal Characterization of Emotion within Multimedia Space
Authors: Dayo Samuel Banjo, Connice Trimmingham, Niloofar Yousefi, Nitin Agarwal
Abstract:
Technological advancement and its omnipresent connectivity have pushed humans past the boundaries and limitations of a computer screen, physical state, or geographical location. It has provided a depth of avenues that facilitate human-computer interaction that was once inconceivable, such as audio and body language detection. Given the complex modalities of emotions, it becomes vital to study human-computer interaction, as it is the commencement of a thorough understanding of the emotional state of users and, in the context of social networks, of the producers of multimodal information. This study first acknowledges the higher classification accuracy found in multimodal emotion detection systems compared to unimodal solutions. Second, it explores the characterization of multimedia content produced based on its emotions and the coherence of emotion across different modalities, utilizing deep learning models to classify emotion across those modalities. Keywords: affective computing, deep learning, emotion recognition, multimodal
Procedia PDF Downloads 160