Search results for: machine readable format
559 Design of UV Based Unicycle Robot to Disinfect Germs and Communicate With Multi-Robot System
Authors: Charles Koduru, Parth Patel, M. Hassan Tanveer
Abstract:
In this paper, communication between a team of robots used to sanitize a germ-contaminated environment is proposed. We introduce capabilities from a team of robots (most likely heterogeneous): a wheeled robot named ROSbot 2.0 that carries a mounted LiDAR and Kinect sensor, and a modified prototype design of a unicycle-drive Roomba robot called the UV robot. The UV robot uses ultrasonic sensors to avoid obstacles and is equipped with an ultraviolet light system to disinfect and kill germs, such as bacteria and viruses. In addition, the UV robot is equipped with a disinfectant spray to target hidden objects that ultraviolet light is unable to reach. Using the sensors of the ROSbot 2.0, the robot creates a 3-D model of the environment, which is used to determine how the ultraviolet robot will disinfect the environment. Together, this proposed system is known as the RME assistive robot device or RME system, which communicates between a navigation robot and a germ-disinfecting robot operated by a user. The RME system includes a human-machine interface that allows the user to control certain features of each robot in the RME assistive robot device. This method allows the cleaning process to be done at a more rapid and efficient pace, as the UV robot disinfects areas simply by moving around in the environment while using the ultraviolet light system to kill germs. The RME system can be used in many applications, including public offices, stores, airports, hospitals, and schools, and will be beneficial even after the COVID-19 pandemic. Kennesaw State University will continue this research in the field of robotics, engineering, and technology and play its role in serving humanity.
Keywords: multi-robot system, assistive robots, COVID-19 pandemic, ultraviolet technology
558 Task Scheduling and Resource Allocation in Cloud-based on AHP Method
Authors: Zahra Ahmadi, Fazlollah Adibnia
Abstract:
Scheduling of tasks and the optimal allocation of resources in the cloud must account for the dynamic nature of tasks and the heterogeneity of resources. Applications based on scientific workflows are among the most widely used applications in this field and are characterized by high processing power and storage capacity demands. To increase their efficiency, it is necessary to plan the tasks properly and select the best virtual machine in the cloud. The goals of the system are effective factors in scheduling tasks and selecting resources, which depend on various criteria such as time, cost, current workload, and processing power. Multi-criteria decision-making methods are a good choice in this field. In this research, a new method of task planning and resource allocation in a heterogeneous environment based on a modified AHP algorithm is proposed. In this method, the scheduling of input tasks is based on two criteria: execution time and size. Resource allocation combines the AHP algorithm with a first-come, first-served policy. Resource prioritization is done with the criteria of main memory size, processor speed, and bandwidth. To modify the AHP algorithm, the Linear Max-Min and Linear Max normalization methods are used, which have a great impact on the ranking. The simulation results show a decrease in the average response time, turnaround time, and execution time of input tasks in the proposed method compared to similar (baseline) methods.
Keywords: hierarchical analytical process, work prioritization, normalization, heterogeneous resource allocation, scientific workflow
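To make the resource-prioritization step concrete, below is a minimal Python sketch of AHP weighting combined with Linear Max-Min normalization over the three stated criteria; the pairwise comparison matrix and VM attribute values are illustrative assumptions, not the paper's data.

```python
# A minimal sketch of AHP-style resource ranking with Linear Max-Min
# normalization. Pairwise comparison values and VM attributes are assumed.
import numpy as np

# Pairwise comparison of criteria: memory size, CPU speed, bandwidth.
# Entry [i, j] states how much criterion i is preferred over criterion j.
A = np.array([[1.0, 2.0, 3.0],
              [0.5, 1.0, 2.0],
              [1/3, 0.5, 1.0]])

# Criterion weights from the principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Candidate VMs: rows are resources, columns match the criteria above.
vms = np.array([[8.0, 2.4, 100.0],   # memory (GB), CPU (GHz), bandwidth (Mbps)
                [16.0, 3.0, 50.0],
                [4.0, 3.6, 200.0]])

# Linear Max-Min normalization: rescale each criterion to [0, 1].
norm = (vms - vms.min(axis=0)) / (vms.max(axis=0) - vms.min(axis=0))

scores = norm @ w                     # weighted score per VM
ranking = np.argsort(scores)[::-1]    # best VM first
print("criterion weights:", w.round(3))
print("VM ranking (best first):", ranking)
```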
557 Mapping Structurally Significant Areas of G-CSF during Thermal Degradation with NMR
Authors: Mark-Adam Kellerman
Abstract:
Proteins are capable of exploring vast mutational spaces. This makes it difficult for protein engineers to devise rational methods to improve stability and function via mutagenesis. Deciding which residues to mutate requires knowledge of the characteristics they elicit. We probed the characteristics of residues in granulocyte-colony stimulating factor (G-CSF) using a thermal melt (from 295 K to 323 K) to denature it in a 700 MHz Bruker spectrometer. These characteristics included dynamics, micro-environmental changes experienced/induced during denaturing, and structure-function relationships. 15N-1H HSQC experiments were performed at 2 K increments throughout this thermal melt. We observed that dynamic residues that also undergo large changes in their microenvironment were predominantly in unstructured regions. Moreover, we were able to identify four residues (G4, A6, T133 and Q134) that we class as high-priority targets for mutagenesis, given that they all appear in the top 10% of measures for both environmental changes and dynamics (∑Δ and ∆PI). We were also able to probe these NMR observables and combine them with molecular dynamics (MD) to elucidate what appears to be an opening motion of G-CSF's binding site III. V48 appears to be pivotal to this opening motion, which also seemingly distorts the loop region between helices A and B. This observation is in agreement with previous findings that the conformation of this loop region becomes altered in an aggregation-prone state of G-CSF. Hence, we present here an approach to profile the characteristics of residues in order to highlight their potential as rational mutagenesis targets and their roles in important conformational changes. These findings present not only an opportunity to effectively make biobetters, but also open up the possibility to further understand epistasis and machine-learn residue behaviours.
Keywords: protein engineering, rational mutagenesis, NMR, molecular dynamics
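A note on the shift measures: the abstract's ∑Δ metric is not defined here, but per-residue environmental change in 15N-1H HSQC data is commonly quantified with a weighted chemical shift perturbation of the form sketched below; the shift values are hypothetical and the weighting factor alpha is a conventional choice, not taken from the paper.

```python
import numpy as np

def csp(d_h, d_n, alpha=0.14):
    """Weighted 1H/15N chemical shift perturbation (ppm)."""
    return np.sqrt(d_h**2 + (alpha * d_n)**2)

# Per-residue shifts between consecutive temperatures (hypothetical values).
d_h = np.array([0.01, 0.08, 0.02])   # 1H shift changes (ppm)
d_n = np.array([0.10, 0.60, 0.15])   # 15N shift changes (ppm)
print(csp(d_h, d_n))
```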
556 Multiaxial Fatigue Analysis of a High Performance Nickel-Based Superalloy
Authors: P. Selva, B. Lorraina, J. Alexis, A. Seror, A. Longuet, C. Mary, F. Denard
Abstract:
Over the past four decades, the fatigue behavior of nickel-based alloys has been widely studied. However, in recent years, significant advances in the fabrication process, leading to grain size reduction, have been made in order to improve the fatigue properties of aircraft turbine discs. Indeed, a change in particle size affects the initiation mode of fatigue cracks as well as the fatigue life of the material. The present study aims to investigate the fatigue behavior of a newly developed nickel-based superalloy under biaxial-planar loading. Low Cycle Fatigue (LCF) tests are performed at different stress ratios so as to study the influence of the multiaxial stress state on the fatigue life of the material. Full-field displacement and strain measurements as well as crack initiation detection are obtained using Digital Image Correlation (DIC) techniques. The aim of this presentation is first to provide an in-depth description of both the experimental set-up and protocol: the multiaxial testing machine, the specific design of the cruciform specimen, and the performance of the DIC code are introduced. Second, results for sixteen specimens related to different load ratios are presented. Crack detection, strain amplitude, and number of cycles to crack initiation vs. triaxial stress ratio for each loading case are given. Third, from fractographic investigations by scanning electron microscopy, it is found that the mechanism of fatigue crack initiation does not depend on the triaxial stress ratio and that most fatigue cracks initiate from subsurface carbides.
Keywords: cruciform specimen, multiaxial fatigue, nickel-based superalloy
555 Design and Manufacture of a Hybrid Gearbox Reducer System
Authors: Ahmed Mozamel, Kemal Yildizli
Abstract:
Due to mechanical energy losses, and the competitive drive to minimize these losses and increase machine efficiency, the need for contactless gearing systems has arisen. In this work, one stage of a mechanical planetary gear transmission system integrated with one stage of a magnetic planetary gear system is designed as a two-stage hybrid gearbox system. The internal energy of the permanent magnets, in the form of the magnetic field, is used to create meshing between contactless magnetic rotors in order to provide self-protection against overloading and to decrease the mechanical loss of the transmission system by eliminating friction losses. Classical methods, such as the analytical and tabular methods and the theory of elasticity, are used to calculate the planetary gear design parameters. The finite element method (ANSYS Maxwell) is used to predict the behavior of the magnetic gearing system. The concentric magnetic gearing system has been modeled and analyzed using the 2D finite element method (ANSYS Maxwell). In addition, the design and manufacturing processes of the prototype components (a planetary gear, concentric magnetic gear, shafts, and the bearing selection) of the gearbox system are investigated. The output force, output moment, output power, and efficiency of the hybrid gearbox system are experimentally evaluated. The viability of applying a magnetic force to transmit mechanical power through a non-contact gearing system is presented. The experimental test results show that the system is capable of operating continuously within the speed range of 400 rpm to 3000 rpm, with a reduction ratio of 2:1 and a maximum efficiency of 91%.
Keywords: hybrid gearbox, mechanical gearboxes, magnetic gears, magnetic torque
554 Cloud-Based Multiresolution Geodata Cube for Efficient Raster Data Visualization and Analysis
Authors: Lassi Lehto, Jaakko Kahkonen, Juha Oksanen, Tapani Sarjakoski
Abstract:
The use of raster-formatted data sets in geospatial analysis is increasing rapidly. At the same time, geographic data are being introduced into disciplines outside the traditional domain of geoinformatics, like climate change, intelligent transport, and immigration studies. These developments call for better methods to deliver raster geodata in an efficient and easy-to-use manner. Data cube technologies have traditionally been used in the geospatial domain for managing Earth Observation data sets that have strict requirements for effective handling of time series. The same approach and methodologies can also be applied in managing other types of geospatial data sets. A cloud service-based geodata cube, called GeoCubes Finland, has been developed to support online delivery and analysis of the most important geospatial data sets with national coverage. The main target group of the service is the academic research institutes in the country. The most significant aspects of the GeoCubes data repository include the use of multiple resolution levels, a cloud-optimized file structure, and a customized, flexible content access API. Input data sets are pre-processed while being ingested into the repository to bring them into a harmonized form in aspects like georeferencing, sampling resolution, spatial subdivision, and value encoding. All the resolution levels are created using an appropriate generalization method, selected depending on the nature of the source data set. Multiple pre-processed resolutions enable new kinds of online analysis approaches to be introduced. Analysis processes based on interactive visual exploration can be carried out effectively, as the resolution level closest to the visual scale can always be used. In the same way, statistical analysis can be carried out on the resolution levels that best reflect the scale of the phenomenon being studied. Access times remain close to constant, independent of the scale applied in the application. The cloud service-based approach, applied in the GeoCubes Finland repository, enables analysis operations to be performed on the server platform, thus making high-performance computing facilities easily accessible. The developed GeoCubes API supports this kind of approach for online analysis. The use of cloud-optimized file structures in data storage enables the fast extraction of subareas. The access API allows for the use of vector-formatted administrative areas and user-defined polygons as definitions of subareas for data retrieval. Administrative areas of the country at four levels are readily available from the GeoCubes platform. In addition to direct delivery of raster data, the service also supports a so-called virtual file format, in which only a small text file is first downloaded. The text file contains links to the raster content on the service platform. The actual raster data are downloaded on demand, for the spatial area and resolution level required at each stage of the application. Through the geodata cube approach, pre-harmonized geospatial data sets are made accessible to new categories of inexperienced users in an easy-to-use manner. At the same time, the multiresolution nature of the GeoCubes repository enables expert users to introduce new kinds of interactive online analysis operations.
Keywords: cloud service, geodata cube, multiresolution, raster geodata
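To illustrate why cloud-optimized file structures enable fast subarea extraction, here is a hedged sketch of a windowed read from a cloud-optimized GeoTIFF with rasterio; the URL and window bounds are hypothetical and this is not the GeoCubes API itself.

```python
# Illustrative only: reading a subarea from a cloud-optimized GeoTIFF, the
# mechanism behind "fast extraction of subareas". URL and coordinates are
# hypothetical placeholders.
import rasterio
from rasterio.windows import from_bounds

url = "https://example.org/geocubes/elevation_100m.tif"  # hypothetical
with rasterio.open(url) as src:
    window = from_bounds(380000, 6670000, 390000, 6680000, src.transform)
    subarea = src.read(1, window=window)  # only the needed bytes are fetched
print(subarea.shape)
```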
553 Towards Law Data Labelling Using Topic Modelling
Authors: Daniel Pinheiro Da Silva Junior, Aline Paes, Daniel De Oliveira, Christiano Lacerda Ghuerren, Marcio Duran
Abstract:
The Courts of Accounts are institutions responsible for overseeing and pointing out irregularities in Public Administration expenses. They have a high demand of processes to be analyzed, whose decisions must be grounded in the applicable laws. Despite the large number of existing processes, several cases report similar subjects. Thus, previous decisions on already-analyzed processes can be a precedent for current processes that refer to similar topics. Identifying similar topics is an open, yet essential, task for identifying similarities between several processes. Since the actual number of topics is considerably large, it is tedious and error-prone to identify topics using a purely manual approach. This paper presents a tool based on Machine Learning and Natural Language Processing to assist in building a labeled dataset. The tool relies on topic modelling with Latent Dirichlet Allocation to find the topics underlying a document, followed by the Jensen-Shannon distance metric to generate a probability of similarity between document pairs. Furthermore, in a case study with a corpus of decisions of the Rio de Janeiro State Court of Accounts, it was noted that data pre-processing plays an essential role in modeling relevant topics. Also, the combination of topic modeling and a distance metric calculated over the documents' representations among the generated topics has proved useful in helping to construct a labeled base of similar and non-similar document pairs.
Keywords: courts of accounts, data labelling, document similarity, topic modeling
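As a concrete sketch of the described pipeline, the snippet below fits an LDA topic model and compares document-topic distributions with the Jensen-Shannon distance; the toy corpus and library choices (scikit-learn, SciPy) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: LDA topic modelling followed by Jensen-Shannon distance
# between document-topic distributions. Toy corpus, not Court of Accounts data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.spatial.distance import jensenshannon

docs = ["irregular expense audit of public works contract",
        "audit of public works expense irregularity",
        "pension fund oversight decision"]

counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)          # document-topic distributions

# Jensen-Shannon distance in [0, 1]; smaller means more similar topics.
d01 = jensenshannon(theta[0], theta[1])
d02 = jensenshannon(theta[0], theta[2])
print(f"doc0-doc1: {d01:.3f}, doc0-doc2: {d02:.3f}")
```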
552 Evaluation of the Dry Compressive Strength of Refractory Bricks Developed from Local Kaolin
Authors: Olanrewaju Rotimi Bodede, Akinlabi Oyetunji
Abstract:
Modeling the dry compressive strength of sodium silicate-bonded kaolin refractory bricks was studied. The materials used for this research work included refractory clay obtained from the Ijero-Ekiti kaolin deposit at coordinates 7º49´N and 5º5´E, and sodium silicate obtained from the open market in Lagos at coordinates 6°27′11″N 3°23′45″E, all in the South Western part of Nigeria. The mineralogical composition of the kaolin clay was determined using an Energy Dispersive X-Ray Fluorescence Spectrometer (ED-XRF). The clay samples were crushed and sieved using a laboratory pulveriser, ball mill and sieve shaker, respectively, to obtain 100 μm diameter particles. A manual pipe extruder of dimensions 30 mm diameter by 43.30 mm height was used to prepare the samples with varying percentage volumes of sodium silicate (5%, 7.5%, 10%, 12.5%, 15%, 17.5%, 20% and 22.5%), while kaolin and water were kept at 50% and 5%, respectively, for the compressive test. The samples were left to dry in the open laboratory atmosphere for 24 hours to remove moisture and were then fired in an electrically powered muffle furnace. Firing was done at the following temperatures: 700ºC, 750ºC, 800ºC, 850ºC, 900ºC, 950ºC, 1000ºC and 1100ºC. The compressive strength test was carried out on the dried samples using a Testometric Universal Testing Machine (TUTM) equipped with a computer and printer; an optimum compressive strength of 4.41 kN/mm² was obtained at 12.5% sodium silicate. The experimental results were modeled with the MATLAB and Origin packages using polynomial regression equations that predicted the estimated values for dry compressive strength and were later validated with Pearson's rank correlation coefficient, thereby obtaining a very high positive correlation value of 0.97.
Keywords: dry compressive strength, kaolin, modeling, sodium silicate
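To make the modelling step concrete, a minimal sketch of a polynomial fit validated with Pearson's correlation is given below; apart from the reported optimum of 4.41 kN/mm² at 12.5% sodium silicate, the data points are illustrative, and numpy/scipy stand in for the MATLAB and Origin packages used in the paper.

```python
# A hedged sketch of the modelling step: fit a polynomial regression of dry
# compressive strength vs. sodium silicate content, validate with Pearson's r.
import numpy as np
from scipy.stats import pearsonr

silicate = np.array([5.0, 7.5, 10.0, 12.5, 15.0, 17.5, 20.0, 22.5])  # wt %
strength = np.array([2.1, 3.0, 3.9, 4.41, 4.2, 3.8, 3.1, 2.5])       # kN/mm^2

coeffs = np.polyfit(silicate, strength, deg=2)   # quadratic fit
predicted = np.polyval(coeffs, silicate)

r, _ = pearsonr(strength, predicted)             # ~0.97 reported in the paper
print(f"Pearson r between measured and predicted: {r:.2f}")
```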
551 Stainless Steel Swarfs for Replacement of Copper in Non-Asbestos Organic Brake-Pads
Authors: Vishal Mahale, Jayashree Bijwe, Sujeet K. Sinha
Abstract:
Extensive research is ongoing in the field of friction materials (FMs) for the development of eco-friendly brake materials by removing copper, as it is a proven threat to aquatic organisms. Researchers are keen to find solutions for copper-free FMs by using different metals or no metals at all. Steel wool is used as a reinforcement in non-asbestos organic (NAO) FMs mainly to increase thermal conductivity; it affects wear adversely most of the time and also adds friction fluctuations. Copper and brass used to be the preferred choices because of their superior performance in almost every aspect except cost, but these are being phased out because of the proven threat to aquatic life. Keeping this in view, a series of realistic multi-ingredient FMs containing stainless steel (SS) swarfs as the theme ingredient in increasing amounts (0, 5, 10 and 15 wt.% - S₅, S₁₀, and S₁₅) were developed in the form of brake-pads. One more composite, containing copper instead of SS swarfs (C₁₀), was developed. These composites were characterized for physical, mechanical, chemical and tribological performance. The composites were tribo-evaluated on a Chase machine with various test loops as per the SAE J661 standard. Various performance parameters, such as normal µ, hot µ, performance µ, fade µ, recovery µ, % fade, % recovery, and wear resistance, were used to evaluate the role of the amount of SS swarfs in FMs. It was concluded that SS swarfs proved successful in replacing Cu in almost all respects except wear resistance. With an increase in the amount of SS swarfs, most of the properties improved. Worn surface analysis and wear mechanisms were studied using SEM and EDAX techniques.
Keywords: Chase-type friction tester, copper-free, non-asbestos organic (NAO) friction materials, stainless steel swarfs
550 Assessment of Breeding Soundness by Comparative Radiography and Ultrasonography of Rabbit Testes
Authors: Adenike O. Olatunji-Akioye, Emmanual B Farayola
Abstract:
In order to improve the animal protein recommended daily intake of Nigerians, there is an upsurge in the breeding of hitherto-shunned food animals, one of which is the rabbit. Radiography and ultrasonography are tools for diagnosing disease and evaluating the anatomical architecture of parts of the body non-invasively. As the rabbit becomes a more important food animal, achieving improved breeding requires that the best of the species form a breeding stock, which usually depends on breeding soundness as evaluated by assessing the male reproductive organs with these tools. Four male intact rabbits weighing between 1.2 and 1.5 kg were acquired and acclimatized for 2 weeks. Dorsoventral views of the testes were acquired using a digital radiographic machine, and a 5 MHz portable ultrasound scanner was used to acquire images of the testes in longitudinal, sagittal and transverse planes. The radiographic images revealed soft tissue images of the testes in all rabbits. The testes lie in individual scrotal sacs on both sides of the midline at the level of the caudal vertebrae and are thus superimposed by the caudal vertebrae and the caudal limits of the pelvic girdle. The ultrasonographic images revealed mostly homogeneously hypoechogenic testes and a hyperechogenic mediastinum testis. The dorsal and ventral poles of the testes were heterogeneously hypoechogenic and correspond to the epididymis and spermatic cord. The rabbit is unique in its ability to retract the testes, particularly when stressed, so careful and stress-free handling during the procedures is of paramount importance. The imaging of rabbit testes can be safely done using both imaging methods, but ultrasonography is the better method for assessing and evaluating soundness for breeding.
Keywords: breeding soundness, rabbit, radiography, ultrasonography
549 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images
Authors: Mehrnoosh Omati, Mahmod Reza Sahebi
Abstract:
Information on land use/land cover change plays an essential role in environmental assessment, planning and management in regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, two PolSAR images are first segmented using an integration of the marker-controlled watershed algorithm and a coupled Markov random field (MRF). Then, object-based classification is performed to determine changed/unchanged image objects. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object-based level. The experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show a 3% and 6% improvement in overall accuracy and kappa coefficient, respectively. Also, the proposed method can correctly distinguish homogeneous image parcels.
Keywords: coupled Markov random field (MRF), environment, object-based analysis, polarimetric SAR (PolSAR) images
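For reference, the two reported evaluation metrics can be computed from a change/no-change confusion matrix as sketched below; the matrix entries are made up for illustration, not the study's results.

```python
# Overall accuracy and kappa coefficient from a 2x2 confusion matrix.
import numpy as np

cm = np.array([[850, 40],    # rows: reference (no change, change)
               [30, 80]])    # cols: predicted (no change, change)

n = cm.sum()
overall_accuracy = np.trace(cm) / n
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
kappa = (overall_accuracy - expected) / (1 - expected)
print(f"OA = {overall_accuracy:.3f}, kappa = {kappa:.3f}")
```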
548 Analysis of the Cutting Force with Ultrasonic Assisted Manufacturing of Steel (S235JR)
Authors: Philipp Zopf, Franz Haas
Abstract:
Manufacturing very hard and refractory materials like ceramics, glass or carbide poses particular challenges for tools and machines. For this application area, the company Sauer GmbH developed ultrasonic tool holders that work in a frequency range from 15 to 60 kHz and superimpose an oscillation on the common tool movement along the vertical axis. This technique causes structural weakening in the contact area and facilitates machining. The possibility of reducing forces for these special materials by up to 30 percent, especially in drilling carbide with diamond tools, led the authors to try to expand the application range of this method. To make the results evaluable, the authors decided to start with existing processes in which the positive influence of ultrasonic assistance is proven, in order to understand the mechanism. The comparison could not be starker: the grinding process the Institute uses to machine the materials mentioned above employs tools with geometrically undefined edges, whereas in the machining of steel the edges are geometrically defined. To obtain valid results, the authors decided to investigate two manufacturing methods: drilling and milling. The main target of the investigation is to reduce the cutting force, measured with a force measurement platform underneath the workpiece. Given the direction of the ultrasonic assistance, the authors expect lower cutting forces and longer tool endurance in the drilling process. To verify the frequencies and amplitudes, an FFT analysis is performed. It shows increasing damping depending on the infeed rate of the tool, accompanied by a reduction in the amplitude of the cutting force.
Keywords: drilling, machining, milling, ultrasonic
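As an illustration of the FFT analysis mentioned, the sketch below inspects the spectrum of a synthetic cutting-force signal for the superimposed ultrasonic component; the sampling rate and signal are assumptions standing in for force-platform data.

```python
# Sketch: FFT of a cutting-force signal to locate the ultrasonic component.
import numpy as np

fs = 100_000                                # sampling rate (Hz), assumed
t = np.arange(0, 0.1, 1 / fs)
# static cutting force + 20 kHz ultrasonic component + noise
force = 50 + 5 * np.sin(2 * np.pi * 20_000 * t) + np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(force))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"dominant component near {peak:.0f} Hz")
```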
547 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation
Authors: Somayeh Komeylian
Abstract:
Direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, the antenna array output model involves numerous parameters, including noise samples, signal waveform, signal directions, signal number, and signal-to-noise ratio (SNR), and thereby DoA estimation methods rely heavily on generalization ability, requiring a large number of training data sets. Hence, we have comparatively implemented two different optimization models for DoA estimation: (1) a decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) an optimization method based on a deep neural network (DNN) with radial basis functions (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for the three classes. However, the accuracy and robustness of DoA estimation are still highly sensitive to technological imperfections of antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, and thereby the method may fail to deliver high precision for DoA estimation. Therefore, this work makes a further contribution by developing the DNN-RBF model for DoA estimation, to overcome the limitations of non-parametric and data-driven methods in terms of array imperfection and generalization. The numerical results of implementing the DNN-RBF model have confirmed better DoA estimation performance compared with the LS-SVM algorithm. Consequently, we have comparatively evaluated the performance of the two aforementioned optimization methods for DoA estimation using the mean squared error (MSE).
Keywords: DoA estimation, adaptive antenna array, deep neural network, LS-SVM optimization model, radial basis function, MSE
546 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model
Authors: Bokkasam Sasidhar, Ibrahim Aljasser
Abstract:
The problem of finding optimal schedules for each piece of equipment in a production process is considered, where the process consists of a single stage of manufacturing that can handle different types of products, and where changing over from handling one type of product to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each of the products in a set-up. The changeover costs increase with the number of set-ups, and hence, to minimize the costs associated with product changeover, the planning should be such that similar types of products are processed successively, so that the total number of changeovers, and in turn the associated set-up costs, are minimized. The problem of cost minimization is equivalent to the problem of minimizing the number of set-ups or, equivalently, maximizing the capacity utilization between every set-up, i.e., maximizing the total capacity utilization. Further, production is usually planned against customers' orders, and generally different customers' orders are assigned one of two priorities: "normal" or "priority". The problem of production planning in such a situation can be formulated as a Multiple Arc Network (MAN) model and can be solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that customer-wise delivery schedules are fulfilled, keeping in view the customer priorities. Algorithms are presented for solving the MAN formulation of production planning with customer priorities, and the application of the model is demonstrated through numerical examples.
Keywords: scheduling, maximal flow problem, multiple arc network model, optimization
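To make the network formulation concrete, here is a much-simplified sketch of scheduling order quantities through capacity-limited set-ups as a maximum-flow problem; it uses networkx, omits the priority arcs, and the quantities and capacities are made up for illustration.

```python
# Simplified MAN idea: orders flow from a source through product-type nodes
# into capacity-limited set-ups; maximizing flow maximizes utilization.
import networkx as nx

G = nx.DiGraph()
orders = {"order_A1": ("product_A", 40), "order_A2": ("product_A", 30),
          "order_B1": ("product_B", 50)}

for order, (product, qty) in orders.items():
    G.add_edge("source", order, capacity=qty)
    G.add_edge(order, product, capacity=qty)

# One set-up per product type; the set-up capacity is the upper limit on
# the quantity that can be processed between changeovers.
G.add_edge("product_A", "sink", capacity=60)
G.add_edge("product_B", "sink", capacity=60)

flow_value, flow = nx.maximum_flow(G, "source", "sink")
print("total quantity scheduled:", flow_value)
```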
545 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure
Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer
Abstract:
The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics, with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed based on video data, aimed at uncovering so-called "multimodal gestalts": patterns of linguistic and embodied conduct that reoccur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using videos) have so far depended on time- and resource-intensive processes of manually transcribing each component from video materials. Automating these tasks requires advanced programming skills, which are often outside the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated on one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data. The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures, in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format and enable the import of the main existing formats of annotated video data and the export to other formats used in the field, while integrating different data source formats in a way that allows them to be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable parallel search across many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (and can also extend to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented, combining them with the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken and grammatical information from videos, to correlate those different levels, and to perform queries and analyses.
Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition
544 Modeling of Particle Reduction and Volatile Compounds Profile during Chocolate Conching by Electronic Nose and Genetic Programming (GP) Based System
Authors: Juzhong Tan, William Kerr
Abstract:
Conching is a critical procedure in chocolate processing, in which special flavors develop and the smooth mouthfeel of the chocolate results from particle size reduction of the cocoa mass and other additives. Therefore, determination of the particle size and volatile compound profile of the cocoa mass is important for chocolate manufacturers to ensure the quality of chocolate products. Currently, precise particle size measurement is usually done by laser scattering, which is expensive and inaccessible to small/medium-size chocolate manufacturers. Other alternatives, such as micrometers and microscopy, cannot provide good measurements and give little information. Volatile compound analysis of cocoa during conching has similar problems due to its high cost and limited accessibility. In this study, a self-made electronic nose system consisting of gas sensors (TGS 800 and 2000 series) was inserted into a conching machine and used to monitor the volatile compound profile of chocolate during conching. A model correlating the volatile compound profiles, along with factors including the content of cocoa, sugar, and the temperature during conching, to the particle size of chocolate particles was established by genetic programming. The model was used to predict the particle size reduction of chocolates with different cocoa mass to sugar ratios (1:2, 1:1, 1.5:1, 2:1) at eight conching times (15 min, 30 min, 1 h, 1.5 h, 2 h, 4 h, 8 h, and 24 h), and the predictions were compared to laser scattering measurements of the same chocolate samples. 91.3% of the predictions were within ±5% deviation of the laser scattering measurements, and 99.3% were within ±10% deviation.
Keywords: cocoa bean, conching, electronic nose, genetic programming
543 Design and Implementation of Control System in Underwater Glider of Ganeshblue
Authors: Imam Taufiqurrahman, Anugrah Adiwilaga, Egi Hidayat, Bambang Riyanto Trilaksono
Abstract:
The autonomous underwater glider is one of the more recent developments in underwater vehicles, and one of the autonomous underwater vehicles being developed in Indonesia. Gliding ability is obtained by controlling the buoyancy and attitude of the vehicle using actuators within the vehicle. The glider motion mechanism is expected to reduce the energy demands of autonomous underwater vehicles, so as to increase the cruising range of the vehicle while performing missions. The control system on the vehicle consists of three parts: pitch attitude control, the buoyancy engine controller, and the yaw controller. The buoyancy and pitch controls on the vehicle are sequenced by a finite state machine, with pitch angle and diving depth as inputs, to obtain a gliding cycle, while yaw control is done through the rudder for the needs of the guidance system. This research is focused on the design and implementation of a control system for an autonomous underwater glider based on PID anti-windup. The control system is implemented on an ARM TS-7250-V2 device, along with a mathematical model of the vehicle in MATLAB, using the hardware-in-the-loop simulation (HILS) method. The TS-7250-V2 was chosen because it complies with industry standards, has high computing capability, and has minimal power consumption. The results show that the control system in the HILS process can form a glide cycle with the desired depth and angle of operation. From the implementation experiments using half-control and full-control modes, it can be concluded that full-control mode tracks the reference more precisely, while half-control mode is considered more efficient in carrying out the mission.
Keywords: control system, PID, underwater glider, marine robotics
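As background for the control scheme named above, a minimal PID controller with anti-windup by integrator clamping is sketched below; the gains and saturation limits are placeholders, not the glider's tuned parameters, and clamping is only one of several anti-windup schemes.

```python
# Minimal PID with anti-windup: the integrator is frozen while the actuator
# output is saturated, preventing integral build-up ("windup").
class PIDAntiWindup:
    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        derivative = (error - self.prev_error) / dt
        self.prev_error = error

        unsat = self.kp * error + self.ki * self.integral + self.kd * derivative
        output = max(self.out_min, min(self.out_max, unsat))

        # Anti-windup: only integrate while the output is not saturated.
        if output == unsat:
            self.integral += error * dt
        return output

pitch_pid = PIDAntiWindup(kp=2.0, ki=0.5, kd=0.1, out_min=-1.0, out_max=1.0)
command = pitch_pid.update(setpoint=0.3, measurement=0.1, dt=0.05)
print(command)
```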
542 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem
Authors: Ouafa Amira, Jiangshe Zhang
Abstract:
Clustering is an unsupervised machine learning technique; its aim is to extract data structures in which similar data objects are grouped in the same cluster, whereas dissimilar objects are grouped in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means clustering (FCM) is one of the most well-known fuzzy clustering methods. It is based on solving an optimization problem in which the minimization of a given cost function is studied. This minimization aims to decrease the dissimilarity inside clusters, where dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership function whose values lie in the interval [0, 1]. In FCM clustering, the membership degree is constrained by the condition that the sum of a data object's memberships in all clusters must be equal to one. This constraint can cause several problems, especially when the data objects lie in a noisy space. The regularization approach plays a part in the fuzzy c-means clustering technique; this process introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by the relative entropy approach, where in our optimization problem we aim to minimize the dissimilarity inside clusters. Finding an appropriate membership degree for each data object is our objective, because an appropriate membership degree leads to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that our proposed model achieves good accuracy.
Keywords: clustering, fuzzy c-means, regularization, relative entropy
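For intuition, the sketch below implements one common entropy-regularized variant of fuzzy c-means, in which memberships take a closed softmax form over negative squared distances; this illustrates the general idea of entropy-based regularization and is not claimed to be the paper's exact relative-entropy formulation.

```python
# Entropy-regularized fuzzy clustering: memberships are a softmax over
# negative squared distances; lam controls the "fuzziness" of assignments.
import numpy as np

def entropy_fcm(X, c, lam=1.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        u = np.exp(-d2 / lam)
        u /= u.sum(axis=1, keepdims=True)          # memberships sum to one
        centers = (u.T @ X) / u.sum(axis=0)[:, None]
    return u, centers

# Two well-separated Gaussian blobs as a toy data set.
X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (50, 2)),
               np.random.default_rng(2).normal(2, 0.3, (50, 2))])
u, centers = entropy_fcm(X, c=2)
print(centers.round(2))
```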
541 Fabrication of Cheap Novel 3D Porous Scaffolds Activated by Nano-Particles and Active Molecules for Bone Regeneration and Drug Delivery Applications
Authors: Mostafa Mabrouk, Basma E. Abdel-Ghany, Mona Moaness, Bothaina M. Abdel-Hady, Hanan H. Beherei
Abstract:
Tissue engineering has become a promising field for bone repair and regenerative medicine, in which cultured cells, scaffolds and osteogenic inductive signals are used to regenerate tissues. The annual cost of treating bone defects in Egypt has been estimated at many billions, while enormous sums are spent on imported bone grafts for bone injuries, tumors, and other pathologies associated with defective fracture healing. The current study is aimed at developing a more strategic approach in order to speed up recovery after bone damage. This will reduce the risk of fatal surgical complications and improve the quality of life of people affected by such fractures. 3D scaffolds loaded with cheap nanoparticles that possess an osteogenic effect were prepared by nano-electrospinning. The microstructure and morphology of the 3D scaffolds were monitored using scanning electron microscopy (SEM). The physicochemical characterization was investigated using X-ray diffractometry (XRD) and infrared spectroscopy (IR). The physicomechanical properties of the 3D scaffold were determined by a universal testing machine. The in vitro bioactivity of the 3D scaffold was assessed in simulated body fluid (SBF), and the bone-bonding ability of the novel 3D scaffolds was also evaluated. The obtained nanofibrous scaffolds demonstrated promising microstructural, physicochemical and physicomechanical features appropriate for enhanced bone regeneration. Therefore, the utilized nanomaterials loaded with the drug are strongly recommended as cheap alternatives to growth factors.
Keywords: bone regeneration, cheap scaffolds, nanomaterials, active molecules
540 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use-cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts such as TransE, TransR and other prominent industry standards have shown peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited in scope to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for a linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures. We demonstrate the performance this model achieves on the WN18 benchmark. This model does not rely on Large Language Models (LLM), though the applications are certainly relevant here as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
539 Multi-Stage Classification for Lung Lesion Detection on CT Scan Images Applying Medical Image Processing Technique
Authors: Behnaz Sohani, Sahand Shahalinezhad, Amir Rahmani, Aliyu Aliyu
Abstract:
Recently, medical imaging, and specifically medical image processing, has become one of the most dynamically developing areas of medical science. It has led to the emergence of new approaches for the prevention, diagnosis, and treatment of various diseases. In the process of diagnosing lung cancer, medical professionals rely on computed tomography (CT) scans, in which failure to correctly identify masses can lead to incorrect diagnosis or sampling of lung tissue. The identification and demarcation of masses for detecting cancer within lung tissue are critical challenges in diagnosis. In this work, a segmentation system based on image processing techniques has been applied for detection purposes. In particular, the use and validation of a novel lung cancer detection algorithm are presented through simulation, performed on CT images using multilevel thresholding. The proposed technique consists of segmentation, feature extraction, and feature selection and classification. In more detail, the features carrying useful information are selected after feature extraction. Eventually, the output image of the lung cancer detection is obtained with an accuracy of 96.3% and 87.25%. The purpose of feature extraction in the proposed approach is to transform the raw data into a more usable form for subsequent statistical processing. Future steps will involve employing the current feature extraction method to achieve more accurate resulting images, including further details available to machine vision systems for recognising objects in lung CT scan images.
Keywords: lung cancer detection, image segmentation, lung computed tomography (CT) images, medical image processing
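As an illustration of the multilevel-thresholding step, the sketch below applies scikit-image's multi-Otsu thresholding to a stand-in image; the abstract does not state which thresholding criterion was used, so Otsu is an assumption here.

```python
# Multilevel-threshold segmentation sketch using multi-Otsu; the random
# array stands in for a real CT slice.
import numpy as np
from skimage.filters import threshold_multiotsu

ct_slice = np.random.default_rng(0).integers(0, 256, (512, 512))

thresholds = threshold_multiotsu(ct_slice, classes=3)
regions = np.digitize(ct_slice, bins=thresholds)  # 0 = darkest class
print("thresholds:", thresholds, "segmented classes:", np.unique(regions))
```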
538 Vehicle Speed Estimation Using Image Processing
Authors: Prodipta Bhowmik, Poulami Saha, Preety Mehra, Yogesh Soni, Triloki Nath Jha
Abstract:
In India, the smart city concept is growing day by day, and for smart city development, a better traffic management and monitoring system is a very important requirement. Nowadays, road accidents are increasing due to the growing number of vehicles on the road, and reckless driving is mainly responsible for a huge number of accidents. An efficient traffic management system is therefore required for all kinds of roads to control traffic speed; the speed limit varies from road to road. Radar systems were used previously, but due to their high cost and low precision, they have not become the favoured option for traffic management. Traffic management systems face different types of problems every day, and how to solve them has become an active research topic. This paper proposes a computer vision and machine learning-based automated system for multiple-vehicle detection, tracking, and speed estimation using image processing. Detecting vehicles and estimating their speed from real-time video is difficult work, and the objective of this paper is to do both as accurately as possible. To this end, a real-time video is first captured, frames are extracted from the video, vehicles are detected in those frames, tracking of the vehicles starts, and finally, the speed of the moving vehicles is estimated. The goal of this method is to develop a cost-friendly system that is able to detect multiple types of vehicles at the same time.
Keywords: OpenCV, Haar Cascade classifier, DLIB, YOLOv3, centroid tracker, vehicle detection, vehicle tracking, vehicle speed estimation, computer vision
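The final speed-estimation step reduces to converting the displacement of a tracked centroid between frames into real-world speed; the sketch below shows this arithmetic, with the metres-per-pixel calibration and frame rate as assumed values.

```python
# Centroid-based speed estimation: pixel displacement between consecutive
# frames, scaled by a metres-per-pixel calibration and the frame rate.
import math

def speed_kmh(c1, c2, fps, metres_per_pixel):
    """c1, c2: (x, y) centroids of the same vehicle in consecutive frames."""
    pixels = math.dist(c1, c2)
    metres_per_second = pixels * metres_per_pixel * fps
    return metres_per_second * 3.6

# Hypothetical calibration: 0.05 m per pixel at 30 frames per second.
print(f"{speed_kmh((120, 300), (135, 300), fps=30, metres_per_pixel=0.05):.1f} km/h")
```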
537 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis
Authors: Meng Su
Abstract:
High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality-reduction capability of Diffusion Maps with the data-modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis
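As a concrete sketch of the Diffusion Map step, the function below builds Gaussian affinities, normalizes them into a Markov transition matrix, and embeds the data with the leading non-trivial eigenvectors; the kernel bandwidth and toy data are assumptions.

```python
# Minimal Diffusion Map embedding: Gaussian kernel, row-normalized into a
# Markov matrix, then the leading non-trivial eigenvectors as coordinates.
import numpy as np

def diffusion_map(X, epsilon=1.0, n_components=2):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / epsilon)              # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)   # Markov transition matrix
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-np.real(eigvals))
    idx = order[1:n_components + 1]        # skip the trivial eigenvector
    return np.real(eigvecs[:, idx] * eigvals[idx])

X = np.random.default_rng(0).normal(size=(100, 10))
embedding = diffusion_map(X)
print(embedding.shape)   # (100, 2)
```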
536 Evaluating the Satisfaction of Chinese Consumers toward Influencers at TikTok
Authors: Noriyuki Suyama
Abstract:
The progress and spread of digitalization have led to the provision of a variety of new services. The recent progress in digitization can be attributed to rapid developments in science and technology. First, research into and diffusion of artificial intelligence (AI) have made dramatic progress: around 2000, the third wave of AI research, which had been underway for about 50 years, arrived. Specifically, machine learning and deep learning became possible in AI, and the ability of AI to acquire knowledge, define that knowledge, and update its own knowledge quantitatively made the use of big data practical even on commercial PCs. Meanwhile, with the spread of social media, information exchange has become more common in our daily lives, and the lending and borrowing of goods and services, in other words, the sharing economy, has become widespread. The scope of this trend is not limited to any one industry, and its momentum is growing as the SDGs take root. In addition, social network services (SNS), a part of social media, have brought about the evolution of the retail business. In the past few years, social network services involving users and companies have especially flourished. The People's Republic of China (hereinafter referred to as "China") is a country that is stimulating enormous consumption through its own unique SNS, which differ from those used in developed countries around the world. This paper focuses on the effectiveness and challenges of influencer marketing by examining the influence of influencers on users' behavior and satisfaction with Chinese SNS. Specifically, a quantitative survey of TikTok users living in China was conducted, with the aim of gaining new insights from the analysis and discussion. As a result, several important findings and insights were obtained.
Keywords: customer satisfaction, social networking services, influencer marketing, Chinese consumers' behavior
535 The Development of a Nanofiber Membrane for Outdoor and Activity Related Purposes
Authors: Roman Knizek, Denisa Knizkova
Abstract:
This paper describes the development of a nanofiber membrane for sport and outdoor use at the Technical University of Liberec (TUL) and the subsequent cooperation with a private Czech company, which launched this product onto the market. To make this membrane, polyurethane was electrospun on a Nanospider spinning machine using a wire string electrode. The resulting nanofiber membrane, with a nanofiber diameter of 150 nm, was subsequently hydrophobised using low-vacuum plasma and a fluorocarbon monomer of C6 type. After this hydrophobic treatment, the nanofiber membrane's contact angle was higher than 125°, and its oleophobicity grade was 6. The last step was lamination of the nanofiber membrane with a woven or knitted fabric to create a 3-layer laminate, using gravure printing technology and a polyurethane hot-melt adhesive; the gravure roller has a mesh of 17. The resulting 3-layer laminate has a water vapor resistance Ret of 1.6 Pa·m²·W⁻¹ (measured in compliance with ISO 11092), is 100% windproof (measured in compliance with ISO 9237), and has a water column above 10,000 mm (measured in compliance with ISO 20811). This nanofiber membrane, developed in the laboratories of the Technical University of Liberec, was then produced industrially by a private company. A low-vacuum plasma line and a lamination line were needed for industrial production, and the process had to be fine-tuned to achieve the same parameters as those achieved in the TUL laboratories. The result of this work is a newly developed nanofiber membrane that offers much better properties, especially water vapor permeability, than other competitive membranes. It is an example of product development and subsequent fine-tuning for industrial production, and also an example of cooperation between a Czech state university and a private company.
Keywords: nanofiber membrane, start-up, state university, private company, product
534 Potential Use of Leaching Gravel as a Raw Material in the Preparation of Geo Polymeric Material as an Alternative to Conventional Cement Materials
Authors: Arturo Reyes Roman, Daniza Castillo Godoy, Francisca Balarezo Olivares, Francisco Arriagada Castro, Miguel Maulen Tapia
Abstract:
Mining waste-based geopolymers are a sustainable alternative to conventional cement materials due to their contribution to the valorization of mining wastes as well as to new construction materials with a reduced footprint. The objective of this study was to determine the potential of leaching gravel (LG) from hydrometallurgical copper processing to be used as a raw material in the manufacture of geopolymer. NaOH, Na₂SiO₃ (modulus 1.5), and LG were mixed, then wetted with an appropriate amount of tap water and stirred until a homogeneous paste was obtained. A liquid/solid ratio of 0.3 was used for preparing the mixtures. The paste was then cast in cubic moulds of 50 mm for the determination of compressive strength. The samples were left to dry for 24 h at room temperature, then unmoulded before analysis after 28 days of curing time. The compressive test was conducted in a compression machine (15/300 kN). According to the laser diffraction spectroscopy (LDS) analysis, 90% of the LG particles were below 500 μm. X-ray diffraction (XRD) analysis identified crystalline phases of albite (30%), quartz (16%), anorthite (16%), and phillipsite (14%). X-ray fluorescence (XRF) determinations showed mainly 55% SiO₂, 13% Al₂O₃, and 9% CaO. ICP-OES concentrations of Fe, Ca, Cu, Al, As, V, Zn, Mo, and Ni were 49,545; 24,735; 6,172; 14,152; 239.5; 129.6; 41.1; 15.1; and 13.1 mg kg⁻¹, respectively. The geopolymer samples showed strengths ranging between 2 and 10 MPa. In comparison with the raw material composition, the amorphous percentage of material in the geopolymer was 35%, whereas the crystalline percentage of the main mineral phases decreased. Further studies are needed to find the optimal combination of materials to produce a more resistant and environmentally safe geopolymer. In particular, compressive strengths higher than 15 MPa are necessary for use as construction units such as bricks.
Keywords: mining waste, geopolymer, construction material, alkaline activation
533 Monitor Student Concentration Levels on Online Education Sessions
Authors: M. K. Wijayarathna, S. M. Buddika Harshanath
Abstract:
Monitoring student engagement has become a crucial part of the educational process and a reliable indicator of a student's capacity to retain information. As online learning classrooms are now more common, students' attention levels have become increasingly important, and it is more difficult to check each student's concentration level in an online classroom setting. To profile student attention across various gradients of engagement, a study is planned using machine learning models. Using a convolutional neural network, the findings and confidence score of a high-accuracy model are obtained. In this research, convolutional neural networks are used to help discover the essential emotions that are critical in defining various levels of participation. Students' attention levels were shown to be influenced by emotions such as calm, enjoyment, surprise, and fear. An improved virtual learning system was created as a result of these data, which allowed teachers to focus their support and advice on those students who needed it. Student participation has become a crucial component of the learning technique and a consistent predictor of a student's capacity to retain material in the classroom. The platform is planned to be implemented with convolutional neural networks. As a preliminary step, a video of the pupil is taken; frames are then extracted from the recordings and processed by a convolutional neural network built with the Keras toolkit. Two convolutional neural network methods are planned for use in determining the pupils' attention level. Finally, the predicted student attention level results are to be displayed on the graphical user interface of the system.
Keywords: HTML5, JavaScript, Python Flask framework, AI, graphical user interface
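As an illustration of the kind of Keras-based CNN described, a minimal emotion classifier over face crops is sketched below; the architecture, input size, and class set are illustrative assumptions, not the study's trained model.

```python
# Minimal Keras CNN classifying a face crop into attention-related emotions.
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 4  # e.g. calm, enjoyment, surprise, fear (assumed classes)

model = keras.Sequential([
    layers.Input(shape=(48, 48, 1)),          # grayscale face crop
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```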
532 Auditory Rehabilitation via a VR Serious Game for Children with Cochlear Implants: Bio-Behavioral Outcomes
Authors: Areti Okalidou, Paul D. Hatzigiannakoglou, Aikaterini Vatou, George Kyriafinis
Abstract:
Young children are nowadays adept at using technology. Hence, computer-based auditory training programs (CBATPs) have become increasingly popular in aural rehabilitation for children with hearing loss and/or with cochlear implants (CI). Yet, their clinical utility for prognostic, diagnostic, and monitoring purposes has not been explored. The purposes of the study were: a) to develop an updated version of the auditory rehabilitation tool for Greek-speaking children with cochlear implants, b) to develop a database for behavioral responses, and c) to compare accuracy rates and reaction times in children differing in hearing status and other medical and demographic characteristics, in order to assess the tool’s clinical utility in prognosis, diagnosis, and progress monitoring. The updated version of the auditory rehabilitation tool was developed on a tablet, retaining the User-Centered Design approach and the elements of the Virtual Reality (VR) serious game. The visual stimuli were farm animals acting in simple game scenarios designed to trigger children’s responses to animal sounds, names, and relevant sentences. Based on an extended version of Erber’s auditory development model, the VR game consisted of six stages, i.e., sound detection, sound discrimination, word discrimination, identification, comprehension of words in a carrier phrase, and comprehension of sentences. A familiarization stage (learning) was set prior to the game. Children’s tactile responses were recorded as correct, false, or impulsive, following a child-dependent set up of a valid delay time after stimulus offset for valid responses. Reaction times were also recorded, and the database was in Εxcel format. The tablet version of the auditory rehabilitation tool was piloted in 22 preschool children with Νormal Ηearing (ΝΗ), which led to improvements. The study took place in clinical settings or at children’s homes. Fifteen children with CI, aged 5;7-12;3 years with post-implantation 0;11-5;1 years used the auditory rehabilitation tool. Eight children with CI were monolingual, two were bilingual and five had additional disabilities. The control groups consisted of 13 children with ΝΗ, aged 2;6-9;11 years. A comparison of both accuracy rates, as percent correct, and reaction times (in sec) was made at each stage, across hearing status, age, and also, within the CI group, based on presence of additional disability and bilingualism. Both monolingual Greek-speaking children with CI with no additional disabilities and hearing peers showed high accuracy rates at all stages, with performances falling above the 3rd quartile. However, children with normal hearing scored higher than the children with CI, especially in the detection and word discrimination tasks. The reaction time differences between the two groups decreased in language-based tasks. Results for children with CI with additional disability or bilingualism varied. Finally, older children scored higher than younger ones in both groups (CI, NH), but larger differences occurred in children with CI. The interactions between familiarization of the software, age, hearing status and demographic characteristics are discussed. Overall, the VR game is a promising tool for tracking the development of auditory skills, as it provides multi-level longitudinal empirical data. 
Acknowledgment: This work is part of a project that has received funding from the Research Committee of the University of Macedonia under the Basic Research 2020-21 funding programme.
Keywords: VR serious games, auditory rehabilitation, auditory training, children with cochlear implants
Procedia PDF Downloads 89
531 Voting Representation in Social Networks Using Rough Set Techniques
Authors: Yasser F. Hassan
Abstract:
Social networking involves the use of online platforms that enable people to communicate, usually for a social purpose, through a variety of web-based services such as e-mail and instant messaging. This paper analyzes the voting behavior and ratings of judges on popular comments in social networks. While most of the party literature omits the electorate, this paper presents a model where elites and parties are emergent consequences of the behavior and preferences of voters. Research in artificial intelligence and psychology has provided powerful illustrations of the way in which the emergence of intelligent behavior depends on the development of representational structure. As opposed to the classical voting system (one person – one decision – one vote), a new voting system is designed in which agents with opposed preferences are endowed with a given number of votes that they may freely distribute among the issues at stake. The paper uses ideas from machine learning, artificial intelligence, and soft computing to model the development of voting-system responses in a simulated agent. The modeled development process involves (simulated) processes of evolution, learning, and representation development. The main value of the model is that it illustrates how simple learning processes may lead to the formation of structure. We employ agent-based computer simulation to demonstrate the formation and interaction of coalitions that arise from individual voter preferences, and we are interested in coordinating the local behavior of individual agents to produce appropriate system-level behavior.
Keywords: voting system, rough sets, multi-agent, social networks, emergence, power indices
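As a concrete illustration of the vote-distribution scheme contrasted with one-person-one-vote above, here is a minimal agent-based sketch in Python. The proportional allocation rule and the random preference model are assumptions made for the example; the paper's rough-set analysis and coalition dynamics are not reproduced here.

```python
import random
from collections import Counter

def distribute_votes(preferences: dict[str, float], budget: int) -> Counter:
    """Allocate a fixed vote budget across issues in proportion to the
    agent's preference intensities (rounding may shift a vote or two)."""
    total = sum(preferences.values())
    return Counter({issue: round(budget * w / total)
                    for issue, w in preferences.items()})

def simulate(n_agents: int = 100, issues=("A", "B", "C"),
             budget: int = 10, seed: int = 0) -> Counter:
    """Tally the emergent aggregate outcome of many agents voting
    cumulatively over the same set of issues."""
    rng = random.Random(seed)
    tally = Counter()
    for _ in range(n_agents):
        prefs = {i: rng.random() for i in issues}  # random preference intensities
        tally += distribute_votes(prefs, budget)
    return tally

print(simulate())  # aggregate votes per issue across all agents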
Procedia PDF Downloads 395
530 Drilling Quantification and Bioactivity of Machinable Hydroxyapatite: Yttrium Phosphate Bioceramic Composite
Authors: Rupita Ghosh, Ritwik Sarkar, Sumit K. Pal, Soumitra Paul
Abstract:
The use of hydroxyapatite bioceramics as restorative implants is widely known. These materials can be manufactured by a pressing and sintering route to a particular shape. However, machining processes are still required to bring such implants to near-net shape and to ensure dimensional and geometrical accuracy. In this context, optimizing the machining parameters is important for understanding the machinability of the materials and for reducing production cost. In the present study, a method has been optimized to produce a true particulate drilled composite of hydroxyapatite and yttrium phosphate. The phosphates are used in varying ratios for a comparative study of flexural strength, hardness, machining (drilling) parameters, and bioactivity. The maximum flexural strength and hardness attained by the composite are 46.07 MPa and 1.02 GPa, respectively. Drilling is done with a conventional radial drilling machine fitted with a dynamometer, using high-speed steel (HSS) and solid carbide (SC) drills. The effects of variations in drilling parameters (cutting speed and feed), cutting tool, and batch composition on torque, thrust force, and tool wear are studied. It is observed that the thrust force and torque vary greatly with increases in speed, feed, and yttrium phosphate content in the composite. Significant differences in thrust and torque are also noticed when the drills are changed. The bioactivity study is done in simulated body fluid (SBF) for up to 28 days. The growth of bone-like apatite becomes denser with the increasing number of days for all compositions of the composite, and it is comparable to that of pure hydroxyapatite.
Keywords: bioactivity, drilling, hydroxyapatite, yttrium phosphate
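The reported trends in thrust force and torque lend themselves to a simple first-order regression over the factor levels, a common way to quantify drilling-parameter effects. The NumPy sketch below uses made-up placeholder measurements purely for illustration; it is not the paper's data or analysis.

```python
import numpy as np

# Hypothetical drilling trials: cutting speed (m/min), feed (mm/rev), thrust (N).
# The numbers are illustrative placeholders, not the paper's measurements.
speed  = np.array([10.0, 10.0, 20.0, 20.0, 30.0, 30.0])
feed   = np.array([0.05, 0.10, 0.05, 0.10, 0.05, 0.10])
thrust = np.array([38.0, 55.0, 44.0, 63.0, 51.0, 72.0])

# Fit a first-order model  F = a0 + a1*speed + a2*feed  by least squares,
# quantifying how strongly each parameter drives the thrust force.
X = np.column_stack([np.ones_like(speed), speed, feed])
(a0, a1, a2), *_ = np.linalg.lstsq(X, thrust, rcond=None)
print(f"F ~ {a0:.1f} + {a1:.2f}*speed + {a2:.0f}*feed  (N)")
```

With real dynamometer readings in place of the placeholders, positive coefficients a1 and a2 would express the abstract's observation that thrust rises with both cutting speed and feed.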
Procedia PDF Downloads 301