Search results for: sensory processing sensitivity
4829 Tc-99m MIBI Scintigraphy to Differentiate Malignant from Benign Lesions, Detected on Planar Bone Scan
Authors: Aniqa Jabeen
Abstract:
The aim of this study was to evaluate the effectiveness of Tc-99m MIBI (Technetium-99m methoxy-isobutyl-isonitrile) scintigraphy in differentiating malignant from benign lesions detected on planar bone scans. Materials and Methods: 59 patients with bone lesions were enrolled in the study. The scintigraphic findings were compared with the clinical, radiological, and histological findings. Each patient initially underwent a three-phase bone scan with Tc-99m MDP (Methylene Diphosphonate), and if a lesion was found, the patient then underwent dynamic and static MIBI scintigraphy after three to four days. The MDP and MIBI scans were evaluated visually and quantitatively. For quantitative analysis, count ratios of the lesion to the contralateral normal side (L/C) were obtained from regions of interest drawn on the scans. The Student's t-test was applied to assess the significance of differences between benign and malignant lesions; a p-value < 0.05 was considered significant. Results: The MDP scans showed increased tracer uptake, but there was no significant difference between benign and malignant uptake of the radiotracer. However, a significant difference in uptake (p-value 0.015) was seen between malignant (L/C = 3.51 ± 1.02) and benign lesions (L/C = 2.50 ± 0.42) on the MIBI scan. Three of thirty benign lesions did not show significant MIBI uptake. Seven malignant lesions appeared as false negatives. The specificity of the scan was 86.66%, and its Negative Predictive Value (NPV) was 81.25%, whereas the sensitivity of the scan was 79.31%. When axial metastases were excluded from the lesions, the sensitivity of the MIBI scan increased to 91.66% and the NPV increased to 92.85%. Conclusion: MIBI scintigraphy is useful for distinguishing malignant from benign lesions and correctly identifies metastatic lesions. The negative predictive value of the scan points towards its ability to accurately identify normal (benign) cases. However, biopsy remains the gold standard and the definitive diagnostic modality in musculoskeletal tumors. The MIBI scan provides useful information in preoperative assessment and in distinguishing between malignant and benign lesions.
Keywords: benign, malignancies, MDP bone scan, MIBI scintigraphy
Procedia PDF Downloads 404
4828 Modal FDTD Method for Wave Propagation Modeling Customized for Parallel Computing
Authors: H. Samadiyeh, R. Khajavi
Abstract:
A new FD-based procedure, the modal finite difference method (MFDM), is proposed for seismic wave propagation modeling, in which the simulation is carried out in modal space. The method employs the eigenvalues of a characteristic matrix formed from appropriate time-space FD stencils. Since the MFD runs for different modes are totally independent of each other, MFDM can easily be parallelized, and the parallel algorithm remains simple: no domain-decomposition procedure or inter-core data exchange is required. More importantly, less-significant modes can be skipped, which allows the procedure to be tuned to the level of accuracy needed. Thus, in addition to the considerable ease of parallel programming, computation and storage costs are significantly reduced. The efficiency of the method is demonstrated through several numerical examples.
Keywords: Finite Difference Method, Graphics Processing Unit (GPU), Message Passing Interface (MPI), Modal, Wave propagation
Procedia PDF Downloads 296
4827 Processing Studies and Challenges Faced in Development of High-Pressure Titanium Alloy Cryogenic Gas Bottles
Authors: Bhanu Pant, Sanjay H. Upadhyay
Abstract:
Frequently, the upper stage of high-performance launch vehicles utilizes cryogenic-tank-submerged pressurization gas bottles with high volume-to-weight efficiency to achieve a direct gain in satellite payload. Titanium alloys, owing to their high specific strength coupled with excellent compatibility with various fluids, are the materials of choice for these applications. Among the titanium alloys, two are suitable for cryogenic applications, namely Ti6Al4V-ELI and Ti5Al2.5Sn-ELI. The two-phase alpha-beta alloy Ti6Al4V-ELI is usable down to the LOX temperature of 90 K, while the single-phase alpha alloy Ti5Al2.5Sn-ELI can be used down to the LHe temperature of 4 K. High-pressure gas bottles submerged in LH2 (20 K) can store more gas than bottles of the same volume submerged in LOX (90 K). Thus, the use of these alpha-alloy gas bottles stored at 20 K gives a distinct advantage, since fewer gas bottles are needed to store the same amount of high-pressure gas, which in turn leads to a one-to-one gain in satellite payload. A cost advantage of around 15,000 $/kg of weight saved in the upper stages, and thereby a gain in satellite payload, is expected from this change. However, the processing of alpha Ti5Al2.5Sn-ELI alloy gas bottles poses challenges due to the lower forgeability of the alloy and the mode of qualification for the critical, severe application environment. The present paper describes the processing and the challenges/solutions encountered during the development of these advanced gas bottles for LH2 (20 K) applications.
Keywords: titanium alloys, cryogenic gas bottles, alpha titanium alloy, alpha-beta titanium alloy
Procedia PDF Downloads 57
4826 The Effect of Fly Ash in Dewatering of Marble Processing Wastewaters
Authors: H. A. Taner, V. Önen
Abstract:
In the thermal power plants established to meet the energy need, lignite with low calorific value and high ash content is used. Burning of these coals results in wastes such as fly ash, slag, and flue gas, which constitute a significant economic and environmental problem. However, fly ash can find uses in various sectors. In this study, the effectiveness of fly ash for suspended solid removal from marble processing wastewater containing a high concentration of suspended solids was examined. Experiments were carried out for two different suspensions, marble and travertine. In the experiments, FeCl3, Al2(SO4)3, and the anionic polymer A130 were also used for comparison with fly ash. Coagulant/flocculant type and dosage, mixing time and speed, and pH were the experimental parameters. Performance in the experimental studies was assessed from the change in interface height during sedimentation and the resulting turbidity values of the treated water. The highest sedimentation efficiency was achieved with the anionic flocculant. However, it was determined that fly ash can be used instead of FeCl3 and Al2(SO4)3 in the travertine plant as a coagulant.
Keywords: dewatering, flocculant, fly ash, marble plant wastewater
Procedia PDF Downloads 152
4825 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format
Authors: Maryam Fallahpoor, Biswajeet Pradhan
Abstract:
Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretation and thereby reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: neuroimaging informatics technology initiative (NIfTI) and digital imaging and communications in medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was labeled 1, while normal images were labeled 0. Subsequently, a U-Net-based lung segmentation module was applied to obtain 3D segmented lung regions. The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area Under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format
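The volumetric pre-processing described above (HU clipping, isotropic resampling, resizing to 128 × 128 × 60, normalization, zero-centering) can be sketched as follows; this is only an illustrative reconstruction, and the function names, interpolation order, and the small synthetic volume are assumptions rather than the study's actual code.

```python
# Minimal sketch of the CT pre-processing pipeline; names and parameters are illustrative.
import numpy as np
from scipy.ndimage import zoom

def preprocess_ct(volume, spacing, target_shape=(60, 128, 128),
                  hu_window=(-1000.0, 400.0)):
    """volume: 3D array of HU values (slices, rows, cols); spacing: (dz, dy, dx) in mm."""
    # 1. Resample to isotropic 1 mm x 1 mm x 1 mm voxels.
    iso = zoom(volume.astype(np.float32), zoom=spacing, order=1)
    # 2. Clip intensities to the (-1000, 400) HU window.
    iso = np.clip(iso, hu_window[0], hu_window[1])
    # 3. Resize to the fixed network input size (here 60 slices of 128 x 128).
    factors = [t / s for t, s in zip(target_shape, iso.shape)]
    resized = zoom(iso, zoom=factors, order=1)
    # 4. Normalize to [0, 1] and zero-center.
    norm = (resized - hu_window[0]) / (hu_window[1] - hu_window[0])
    return norm - norm.mean()

# Small synthetic scan (kept small to keep the example light): 2 mm slices, 0.7 mm in-plane.
fake_scan = np.random.randint(-1000, 400, size=(120, 256, 256)).astype(np.float32)
x = preprocess_ct(fake_scan, spacing=(2.0, 0.7, 0.7))
print(x.shape, round(float(x.mean()), 4))   # (60, 128, 128), mean approximately 0
```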
Procedia PDF Downloads 88
4824 Development of Loop Mediated Isothermal Amplification (LAMP) Assay for the Diagnosis of Ovine Theileriosis
Authors: Muhammad Fiaz Qamar, Uzma Mehreen, Muhammad Arfan Zaman, Kazim Ali
Abstract:
Ovine theileriosis is a worldwide concern, especially in tropical and subtropical areas with abundant ticks, yet it has received little attention in both developed and developing regions because of the low market value of sheep and the low-to-moderate levels of infection in small ruminant herds. Across Asia, prevalence surveys have been conducted to provide comparable estimates of flock- and animal-level prevalence of theileriosis. Timely diagnosis and control of theileriosis is a challenge for veterinarians and farmers because of the nature of the organism and the inadequacy of existing control plans. Most current work is therefore aimed at developing a technique that is farmer-friendly, inexpensive, and easy to perform in the field, since timely diagnosis of this disease will reduce the irrational use of drugs. A further aim was to determine the prevalence of theileriosis in District Jhang using the conventional method, PCR, qPCR, and LAMP. We quantified the molecular epidemiology of T. lestoquardi in sheep from Jhang district, Punjab, Pakistan. In this study, we found an overall prevalence of theileriosis of 9.1% (32/350) in sheep using the Giemsa staining technique, 13% (48/350) using PCR, 16% (56/350) using qPCR, and 17.1% (60/350) using LAMP. The specificity and sensitivity were also calculated by comparing PCR and LAMP; more positive results were obtained when the diagnosis was made with LAMP. There was little difference between the positive results of PCR and qPCR, and the fewest positive animals were detected with the Giemsa staining (conventional) method. Regarding the specificity and sensitivity of LAMP compared to PCR, cross-tabulation showed a sensitivity of 94.4% and a specificity of 78%. Advances in science must rest on ideas that can close the gaps and remove hurdles in the way of research; LAMP is one such technique and has added considerable value. It is a powerful biological diagnostic tool and has greatly assisted the proper diagnosis and treatment of certain diseases. Other diagnostic methods, such as culture and serological techniques, expose humans to considerable risk, whereas molecular diagnostic techniques like LAMP avoid such exposure to pathogens. A prompt, tentative diagnosis can be made using LAMP. Compared with LAMP, PCR has several disadvantages: it is relatively expensive, time-consuming, and complicated, while LAMP is relatively cheap, easy to perform, less time-consuming, and more accurate. The LAMP technique has removed obstacles in molecular diagnostics, making it accessible to poor and developing countries.
Keywords: distribution, Theileria, LAMP, primer sequences, PCR
Procedia PDF Downloads 103
4823 The Impact of Dispatching with Rolling Horizon Control in Sizing Thermal Storage for Solar Tower Plant Participating in Wholesale Spot Electricity Market
Authors: Navid Mohammadzadeh, Huy Truong-Ba, Michael Cholette
Abstract:
The solar tower (ST) plant is a promising technology for exploiting large-scale solar irradiation. With thermal energy storage, an ST plant has the potential to shift generation to high electricity price periods. However, the size of the storage limits the dispatchability of the plant, particularly when it must contend with uncertainty in forecasts of solar irradiation and electricity prices. The purpose of this study is to explore the required size of storage when Rolling Horizon Control (RHC) is employed for dispatch scheduling. To this end, RHC is benchmarked against a perfect knowledge (PK) forecast and two day-ahead dispatching policies. By optimizing dispatch planning under the PK policy, the optimal achievable profit for a specific storage size is determined. A sensitivity analysis using Monte Carlo simulation is conducted, and the storage size for the RHC and day-ahead policies is determined with the objective of reaching the profit obtained under the PK policy. A case study is conducted for a hypothetical ST plant with thermal storage located in South Australia that dispatches under two market scenarios: 1) fixed price and 2) wholesale spot price. The impact of each individual source of uncertainty on storage size is examined for January and August. The results show that dispatching with the RH controller reaches the optimal achievable profit with ~15% smaller storage compared to the day-ahead policies. The results of this study may be applied to the CSP plant design procedure.
Keywords: solar tower plant, spot market, thermal storage system, optimized dispatch planning, sensitivity analysis, Monte Carlo simulation
Procedia PDF Downloads 125
4822 Multivariate Analysis of Spectroscopic Data for Agriculture Applications
Authors: Asmaa M. Hussein, Amr Wassal, Ahmed Farouk Al-Sadek, A. F. Abd El-Rahman
Abstract:
In this study, a multivariate analysis of potato spectroscopic data is presented for detecting the presence or absence of brown rot disease. Near-Infrared (NIR) spectroscopy (1,350-2,500 nm) combined with multivariate analysis was used as a rapid, non-destructive technique for the detection of brown rot disease in potatoes. Spectral measurements were performed on 565 samples, which were chosen randomly at the infection site on the potato slice. In this study, 254 infected and 311 uninfected (brown rot-free) samples were analyzed using different advanced statistical analysis techniques. The discrimination performance of different multivariate analysis techniques, including classification, pre-processing, and dimension reduction, was compared. Applying a random forest classifier with different pre-processing techniques to the raw spectra gave the best performance, with a total classification accuracy of 98.7% in discriminating infected potatoes from controls.
Keywords: brown rot disease, NIR spectroscopy, potato, random forest
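A minimal sketch of this kind of workflow is shown below; the spectra are synthetic placeholders, and the standard normal variate pre-processing step is an assumed example since the study's exact pre-processing is not specified.

```python
# Illustrative sketch: pre-process NIR spectra and classify infected vs. uninfected
# samples with a random forest. Data and pre-processing choice are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 565, 230          # e.g., 1350-2500 nm sampled at 5 nm steps
X = rng.normal(size=(n_samples, n_wavelengths))
y = rng.integers(0, 2, size=n_samples)       # 1 = infected, 0 = uninfected

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    return (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

X_train, X_test, y_train, y_test = train_test_split(
    snv(X), y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```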
Procedia PDF Downloads 190
4821 Chemical vs Visual Perception in Food Choice Ability of Octopus vulgaris (Cuvier, 1797)
Authors: Al Sayed Al Soudy, Valeria Maselli, Gianluca Polese, Anna Di Cosmo
Abstract:
Cephalopods are considered a model organism with a rich behavioral repertoire. Sophisticated behaviors have been widely studied and described in different species such as Octopus vulgaris, which has evolved the largest and most complex nervous system among invertebrates. In O. vulgaris, cognitive abilities in problem-solving tasks and learning abilities are associated with long-term memory and spatial memory, mediated by highly developed sensory organs. They are equipped with sophisticated eyes able to discriminate colors even with a single photoreceptor type, a vestibular system, a 'lateral line analogue', a primitive 'hearing' system, and olfactory organs. They can recognize chemical cues either through direct contact with odor sources using their suckers or at a distance through the olfactory organs. Cephalopods are able to detect widespread waterborne molecules with the olfactory organs. However, many volatile odorant molecules are insoluble or have a very low solubility in water and must be perceived by direct contact. O. vulgaris, equipped with many chemosensory neurons located in its suckers, exhibits a peculiar behavior that can be provocatively described as 'smell by touch'. The aim of this study is to establish the priority given to chemical vs. visual perception in food choice. Materials and methods: Three different types of food (anchovies, clams, and mussels) were used, and all sessions were recorded with a digital camera. During the acclimatization period, octopuses were exposed to the three types of food to test their natural food preferences. Later, to verify whether the food preference is maintained, food was provided in transparent screw-jars with pierced lids to allow both visual and chemical recognition of the food inside. Subsequently, we tested the octopuses alternately with food in sealed transparent screw-jars and food in blind screw-jars with pierced lids. As a control, we used blind sealed jars with the same lid color to verify random choice among food types. Results and discussion: During the acclimatization period, O. vulgaris showed a higher preference for anchovies (60%), followed by clams (30%) and mussels (10%). After acclimatization, using the transparent, pierced screw-jars, the octopuses' food choices were split 50-50 between anchovies and clams, avoiding mussels. Later, guided only by the visual sense, with transparent but not pierced jars, their food preference was 100% anchovies. With pierced but not transparent jars, anchovies were the first food choice (100%), with clams as the second choice (33.3%). With no possibility of selecting food either by vision or by chemoreception, the results were 20% anchovies, 20% clams, and 60% mussels. We conclude that O. vulgaris uses both chemical and visual senses in an integrative way in food choice, but if one of them is excluded, it appears clear that its food preference relies on the chemical sense more than on visual perception.
Keywords: food choice, Octopus vulgaris, olfaction, sensory organs, visual sense
Procedia PDF Downloads 221
4820 Big Data Analysis with Rhipe
Authors: Byung Ho Jung, Ji Eun Shin, Dong Hoon Lim
Abstract:
Rhipe, which integrates R with the Hadoop environment, makes it possible to process and analyze massive amounts of data in a distributed processing environment. In this paper, we implemented multiple regression analysis using Rhipe with various sizes of real data. Experimental results comparing the performance of Rhipe with the stats and biglm packages available on bigmemory showed that Rhipe was faster than the other packages, owing to parallel processing with an increasing number of map tasks as the data size grows. We also compared the computing speeds of the pseudo-distributed and fully-distributed modes of configuring a Hadoop cluster. The results showed that the fully-distributed mode was faster than the pseudo-distributed mode, and its computing speed increased as the number of data nodes increased.
Keywords: big data, Hadoop, parallel regression analysis, R, Rhipe
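The decomposition that lets regression scale across map tasks can be illustrated with a small sketch; this is a Python illustration of the underlying principle (per-block sufficient statistics combined in a reduce step), not Rhipe's R/Hadoop API.

```python
# Sketch of map/reduce-style least squares: each block of rows contributes X'X and X'y,
# the partial sums are combined (the "reduce" step), and the normal equations are solved once.
import numpy as np

rng = np.random.default_rng(1)
n, p = 100_000, 5
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, p))])   # intercept + predictors
beta_true = np.arange(p + 1, dtype=float)
y = X @ beta_true + rng.normal(scale=0.1, size=n)

def map_block(Xb, yb):
    """Map step: sufficient statistics for one block of rows."""
    return Xb.T @ Xb, Xb.T @ yb

# Split the data into "map task" blocks and reduce by summation.
blocks = [map_block(Xb, yb) for Xb, yb in zip(np.array_split(X, 8), np.array_split(y, 8))]
XtX = sum(b[0] for b in blocks)
Xty = sum(b[1] for b in blocks)
beta_hat = np.linalg.solve(XtX, Xty)
print(np.round(beta_hat, 3))   # close to beta_true
```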
Procedia PDF Downloads 497
4819 Temporal Progression of Episodic Memory as Function of Encoding Condition and Age: Further Investigation of Action Memory in School-Aged Children
Authors: Farzaneh Badinlou, Reza Kormi-Nouri, Monika Knopf
Abstract:
Studies of adults' episodic memory have found that enacted encoding not only improves recall performance but also leads to faster retrieval during the recall period. The current study focused on exploring the temporal progression of different encoding conditions in younger and older school children. 204 students from two age groups, 8 and 14 years, participated in this study. During the study phase, we examined action encoding in two forms: participants performed the phrases themselves (SPT) or observed the experimenter perform them (EPT); these were compared with verbal encoding, in which participants listened to verbal action phrases (VT). At the test phase, we used immediate and delayed free recall tests. We observed significant differences in memory performance as a function of age group and encoding condition in both immediate and delayed free recall tests. Moreover, the temporal progression of recall was faster in older children compared with younger ones. The interaction of age group and encoding condition was significant only in delayed recall, showing that younger children performed better in EPT whereas older children performed better in SPT. It is proposed that the enactment effect in the form of SPT enhances item-specific processing, whereas EPT improves relational information processing, and these differential processes are responsible for the results observed in younger and older children. The role of memory strategies and information processing methods in younger and older children was considered in this study. Moreover, the temporal progression of recall was faster for action encoding in the form of SPT and EPT compared with verbal encoding in both immediate and delayed free recall, and the size of the enactment effect increased steadily throughout the recall period. The results of the present study provide further evidence that action memory is explained with an emphasis on the notion of information processing and strategic views. These results also reveal the temporal progression of recall as a new dimension of episodic memory in children.
Keywords: action memory, enactment effect, episodic memory, school-aged children, temporal progression
Procedia PDF Downloads 274
4818 Controlling the Process of a Chicken Dressing Plant through Statistical Process Control
Authors: Jasper Kevin C. Dionisio, Denise Mae M. Unsay
Abstract:
In a manufacturing firm, controlling the process ensures that optimum efficiency, productivity, and quality are achieved in the organization. An operation with no standardized procedure yields poor productivity, inefficiency, and an out-of-control process. This study focuses on controlling the small intestine processing of a chicken dressing plant through the use of Statistical Process Control (SPC). Since the operation does not employ a standard procedure and has no established standard time, the process, assessed from the observed times of the overall small intestine processing operation using an X-Bar R control chart, was found to be out of control. To solve this problem, the researchers conducted a motion and time study aiming to establish a standard procedure for the operation. The normal operator was selected using the Westinghouse Rating System. Instead of utilizing the traditional motion and time study, the researchers used the X-Bar R control chart to determine the process average to be used in establishing the standard time. The observed times of the normal operator were recorded and plotted on the X-Bar R control chart. Out-of-control points due to assignable causes were removed, and the process average, i.e., the average time in which the normal operator conducted the process, now in control and free from outliers, was obtained. The process average was then used to determine the standard time of small intestine processing. As a recommendation, the researchers suggest implementing the established standard time, which is consistent with the standard procedure adopted from the normal operator. With this recommendation, the whole operation is expected to yield a 45.54% increase in productivity.
Keywords: motion and time study, process controlling, statistical process control, X-Bar R control chart
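A minimal sketch of the X-Bar R chart computation is given below; the subgroup size, timing data, and variable names are illustrative placeholders, while the chart constants shown are the standard values for subgroups of five observations.

```python
# X-Bar R chart sketch: subgroup means and ranges, control limits from standard
# constants, and flagging of out-of-control points. Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
subgroups = rng.normal(loc=60.0, scale=3.0, size=(20, 5))   # cycle times (s), n = 5 per subgroup

A2, D3, D4 = 0.577, 0.0, 2.114                  # standard chart constants for n = 5
xbar = subgroups.mean(axis=1)                   # subgroup means
r = subgroups.max(axis=1) - subgroups.min(axis=1)   # subgroup ranges
xbar_bar, r_bar = xbar.mean(), r.mean()

ucl_x, lcl_x = xbar_bar + A2 * r_bar, xbar_bar - A2 * r_bar
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar

out_of_control = np.where((xbar > ucl_x) | (xbar < lcl_x) | (r > ucl_r) | (r < lcl_r))[0]
print("X-bar limits:", round(lcl_x, 2), round(ucl_x, 2), "| R limits:", round(lcl_r, 2), round(ucl_r, 2))
print("out-of-control subgroups:", out_of_control)
# In the study, such points (due to assignable causes) are removed before the remaining
# process average is used to set the standard time.
```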
Procedia PDF Downloads 217
4817 Sensitivity Analysis and Solitary Wave Solutions to the (2+1)-Dimensional Boussinesq Equation in Dispersive Media
Authors: Naila Nasreen, Dianchen Lu
Abstract:
This paper explores the dynamical behavior of the (2+1)-dimensional Boussinesq equation, a nonlinear water wave equation used to model wave packets in dispersive media with weak nonlinearity. The equation describes how long waves generated in shallow water propagate under the influence of gravity. The (2+1)-dimensional Boussinesq equation combines the two-way propagation of the classical Boussinesq equation with dependence on a second spatial variable, as occurs in the two-dimensional Kadomtsev-Petviashvili equation. It provides a description of the head-on collision of oblique waves and possesses some interesting properties. The governing model is analyzed with the assistance of the Riccati equation mapping method, a relatively simple integration tool. The solutions have been extracted in different forms: solitary wave solutions as well as hyperbolic and periodic solutions. Moreover, a sensitivity analysis is demonstrated for the wave profiles of the designed dynamical structural system, where the soliton wave velocity and wave number parameters regulate the water wave singularity. In addition to being helpful for elucidating nonlinear partial differential equations, the method in use recovers previously reported solutions and extracts fresh exact solutions. Assuming appropriate values for the parameters, graphs of various shapes are sketched to provide information about the visual form of the obtained results. This paper's findings support the efficacy of the approach taken in analyzing nonlinear dynamical behavior. We believe this research will be of interest to a wide variety of engineers who work with engineering models. The findings show the effectiveness, simplicity, and generalizability of the chosen computational approach, even when applied to complicated systems in a variety of fields, especially ocean engineering.
Keywords: (2+1)-dimensional Boussinesq equation, solitary wave solutions, Riccati equation mapping approach, nonlinear phenomena
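For reference, one commonly cited form of the (2+1)-dimensional Boussinesq equation is shown below; the signs and coefficients depend on the normalization adopted, so this should be read as a representative form rather than the precise equation used in the paper.

```latex
% A representative form of the (2+1)-dimensional Boussinesq equation
% (coefficients depend on the chosen normalization):
u_{tt} - u_{xx} - u_{yy} - \left(u^{2}\right)_{xx} - u_{xxxx} = 0
```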
Procedia PDF Downloads 101
4816 The Analgesic Effect of Electroacupuncture in a Murine Fibromyalgia Model
Authors: Bernice Jeanne Lottering, Yi-Wen Lin
Abstract:
Introduction: Chronic pain lacks objective parameters for measuring disease severity and treatment efficacy in conditions such as fibromyalgia (FM). Persistent widespread pain and generalized tenderness are the characteristic symptoms, affecting a large portion of the global population, particularly females. The disease has proven refractory to conventional treatment, largely because of a lack of etiological and pathogenic understanding of its development. Emerging evidence indicates that the central nervous system (CNS) plays a critical role in the amplification of pain signals and the neurotransmitters associated therewith. Various stimuli activate the channels present on nociceptor terminals, thereby triggering nociceptive impulses along the pain pathways. The transient receptor potential vanilloid 1 (TRPV1) channel functions as a molecular integrator for numerous sensory inputs, such as nociception, and was explored in the current study. Current intervention approaches face a multitude of challenges, ranging from the search for effective therapeutic interventions to the limitations of pathognomonic criteria resulting from incomplete understanding and partial evidence on the mechanisms of action of FM. It remains unclear whether electroacupuncture (EA) plays an integral role in the functioning of the TRPV1 pathway and whether it can reduce the chronic pain induced by FM. Aims: The aim of this study was to explore the mechanisms underlying the activation and modulation of the TRPV1 channel pathway in a cold-stress model of FM applied to a murine model, and to investigate the effect of EA in the treatment of the mechanical and thermal pain expressed in FM. Methods: 18 C57BL/6 wild-type and 6 TRPV1 knockout (KO) mice, aged 8-12 weeks, were exposed to an intermittent cold stress-induced fibromyalgia-like pain model, with or without EA treatment at Zusanli ST36 (2 Hz/20 min) on days 3 to 5. Von Frey and Hargreaves behavioral tests were used to analyze the mechanical and thermal pain thresholds on days 0, 3, and 5 in the control group (C), the FM group (FM), the FM mice treated with EA (FM + EA), and the FM KO group. Results: An increase in mechanical and thermal hyperalgesia was observed in the FM, EA, and KO groups compared to the control group. This initial increase was reduced in the EA group, which points to the efficacy of EA treatment in attenuating nociceptive sensitization and FM-associated pain. Discussion: An increase in nociceptive sensitization was observed through higher withdrawal thresholds in the von Frey mechanical test and the Hargreaves thermal test. TRPV1 function in mice has been associated with these nociceptive pathways, and the behavioral test results suggest that TRPV1 upregulation is central to the FM-induced hyperalgesia. These data were supported by the decrease in sensitivity observed in the TRPV1 KO group. Moreover, treatment with EA decreased this FM-induced nociceptive sensitization, suggesting that TRPV1 upregulation and overexpression can be attenuated by EA at bilateral ST36. This evidence compellingly implies that the analgesic effect of EA is associated with TRPV1 downregulation.
Keywords: fibromyalgia, electroacupuncture, TRPV1, nociception
Procedia PDF Downloads 139
4815 A Similarity Measure for Classification and Clustering in Image Based Medical and Text Based Banking Applications
Authors: K. P. Sandesh, M. H. Suman
Abstract:
Text processing plays an important role in information retrieval, data mining, and web search, and measuring the similarity between documents is an important operation in the text processing field. In this project, a new similarity measure is proposed. To compute the similarity between two documents with respect to a feature, the proposed measure takes the following three cases into account: (1) the feature appears in both documents; (2) the feature appears in only one document; and (3) the feature appears in neither document. The proposed measure is extended to gauge the similarity between two sets of documents. The effectiveness of the measure is evaluated on several real-world data sets for text classification and clustering problems, especially in the banking and health sectors. The results show that the performance obtained by the proposed measure is better than that achieved by the other measures.
Keywords: document classification, document clustering, entropy, accuracy, classifiers, clustering algorithms
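The abstract does not give the actual formula, so the sketch below is only a toy illustration of how a feature-wise measure can treat the three cases differently; the function name, weighting, and penalty term are assumptions, not the proposed measure.

```python
# Toy three-case similarity: reward agreement on shared features, penalize features
# present in only one document, and ignore features absent from both.
import numpy as np

def three_case_similarity(d1, d2, penalty=0.5):
    """d1, d2: non-negative feature vectors (e.g., term frequencies)."""
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    both = (d1 > 0) & (d2 > 0)          # case 1: feature present in both documents
    one = (d1 > 0) ^ (d2 > 0)           # case 2: feature present in exactly one
    # case 3 (absent from both) contributes nothing and is excluded from the average.
    scale = np.maximum(d1, d2)
    scale[scale == 0] = 1.0
    agreement = 1.0 - np.abs(d1 - d2) / scale    # similar counts in case 1 score high
    score = agreement[both].sum() - penalty * one.sum()
    considered = both.sum() + one.sum()
    return score / considered if considered else 0.0

print(three_case_similarity([2, 0, 1, 0], [3, 0, 0, 0]))
```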
Procedia PDF Downloads 518
4814 A Control Model for the Dismantling of Industrial Plants
Authors: Florian Mach, Eric Hund, Malte Stonis
Abstract:
The dismantling of disused industrial facilities such as nuclear power plants or refineries is an enormous challenge for the planning and control of the logistic processes. Existing control models do not meet the requirements for the proper dismantling of industrial plants. Therefore, this paper presents an approach for the control of dismantling and post-processing processes (e.g., decontamination) in plant decommissioning. In contrast to existing approaches, the dismantling sequence and depth are selected depending on the capacity utilization of the required post-processing processes, while also considering individual characteristics of the respective dismantling tasks (e.g., decontamination success rate, uncertainties regarding process times). The results can be used in the dismantling of industrial plants (e.g., nuclear power plants) to reduce dismantling time and costs by avoiding bottlenecks such as capacity constraints.
Keywords: dismantling management, logistics planning and control models, nuclear power plant dismantling, reverse logistics
Procedia PDF Downloads 304
4813 Comparison of Security Challenges and Issues of Mobile Computing and Internet of Things
Authors: Aabiah Nayeem, Fariha Shafiq, Mustabshra Aftab, Rabia Saman Pirzada, Samia Ghazala
Abstract:
In this modern era of technology, the concept of the Internet of Things is popular in every domain. It is a widely distributed system of things in which the data collected from sensory devices are transmitted, analyzed locally or collectively, and then broadcast to the network, where action can be taken remotely via mobile or web apps. Today's mobile computing is also gaining importance, as services are provided during mobility. Through mobile computing, data are transmitted via computers without being physically connected to a fixed point. The challenge is to provide services with high speed and security. The data gathered from mobile devices must also be processed in a secure way. Mobile computing is strongly influenced by the Internet of Things. In this paper, we discuss the security issues and challenges of the Internet of Things and mobile computing, and we compare the two on the basis of their similarities and dissimilarities.
Keywords: embedded computing, internet of things, mobile computing, wireless technologies
Procedia PDF Downloads 316
4812 Diagnostic Performance of Mean Platelet Volume in the Diagnosis of Acute Myocardial Infarction: A Meta-Analysis
Authors: Kathrina Aseanne Acapulco-Gomez, Shayne Julieane Morales, Tzar Francis Verame
Abstract:
Mean platelet volume (MPV) is the most accurate measure of the size of platelets and is routinely measured by most automated hematological analyzers. Several studies have shown associations between MPV and cardiovascular risks and outcomes. Although its measurement may provide useful data, MPV remains a diagnostic tool that is yet to be included in routine clinical decision making. The aim of this systematic review and meta-analysis is to determine summary estimates of the diagnostic accuracy of mean platelet volume for the diagnosis of myocardial infarction among adult patients with angina and/or its equivalents, in terms of sensitivity, specificity, diagnostic odds ratio, and likelihood ratios, and to determine the difference in mean MPV values between those with MI and non-MI controls. The primary search was conducted in the electronic databases PubMed, Cochrane Review CENTRAL, HERDIN (Health Research and Development Information Network), Google Scholar, the Philippine Journal of Pathology, and the Philippine College of Physicians Philippine Journal of Internal Medicine. The reference lists of original reports were also searched. Cross-sectional, cohort, and case-control articles studying the diagnostic performance of mean platelet volume in the diagnosis of acute myocardial infarction in adult patients were included. Studies were included if: (1) CBC was taken upon presentation to the ER or upon admission (within 24 hours of symptom onset); (2) myocardial infarction was diagnosed with serum markers, ECG, or according to guidelines accepted by the cardiology societies (American Heart Association (AHA), American College of Cardiology (ACC), European Society of Cardiology (ESC)); and (3) outcomes were measured as a significant difference and/or sensitivity and specificity. The authors independently screened all potential studies identified by the search for inclusion. Eligible studies were appraised using well-defined criteria. Any disagreement between the reviewers was resolved through discussion and consensus. The overall mean MPV value of those with MI (9.702 fl; 95% CI 9.07 – 10.33) was higher than that of the non-MI control group (8.85 fl; 95% CI 8.23 – 9.46). Interpretation of the calculated t-value of 2.0827 showed that there was a significant difference in the mean MPV values of those with MI and the non-MI controls. The summary sensitivity (Se) and specificity (Sp) for MPV were 0.66 (95% CI 0.59 – 0.73) and 0.60 (95% CI 0.43 – 0.75), respectively. The pooled diagnostic odds ratio (DOR) was 2.92 (95% CI 1.90 – 4.50). The positive likelihood ratio of MPV in the diagnosis of myocardial infarction was 1.65 (95% CI 1.20 – 22.27), and the negative likelihood ratio was 0.56 (95% CI 0.50 – 0.64). The intended role for MPV in the diagnostic pathway of myocardial infarction would perhaps be best as a triage tool. With a DOR of 2.92, MPV values can discriminate between those who have MI and those who do not. For a patient with angina presenting with an elevated MPV value, it is 1.65 times more likely that he has MI. Thus, the decision to treat a patient with angina or its equivalents as a case of MI could be supported by an elevated MPV value.
Keywords: mean platelet volume, MPV, myocardial infarction, angina, chest pain
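As a quick sanity check, the pooled likelihood ratios and diagnostic odds ratio follow directly from the summary sensitivity and specificity reported above; the snippet below reproduces the point estimates only (the confidence intervals require the underlying study data).

```python
# Derive LR+, LR-, and DOR from the pooled sensitivity and specificity reported above.
sensitivity, specificity = 0.66, 0.60

lr_pos = sensitivity / (1 - specificity)          # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity          # negative likelihood ratio
dor = lr_pos / lr_neg                             # diagnostic odds ratio

print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}, DOR = {dor:.2f}")
# LR+ = 1.65, LR- = 0.57, DOR = 2.91 -- consistent with the pooled estimates reported.
```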
Procedia PDF Downloads 87
4811 Administrators' Information Management Capacity and Decision-Making Effectiveness on Staff Promotion in the Teaching Service Commissions in South-West, Nigeria
Authors: Olatunji Sabitu Alimi
Abstract:
This study investigated the extent to which administrators' information storage, retrieval, and processing capacities influence decisions on staff promotion in the Teaching Service Commissions (TESCOMs) in South-West Nigeria. One research question and two research hypotheses were formulated and tested at the 0.05 level of significance. The study used the descriptive survey research design. One hundred (100) staff on salary grade level 09 constituted the sample. Multi-stage, stratified, and simple random sampling techniques were used to select the 100 staff from the TESCOMs in South-West Nigeria. Two questionnaires, titled Administrators' Information Storage, Retrieval and Processing Capacities (AISRPC) and Staff Promotion Effectiveness (SPE), were used for data collection. The instrument was validated and subjected to test-retest, and a reliability coefficient of r = 0.79 was obtained. The data were collected and analyzed using the Pearson Product Moment Correlation coefficient and simple percentages. The study found that administrators at TESCOM stored their information in files, hard copies, soft copies, the open registry, and departmentally in varying degrees, while they processed information manually and electronically for decision making. In addition, there is a significant relationship between administrators' information storage and retrieval capacities in the TESCOMs in South-West Nigeria (r cal = 0.598 > r table = 0.195). Furthermore, administrators' information processing capacity and staff promotion effectiveness were found to be significantly related (r cal = 0.209 > r table = 0.195 at the 0.05 level of significance). The study recommended that training, seminars, and workshops be organized for administrators on information management, and that educational organizations provide Information and Communication Technology (ICT) equipment for the administrators in the TESCOMs. TESCOM staff should be promoted once they have satisfied the promotion criteria, such as spending the required number of years on a grade level, a clean record of service, and the existence of a vacancy.
Keywords: information processing capacity, staff promotion effectiveness, teaching service commission, Nigeria
Procedia PDF Downloads 533
4810 A Cost Effective Approach to Develop Mid-Size Enterprise Software Adopted the Waterfall Model
Authors: Mohammad Nehal Hasnine, Md Kamrul Hasan Chayon, Md Mobasswer Rahman
Abstract:
Organizational tendencies towards computer-based information processing have been observed noticeably in third-world countries. Many enterprises are taking major initiatives towards a computerized working environment because of the massive benefits of computer-based information processing. However, designing and developing information resource management software for small and mid-size enterprises under budget constraints and strict deadlines is always challenging for software engineers. Therefore, we introduce an approach to design mid-size enterprise software cost-effectively using the Waterfall model, one of the SDLC (Software Development Life Cycle) models. To fulfill the research objectives, in this study we developed a mid-size enterprise software product named "BSK Management System" that assists enterprise software clients with information resource management and performs complex organizational tasks. The Waterfall model phases were applied to ensure that all functions, user requirements, strategic goals, and objectives were met. In addition, a Rich Picture, Structured English, and a Data Dictionary were implemented and investigated properly in an engineering manner. Furthermore, an assessment survey with 20 participants was conducted to investigate the usability and performance of the proposed software. The survey results indicated that our system featured simple interfaces, easy operation and maintenance, quick processing, and reliable and accurate transactions.
Keywords: end-user application development, enterprise software design, information resource management, usability
Procedia PDF Downloads 438
4809 Prediction of Product Size Distribution of a Vertical Stirred Mill Based on Breakage Kinetics
Authors: C. R. Danielle, S. Erik, T. Patrick, M. Hugh
Abstract:
In the last decade, there has been an increase in demand for fine grinding due to the depletion of coarse-grained orebodies and an increase in the processing of finely disseminated minerals and complex orebodies. These ores have created new challenges in concentrator design because fine and ultra-fine grinding is required to achieve acceptable recovery rates. Therefore, the correct design of a grinding circuit is important for minimizing unit costs and increasing product quality. The use of ball mills for grinding in fine size ranges is inefficient, and therefore vertical stirred grinding mills are becoming increasingly popular in the mineral processing industry due to their well-known high energy efficiency. This work presents a hypothesis for a methodology to predict the product size distribution of a vertical stirred mill using a Bond ball mill. The Population Balance Model (PBM) was used to empirically analyze the performance of a vertical mill and a Bond ball mill. The breakage parameters obtained for both grinding mills are compared to determine the possibility of predicting the product size distribution of a vertical mill based on the results obtained from the Bond ball mill. The biggest advantage of this methodology is that most mineral processing laboratories already have a Bond ball mill to perform the tests suggested in this study. Preliminary results show the possibility of predicting the performance of a laboratory vertical stirred mill using a Bond ball mill.
Keywords: Bond ball mill, population balance model, product size distribution, vertical stirred mill
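For context, the size-discretized batch form of the population balance model commonly used to describe breakage kinetics is shown below, where m_i is the mass fraction in size class i, S_i the selection (breakage rate) function, and b_ij the breakage distribution function; the specific parameter-estimation procedure used in the study is not reproduced here.

```latex
% Size-discretized batch grinding population balance (standard form):
\frac{\mathrm{d}m_i(t)}{\mathrm{d}t} \;=\; -\,S_i\, m_i(t) \;+\; \sum_{j=1}^{i-1} b_{ij}\, S_j\, m_j(t)
```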
Procedia PDF Downloads 292
4808 Distribution of Putative Dopaminergic Neurons and Identification of D2 Receptors in the Brain of Fish
Authors: Shweta Dhindhwal
Abstract:
Dopamine is an essential neurotransmitter in the central nervous system of all vertebrates and plays an important role in many processes such as motor function, learning and behavior, and sensory activity. One of the important functions of dopamine is the release of pituitary hormones. It is synthesized from the amino acid tyrosine. Two types of dopamine receptors, D1-like and D2-like, have been reported in fish. Dopamine-containing neurons are located in the olfactory bulbs, the ventral regions of the pre-optic area, and the tuberal hypothalamus. The distribution of the dopaminergic system has not been studied in the murrel, Channa punctatus. The present study deals with the identification of D2 receptors in the brain of the murrel. A phylogenetic tree has been constructed using a partial sequence of the D2 receptor. The distribution of putative dopaminergic neurons in the brain has been investigated, and formalin-induced hypertrophy of neurosecretory cells in the murrel has also been studied.
Keywords: dopamine, fish, pre-optic area, murrel
Procedia PDF Downloads 421
4807 A Network of Nouns and Their Features: A Neurocomputational Study
Authors: Skiker Kaoutar, Mounir Maouene
Abstract:
Neuroimaging studies indicate that a large fronto-parieto-temporal network supports nouns and their features, with some areas storing semantic knowledge (visual, auditory, olfactory, gustatory, ...), other areas storing lexical representations, and others implicated in general semantic processing. However, it is not well understood how this fronto-parieto-temporal network can be modulated by different semantic tasks and by different semantic relations between nouns. In this study, we combine a behavioral semantic network, functional MRI studies involving object-related nouns, and brain network studies to explain how different semantic tasks and different semantic relations between nouns can modulate activity within the brain network of nouns and their features. We first describe how nouns and their features form a large-scale brain network. To this end, we examine the connectivities between areas recruited during the processing of nouns to determine which configurations of interacting areas are possible. We can thus identify whether, for example, brain areas that store semantic knowledge communicate via functional/structural links with areas that store lexical representations. Second, we examine how this network is modulated by different semantic tasks involving nouns, and finally, we examine how category-specific activation may result from the semantic relations among nouns. The results indicate that the brain network of nouns and their features is highly modulated and flexible across different semantic tasks and semantic relations. In the end, this study can be used as a guide to help neuroscientists interpret the pattern of fMRI activations detected in the semantic processing of nouns. Specifically, it can help interpret the category-specific activations observed extensively in a large number of neuroimaging and clinical studies.
Keywords: nouns, features, network, category specificity
Procedia PDF Downloads 521
4806 Phonological Processing and Its Role in Pseudo-Word Decoding in Children Learning to Read Kannada Language between 5.6 to 8.6 Years
Authors: Vangmayee. V. Subban, Somashekara H. S, Shwetha Prabhu, Jayashree S. Bhat
Abstract:
Introduction and Need: Phonological processing is critical in learning to read alphabetic and non-alphabetic languages. However, its role in learning to read Kannada, an alphasyllabary, is equivocal. The literature has focused on the developmental role of phonological awareness in reading. To the best of the authors' knowledge, the role of phonological memory and phonological naming has not been addressed in the alphasyllabary Kannada language. Therefore, there is a need to evaluate the comprehensive role of phonological processing skills in Kannada on word decoding skills during the early years of schooling. Aim and Objectives: The present study aimed to explore phonological processing abilities and their role in learning to decode pseudowords in children learning to read the Kannada language during the initial years of formal schooling, between 5.6 and 8.6 years. Method: In this cross-sectional study, 60 typically developing Kannada-speaking children, 20 each from Grade I, Grade II, and Grade III in the age ranges 5.6 to 6.6 years, 6.7 to 7.6 years, and 7.7 to 8.6 years respectively, were selected from Kannada-medium schools. Phonological processing abilities were assessed using an assessment tool specifically developed to address the objectives of the present research. The assessment tool was content-validated by subject experts and had good inter- and intra-subject reliability. Phonological awareness was assessed at the syllable level using syllable segmentation, blending, and syllable stripping at initial, medial, and final positions. Phonological memory was assessed using a pseudoword repetition task, and phonological naming was assessed using rapid automatized naming of objects. Both the phonological awareness and phonological memory measures were scored for response accuracy, whereas Rapid Automatized Naming (RAN) was scored for total naming speed. Results: Mean-score comparison using one-way ANOVA revealed a significant difference (p ≤ 0.05) between the groups on all the measures of phonological awareness, pseudoword repetition, rapid automatized naming, and pseudoword reading. Subsequent post-hoc grade-wise comparison using the Bonferroni test revealed significant differences (p ≤ 0.05) between each of the grades for all the tasks except (p ≥ 0.05) syllable blending, syllable stripping, and pseudoword repetition between Grade II and Grade III. Pearson correlations revealed a highly significant positive correlation (p = 0.000) between all the variables except phonological naming, which had significant negative correlations. However, the correlation coefficients were higher for the phonological awareness measures than for the others. Hence, phonological awareness was chosen as the first independent variable to enter the hierarchical regression equation, followed by rapid automatized naming and, finally, pseudoword repetition. The regression analysis revealed syllable awareness as the single most significant predictor of pseudoword reading, explaining a unique variance of 74%, and there was no significant change in R² when RAN and pseudoword repetition were subsequently added to the regression equation. Conclusion: The present study concluded that syllable awareness matures completely by Grade II, whereas phonological memory and phonological naming continue to develop beyond Grade III. Amongst the phonological processing skills, phonological awareness, especially syllable awareness, is more crucial for word decoding than phonological memory and naming during the initial years of schooling.
Keywords: phonological awareness, phonological memory, phonological naming, phonological processing, pseudo-word decoding
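The hierarchical regression step described above can be sketched as follows: predictors are entered in blocks (syllable awareness, then RAN, then pseudoword repetition) and the change in R² is inspected at each step. The data, variable names, and effect sizes in this sketch are synthetic placeholders, not the study's measurements.

```python
# Hedged sketch of hierarchical (blockwise) regression with R-squared change.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 60
syllable_awareness = rng.normal(size=n)
ran_speed = rng.normal(size=n)
pseudoword_repetition = rng.normal(size=n)
pseudoword_reading = 0.9 * syllable_awareness + rng.normal(scale=0.5, size=n)

blocks = [("syllable awareness", syllable_awareness),
          ("+ RAN", ran_speed),
          ("+ pseudoword repetition", pseudoword_repetition)]

X, prev_r2 = np.empty((n, 0)), 0.0
for label, predictor in blocks:
    X = np.column_stack([X, predictor])
    model = sm.OLS(pseudoword_reading, sm.add_constant(X)).fit()
    print(f"{label:26s} R^2 = {model.rsquared:.3f}  "
          f"delta R^2 = {model.rsquared - prev_r2:.3f}")
    prev_r2 = model.rsquared
```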
Procedia PDF Downloads 175
4805 Evaluating Seismic Earth Pressure Effects on Building Lateral Stability: Sensitivity to Retention Height Differences and Sloped Site Conditions
Authors: Rod Davis, Sara Saminfar
Abstract:
Earthquakes can induce dynamic earth pressures on retaining walls in addition to the static earth pressures. This raises questions about how to effectively combine the seismic lateral earth pressure with other loads on buildings, including the static lateral earth pressure. When basement walls retain soil with differing exterior grades on opposite sides, the seismic increment of active earth pressure should be considered. Additionally, buildings situated on sloped sites with stepped retention may experience unique dynamic effects due to soil-structure interaction, potentially amplifying the lateral pressures exerted on the retaining walls and influencing the building's response during seismic events. To account for the dynamic effects of the retained soil on the building's response, it is essential to connect the building structure with the surrounding soil so that their interaction is captured as the embedded structure and the surrounding soil move together during an earthquake. Consequently, a finite element model of the building is developed, with rigid retaining walls restrained to the floor diaphragms. This paper aims to explore the dynamic effects of retained soil on the lateral stability of buildings and the sensitivity of the building's response to differences in the retained heights on opposite sides of the building basement. Furthermore, the results are compared with those from a sloped site to evaluate the impact of stepped retention on dynamic soil pressure. These findings will help establish a minimum threshold for differences in retained heights on opposite sides of a building that necessitates the inclusion of dynamic soil pressure in the building's lateral stability analysis.
Keywords: dynamic earth pressures, soil-structure interaction, stepped retention, building retention
Procedia PDF Downloads 12
4804 System Identification of Timber Masonry Walls Using Shaking Table Test
Authors: Timir Baran Roy, Luis Guerreiro, Ashutosh Bagchi
Abstract:
Dynamic studies are important for the design, repair, and rehabilitation of structures, and they have played an important role in characterizing the behavior of structures such as bridges, dams, and high-rise buildings. There has been substantial development in this area over the last few decades, especially in the field of dynamic identification techniques for structural systems. Frequency Domain Decomposition (FDD) and time-domain decomposition are the most commonly used methods to identify modal parameters such as natural frequency, modal damping, and mode shape. The focus of the present research is to study the dynamic characteristics of typical timber masonry walls commonly used in Portugal. For that purpose, multi-storey structural prototypes of such walls were tested on a seismic shaking table at the National Laboratory for Civil Engineering, Portugal (LNEC). Signal processing was performed on the output response, collected from accelerometers during the shaking table experiment on the prototype. In the present work, signal processing of the output response, based on the input response, was done in two ways: FDD and Stochastic Subspace Identification (SSI). To estimate the values of the modal parameters, algorithms for FDD were formulated, and parametric functions for SSI were computed. Finally, the estimated values from both methods were compared to assess the accuracy of the two techniques.
Keywords: frequency domain decomposition (FDD), modal parameters, signal processing, stochastic subspace identification (SSI), time domain decomposition
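A minimal sketch of the FDD step is given below: build the cross power spectral density matrix of the measured accelerations, take an SVD at each frequency line, and read candidate natural frequencies from peaks of the first singular value. The channel data, window length, and peak threshold are illustrative assumptions, not the test data from LNEC.

```python
# FDD sketch on synthetic two-channel acceleration data with resonances near 3 Hz and 8 Hz.
import numpy as np
from scipy.signal import csd, find_peaks

fs, n = 200.0, 2**14                       # sampling rate (Hz), samples per channel
t = np.arange(n) / fs
rng = np.random.default_rng(4)
acc = np.vstack([np.sin(2*np.pi*3*t) + 0.5*np.sin(2*np.pi*8*t) + 0.2*rng.normal(size=n),
                 0.8*np.sin(2*np.pi*3*t) - 0.4*np.sin(2*np.pi*8*t) + 0.2*rng.normal(size=n)])

n_ch = acc.shape[0]
f, _ = csd(acc[0], acc[0], fs=fs, nperseg=2048)
G = np.zeros((len(f), n_ch, n_ch), dtype=complex)      # CSD matrix at each frequency
for i in range(n_ch):
    for j in range(n_ch):
        _, G[:, i, j] = csd(acc[i], acc[j], fs=fs, nperseg=2048)

s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0] for k in range(len(f))])
peaks, _ = find_peaks(s1, height=0.1 * s1.max())
print("candidate natural frequencies (Hz):", f[peaks])
```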
Procedia PDF Downloads 264
4803 Studying the Spatial Aspects of Visual Attention Processing in Global Precedence Paradigm
Authors: Shreya Borthakur, Aastha Vartak
Abstract:
This behavioral experiment aimed to investigate the global precedence phenomenon in a South Asian sample and its correlation with mobile screen time. The global precedence effect refers to the tendency to process the overall structure before attending to specific details. Participants completed attention tasks involving global and local stimuli with varying consistencies. The results showed a tendency towards local precedence, but no significant differences in reaction times were found between consistency levels or attention conditions. However, the correlation analysis revealed that participants with higher screen time exhibited a stronger negative correlation with local attention, suggesting that excessive screen usage may affect perceptual organization. Further research is needed to explore this relationship and to understand the influence of screen time on cognitive processing.
Keywords: global precedence, visual attention, perceptual organization, screen time, cognition
Procedia PDF Downloads 68
4802 Correlation between Electromyographic and Textural Parameters for Different Textured Indian Foods Using Principal Component Analysis
Authors: S. Rustagi, N. S. Sodhi, B. Dhillon, T. Kaur
Abstract:
The objective of this study was to determine whether there is any relationship between electromyographic (EMG) and textural parameters during food texture evaluation. In this study, a total of eighteen mastication variables were measured by EMG for the entire mastication, per-chew mastication, and three different stages of mastication (viz. early, middle, and late) for five different foods using eight human subjects. Cluster analysis was used to reduce the number of mastication variables from 18 to 5 so that principal component analysis (PCA) could be applied to them. The PCA yielded two meaningful principal components. The principal component scores for each food were calculated and correlated with five textural parameters (viz. hardness, cohesiveness, chewiness, gumminess, and adhesiveness). Correlation coefficients were statistically significant (p < 0.10) for cohesiveness and adhesiveness, while at a relaxed significance level (p < 0.20) chewiness also showed a correlation with the mastication parameters.
Keywords: electromyography, mastication, sensory, texture
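The analysis pipeline above can be sketched as follows: reduce the retained mastication variables with PCA and correlate the component scores with each texture parameter. All data, dimensions, and variable names below are synthetic placeholders standing in for the EMG and texture measurements.

```python
# PCA on mastication variables, then Pearson correlation of PC scores with texture parameters.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n_obs = 40                                   # e.g., foods x subjects observations
mastication = rng.normal(size=(n_obs, 5))    # 5 variables retained after cluster analysis
texture = {"hardness": rng.normal(size=n_obs),
           "cohesiveness": rng.normal(size=n_obs),
           "chewiness": rng.normal(size=n_obs),
           "gumminess": rng.normal(size=n_obs),
           "adhesiveness": rng.normal(size=n_obs)}

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(mastication))
for name, values in texture.items():
    r1, p1 = pearsonr(scores[:, 0], values)
    print(f"{name:13s} vs PC1: r = {r1:+.2f}, p = {p1:.3f}")
```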
Procedia PDF Downloads 341
4801 Fabrication of Antimicrobial Dental Model Using Digital Light Processing (DLP) Integrated with 3D-Bioprinting Technology
Authors: Rana Mohamed, Ahmed E. Gomaa, Gehan Safwat, Ayman Diab
Abstract:
Background: Bio-fabrication is a multidisciplinary research field that combines several principles, fabrication techniques, and protocols from different fields. The open-source software movement supports the use of open-source licenses for some or all software as part of the broader notion of open collaboration. Additive manufacturing (AM) is the concept underlying 3D printing, a manufacturing method in which objects are built layer by layer from computer-aided designs (CAD). Several types of AM systems are in use, and they can be categorized by the type of process used. One of these AM technologies is digital light processing (DLP), a 3D printing technology in which a photopolymer resin is rapidly cured to create hard scaffolds. DLP uses a projected light source to cure (harden or crosslink) an entire layer at once. Current applications of DLP are focused on dental and medical applications. Further developments in this field have led to the revolutionary field of 3D bioprinting. The open-source movement was started to spread the concept of open-source software and to provide software or hardware that is cheaper, more reliable, and of better quality. Objective: Modification of a desktop 3D printer into a 3D bioprinter, and integration of DLP technology with bio-fabrication to produce an antibacterial dental model. Method: A desktop 3D printer was modified into a 3D bioprinter. Gelatin hydrogel and sodium alginate hydrogel were prepared at different concentrations. Rhizomes of Zingiber officinale, flower buds of Syzygium aromaticum, and bulbs of Allium sativum were extracted, and the extracts were prepared at different levels (powder, aqueous extracts, total oils, and essential oils) for antibacterial testing. The agar well diffusion method with E. coli was used to perform the sensitivity test for the antibacterial activity of the extracts obtained from Zingiber officinale, Syzygium aromaticum, and Allium sativum. Lastly, DLP printing was performed to produce several dental models with the natural extracts combined with hydrogel to represent and simulate the hard and soft tissues. Results: The desktop 3D printer was modified into a 3D bioprinter using the open-source software Marlin and modified custom-made 3D-printed parts. Sodium alginate hydrogel and gelatin hydrogel were prepared at 5% (w/v), 10% (w/v), and 15% (w/v). Resin was integrated with the natural extracts of Zingiber officinale rhizome, Syzygium aromaticum flower buds, and Allium sativum bulbs at 1-3% for each extract. Finally, the antimicrobial dental model was printed; it exhibited antimicrobial activity and was subsequently merged with sodium alginate hydrogel. Conclusion: The open-source movement succeeded in modifying and producing a low-cost desktop 3D bioprinter, showing the potential for further enhancement in this area. Additionally, integrating DLP technology with bioprinting is a promising step toward exploiting the antimicrobial activity of natural products.
Keywords: 3D printing, 3D bio-printing, DLP, hydrogel, antibacterial activity, Zingiber officinale, Syzygium aromaticum, Allium sativum, Panax ginseng, dental applications
Procedia PDF Downloads 94
4800 The Influence of Machine Tool Composite Stiffness to the Surface Waviness When Processing Posture Constantly Switching
Authors: Song Zhiyong, Zhao Bo, Du Li, Wang Wei
Abstract:
Aircraft structures generally have complex surfaces. Because the postures of the motion axes constantly switch, the composite stiffness of a five-axis CNC machine changes during CNC machining. This gives rise to different amplitudes of vibration in the processing system, which in turn affect the surface waviness in different ways. To address this problem, we take the CNC machining of the "S"-shape test specimen as the object of study: by calculating the five-axis CNC machine's composite stiffness and establishing a vibration model, we analyze the mechanism by which vibration amplitude influences surface waviness. Surface quality measurement experiments were carried out to verify the validity and accuracy of the theoretical analysis. The research results of this paper provide a theoretical basis for surface waviness control.
Keywords: five axis CNC machine, "S" shape test specimen, composite stiffness, surface waviness
Procedia PDF Downloads 390