Search results for: complexity measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4274


3104 Effect of the Orifice Plate Specifications on Coefficient of Discharge

Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer

Abstract:

Because the orifice plate is relatively inexpensive, requires very little maintenance, and is calibrated only during plant turnarounds, it has become the most prevalent flow-measurement device in the gas industry. Measurement inaccuracy at fiscal metering stations may be the most significant source of mischarges in the natural gas industry in Libya. Even a trivial measurement error adds up to a rapidly escalating financial burden on custody-transfer transactions, and the unaccounted gas quantity transferred annually via orifice plates in Libya could be estimated at several million dollars. As oil and gas wealth is Libya's sole source of income, every effort is now being made to improve the accuracy of existing orifice metering facilities, and the discharge coefficient has become pivotal in current research. Increasing knowledge of the flow field in a typical orifice meter is therefore indispensable. CFD has rapidly become the most time- and cost-efficient versatile tool for in-depth analysis of fluid mechanics, heat transfer, and mass transfer in various industrial applications; its greatest strengths are insight into the underlying physical phenomena and the prediction of all relevant parameters and variables with high spatial and temporal resolution. In this paper, the flow of air through an orifice meter was analyzed numerically with CFD-based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tapping locations, i.e., flange tappings and D and D/2 tappings, compared with vena contracta tappings. The computed discharge coefficients were compared with discharge coefficients estimated by ISO 5167.
The influences of orifice bore thickness, orifice plate thickness, bevel angle, and the perpendicularity and buckling of the orifice plate were all duly investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5, and a Reynolds number of 91,100 was taken as the model. The results highlight that the discharge coefficient is highly sensitive to variations in the plate specifications and that, in all cases, the discharge coefficients for D and D/2 tappings were very close to those for vena contracta tappings, which are regarded as the ideal arrangement. In a general sense, the standard equation in ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, so further thorough consideration is still needed.
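The ISO 5167 baseline that the CFD results were compared against can be sketched as below. This is the Reader-Harris/Gallagher correlation for flange tappings as commonly stated; the coefficients are reproduced from memory and should be verified against ISO 5167-2 before any metering use.

```python
import math

def rhg_discharge_coefficient(D_mm: float, beta: float, Re_D: float) -> float:
    """Reader-Harris/Gallagher discharge coefficient, flange tappings.

    Sketch of the ISO 5167-2 correlation; verify every coefficient
    against the standard before fiscal-metering use.
    """
    L1 = L2 = 25.4 / D_mm            # flange tappings: L1 = L2' = 25.4/D (D in mm)
    A = (19000.0 * beta / Re_D) ** 0.8
    M2 = 2.0 * L2 / (1.0 - beta)
    C = (0.5961
         + 0.0261 * beta**2
         - 0.216 * beta**8
         + 0.000521 * (1e6 * beta / Re_D) ** 0.7
         + (0.0188 + 0.0063 * A) * beta**3.5 * (1e6 / Re_D) ** 0.3
         + (0.043 + 0.080 * math.exp(-10.0 * L1)
                  - 0.123 * math.exp(-7.0 * L1))
           * (1.0 - 0.11 * A) * beta**4 / (1.0 - beta**4)
         - 0.031 * (M2 - 0.8 * M2**1.1) * beta**1.3)
    if D_mm < 71.12:                 # small-pipe correction for D < 71.12 mm
        C += 0.011 * (0.75 - beta) * (2.8 - D_mm / 25.4)
    return C

# The paper's model case: 2-in pipe (50.8 mm), beta = 0.5, Re = 91,100
print(round(rhg_discharge_coefficient(50.8, 0.5, 91100.0), 4))
```

For this geometry the correlation gives a value close to the familiar 0.6 figure for orifice plates, which is the reference the CFD-derived coefficients are judged against.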

Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications

Procedia PDF Downloads 119
3103 The Design of the Questionnaire of Attitudes in Physics Teaching

Authors: Ricardo Merlo

Abstract:

Attitude is a hypothetical construct that can be measured meaningfully to establish the favorable or unfavorable predisposition that students have towards the teaching of sciences such as Physics. Although the attitude tests used in Physics teaching follow different design and validation models for different groups of students, the weight given to each dimension supporting the attitude has scarcely been evaluated. In this work, a methodology for the questionnaire construction process is proposed that allows the teacher to design and validate a measurement instrument for different university-level Physics subjects developed in the classroom, according to the weights assigned to the affective, knowledge, and behavioural dimensions. Finally, questionnaire models were tested on incoming university students, achieving significant results in the improvement of Physics teaching.

Keywords: attitude, physics teaching, motivation, academic performance

Procedia PDF Downloads 71
3102 Evaluation of Reliability, Availability and Maintainability for Automotive Manufacturing Process

Authors: Hamzeh Soltanali, Abbas Rohani, A. H. S. Garmabaki, Mohammad Hossein Abbaspour-Fard, Adithya Thaduri

Abstract:

Driven by continuous innovation and the high complexity of technological systems, the automotive manufacturing industry is under pressure to implement adequate management strategies for availability and productivity. In this context, evaluating system performance through reliability, availability and maintainability (RAM) methodologies can support resilient operation, identify the bottlenecks of the manufacturing process, and optimize maintenance actions. In this paper, RAM parameters are evaluated to improve the operational performance of a fluid filling process. To evaluate the RAM factors through the behavior of the states defined for this process, a systematic decision framework was developed. The RAM analysis revealed that improving the reliability and maintainability of the main bottlenecks at each filling workstation should be treated as a priority. The results could be useful for improving the operational performance and sustainability of the production process.
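As a minimal illustration of the RAM quantities discussed above (the study's own models and data are not reproduced), steady-state availability follows directly from mean time between failures (MTBF) and mean time to repair (MTTR). The workstation names and figures below are hypothetical.

```python
def availability(mtbf_h: float, mttr_h: float) -> float:
    """Steady-state (inherent) availability: A = MTBF / (MTBF + MTTR)."""
    return mtbf_h / (mtbf_h + mttr_h)

# Hypothetical filling-line workstations: (MTBF hours, MTTR hours)
stations = {"filler": (120.0, 4.0), "capper": (200.0, 2.0), "labeller": (80.0, 6.0)}

for name, (mtbf, mttr) in stations.items():
    print(f"{name}: A = {availability(mtbf, mttr):.3f}")

# Stations in series: line availability is the product of the station
# availabilities, so the bottleneck is the station with the lowest A.
line_availability = 1.0
for mtbf, mttr in stations.values():
    line_availability *= availability(mtbf, mttr)
print(f"line: A = {line_availability:.3f}")
```

The series-product view is what makes bottleneck identification immediate: the line can never be more available than its worst workstation.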

Keywords: automotive, performance, reliability, RAM, fluid filling process

Procedia PDF Downloads 353
3101 Minimization of Propagation Delay in Multi Unmanned Aerial Vehicle Network

Authors: Purva Joshi, Rohit Thanki, Omar Hanif

Abstract:

Unmanned aerial vehicles (UAVs) are becoming increasingly important in various industrial applications and sectors. Nowadays, multi-UAV networks are used for specific types of communication (e.g., military) and for monitoring purposes. It is therefore critical to reduce the propagation delay during communication between UAVs, which is essential in a multi-UAV network. This paper presents how the propagation delay between the base station (BS) and the UAVs is reduced using a searching algorithm. Furthermore, an iterative k-nearest neighbor (k-NN) algorithm and a Traveling Salesman Problem (TSP) algorithm were utilized to optimize the distance between the BS and each UAV, overcoming the propagation delay problem in multi-UAV networks. The simulation results show that the proposed method reduces complexity, improves reliability, and reduces propagation delay in multi-UAV networks.
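A minimal sketch of the kind of tour optimization described above: a nearest-neighbour TSP construction for the order in which the UAVs are visited, starting from the base station. The coordinates are hypothetical, and the paper's full iterative k-NN/TSP scheme is not reproduced.

```python
import math

def nearest_neighbor_tour(base, uavs):
    """Greedy TSP heuristic: from the base station, repeatedly move to the
    nearest not-yet-visited UAV. Returns the visiting order and the total
    tour length (closing the tour back at the base)."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    remaining = list(range(len(uavs)))
    order, pos, length = [], base, 0.0
    while remaining:
        nxt = min(remaining, key=lambda i: dist(pos, uavs[i]))
        length += dist(pos, uavs[nxt])
        pos = uavs[nxt]
        order.append(nxt)
        remaining.remove(nxt)
    length += dist(pos, base)        # return leg to the base station
    return order, length

base = (0.0, 0.0)                                  # hypothetical BS position
uavs = [(2.0, 1.0), (5.0, 0.0), (1.0, 4.0)]        # hypothetical UAV positions
order, length = nearest_neighbor_tour(base, uavs)
print(order, round(length, 2))
```

The greedy construction is not optimal in general, but it is the standard low-complexity starting point for TSP-style route shortening.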

Keywords: multi UAV network, optimal distance, propagation delay, k-nearest neighbor, traveling salesman problem

Procedia PDF Downloads 203
3100 Transferring of Digital DIY Potentialities through a Co-Design Tool

Authors: Marita Canina, Carmen Bruno

Abstract:

Digital Do It Yourself (DIY) is a contemporary socio-technological phenomenon enabled by technological tools. The nature and potential long-term effects of this phenomenon have been widely studied within the framework of the EU-funded project ‘Digital Do It Yourself’, in which the authors created and tested a specific Digital Do It Yourself (DiDIY) co-design process. The phenomenon was first studied through literature research to understand its multiple dimensions and complexity. Co-design workshops were then used to investigate the phenomenon by involving people, in order to achieve a complete understanding of DiDIY practices and their enabling factors. These analyses allowed the definition of the fundamental DiDIY factors, which were then translated into a design tool. The objective of the tool is to shape design concepts by transferring these factors into different environments to achieve innovation. The aim of this paper is to present the ‘DiDIY Factor Stimuli’ tool, describing the research path and the findings behind it.

Keywords: co-design process, digital DIY, innovation, toolkit

Procedia PDF Downloads 178
3099 Auto Surgical-Emissive Hand

Authors: Abhit Kumar

Abstract:

The world is full of master-slave telemanipulators in which the doctor operates a console and a surgical arm performs the operation; these are passive robots, and as long as we use them we must still engage doctors to operate the console, so the potential of robotics remains under-utilized. The focus should therefore shift to active robots. The Auto Surgical-Emissive Hand (AS-EH) applies this concept of active robotics: an anthropomorphic hand intended for autonomous surgical, emissive, and scanning operation, with a three-way emitter embedded in the palm and structured as a three-way disc, comprising a laser beam, icy steam between −5 °C and 5 °C, and a thermal imaging camera (TIC). The fingers of the AS-EH carry tactile, force, and pressure sensors so that force, pressure, and physical contact with the external subject can be controlled. The main focus, however, is on the concept of "emission". The question arises of how three unrelated methods can work together in a single programmed hand: each of the three is used according to the needs of the external subject. The laser is emitted via a pin-sized outlet fed by a thin channel running internally to the palm of the surgical hand. It emits just enough radiation to cut open the skin for the removal of metal scrap or other foreign material while the patient is under anesthesia, keeping the complexity of the operation very low. At the same time, the TIC, fitted with an accurate temperature compensator (ATC), provides a real-time feed of the surgery in the form of a heat image, giving us the chance to analyze the situation; the ATC also helps determine elevated body temperature while the operation proceeds. The thermal imaging
camera is rooted internally in the AS-EH and connected to real-time software externally to provide live feedback. The icy steam provides a cooling effect before and after the operation. Its use rests on a simple principle: if a finger remains in icy water for a long time, the blood flow slows, the area becomes numb and isolated, and even pinching it produces no sensation, because the nerve impulses are not coordinated with the brain and the sensory receptors are not activated. Using the same principle, icy steam below 273 K can be emitted via a pin-sized hole onto the area of concern to frost it before the operation is performed, and the steam can also be used to desensitize pain while the operation is in progress. The mathematical calculations, algorithms, and programming for the working and movement of this hand are installed in the system prior to the procedure. Since the AS-EH is a programmable hand, it comes with limitations; it will therefore perform only surgical processes of low complexity.

Keywords: active robots, algorithm, emission, icy steam, TIC, laser

Procedia PDF Downloads 356
3098 Functional Neurocognitive Imaging (fNCI): A Diagnostic Tool for Assessing Concussion Neuromarker Abnormalities and Treating Post-Concussion Syndrome in Mild Traumatic Brain Injury Patients

Authors: Parker Murray, Marci Johnson, Tyson S. Burnham, Alina K. Fong, Mark D. Allen, Bruce McIff

Abstract:

Purpose: Pathological dysregulation of Neurovascular Coupling (NVC) caused by mild traumatic brain injury (mTBI) is the predominant source of chronic post-concussion syndrome (PCS) symptomology. fNCI has the ability to localize dysregulation in NVC by measuring blood-oxygen-level-dependent (BOLD) signaling during the performance of fMRI-adapted neuropsychological evaluations. With fNCI, 57 brain areas consistently affected by concussion were identified as PCS neural markers, which were validated on large samples of concussion patients and healthy controls. These neuromarkers provide the basis for a computation of PCS severity referred to as the Severity Index Score (SIS). The SIS has proven valuable in making pre-treatment decisions, monitoring treatment efficacy, and assessing the long-term stability of outcomes. Methods and Materials: After being scanned while performing various cognitive tasks, 476 concussed patients received an SIS score based on the neural dysregulation of the 57 previously identified brain regions. These scans provide an objective measurement of attentional, subcortical, visual processing, language processing, and executive functioning abilities, which were used as biomarkers for post-concussive neural dysregulation. Initial SIS scores were used to develop individualized therapy incorporating cognitive, occupational, and neuromuscular modalities. These scores were also used to establish pre-treatment benchmarks and measure post-treatment improvement. Results: Changes in SIS were calculated as percent change from pre- to post-treatment. Patients showed a mean improvement of 76.5 percent (σ = 23.3), and 75.7 percent of patients showed at least 60 percent improvement. Longitudinal reassessment of 24 of the patients, performed an average of 7.6 months post-treatment, showed that the SIS improvement was maintained and extended, averaging 90.6 percent improvement over the original scan.
Conclusions: fNCI provides a reliable measurement of NVC allowing for identification of concussion pathology. Additionally, fNCI derived SIS scores direct tailored therapy to restore NVC, subsequently resolving chronic PCS resulting from mTBI.

Keywords: concussion, functional magnetic resonance imaging (fMRI), neurovascular coupling (NVC), post-concussion syndrome (PCS)

Procedia PDF Downloads 357
3097 Multilevel Gray Scale Image Encryption through 2D Cellular Automata

Authors: Rupali Bhardwaj

Abstract:

Cryptography is the science of using mathematics to encrypt and decrypt data: the data are converted into some other, gibberish form, and the encrypted data are then transmitted. The primary purpose of this paper is to provide two levels of security through a two-step process. Rather than transmitting the message bits directly, the message is first encrypted using a 2D cellular automaton and then scrambled with the Arnold cat map transformation; this provides an additional layer of protection and reduces the chance of the transmitted message being detected. A comparative analysis of the effectiveness of the scrambling technique is provided using scrambling-degree measurement parameters, i.e., the Gray Difference Degree (GDD) and the correlation coefficient.
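A sketch of the second stage only: the Arnold cat map permutes the pixel coordinates of an N×N image via (x, y) → (x + y, x + 2y) mod N. The cellular-automaton encryption stage and the GDD metric are not reproduced here.

```python
def arnold_cat_map(image):
    """One iteration of the Arnold cat map on a square (N x N) image:
    the pixel at (x, y) moves to ((x + y) mod N, (x + 2y) mod N)."""
    n = len(image)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = image[x][y]
    return out

# Scrambling is lossless: the map matrix [[1, 1], [1, 2]] has determinant 1,
# so it is a permutation of the N*N pixel positions, and iterating it
# eventually returns the original image (the period depends on N).
img = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
scrambled = arnold_cat_map(img)
print(scrambled)
```

Because the map is periodic, the receiver can unscramble either by applying the inverse map or by simply iterating the forward map until the period completes.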

Keywords: scrambling, cellular automata, Arnold cat map, game of life, gray difference degree, correlation coefficient

Procedia PDF Downloads 377
3096 Linkage Disequilibrium and Haplotype Blocks Study from Two High-Density Panels and a Combined Panel in Nelore Beef Cattle

Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari

Abstract:

Genotype imputation has been used to reduce genomic selection costs. In order to increase haplotype detection accuracy in methods that consider linkage disequilibrium, another approach could be used, such as combining genotype data from different panels. This study therefore aimed to evaluate linkage disequilibrium and haplotype blocks in two high-density panels, before and after imputation to a combined panel, in Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip (IHD), of which 93 animals (23 bulls and 70 progenies) were also genotyped with the Affymetrix Axiom Genome-Wide BOS 1 Array Plate (AHD). After quality control, 809 IHD animals (509,107 SNPs) and 93 AHD animals (427,875 SNPs) remained for analysis. The combined genotype panel (CP) was constructed by merging both panels after quality control, resulting in 880,336 SNPs. Imputation analysis was conducted using the software FImpute v.2.2b. The reference (CP) and target (IHD) populations consisted of 23 bulls and 786 animals, respectively. Linkage disequilibrium and haplotype block studies were carried out for IHD, AHD, and the imputed CP. Two linkage disequilibrium measures were considered: the correlation coefficient between alleles at two loci (r²) and |D'|. Both measures were calculated using the software PLINK, and the haplotype blocks were estimated using the software Haploview. The r² measure showed a different decay from |D'|, while AHD and IHD had almost the same decay. For r², even with possible overestimation due to the small AHD sample size (93 animals), IHD presented higher values than AHD at shorter distances, but with increasing distance both panels presented similar values. The r² measure is influenced by the minor allele frequencies of each pair of SNPs, which can explain the difference observed between the r² and |D'| decays.
As a combination of the Illumina and Affymetrix panels, the CP presented a decay equivalent to the mean of these combinations. The numbers of haplotype blocks detected for IHD, AHD, and CP were 84,529, 63,967, and 140,336, respectively. The IHD blocks had a mean length of 137.70 ± 219.05 kb, the AHD blocks 102.10 ± 155.47 kb, and the CP blocks 107.10 ± 169.14 kb. The majority of the haplotype blocks in all three panels comprised fewer than 10 SNPs; only 3,882 (IHD), 193 (AHD), and 8,462 (CP) blocks comprised 10 SNPs or more. Using the CP increased the number of chromosomes covered by long haplotypes, as well as the haplotype coverage of the short chromosomes (23-29), which can contribute to studies that explore haplotype blocks. In general, using the CP could be an alternative to increase density and the number of haplotype blocks, increasing the probability of obtaining a marker close to a quantitative trait locus of interest.
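The two LD measures used above can be written down compactly from allele and haplotype frequencies. A minimal sketch with illustrative frequencies, not data from the study:

```python
def linkage_disequilibrium(p_a: float, p_b: float, p_ab: float):
    """Return (D, D', r^2) for two biallelic loci.

    p_a, p_b : frequencies of allele A at locus 1 and allele B at locus 2
    p_ab     : frequency of the A-B haplotype
    """
    d = p_ab - p_a * p_b                      # raw disequilibrium
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max                  # |D'|: D scaled to its maximum
    # r^2 is normalised by the allele frequencies, which is why it is
    # sensitive to minor allele frequency while |D'| is not
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d, d_prime, r2

# Illustrative example: common alleles, moderate association
d, d_prime, r2 = linkage_disequilibrium(p_a=0.6, p_b=0.5, p_ab=0.35)
print(round(d, 3), round(d_prime, 3), round(r2, 3))
```

The normalisation difference visible in the code is exactly the point made in the abstract: the same haplotype data can give a high |D'| but a modest r² when allele frequencies at the two loci differ.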

Keywords: Bos taurus indicus, decay, genotype imputation, single nucleotide polymorphism

Procedia PDF Downloads 280
3095 A Linear Relation for Voltage Unbalance Factor Evaluation in Three-Phase Electrical Power System Using Space Vector

Authors: Dana M. Ragab, Jasim A Ghaeb

Abstract:

The Voltage Unbalance Factor (VUF) index is recommended for evaluating system performance under unbalanced operation. However, its calculation requires complex algebra, which limits its use in the field. Furthermore, at least one system cycle is required to detect unbalance using the VUF, whereas ideally unbalance mitigation should be performed within 10 ms for 50 Hz systems. In this work, a linear relation for VUF evaluation in a three-phase electrical power system using the space vector (SV) is derived. It is proposed to determine the voltage unbalance quickly and accurately and to overcome the constraints associated with the traditional methods of VUF evaluation. The Aqaba-Qatrana-South Amman (AQSA) power system is considered to study system performance under unbalanced conditions. The results show that both the complexity of the calculations and the time required to evaluate the VUF are reduced significantly.
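For reference, the conventional definition that the proposed linear relation approximates: VUF is the ratio of negative- to positive-sequence voltage magnitude from the symmetrical-component transform. A sketch of that definition (the paper's space-vector relation itself is not reproduced):

```python
import cmath
import math

def vuf_percent(va: complex, vb: complex, vc: complex) -> float:
    """VUF = 100 * |V2| / |V1| from the symmetrical-component transform."""
    a = cmath.exp(2j * math.pi / 3)          # 120-degree rotation operator
    v1 = (va + a * vb + a * a * vc) / 3      # positive-sequence component
    v2 = (va + a * a * vb + a * vc) / 3      # negative-sequence component
    return 100.0 * abs(v2) / abs(v1)

# Balanced 230 V system: VUF should be zero
va = cmath.rect(230.0, 0.0)
vb = cmath.rect(230.0, -2 * math.pi / 3)
vc = cmath.rect(230.0, 2 * math.pi / 3)
print(round(vuf_percent(va, vb, vc), 3))

# A 10 % sag on phase b introduces measurable unbalance
print(round(vuf_percent(va, 0.9 * vb, vc), 2))
```

The complex algebra in `vuf_percent` is exactly the field-unfriendly computation the abstract refers to; the linear SV relation is meant to stand in for it.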

Keywords: power quality, space vector, unbalance evaluation, three-phase power system

Procedia PDF Downloads 189
3094 A Semi-Automated GIS-Based Implementation of Slope Angle Design Reconciliation Process at Debswana Jwaneng Mine, Botswana

Authors: K. Mokatse, O. M. Barei, K. Gabanakgosi, P. Matlhabaphiri

Abstract:

The mining of pit slopes is often associated with some level of deviation from design recommendations, which may translate into changes in the stability of the excavated pit slopes. Slope angle design reconciliations are therefore essential for assessing and monitoring the compliance of excavated pit slopes with accepted slope designs. The associated changes in slope stability may be reflected in changes in the calculated factors of safety and/or probabilities of failure. Reconciliations of as-mined and design slope profiles are conducted periodically to assess the implications of these deviations for pit slope stability. Currently, the slope design reconciliation process implemented at Jwaneng Mine involves measuring as-mined and design slope angles along vertical sections cut along the established geotechnical design section lines in the GEOVIA GEMS™ software. Bench retention is calculated as the percentage of the available catchment area, less over-mined and under-mined areas, relative to the designed catchment area. This process has proven tedious, requiring considerable manual effort and time to execute. Consequently, a new semi-automated mine-to-design reconciliation approach that utilizes laser scanning and GIS-based tools is being proposed at Jwaneng Mine. This method involves high-resolution scanning of targeted bench walls, the subsequent creation of 3D surfaces from point-cloud data, and the derivation of slope toe lines and crest lines in the Maptek I-Site Studio software. The toe lines and crest lines are then exported to the ArcGIS software, where distance offsets between the design and actual bench toe lines and crest lines are calculated. Retained bench catchment capacity is measured as the distance between the toe lines and crest lines at the same bench elevation.
The assessment of the performance of the inter-ramp and overall slopes entails measuring excavated and design slope angles along vertical sections in the ArcGIS software. Excavated and design toe-to-toe or crest-to-crest slope angles are measured for inter-ramp stack slope reconciliations, and crest-to-toe slope angles are measured for overall slope angle design reconciliations. The proposed approach allows for a more automated, accurate, quick, and easy workflow for carrying out slope angle design reconciliations, and it has proved highly effective and timely in the assessment of slope performance at Jwaneng Mine. This paper presents the newly proposed process for assessing compliance with slope angle designs at Jwaneng Mine.
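The crest-to-toe and toe-to-toe measurements described above reduce to simple trigonometry once 3D coordinates have been extracted from the toe and crest lines; a minimal sketch with hypothetical bench coordinates:

```python
import math

def slope_angle_deg(crest, toe):
    """Slope angle in degrees from horizontal between a crest point and a
    toe point, each given as (easting, northing, elevation)."""
    dh = math.hypot(toe[0] - crest[0], toe[1] - crest[1])  # horizontal run
    dv = crest[2] - toe[2]                                  # vertical drop
    return math.degrees(math.atan2(dv, dh))

# Hypothetical bench geometry: 15 m high bench face over an 8 m offset
crest = (1000.0, 2000.0, 615.0)
toe = (1008.0, 2000.0, 600.0)
print(round(slope_angle_deg(crest, toe), 1))
```

The same function applied crest-to-toe across a whole stack, rather than a single bench, gives the inter-ramp or overall angle that is reconciled against the design value.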

Keywords: slope angle designs, slope design recommendations, slope performance, slope stability

Procedia PDF Downloads 237
3093 Governance of Inter-Organizational Research Cooperation

Authors: Guenther Schuh, Sebastian Woelk

Abstract:

Companies face increasing challenges in research due to higher costs and risks. The intensifying complexity and interdisciplinarity of technology require unique know-how, so companies need to decide whether research shall be conducted internally or externally with partners. Research institutes, on the other hand, face increasing pressure to secure funding and to maintain a high research reputation. They therefore need to identify relevant research topics and specialize their competencies, while additional competencies are often also required to solve interdisciplinary research projects. Secure financing can be achieved by binding industry partners as well as through public funding. The drive towards faster and better research leads companies and research institutes to cooperate in organized research networks, which are managed by an administrative organization. For effective and efficient cooperation, the necessary processes, roles, tools, and a set of rules need to be determined. The goal of this paper is to review the state of the art and to propose a governance framework for organized research networks.

Keywords: interorganizational cooperation, design of network governance, research network

Procedia PDF Downloads 367
3092 Fuzzy Logic Control for Flexible Joint Manipulator: An Experimental Implementation

Authors: Sophia Fry, Mahir Irtiza, Alexa Hoffman, Yousef Sardahi

Abstract:

This study presents an intelligent control algorithm for a flexible robotic arm. Fuzzy control is used to control the motion of the arm so as to keep the arm tip at the desired position while reducing vibration and increasing the system's speed of response. The fuzzy controller (FC) is based on adding the tip angular position to the arm deflection angle and using their sum as the feedback signal to the control algorithm. This reduces the complexity of the FC in terms of input variables, number of membership functions, fuzzy rules, and control structure. In addition, the design of the fuzzy controller is model-free and uses only our knowledge of the system. To show the efficacy of the FC, the control algorithm is implemented on the flexible joint manipulator (FJM) developed by Quanser. The results show that the proposed control method is effective in terms of response time, overshoot, and vibration amplitude.
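A minimal sketch of the single-input idea described above: the combined feedback signal is fuzzified with triangular membership functions, three rules map it to a control action, and a weighted average defuzzifies. The membership breakpoints and output gains below are assumptions for illustration, not the Quanser FJM values.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_control(error):
    """Single-input Sugeno-style fuzzy controller.

    error : combined feedback (tip angle + deflection) error, in radians
    returns a motor voltage command (hypothetical scaling)
    """
    # Fuzzify: negative / zero / positive error (breakpoints are assumed)
    mu_neg = tri(error, -1.0, -0.5, 0.0)
    mu_zero = tri(error, -0.5, 0.0, 0.5)
    mu_pos = tri(error, 0.0, 0.5, 1.0)
    # Rule consequents as crisp singletons (volts), one per rule
    u_neg, u_zero, u_pos = -5.0, 0.0, 5.0
    # Defuzzify: firing-strength-weighted average of the consequents
    num = mu_neg * u_neg + mu_zero * u_zero + mu_pos * u_pos
    den = mu_neg + mu_zero + mu_pos
    return num / den if den else 0.0

for e in (-0.5, 0.0, 0.25, 0.5):
    print(e, round(fuzzy_control(e), 2))
```

Summing the tip angle and deflection into one error keeps the rule base at three rules; a two-input design would need a full rule table over both variables.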

Keywords: fuzzy logic control, model-free control, flexible joint manipulators, nonlinear control

Procedia PDF Downloads 118
3091 Drama in the Classroom: Work and Experience with Standardized Patients and Classroom Simulation of Difficult Clinical Scenarios

Authors: Aliyah Dosani, Kerri Alderson

Abstract:

Two different simulations using standardized patients were developed to reinforce content and to foster undergraduate nursing students' practice and development of interpersonal skills in difficult clinical situations in the classroom. The live-actor simulations focused on fostering interpersonal skills, traditionally considered by students to be simple and easy. However, seemingly straightforward interactions can be very stressful, particularly in women's complex social and emotional situations: supporting patients in these contexts is fraught with complexity and high emotion, requiring skillful support, assessment, and intervention by a registered nurse. In this presentation, the personal and professional perspectives on the development, incorporation, and execution of the live-actor simulations are discussed, along with student perceptions and the learning gained by the faculty involved.

Keywords: adult learning, interpersonal skill development, simulation learning, teaching and learning

Procedia PDF Downloads 143
3090 Subarray Based Multiuser Massive MIMO Design Adopting Large Transmit and Receive Arrays

Authors: Tetsiki Taniguchi, Yoshio Karasawa

Abstract:

This paper describes a subarray-based, computationally efficient design method for a multiuser massive multiple input multiple output (MIMO) system. In our previous works, the use of a large array was assumed only at the transmitter; this study considers the case in which both the transmitter and the receiver are equipped with large array antennas. To this end, the receive arrays are also divided into several subarrays, and the previously proposed method is modified for the synthesis of a large array from subarrays at both ends. Computer simulations verify that, although the performance of the proposed method is somewhat degraded compared with the original approach, it achieves a significant reduction of the computational load, down to a practical level.
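The zero-forcing step named in the keywords can be sketched for a small multiuser downlink as follows. This illustrates ZF precoding in general, not the paper's subarray synthesis, and the array and user counts are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tx, n_users = 8, 3                 # hypothetical antenna / user counts
# Flat-fading downlink channel: one row per single-antenna user
H = (rng.standard_normal((n_users, n_tx))
     + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

# Zero-forcing precoder: W = H^H (H H^H)^{-1}, so that H @ W = I and
# each user's stream arrives free of inter-user interference
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

effective = H @ W                    # numerically the identity matrix
print(np.allclose(effective, np.eye(n_users)))
```

The matrix inversion here is the cost that grows with array size; the subarray decomposition in the paper is precisely a way to keep this kind of computation at a practical level for very large arrays.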

Keywords: large array, massive multiple input multiple output (MIMO), multiuser, singular value decomposition, subarray, zero forcing

Procedia PDF Downloads 402
3089 Use of EPR in Experimental Mechanics

Authors: M. Sikoń, E. Bidzińska

Abstract:

An attempt to apply EPR (Electron Paramagnetic Resonance) spectroscopy to the experimental analysis of the mechanical state of a loaded material is considered in this work. The theory concerns the participation of electrons in the transfer of mechanical action. The measurement model is described using both classical mechanics and quantum mechanics. The theoretical analysis is verified using EPR spectroscopy twice, once for the free specimen and once for the mechanically loaded specimen. Positive results, in the form of different spectra for the free and loaded materials, are used to describe the mechanical state of the continuum based on statistical mechanics. The perturbation of the optical electrons in the field of the mechanical interactions inspires us to propose new optical properties of materials under mechanical stress.

Keywords: Cosserat medium, EPR spectroscopy, optical active electrons, optical activity

Procedia PDF Downloads 380
3088 Development of a Pain Detector Using Microwave Radiometry Method

Authors: Nanditha Rajamani, Anirudhaa R. Rao, Divya Sriram

Abstract:

One of the greatest difficulties in treating patients with pain is the highly subjective nature of pain sensation. The measurement of pain intensity is primarily dependent on the patient's report, often with little physical evidence to provide objective corroboration. This is further complicated by the fact that the few existing technologies, such as functional magnetic resonance imaging (fMRI), are expensive. The need is thus clear and urgent for a reliable, non-invasive, non-painful, objective, readily adoptable, and cost-efficient diagnostic platform that supplements the current regime with additional information to assist doctors in diagnosing these patients. Our idea of developing a pain detector was thus conceived, to take the detection and diagnosis of chronic and acute pain a step further.

Keywords: pain sensor, microwave radiometry, pain sensation, fMRI

Procedia PDF Downloads 456
3087 Benzimidazole as Corrosion Inhibitor for Heat Treated 6061 Al-SiCp Composite in Acetic Acid

Authors: Melby Chacko, Jagannath Nayak

Abstract:

A 6061 Al-SiCp composite was solutionized at 350 °C for 30 minutes and water quenched. It was then underaged at 140 °C (T6 treatment). The aging behaviour of the composite was studied using Rockwell B hardness measurements. The corrosion behaviour of the underaged sample was studied in different concentrations of acetic acid and at different temperatures, with benzimidazole at different concentrations used for the inhibition studies. The inhibition efficiency of benzimidazole was calculated for the different experimental conditions. Thermodynamic parameters were determined, suggesting that benzimidazole is an efficient inhibitor that adsorbs onto the surface of the composite by mixed adsorption in which chemisorption predominates.
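The inhibition efficiency used above follows directly from corrosion rates measured with and without inhibitor. The rates and concentrations below are illustrative numbers, not the study's measurements.

```python
def inhibition_efficiency(rate_blank: float, rate_inhibited: float) -> float:
    """IE(%) = 100 * (CR_blank - CR_inhibited) / CR_blank."""
    return 100.0 * (rate_blank - rate_inhibited) / rate_blank

# Illustrative corrosion rates (e.g., mm/year) at increasing inhibitor dose
rate_blank = 2.40
for conc_ppm, rate in [(100, 1.10), (200, 0.62), (400, 0.31)]:
    print(conc_ppm, "ppm:", round(inhibition_efficiency(rate_blank, rate), 1), "%")
```

An efficiency that rises with inhibitor concentration, as in this illustration, is the pattern typically fitted to an adsorption isotherm to extract the thermodynamic parameters the abstract mentions.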

Keywords: 6061 Al-SiCp composite, T6 treatment, corrosion inhibition, chemisorption

Procedia PDF Downloads 398
3086 The Impact of Using Flattening Filter-Free Energies on Treatment Efficiency for Prostate SBRT

Authors: T. Al-Alawi, N. Shorbaji, E. Rashaidi, M. Alidrisi

Abstract:

Purpose/Objective(s): The main purpose of this study is to analyze the planning of SBRT treatments for localized prostate cancer with 6FFF and 10FFF energies to see if there is a dosimetric difference between the two energies and how we can increase the plan efficiency and reduce its complexity. Also, to introduce a planning method in our department to treat prostate cancer by utilizing high energy photons without increasing patient toxicity and fulfilled all dosimetric constraints for OAR (an organ at risk). Then toevaluate the target 95% coverage PTV95, V5%, V2%, V1%, low dose volume for OAR (V1Gy, V2Gy, V5Gy), monitor unit (beam-on time), and estimate the values of homogeneity index HI, conformity index CI a Gradient index GI for each treatment plan.Materials/Methods: Two treatment plans were generated for15 patients with localized prostate cancer retrospectively using the CT planning image acquired for radiotherapy purposes. Each plan contains two/three complete arcs with two/three different collimator angle sets. The maximum dose rate available is 1400MU/min for the energy 6FFF and 2400MU/min for 10FFF. So in case, we need to avoid changing the gantry speed during the rotation, we tend to use the third arc in the plan with 6FFF to accommodate the high dose per fraction. The clinical target volume (CTV) consists of the entire prostate for organ-confined disease. The planning target volume (PTV) involves a margin of 5 mm. A 3-mm margin is favored posteriorly. Organs at risk identified and contoured include the rectum, bladder, penile bulb, femoral heads, and small bowel. The prescription dose is to deliver 35Gyin five fractions to the PTV and apply constraints for organ at risk (OAR) derived from those reported in references. 
Results: The indices CI = 0.99, HI = 0.7, and GI = 4.1 were the same for both energies, 6FFF and 10FFF, with no differences, but the total delivered MUs were much lower for the 10FFF plans (2907 for 6FFF vs. 2468 for 10FFF), and the total delivery time was 124 s for 6FFF vs. 61 s for 10FFF beams. There were no dosimetric differences between 6FFF and 10FFF in terms of PTV coverage; the mean doses for the bladder, rectum, femoral heads, penile bulb, and small bowel were collected, and they were in favor of 10FFF. We also obtained lower V1Gy, V2Gy, and V5Gy doses for all OAR with the 10FFF plans. Integral doses (ID), in Gy·L, were recorded for all OAR and were lower with the 10FFF plans. Conclusion: The high-energy 10FFF beam gives a shorter treatment time and fewer delivered MUs; it also showed lower integral and mean doses to the organs at risk. Based on this study, we suggest using a 10FFF beam for prostate SBRT, which has the advantage of lowering the treatment time and thereby reducing plan complexity with respect to 6FFF beams.
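The plan-quality indices reported above can be computed directly from dose-volume data. A minimal sketch follows, assuming the Paddick conformity index, a gradient index defined as the ratio of the half-prescription isodose volume to the prescription isodose volume, and a homogeneity index of the form (D2% - D98%)/D50%; the abstract does not state which definitions were actually used, so these are common conventions, not the authors' confirmed formulas:

```python
def conformity_index(tv_piv, tv, piv):
    """Paddick CI: (target volume inside prescription isodose)^2 / (TV * PIV)."""
    return tv_piv ** 2 / (tv * piv)

def homogeneity_index(d2, d98, d50):
    """HI = (D2% - D98%) / D50%: dose spread relative to the median dose."""
    return (d2 - d98) / d50

def gradient_index(half_rx_vol, piv):
    """GI = volume of the 50%-prescription isodose / prescription isodose volume."""
    return half_rx_vol / piv
```

For example, a 100 cm³ target with 99 cm³ inside a 100 cm³ prescription isodose volume gives CI = 0.9801, and a 50% isodose volume of 410 cm³ around a 100 cm³ prescription isodose volume gives GI = 4.1, matching the value reported above.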

Keywords: FFF beam, SBRT prostate, VMAT, prostate cancer

Procedia PDF Downloads 84
3085 A Measurement Device of Condensing Flow Rate, an Order of Milligrams per Second

Authors: Hee Joon Lee

Abstract:

There are many difficulties in measuring a small flow rate, of the order of milligrams per second or less, with a conventional flowmeter. Therefore, a flow meter with minimal loss, based on a new concept, was designed as part of this work. A chamber was manufactured with a level transmitter and an on-off control valve. When the level of the collected condensed water reaches the top of the chamber, the valve opens to allow the collected water to drain back into the tank. To allow the water to continue to drain after the signal is lost, the valve is held open for a few seconds by a time-delay switch and then closed. After testing, the condensing flow rate was successfully measured with an uncertainty of ±5.7% of the full scale of the chamber.
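The measurement principle described, timing how long the condensate takes to fill a chamber of known volume, can be sketched as follows (the chamber volume, fill time, and water density used here are illustrative assumptions, not values from the paper):

```python
def condensing_flow_mg_per_s(chamber_volume_ml, fill_time_s, density_g_per_ml=0.998):
    """Mean mass flow over one fill cycle: chamber volume * density / fill time."""
    mass_mg = chamber_volume_ml * density_g_per_ml * 1000.0  # grams -> milligrams
    return mass_mg / fill_time_s
```

For instance, a hypothetical 1 mL chamber that fills in 100 s corresponds to a flow of roughly 10 mg/s, squarely in the milligrams-per-second range the device targets.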

Keywords: chamber, condensation, flow meter, milligrams

Procedia PDF Downloads 281
3084 Use of Interpretable Evolved Search Query Classifiers for Sinhala Documents

Authors: Prasanna Haddela

Abstract:

Document analysis is a well-matured yet still active research field, partly as a result of the intricate nature of building computational tools, but also due to the inherent problems arising from the variety and complexity of human languages. Breaking down language barriers is vital in enabling access to a number of recent technologies. This paper investigates the application of document classification methods to new Sinhala datasets. The language is spoken by a geographically isolated population and is rich in unique features of its own. We examine the interpretability of the classification models, with a particular focus on evolved Lucene search queries, generated using a genetic algorithm (GA), as a method of document classification. We compare the accuracy and interpretability of these search queries with those of other popular classifiers. The results are promising and are roughly in line with previous work on English-language datasets.
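A minimal sketch of the core idea, scoring a candidate search query as a document classifier so that a GA can use that score as its fitness, might look like this (the OR-of-terms query model and the toy documents are simplifying assumptions; real Lucene queries support boolean operators, phrases, and boosts, and the paper's fitness function is not specified in the abstract):

```python
def matches(query_terms, doc_tokens):
    # Simplified query model: a document matches if any query term occurs in it.
    return any(term in doc_tokens for term in query_terms)

def fitness(query_terms, docs, labels):
    # Fraction of documents the query classifies correctly; a GA can mutate and
    # recombine term sets to maximize this score, and the winning query stays
    # human-readable, which is the interpretability benefit discussed above.
    predictions = [matches(query_terms, tokens) for tokens in docs]
    correct = sum(p == bool(l) for p, l in zip(predictions, labels))
    return correct / len(labels)
```

On a toy corpus of two documents labeled 1 and 0, the query {"orifice"} that matches only the positive document scores a fitness of 1.0.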

Keywords: evolved search queries, Sinhala document classification, Lucene Sinhala analyzer, interpretable text classification, genetic algorithm

Procedia PDF Downloads 114
3083 Measurement System for Human Arm Muscle Magnetic Field and Grip Strength

Authors: Shuai Yuan, Minxia Shi, Xu Zhang, Jianzhi Yang, Kangqi Tian, Yuzheng Ma

Abstract:

The precise measurement of muscle activity is essential for understanding the function of various body movements. This work aims to develop a muscle magnetic field signal detection system based on mathematical analysis. Medical research has underscored that early detection of muscle atrophy, coupled with lifestyle adjustments such as dietary control and increased exercise, can significantly improve the outcome of muscle-related diseases. Currently, surface electromyography (sEMG) is widely employed in research as an early predictor of muscle atrophy. Nonetheless, the primary limitation of using sEMG to forecast muscle strength is its inability to directly measure the signals generated by the muscles: skin-electrode contact issues due to perspiration can lead to inaccurate signals or even signal loss, and resistance and phase are significantly affected by adipose layers. The recent emergence of optically pumped magnetometers introduces a fresh avenue for bio-magnetic field measurement. These magnetometers possess high sensitivity and, unlike superconducting quantum interference devices (SQUIDs), obviate the need for a cryogenic environment. They detect muscle magnetic field signals in the range of tens to thousands of femtoteslas (fT), and their measurements remain unaffected by perspiration and adipose layers. Since their introduction, optically pumped atomic magnetometers have found extensive application in exploring the magnetic fields of organs such as the heart and brain. Their optimal operation requires an environment with an ultra-weak magnetic field; to achieve such an environment, researchers usually combine active magnetic compensation technology with passive magnetic shielding technology.
Passive magnetic shielding uses a shielding device built from high-permeability materials to attenuate the external magnetic field to a few nT. Compared with adding more shielding layers, compensation coils that generate a reverse magnetic field to precisely cancel the residual field are cheaper and more flexible. To attain even lower fields, the compensation coils are designed from the Biot-Savart law to generate a counteracting magnetic field that eliminates the residual fields. By solving the magnetic field expression at discrete points in the target region, the parameters that determine the current density distribution on the plane can be obtained through the conventional target-field method. The current density is obtained from the partial derivatives of a stream function, which can be represented by a combination of trigonometric functions. Optimization algorithms are introduced into the coil design to obtain the optimal current density distribution. A one-dimensional linear regression analysis was performed on the collected data, yielding a coefficient of determination R² of 0.9349 with a p-value of essentially zero. This statistical result indicates a stable relationship between the peak-to-peak value (PPV) of the muscle magnetic field signal and the magnitude of grip strength. This system is expected to become a widely used tool for healthcare professionals to gain deeper insights into the muscle health of their patients.
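The reported one-dimensional linear regression (PPV of the muscle magnetic signal against grip strength) can be reproduced with a closed-form least-squares fit; a sketch, with R² computed as 1 - SS_res/SS_tot:

```python
def linear_fit_r2(x, y):
    """Least-squares line y = a + b*x and the coefficient of determination R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)                      # variance term
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))   # covariance term
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot
```

Fed with paired (grip strength, PPV) measurements, the returned R² plays the same role as the 0.9349 reported above; the data here are, of course, not the study's.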

Keywords: muscle magnetic signal, magnetic shielding, compensation coils, trigonometric functions

Procedia PDF Downloads 57
3082 Linking Soil Spectral Behavior and Moisture Content for Soil Moisture Content Retrieval at Field Scale

Authors: Yonwaba Atyosi, Moses Cho, Abel Ramoelo, Nobuhle Majozi, Cecilia Masemola, Yoliswa Mkhize

Abstract:

Spectroscopy has been widely used to understand the hyperspectral remote sensing of soils. Accurate and efficient measurement of soil moisture is essential for precision agriculture. The aim of this study was to understand the spectral behavior of soil at different soil water content levels and to identify the spectral bands significant for soil moisture content retrieval at field scale. The study used 60 soil samples from a maize farm, divided into four treatments representing different moisture levels. Spectral signatures were measured for each sample in the laboratory, under artificial light, using an Analytical Spectral Devices (ASD) spectrometer covering the wavelength range from 350 nm to 2500 nm with a spectral resolution of 1 nm. The results showed that the absorption features at 1450 nm, 1900 nm, and 2200 nm were particularly sensitive to soil moisture content and exhibited strong correlations with the water content levels. A continuum-removal procedure was implemented in the R programming language to enhance the absorption features of soil moisture and to characterize its spectral behavior precisely at different water content levels. Statistical analysis using partial least squares regression (PLSR) models was performed to quantify the correlation between the spectral bands and soil moisture content. This study provides insights into the spectral behavior of soil at different water content levels and identifies the spectral bands significant for soil moisture content retrieval. The findings highlight the potential of spectroscopy for non-destructive and rapid soil moisture measurement, which can be applied in fields such as precision agriculture, hydrology, and environmental monitoring. However, the spectral behavior of soil can be influenced by factors such as soil type, texture, and organic matter content, and caution should be taken when applying the results to other soil systems.
The results of this study showed good agreement between measured and predicted values of soil moisture content, with high R² and low root mean square error (RMSE) values. Model validation using independent data was satisfactory for all the studied soil samples. The results have significant implications for developing high-resolution, precise field-scale soil moisture retrieval models. Such models can be used to understand the spatial and temporal variation of soil moisture content in agricultural fields, which is essential for managing irrigation and optimizing crop yield.
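Continuum removal, used above to enhance the absorption features, divides each reflectance value by the spectrum's upper convex hull, so that absorption features stand out as dips below 1. A sketch in Python (the study used R; this is an equivalent illustration, assuming the wavelengths are supplied in ascending order):

```python
import numpy as np

def continuum_removal(wavelengths, reflectance):
    """Divide a spectrum by its upper convex hull (the continuum)."""
    hull = []  # upper hull built with the monotone-chain construction
    for px, py in zip(wavelengths, reflectance):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the last hull point while the chain fails to stay convex from above
            if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append((px, py))
    hx, hy = zip(*hull)
    continuum = np.interp(wavelengths, hx, hy)  # hull interpolated at every band
    return np.asarray(reflectance, dtype=float) / continuum
```

On a toy three-band spectrum with a dip in the middle, the shoulders map to 1.0 and the dip to its depth relative to the interpolated hull, which is exactly the feature a PLSR model can then correlate with water content.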

Keywords: soil moisture content retrieval, precision agriculture, continuum removal, remote sensing, machine learning, spectroscopy

Procedia PDF Downloads 99
3081 The Determinants of Corporate Social Responsibility Disclosure Extent and Quality: The Case of Jordan

Authors: Hani Alkayed, Belal Omar, Eileen Roddy

Abstract:

This study focuses on investigating the determinants of Corporate Social Responsibility Disclosure (CSRD) extent and quality in Jordan. It examines factors that influence CSRD extent and quality, such as corporate characteristics (size, gearing, firm age, and industry type), corporate governance (board size, number of meetings, non-executive directors, female directors on the board, family directors on the board, foreign members, audit committee, type of external auditor, and CEO duality), and ownership structure (government ownership, institutional ownership, and ownership concentration). Legitimacy theory is utilised as the main theory for the theoretical framework. A quantitative approach is adopted, and a content analysis technique is used to measure CSRD extent and quality from annual reports. The sample is drawn from the annual reports of 118 Jordanian companies over the period 2010-2015. A CSRD index is constructed, comprising disclosures in the following categories: environmental, human resources, product and consumers, and community involvement. A 7-point scale was developed to measure the quality of disclosure, where 0 = no disclosure; 1 = general disclosure (non-monetary); 2 = general disclosure (non-monetary) with pictures, charts, and graphs; 3 = descriptive/qualitative disclosure with specific details (non-monetary); 4 = descriptive/qualitative disclosure with specific details, pictures, charts, and graphs; 5 = numeric disclosure, full description with supporting numbers; and 6 = numeric disclosure, full description with supporting numbers, pictures, and charts. This study fills a gap in the literature regarding CSRD in Jordan, where previous studies have lacked a clear categorisation as a measurement of quality. The results show that the extent of CSRD in Jordan is higher than its quality.
Regarding the determinants of CSR disclosure, the following were found to have a significant relationship with both the extent and the quality of CSRD (except non-executive directors, where a significant relationship was found only with the extent of CSRD): board size, non-executive directors, firm age, foreign members on the board, number of board meetings, the presence of an audit committee, Big 4 auditors, government ownership, firm size, and industry type.
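One way the extent/quality distinction described above might be operationalised (an illustration only; the paper's exact index construction may differ) is to count the disclosed items for extent and average the 0-6 scale scores for quality:

```python
# The study's 7-point (0-6) quality scale, paraphrased from the abstract:
QUALITY_SCALE = {
    0: "no disclosure",
    1: "general, non-monetary",
    2: "general, non-monetary, with pictures/charts/graphs",
    3: "descriptive/qualitative, specific details (non-monetary)",
    4: "descriptive/qualitative, specific details, with pictures/charts/graphs",
    5: "numeric, full description with supporting numbers",
    6: "numeric, full description with numbers, pictures, and charts",
}

def extent_and_quality(item_scores):
    """Extent = number of index items disclosed at all; quality = mean scale score."""
    extent = sum(score > 0 for score in item_scores)
    quality = sum(item_scores) / len(item_scores)
    return extent, quality
```

Under this illustrative scoring, a firm disclosing many items at low scale levels would show exactly the pattern reported above: high extent, low quality.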

Keywords: content analysis, corporate governance, corporate social responsibility disclosure, Jordan, quality of disclosure

Procedia PDF Downloads 230
3080 Working Mode and Key Technology of Thermal Vacuum Test Software for Spacecraft Test

Authors: Zhang Lei, Zhan Haiyang, Gu Miao

Abstract:

A universal software platform was developed to remedy the defects of the platform currently in practical use. The new platform has distinct advantages in modularization, information management, and its interfaces. Several technologies, such as computer technology, virtualization, and networking, are combined in the platform, and four working modes are introduced in this article: single mode, distributed mode, cloud mode, and centralized mode. The application area of the software platform is extended through switching between these working modes. The platform can arrange the thermal vacuum test process automatically, which improves the reliability of thermal vacuum testing.

Keywords: software platform, thermal vacuum test, control and measurement, work mode

Procedia PDF Downloads 414
3079 A Topological Study of an Urban Street Network and Its Use in Heritage Areas

Authors: Jose L. Oliver, Taras Agryzkov, Leandro Tortosa, Jose F. Vicent, Javier Santacruz

Abstract:

This paper aims to demonstrate how a topological study of an urban street network can be used as a tool applied to heritage conservation areas in a city. In recent decades, various approaches in the disciplines of architecture and urbanism have been based on the so-called sciences of complexity. In this context, this paper uses mathematics from network theory. Hence, it proposes a methodology based on obtaining information from a graph created from a network of urban streets. An algorithm is then used that establishes a ranking of the importance of the nodes of that network from a topological point of view. The results are applied to a heritage area in a particular city, confronting the data obtained from the mathematical model with those gathered from field work in the case study. As a result of this process, we may conclude which actions need to be implemented in the area, and where those actions would be most effective for the whole heritage site.
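The node-ranking step can be illustrated with a PageRank-style algorithm on the street graph (a generic sketch; the authors' actual algorithm, adapted to urban networks, may differ, for instance in how it incorporates data attached to the nodes):

```python
def pagerank(adj, damping=0.85, iterations=100):
    """PageRank on a directed graph given as {node: [neighbor, ...]}.

    For a street network, nodes are intersections and edges are street segments;
    higher rank marks topologically more important intersections.
    """
    nodes = list(adj)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iterations):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            targets = adj[u] if adj[u] else nodes  # dangling nodes spread evenly
            share = damping * rank[u] / len(targets)
            for v in targets:
                new[v] += share
        rank = new
    return rank
```

On a small star-shaped network, the central intersection receives the highest rank, which is the kind of importance ordering that can then be compared with field observations in the heritage area.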

Keywords: graphs, heritage cities, spatial analysis, urban networks

Procedia PDF Downloads 397
3078 Supply Chain Management Practices in Thailand Palm Oil Industry

Authors: Athirat Intajorn

Abstract:

Under the ASEAN Free Trade Area (AFTA), Thailand has applied the AFTA agreement, reducing tariffs and reflecting changes in business processes. The resulting changes in agribusiness processes, in particular, have accumulated as production costs for producers. The palm oil industry has become an important industry for the Thai economy; Thailand currently ranks third in the world for crude palm oil (CPO). Therefore, this paper presents a research framework to investigate supply chain management practices in the Thai palm oil industry. This research is limited to a literature review. The proposed framework identifies the criteria of supply chain management for the Thai palm oil industry in order to link the entities within logistics management, involving plantation, mill, collection port, refinery, and cookie, based on data utilization. The supply chain management practices framework for the Thai palm oil industry takes a somewhat different view due to the high complexity of agribusiness logistics management.

Keywords: supply chain management, practice, palm oil industry, Thailand palm oil industry

Procedia PDF Downloads 309
3077 Quick Similarity Measurement of Binary Images via Probabilistic Pixel Mapping

Authors: Adnan A. Y. Mustafa

Abstract:

In this paper we present a quick technique to measure the similarity between binary images. The technique is based on a probabilistic mapping approach and is fast because only a minute percentage of the image pixels need to be compared to measure the similarity, and not the whole image. We exploit the power of the Probabilistic Matching Model for Binary Images (PMMBI) to arrive at an estimate of the similarity. We show that the estimate is a good approximation of the actual value, and the quality of the estimate can be improved further with increased image mappings. Furthermore, the technique is image size invariant; the similarity between big images can be measured as fast as that for small images. Examples of trials conducted on real images are presented.
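The sampling idea, comparing pixels at only a small random subset of positions rather than over the whole image, can be sketched as follows (a simplified illustration rather than the PMMBI model itself; sampling normalized coordinates also yields the size invariance mentioned above):

```python
import random

def estimated_similarity(img_a, img_b, n_samples=500, seed=0):
    """Estimate binary-image similarity from pixels at randomly sampled
    normalized coordinates; the two images may have different sizes."""
    rng = random.Random(seed)
    ha, wa = len(img_a), len(img_a[0])
    hb, wb = len(img_b), len(img_b[0])
    agree = 0
    for _ in range(n_samples):
        u, v = rng.random(), rng.random()  # same relative position in both images
        agree += img_a[int(u * ha)][int(v * wa)] == img_b[int(u * hb)][int(v * wb)]
    return agree / n_samples
```

Because the cost depends only on `n_samples`, not on the image dimensions, a pair of large images is compared as quickly as a pair of small ones; increasing `n_samples` tightens the estimate, mirroring the improvement with increased mappings noted above.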

Keywords: big images, binary images, image matching, image similarity

Procedia PDF Downloads 197
3076 Numerical Analysis of Rapid Drawdown in Dams Based on Brazilian Standards

Authors: Renato Santos Paulinelli Raposo, Vinicius Resende Domingues, Manoel Porfirio Cordao Neto

Abstract:

Rapid drawdown is one of the loading cases considered in slope stability studies for dam projects. Owing to the complexity generated by the combination of loads and the difficulty of determining the parameters, rapid drawdown analyses are usually performed by assuming an immediate reduction of the upstream water level. Simulating a gradual reduction of the upstream water level instead requires knowledge of consolidation parameters and of those related to unsaturated soil. In this context, the purpose of this study is to present the methodology for collecting and analyzing the parameters needed to simulate rapid drawdown in dams. Using a numerical tool, the study is complemented with a hypothetical case study that can assist the practical use of the compiled data. The reference dam has a homogeneous section composed of clay soil, a height of 70 meters, a width of 12 meters, and an upstream slope with an inclination of 1V:3H.

Keywords: dam, GeoStudio, rapid drawdown, stability analysis

Procedia PDF Downloads 253
3075 Understanding Non-Utilization of AI Tools for Research and Academic Writing among Academic Staff in Nigerian Universities: A Paradigm Shift

Authors: Abubakar Abdulkareem, Nasir Haruna Soba

Abstract:

This study investigates the non-utilization of AI tools for research and academic writing among academic staff in Nigerian universities, using Rogers' perceived attributes of innovation theory as the framework guiding the investigation. The study was framed in an interpretative research paradigm; a qualitative methodology and a case study research design were adopted. Interviews were conducted with 20 academic staff. A thematic analysis process identified 115 narratives, which were organized into five major categories and further collapsed into five theoretical constructs explaining the non-use of AI tools for research and academic writing. The findings reveal that the reasons for non-utilization include lack of awareness, perceived complexity, trust and reliability concerns, cost and accessibility, ethical and privacy concerns, and cultural and institutional factors.

Keywords: non-utilization, AI tools, research and academic writing, academic staff

Procedia PDF Downloads 47