854 Arabic Light Word Analyser: Roles with Deep Learning Approach
Authors: Mohammed Abu Shquier
Abstract:
This paper introduces a word segmentation method using the novel BP-LSTM-CRF architecture for processing semantic output training. The objective of web morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also addresses the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), all of which justify updating this system. Most Arabic word analysis systems are based on the phonotactic morpho-syntactic analysis of a word transmitted using lexical rules, which are mainly used in MENA language technology tools, without taking into account contextual or semantic morphological implications. An automatic analysis tool is therefore needed that takes into account the word sense and not only the morpho-syntactic category. Moreover, existing systems are also based on statistical/stochastic models. These stochastic models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc.
As an extension, we focus on language modeling using Recurrent Neural Networks (RNNs); given that morphological analysis coverage was very low for dialectal Arabic, it is important to investigate in depth how dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that dialectal variability can help improve analysis.
Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN
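The "trigram or n-gram continuation probabilities" mentioned in the abstract can be illustrated with a minimal count-based sketch. The toy English corpus and the crude backoff scheme below are illustrative assumptions, not the authors' model:

```python
from collections import Counter

# Toy corpus standing in for MSA/dialect training text (illustrative only)
corpus = "the cat sat on the mat the cat ran".split()

# Count trigrams, bigrams, and unigrams for continuation probabilities
trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def p_next(w1, w2, w3):
    """P(w3 | w1 w2), with a crude backoff to P(w3 | w2) when the
    bigram context (w1, w2) is unseen."""
    if bigrams[(w1, w2)]:
        return trigrams[(w1, w2, w3)] / bigrams[(w1, w2)]
    return bigrams[(w2, w3)] / unigrams[w2] if unigrams[w2] else 0.0

print(p_next("the", "cat", "sat"))  # → 0.5 ("the cat" occurs twice, once followed by "sat")
```

A neural language model such as the BP-LSTM-CRF replaces these fixed counts with learned context representations, which is exactly the arbitrariness the abstract criticizes.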
Procedia PDF Downloads 448
853 Simulating Studies on Phosphate Removal from Laundry Wastewater Using Biochar: Dubinin Approach
Authors: Eric York, James Tadio, Silas Owusu Antwi
Abstract:
Laundry wastewater contains a diverse range of chemical pollutants that can have detrimental effects on human health and the environment. In this study, simulation studies were conducted in Spyder (Python software, v3.2) to assess the efficacy of biochar in removing PO₄³⁻ from wastewater. Through modeling and simulation, the mechanisms involved in the adsorption of phosphate by biochar were studied by altering variables specific to phosphate from common laundry detergents, such as aqueous solubility, initial concentration, and temperature, using the Dubinin approach (DA). Results showed that concentrations equilibrated near the highest values for Sugar beet (120 mg L⁻¹), Tailing (85 mg L⁻¹), CaO-rich (50 mg L⁻¹), Eggshell and rice straw (48 mg L⁻¹), Undaria pinnatifida roots (190 mg L⁻¹), Ca-alginate granular beads (240 mg L⁻¹), Laminaria japonica powder (900 mg L⁻¹), Pine sawdust (57 mg L⁻¹), Rice hull (190 mg L⁻¹), Sesame straw (470 mg L⁻¹), Sugar bagasse (380 mg L⁻¹), Miscanthus giganteus (240 mg L⁻¹), Wood BC (130 mg L⁻¹), Pine (25 mg L⁻¹), Sawdust (6.8 mg L⁻¹), Sewage sludge (-), Rice husk (12 mg L⁻¹), Corncob (117 mg L⁻¹), and Maize straw (1800 mg L⁻¹), while Peanut (-), Eucalyptus polybractea (-), and Crawfish equilibrated at similar concentrations. CO₂-activated Thalia, sewage sludge biochar, and Broussonetia papyrifera leaves equilibrated just at the lower concentration. Only Soybean stover exhibited a sharp rise-and-fall peak at a mid-concentration of 2 mg L⁻¹. The modelling results were consistent with experimental findings from the literature, supporting the accuracy, repeatability, and reliability of the simulation study. The simulation study provided insights into the adsorption of PO₄³⁻ from wastewater by biochar, in terms of the concentration per volume that can ideally be adsorbed under the given conditions.
Studies showed that applying the principle experimentally to real wastewater, with all its complexity, is warranted and not far-fetched.
Keywords: simulation studies, phosphate removal, biochar, adsorption, wastewater treatment
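As a sketch of the Dubinin family of isotherms underlying such simulations, the Dubinin-Astakhov equation relates the adsorbed amount to the Polanyi potential computed from solubility and equilibrium concentration. All numerical values below (capacity, characteristic energy, solubility) are illustrative assumptions, not the study's parameters:

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1, universal gas constant

def dubinin_astakhov(C, Cs, q_max, E, n=2.0, T=298.15):
    """Equilibrium adsorbed amount via the Dubinin-Astakhov isotherm.

    C     : equilibrium solute concentration (same units as Cs)
    Cs    : aqueous solubility of the solute
    q_max : maximum adsorption capacity (illustrative units)
    E     : characteristic adsorption energy, J mol^-1
    n     : heterogeneity exponent (n = 2 recovers Dubinin-Radushkevich)
    """
    A = R * T * np.log(Cs / C)        # Polanyi adsorption potential
    return q_max * np.exp(-(A / E) ** n)

# Illustrative run: adsorbed amount rises toward q_max as C approaches solubility
C = np.array([1.0, 10.0, 50.0, 100.0])  # mg L^-1, hypothetical
q = dubinin_astakhov(C, Cs=120.0, q_max=240.0, E=8000.0)
print(np.round(q, 1))
```

Sweeping C, Cs, and T in a loop like this reproduces the kind of concentration-per-volume equilibrium curves the abstract reports for each biochar.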
Procedia PDF Downloads 142
852 Reliability of Movement Assessment Battery for Children-2 Age Band 3 Using Multiple Testers
Authors: Jernice S. Y. Tan
Abstract:
Introduction: Reliability within and between testers is vital to ensure the accuracy of any motor assessment instrument. However, reliability checks of the Movement Assessment Battery for Children-2 (MABC-2) age band 3 using multiple testers assigned to different MABC-2 tasks for the same group of participants are uncommon, and multiple testers are not stated as an option in the MABC-2 manual. Therefore, the purpose of this study was to determine the inter- and intra-tester reliability of using multiple testers to administer the test protocols of MABC-2 age band 3. Methods: Thirty adolescent volunteers (n = 30; 15 males, 15 females; age range: 13 – 16 years) performed the eight tasks in a randomised sequence at three different test stations for the MABC-2 task components (Manual Dexterity, Aiming and Catching, Balance). Ethics approval and parental consent were obtained. The participants were videotaped while performing the test protocols of MABC-2 age band 3. Five testers were involved in the data collection process; they were graduating Sports Science students completing their final-year project, supervised by an experienced motor assessor. Inter- and intra-tester reliability checks using the intraclass correlation coefficient (ICC) were carried out on the videotaped data. Results: The inter-tester reliability between the five testers for the eight tasks ranged from rᵢcc = 0.705 to rᵢcc = 0.995, suggesting that the average agreement between them was good to excellent. With the exception of one tester who had rᵢcc = 0.687 for one of the eight tasks (i.e. zig-zag hopping), the intra-tester reliability within each tester ranged from rᵢcc = 0.728 to rᵢcc = 1.000, also suggesting good to excellent consistency within testers. Discussion: The use of multiple testers with good intra-tester reliability at different test stations is feasible.
This method allows several participants to be assessed concurrently at different test stations and saves overall data collection time. Therefore, it is recommended that administering the MABC-2 with multiple testers be extended to other age bands to establish the feasibility of this method.
Keywords: adolescents, MABC, motor assessment, motor skills, reliability
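One common form of the ICC used for such agreement checks, a two-way random, absolute-agreement, single-rater ICC(2,1), can be computed from the ANOVA mean squares. The abstract does not state which ICC form was used, and the scores below are hypothetical, so this is only a sketch of the computation:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings : (n_subjects, k_raters) array of scores.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    # Mean squares from a two-way ANOVA without replication
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical scores: 5 participants rated by 3 testers on one task
scores = np.array([[7, 8, 7],
                   [5, 5, 6],
                   [9, 9, 9],
                   [4, 5, 4],
                   [8, 7, 8]])
print(round(icc_2_1(scores), 3))
```

Values above roughly 0.75 are conventionally read as good and above 0.9 as excellent, which is how the rᵢcc ranges in the abstract are interpreted.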
Procedia PDF Downloads 323
851 Empirical Analysis of Forensic Accounting Practices for Tackling Persistent Fraud and Financial Irregularities in the Nigerian Public Sector
Authors: Sani AbdulRahman Bala
Abstract:
This empirical study delves into the realm of forensic accounting practices within the Nigerian Public Sector, seeking to quantitatively analyze their efficacy in addressing the persistent challenges of fraud and financial irregularities. With a focus on empirical data, this research employs a robust methodology to assess the current state of fraud in the Nigerian Public Sector and evaluate the performance of existing forensic accounting measures. Through quantitative analyses, including statistical models and data-driven insights, the study aims to identify patterns, trends, and correlations associated with fraudulent activities. The research objectives include scrutinizing documented fraud cases, examining the effectiveness of established forensic accounting practices, and proposing data-driven strategies for enhancing fraud detection and prevention. Leveraging quantitative methodologies, the study seeks to measure the impact of technological advancements on forensic accounting accuracy and efficiency. Additionally, the research explores collaborative mechanisms among government agencies, regulatory bodies, and the private sector by quantifying the effects of information sharing on fraud prevention. The empirical findings from this study are expected to provide a nuanced understanding of the challenges and opportunities in combating fraud within the Nigerian Public Sector. The quantitative insights derived from real-world data will contribute to the refinement of forensic accounting strategies, ensuring their effectiveness in addressing the unique complexities of financial irregularities in the public sector. The study's outcomes aim to inform policymakers, practitioners, and stakeholders, fostering evidence-based decision-making and proactive measures for a more resilient and fraud-resistant financial governance system in Nigeria.
Keywords: fraud, financial irregularities, Nigerian public sector, quantitative investigation
Procedia PDF Downloads 64
850 The Fantasy of the Media and the Sexual World of Adolescents: The Relationship between Viewing Sexual Content on Television and Sexual Behaviour of Adolescents
Authors: Ifeanyi Adigwe
Abstract:
The influence of television on adolescents is prevalent and widespread because television is a powerful sex educator for adolescents. This study examined the relationship between viewing sexual content on television and the sexual behaviour of adolescents in public senior secondary schools in Lagos, Nigeria. The study employed a survey research design with a structured questionnaire as the instrument. A multi-stage sampling technique was adopted. Firstly, purposive sampling was used to select 3 educational districts, namely Agege, Maryland, and Agboju; these districts were chosen for convenience and their wide coverage of public senior secondary schools in Lagos State. Secondly, the researcher adopted systematic sampling to select the schools: the schools were listed in alphabetical order in each district and every 10th school was selected, yielding 13 schools altogether. A total of 501 copies of the questionnaire were administered to the students, and 491 copies were retrieved. Only 453 copies met the inclusion criteria and were used for analysis. Data were analyzed using descriptive statistics, Pearson correlation, principal components analysis, and regression analysis. Results of the correlation analysis showed a positive and significant relationship between adolescent sexual belief and preference for sexual content on television (r = 0.117, N = 453, p = 0.13), between viewing sexual content on television and adolescent sexual behavior (r = -0.112, N = 453, p < 0.05), between adolescent television preference and preference for sexual content on television (r = 0.328, N = 453, p < 0.05), and between adolescent television preference and adolescent sexual behavior (r = 0.093, N = 453, p < 0.05). However, a negative but significant relationship exists between adolescents’ sexual knowledge and their sexual behavior (r = -0.122, N = 453, p = 0.0009).
Pearson’s correlation between adolescents’ sexual knowledge and sexual behavior shows a positive, strong, and significant relationship (r = 0.967, N = 453, p < 0.05). The results also show that adolescents’ preference for sexual content on television informs them about their sexuality, development, and sexual health. The descriptive and inferential analysis of the data revealed that the interaction among adolescent sexual belief, knowledge, and preference for sexual content on television, and its resultant effect on adolescent sexual behavior, is apparent because an adolescent's sexual beliefs and norms about sex can induce their preference for sexual content on television. The study concludes that exposure to sexual content on television can impact adolescent sexual behaviour. There is no doubt that the actual outcome of television viewing on adolescent sexual behavior remains controversial because adolescent sexual behavior is multifaceted and multi-dimensional. Since behavior is learned over time, the frequency of exposure to and the nature of sexual content viewed over time induce and hasten sexual activity.
Keywords: adolescent sexual behavior, Nigeria, sexual belief, sexual content, sexual knowledge, television preference
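The Pearson coefficients reported in this abstract can be computed in a few lines, together with the t statistic used to test significance. The sketch below uses hypothetical questionnaire scores, not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2)))

def t_statistic(r, n):
    """t value for testing H0: rho = 0 with n paired observations."""
    return r * np.sqrt((n - 2) / (1 - r ** 2))

# Hypothetical scores: exposure to sexual content vs a behaviour scale
exposure = [2, 4, 5, 7, 8, 10, 11, 13]
behaviour = [1, 3, 4, 6, 6, 9, 10, 12]
r = pearson_r(exposure, behaviour)
print(round(r, 3), round(t_statistic(r, len(exposure)), 2))
```

The p-values in the abstract follow from comparing this t statistic against a t distribution with N − 2 degrees of freedom.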
Procedia PDF Downloads 395
849 Finite Element Analysis of Layered Composite Plate with Elastic Pin Under Uniaxial Load Using ANSYS
Authors: R. M. Shabbir Ahmed, Mohamed Haneef, A. R. Anwar Khan
Abstract:
Analysis of stresses plays an important role in the optimization of structures, and prior stress estimation helps in better design of products. Composites find wide usage in industrial and home applications due to their strength-to-weight ratio. Especially in the aircraft industry, composites are used extensively due to their advantages over conventional materials. Composites are mainly made of orthotropic materials having unequal strength in different directions. Composite materials have the drawback of delamination and debonding because the bond materials are weaker than the parent materials, so composite joints should be properly analysed before use in practical conditions. In the present work, a composite plate with an elastic pin is considered for analysis using the finite element software ANSYS. The geometry is built in ANSYS using a top-down approach with different Boolean operations. The modelled object is meshed with the three-dimensional layered element Solid46 for the composite plate and the solid element Solid45 for the pin material. Various combinations are considered to find the strength of the composite joint under uniaxial loading conditions. Due to the symmetry of the problem, only a quarter geometry is built, and results are presented for the full model using ANSYS expansion options. The results show the effect of pin diameter on joint strength: the deflection and load sharing of the pin increase, while other parameters such as overall stress, pin stress, and contact pressure reduce due to the lesser load on the plate material. The material effect shows that a higher Young's modulus material deflects less, but the other parameters increase. Interference analysis shows increases in overall stress, pin stress, and contact stress along with pin bearing load. This increase should be understood properly when increasing the load-carrying capacity of the joint.
Generally, every structure is preloaded to increase the compressive stress in the joint and thus its load-carrying capacity. However, the stress increase should be properly analysed for composites because of their delamination and debonding effects arising from failure of the bond materials. When results for an isotropic combination are compared with the composite joint, the isotropic joint shows uniform results with lower values for all parameters, mainly due to the applied layer-angle combinations. All results are presented with the necessary pictorial plots.
Keywords: bearing force, frictional force, finite element analysis, ANSYS
Procedia PDF Downloads 334
848 Landslide and Liquefaction Vulnerability Analysis Using Risk Assessment Analysis and Analytic Hierarchy Process Implication: Suitability of the New Capital of the Republic of Indonesia on Borneo Island
Authors: Rifaldy, Misbahudin, Khalid Rizky, Ricky Aryanto, M. Alfiyan Bagus, Fahri Septianto, Firman Najib Wibisana, Excobar Arman
Abstract:
Indonesia is a country with a high level of disaster risk because it lies on the Ring of Fire, where three major tectonic plates meet. Disaster analysis must therefore be performed continually to identify potential disasters, in this research landslides and liquefaction in particular. This research was conducted to analyze areas that are vulnerable to landslide and liquefaction hazards and their relationship to the assessment of moving the new capital of the Republic of Indonesia to the island of Kalimantan, with a total area of 612,267.22 km². The method in this analysis uses the Analytic Hierarchy Process, with consistency ratio testing, to break a complex and unstructured problem into several parameters by assigning values. The parameters used in this analysis are slope, land cover, lithology distribution, wetness index, earthquake data, and peak ground acceleration. A weighted overlay was carried out on all these parameters using the percentage values obtained from the Analytic Hierarchy Process, with accuracy confirmed by the consistency ratio, yielding the percentage of area in each vulnerability classification. The analysis gave vulnerability classifications from very high to very low: very high (0.15%, 918.40083 km²), medium (20.75%, 127,045.44815 km²), low (56.54%, 346,175.886188 km²), and very low (22.56%, 138,127.484832 km²). This research is expected to map landslide and liquefaction hazards on the island of Kalimantan and to inform the suitability of regional development of the new capital of the Republic of Indonesia.
Also, this research is expected to provide input for, or be applicable to, any region analyzing landslide and liquefaction vulnerability or the suitability of regional development.
Keywords: analytic hierarchy process, Borneo Island, landslide and liquefaction, vulnerability analysis
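The Analytic Hierarchy Process step, deriving parameter weights from a pairwise comparison matrix and checking them with Saaty's consistency ratio, can be sketched as follows. The 3×3 matrix below compares only three hypothetical parameters (the study used six), and the judgment values are illustrative assumptions:

```python
import numpy as np

# Saaty's Random Index for matrix sizes 1..6
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}

def ahp_weights(A):
    """Priority weights and consistency ratio from a pairwise comparison matrix."""
    A = np.asarray(A, float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                        # normalized priority weights
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)        # consistency index
    return w, ci / RI[n]                # CR < 0.1 is conventionally acceptable

# Hypothetical pairwise judgments: slope vs lithology vs land cover
A = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
w, cr = ahp_weights(A)
print(np.round(w, 3), round(cr, 3))
```

The resulting weights are the percentages fed into the weighted overlay, and the CR check corresponds to the consistency ratio testing the abstract describes.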
Procedia PDF Downloads 178
847 An Advanced Approach to Detect and Enumerate Soil-Transmitted Helminth Ova from Wastewater
Authors: Vivek B. Ravindran, Aravind Surapaneni, Rebecca Traub, Sarvesh K. Soni, Andrew S. Ball
Abstract:
Parasitic diseases have a devastating, long-term impact on human health and welfare. More than two billion people are infected with soil-transmitted helminths (STHs), including the roundworms (Ascaris), hookworms (Necator and Ancylostoma) and whipworm (Trichuris), with the majority occurring in the tropical and subtropical regions of the world. Despite their low prevalence in developed countries, the removal of STHs from wastewater remains crucial to allow the safe use of sludge or recycled water in agriculture. Conventional methods such as incubation and optical microscopy are cumbersome; consequently, results vary drastically from person to person when observing the ova (eggs) under a microscope. Although PCR-based methods are an alternative to conventional techniques, they lack the ability to distinguish between viable and non-viable helminth ova. As a result, wastewater treatment industries are in major need of radically new and innovative tools to detect and quantify STH eggs accurately, precisely, and cost-effectively. In our study, we focus on the following novel and innovative techniques:
- Recombinase polymerase amplification and surface-enhanced Raman spectroscopy (RPA-SERS) based detection of helminth ova.
- Use of metal nanoparticles and their relative nanozyme activity.
- Colorimetric detection, differentiation and enumeration of genera of helminth ova using hydrolytic enzymes (chitinase and lipase).
- Propidium monoazide (PMA)-qPCR to detect viable helminth ova.
- A modified assay to recover and enumerate helminth eggs from fresh raw sewage.
- Transcriptome analysis of Ascaris ova in fresh raw sewage.
The aforementioned techniques have the potential to replace current conventional and molecular methods, thereby producing a standard protocol for the determination and enumeration of helminth ova in sewage sludge.
Keywords: colorimetry, helminth, PMA-qPCR, nanoparticles, RPA, viable
Procedia PDF Downloads 299
846 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range
Authors: Alberto Mínguez-Martínez, Jesús de Vicente y Oliva
Abstract:
Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro- and nanoscale are produced. This trend seems set to become increasingly important in the near future. Moreover, as a requirement of Industry 4.0, the digitalization of production and process models makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. It is thereby possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales below one millimeter. Providing adequate traceability to the SI unit of length (the meter) for 2D and 3D measurements at this scale is a problem without a unique solution in industrial environments, and researchers in the field of dimensional metrology around the world are working on this issue. A solution for industrial environments, even if incomplete, would enable working with some traceability. At this point, we believe that the study of surfaces could provide a first approximation to a solution. Among the different options proposed in the literature, areal topography methods may be the most relevant because they can be compared to measurements performed using Coordinate Measuring Machines (CMMs). These measuring methods give (x, y, z) coordinates for each point, expressed in two different ways: either the z coordinate as a function of x, denoted z(x), for each Y-axis coordinate, or as a function of the x and y coordinates, denoted z(x, y). Among others, optical measuring instruments, mainly microscopes, are extensively used to carry out measurements at scales below one millimeter because they are non-destructive.
In this paper, the authors propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out the out-of-focus reflected light, so that only light from the focused part of the surface reaches the detector. By taking pictures at different Z levels of focus, specialized software interpolates between the different planes and reconstructs the surface geometry as a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness parameter Ra will be traced to the reference. Although the solution is designed for a confocal microscope, it may be used for the calibration of other optical measuring instruments with minor changes.
Keywords: industrial environment, confocal microscope, optical measuring instrument, traceability
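The core of an axis-scale calibration of this kind is a least-squares fit of instrument readings against the certified lengths of a material standard, giving a scale correction and residual errors. The sketch below is a minimal illustration with hypothetical readings, not the authors' procedure or data:

```python
import numpy as np

# Certified lengths of a material standard (e.g. a stage micrometer), mm
reference = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
# Corresponding lengths read on the instrument's X axis (hypothetical values)
measured = np.array([0.1012, 0.2019, 0.3031, 0.4043, 0.5052])

# Least-squares fit: measured = scale * reference + offset
scale, offset = np.polyfit(reference, measured, 1)

# Apply the inverse correction and inspect the residual errors
corrected = (measured - offset) / scale
residuals = corrected - reference
print(f"scale error: {(scale - 1) * 1e3:.2f} per mille, "
      f"max residual: {np.abs(residuals).max() * 1000:.2f} um")
```

Repeating the fit for the Y and Z axes gives each axis its own traceable correction, which is the per-axis traceability requirement the abstract points out.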
Procedia PDF Downloads 157
845 Technology Futures in Global Militaries: A Forecasting Method Using Abstraction Hierarchies
Authors: Mark Andrew
Abstract:
Geopolitical tensions are at a thirty-year high, and the pace of technological innovation is driving asymmetry in force capabilities between nation states and between non-state actors. Technology futures are a vital component of defence capability growth, and investments in technology futures need to be informed by accurate and reliable forecasts of the options for ‘systems of systems’ innovation, development, and deployment. This paper describes a method for forecasting technology futures developed through an analysis of four key systems’ development stages, namely: technology domain categorisation, scanning results examining novel systems’ signals and signs, potential system-of-systems implications in warfare theatres, and political ramifications in terms of funding and development priorities. The method has been applied to several technology domains, including physical systems (e.g., nano weapons, loitering munitions, inflight charging, and hypersonic missiles), biological systems (e.g., molecular virus weaponry, genetic engineering, brain-computer interfaces, and trans-human augmentation), and information systems (e.g., sensor technologies supporting situation awareness, cyber-driven social attacks, and goal-specification challenges to proliferation and alliance testing). Although the current application of the method has been team-centred using paper-based rapid prototyping and iteration, the application of autonomous language models (such as GPT-3) is anticipated as a next-stage operating platform. The importance of forecasting accuracy and reliability is considered a vital element in guiding technology development to afford stronger contingencies as ideological changes are forecast to expand threats to ecology and earth systems, possibly eclipsing the traditional vulnerabilities of nation states.
The early results from the method will be subjected to ground truthing using longitudinal investigation.
Keywords: forecasting, technology futures, uncertainty, complexity
Procedia PDF Downloads 115
844 The Relationship between Spindle Sound and Tool Performance in Turning
Authors: N. Seemuang, T. McLeay, T. Slatter
Abstract:
Worn tools have a direct effect on surface finish and part accuracy. Tool condition monitoring systems have been developed over a long period and are used to avoid the loss of productivity that results from using a worn tool. However, the majority of tool monitoring research has applied expensive sensing systems not suitable for production. In this work, the cutting sound in a turning machine was studied using a microphone. Machining trials using seven cutting conditions were conducted until the observable flank wear width (FWW) on the main cutting edge exceeded 0.4 mm. The cutting inserts were removed from the tool holder and the flank wear width was measured optically. A microphone with a built-in preamplifier was used to record the machining sound of EN24 steel being face turned on a CNC lathe in a wet cutting condition using constant surface speed control. The sound was sampled at 50 kS/s, and all recorded signals were transformed into the frequency domain by FFT in order to establish the frequency content of the audio signature that could then be used for tool condition monitoring. The extracted feature from the audio signal was compared to the flank wear progression on the cutting inserts. The spectrogram reveals a promising feature, named 'spindle noise', which is emitted by the main spindle motor of the turning machine. The spindle noise frequency was detected at 5.86 kHz regardless of the cutting conditions used on this particular CNC lathe. Varying the cutting speed and feed rate influences the magnitude of the power spectrum of the spindle noise. The magnitude of the spindle noise frequency alters in conjunction with tool wear progression, increasing significantly in the transition between steady-state wear and severe wear. This could be used as a warning signal to prepare for tool replacement or to adapt cutting parameters to extend tool life.
Keywords: tool wear, flank wear, condition monitoring, spindle noise
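The feature-extraction step, FFT of the microphone signal and tracking the spectral magnitude around 5.86 kHz, can be sketched as follows. The signal here is synthetic (a tone in broadband noise with hypothetical amplitudes), standing in for a real recording sampled at the study's 50 kS/s:

```python
import numpy as np

fs = 50_000                       # Hz, sampling rate used in the study
t = np.arange(0, 0.2, 1 / fs)

# Synthetic microphone signal: a 5.86 kHz "spindle noise" tone buried in
# broadband cutting noise (amplitudes are hypothetical)
rng = np.random.default_rng(0)
signal = 0.8 * np.sin(2 * np.pi * 5860 * t) + 0.3 * rng.standard_normal(t.size)

# Windowed FFT magnitude spectrum; search a band around 5.86 kHz
spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size))) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 5700) & (freqs < 6000)
peak_freq = freqs[band][np.argmax(spectrum[band])]
print(f"spindle-noise peak near {peak_freq:.0f} Hz")
```

Tracking the magnitude at this peak over successive recordings gives the wear-correlated feature the abstract describes: a sharp magnitude rise flags the transition to severe wear.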
Procedia PDF Downloads 339
843 Flow Field Optimization for Proton Exchange Membrane Fuel Cells
Authors: Xiao-Dong Wang, Wei-Mon Yan
Abstract:
The flow field design in the bipolar plates affects the performance of the proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, including a simplified conjugate-gradient method and a completely three-dimensional, two-phase, non-isothermal fuel cell model, to search for the optimal flow field design for a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate heat effects using energy equations for the entire cell. The model assumes that the system is steady, the inlet reactants are ideal gases, the flow is laminar, and the porous layers such as the diffusion layer, catalyst layer and PEM are isotropic. The model includes continuity, momentum and species equations for gaseous species; liquid water transport equations in the channels, gas diffusion layers, and catalyst layers; a water transport equation in the membrane; and electron and proton transport equations. The Butler-Volmer equation was used to describe the electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The basic case, with all channel heights and widths set at 1 mm, yields Pcell = 7260 W m⁻². The optimal design displays a tapered characteristic for channels 1, 3 and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell = 8894 W m⁻², an increase of about 22.5%. The reduced heights of channels 2-4 significantly increase the sub-rib convection, effectively removing liquid water and enhancing oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that is easily manufactured, without a large loss in cell performance, was also tested.
The use of a straight final channel of 0.1 mm height led to a 7.37% power loss, while the design with all channel widths at 1 mm and the optimal channel heights obtained above yields only a 1.68% loss of current density. The presence of a final, diverging channel has a greater impact on cell performance than fine adjustment of the channel widths under the simulation conditions studied here.
Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection
Procedia PDF Downloads 297
842 Semantic Search Engine Based on Query Expansion with Google Ranking and Similarity Measures
Authors: Ahmad Shahin, Fadi Chakik, Walid Moudani
Abstract:
This study elaborates a potential solution for a search engine that uses semantic technology to retrieve information and display it meaningfully. Semantic search engines are not widely used on the web, as the majority are still in beta or under construction. Current semantic search applications face many problems; the major one is analyzing and calculating the meaning of a query in order to retrieve relevant information. Another problem is the ontology-based index and its updates. Ranking results according to concept meaning and its relation to the query is another challenge. In this paper, we offer a light meta-engine (QESM) which uses Google search, and therefore Google's index, adapting the returned results by adding multi-query expansion. The mission was to find a reliable ranking algorithm that involves semantics and uses concepts and meanings to rank results. First, the engine finds synonyms of each query term entered by the user based on a lexical database. Then, query expansion is applied to generate semantically analogous sentences, produced randomly by combining the found synonyms and the original query terms. Our model suggests the use of semantic similarity measures between two sentences. In practice, we used this method to calculate the semantic similarity between each query and the description of each page's content generated by Google. The generated sentences are sent to the Google engine one by one, and the results are re-ranked all together with the adapted ranking method (QESM). Finally, our system places the Google pages with higher similarities at the top of the results. We conducted experiments with 6 different queries and observed that the ordering of most results ranked by QESM was altered relative to Google's originally generated pages. With our experimental queries, QESM frequently achieves better accuracy than Google.
In the worst cases, it behaves like Google.
Keywords: semantic search engine, Google indexing, query expansion, similarity measures
Procedia PDF Downloads 426
841 Structural Health Assessment of a Masonry Bridge Using Wireless
Authors: Nalluri Lakshmi Ramu, C. Venkat Nihit, Narayana Kumar, Dillep
Abstract:
Masonry bridges are iconic heritage transportation infrastructure throughout the world. Continuous increases in traffic loads and speeds have kept engineers in a dilemma about their structural performance and capacity. Hence, the research community urgently needs to propose an effective methodology and validate it on real bridges. The presented research aims to assess the structural health of an eighty-year-old masonry railway bridge in India using wireless accelerometer sensors. The bridge consists of 44 spans of 24.2 m each, and each pier is 13 m tall, laid on a well foundation. To calculate the dynamic characteristic properties of the bridge, ambient vibrations were recorded from the moving traffic at various speeds, and these are compared with a three-dimensional numerical model developed in finite element-based software. Conclusions about the weaker or deteriorated piers are drawn from the comparison of frequencies obtained from the experimental tests conducted on alternate spans. Masonry is a heterogeneous, anisotropic material made up of incoherent materials (such as bricks, stones, and blocks). It is most likely the earliest widely used construction material. Masonry bridges, which were typically constructed of brick and stone, are still a key feature of the world's highway and railway networks. There are 147,523 railway bridges across India, and about 15% of these bridges are built of masonry and are around 80 to 100 years old. The cultural significance of masonry bridges cannot be overstated. These bridges are considered complicated due to the presence of arches, spandrel walls, piers, foundations, and soils.
Traffic loads and vibrations, wind, rain, frost attack, high/low temperature cycles, moisture, earthquakes, river overflows, floods, scour, and soil movement under their foundations may cause material deterioration, opening of joints and ring separation in arch barrels, cracks in piers, loss of brick-stones and mortar joints, and distortion of the arch profile. A few NDT tests, such as the flat jack test, are employed to assess the homogeneity and durability of masonry structures; however, these tests have many drawbacks. A modern approach to structural health assessment of masonry structures by vibration analysis, frequencies, and stiffness properties is explored in this paper.
Keywords: masonry bridges, condition assessment, wireless sensors, numerical analysis, modal frequencies
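The core of the frequency comparison can be illustrated with a minimal sketch: picking the dominant modal frequency out of an ambient-vibration record via the peak of its amplitude spectrum. The sampling rate, record length, and 2.5 Hz mode below are invented for illustration, not the bridge's data:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency of the largest spectral peak (ignoring DC)
    in the one-sided amplitude spectrum of an ambient-vibration record."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[1:][np.argmax(spectrum[1:])]

# Synthetic accelerometer record: a 2.5 Hz mode buried in noise, 100 Hz sampling.
fs = 100.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
record = np.sin(2 * np.pi * 2.5 * t) + 0.3 * rng.standard_normal(t.size)

f_peak = dominant_frequency(record, fs)
```

A pier whose identified frequency drifts from the finite-element prediction would, by the paper's argument, flag possible deterioration.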
Procedia PDF Downloads 171
840 Determination of Mechanical Properties of Adhesives via Digital Image Correlation (DIC) Method
Authors: Murat Demir Aydin, Elanur Celebi
Abstract:
Adhesively bonded joints are used as an alternative to traditional joining methods due to the important advantages they provide. The most important consideration in the use of adhesively bonded joints is that they meet appropriate safety requirements. To ensure this condition is controlled, damage analysis of adhesively bonded joints should be performed by determining the mechanical properties of the adhesives. The literature shows that the mechanical properties of adhesives are generally determined by traditional measurement methods. In this study, the Digital Image Correlation (DIC) method, which can be an alternative to traditional measurement methods, has been used to determine the mechanical properties of adhesives. The DIC method is a new optical measurement method used to determine displacement and strain parameters appropriately and correctly. In this study, tensile tests were performed on thick adherend shear test (TAST) samples formed using DP410 liquid structural adhesive and steel materials, and on bulk tensile specimens formed using DP410 liquid structural adhesive. The displacement and strain values of the samples were determined by the DIC method, and the shear stress-strain curves of the adhesive for the TAST specimens and the tensile stress-strain curves of the bulk adhesive specimens were obtained. Alternatives such as numerical methods are required because conventional measurement methods (strain gauges, mechanical extensometers, etc.) are not sufficient for determining the strain and displacement values of a very thin adhesive layer such as that in TAST samples. As a result, the DIC method removes these requirements and easily achieves displacement measurements with sufficient accuracy.
Keywords: structural adhesive, adhesively bonded joints, digital image correlation, thick adherend shear test (TAST)
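The displacement-tracking principle behind DIC can be sketched at its simplest: match a speckle subset between the reference and deformed images by maximizing zero-normalized cross-correlation over integer-pixel shifts. Real DIC adds subpixel interpolation and deformation shape functions; everything below (the synthetic speckle, subset size, search window) is an illustrative assumption:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equally sized subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def track_subset(ref, deformed, top, left, size, search=5):
    """Find the integer-pixel displacement of a reference subset by
    maximizing ZNCC over a small search window (subpixel step omitted)."""
    subset = ref[top:top + size, left:left + size]
    best, best_dv = -2.0, (0, 0)
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            cand = deformed[top + du:top + du + size, left + dv:left + dv + size]
            if cand.shape != subset.shape:
                continue  # candidate window fell off the image
            c = zncc(subset, cand)
            if c > best:
                best, best_dv = c, (du, dv)
    return best_dv

rng = np.random.default_rng(1)
ref = rng.random((40, 40))                    # synthetic speckle pattern
deformed = np.roll(ref, (2, 3), axis=(0, 1))  # rigid shift: 2 px down, 3 px right
```

Applied over a grid of subsets, such matches yield the displacement field from which strains are differentiated.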
Procedia PDF Downloads 322
839 Deep Reinforcement Learning Approach for Trading Automation in The Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts to generate successful deals in trading financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) addresses these drawbacks of SL approaches by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e., the agent-environment interaction, as a Partially Observable Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment.
From the point of view of stock market forecasting and intelligent decision-making, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and establishes its credibility and advantages for strategic decision-making.
Keywords: stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent
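One distinctive piece of the TD3 algorithm named above is its clipped double-Q critic target: the bootstrap value is the minimum of two target critics, which curbs the overestimation bias of a single critic. A minimal sketch of just that target computation, with hypothetical reward and critic values (target-policy smoothing and the delayed actor update are omitted):

```python
import numpy as np

def td3_target(r, gamma, q1_next, q2_next, done):
    """TD3 critic target: reward plus the discounted minimum of the
    two target critics; terminal transitions bootstrap nothing."""
    return r + gamma * (1.0 - done) * np.minimum(q1_next, q2_next)

# Toy batch of two transitions with made-up critic estimates.
r = np.array([1.0, 0.5])
done = np.array([0.0, 1.0])           # the second transition is terminal
q1_next = np.array([10.0, 8.0])
q2_next = np.array([9.0, 12.0])

targets = td3_target(r, 0.99, q1_next, q2_next, done)
```

Both critics are then regressed toward these same targets, while the actor is updated less frequently against only the first critic.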
Procedia PDF Downloads 178
838 Study of Evaluation Model Based on Information System Success Model and Flow Theory Using Web-scale Discovery System
Authors: June-Jei Kuo, Yi-Chuan Hsieh
Abstract:
Because of the rapid growth of information technology, more and more libraries introduce new information retrieval systems to enhance the user experience, improve retrieval efficiency, and increase the applicability of library resources. Nevertheless, few of them have discussed usability from the users' perspective. The aims of this study are to understand the scenarios of information retrieval system utilization and to learn why users are willing to continuously use the web-scale discovery system, in order to improve the system and promote the use of university libraries. Besides questionnaires, observations, and interviews, this study employs both the Information System Success Model introduced by DeLone and McLean in 2003 and flow theory to evaluate the system quality, information quality, service quality, use, user satisfaction, flow, and continued use of the web-scale discovery system by students of National Chung Hsing University. The results are analyzed through descriptive statistics and structural equation modeling using AMOS. The results reveal that, in the web-scale discovery system, the users' evaluation of system quality, information quality, and service quality is positively related to use and satisfaction; however, service quality only affects user satisfaction. User satisfaction and flow show a significant impact on continued use. Moreover, user satisfaction has a significant impact on user flow. According to the results of this study, maintaining the stability of the information retrieval system, improving the information content quality, and enhancing the relationship between subject librarians and students are recommended for academic libraries.
Meanwhile, improving the system user interface, minimizing the number of system-level layers, strengthening data accuracy and relevance, modifying the sorting criteria of the data, and supporting an auto-correct function are required of the system provider. Finally, establishing better communication with librarians is recommended for all users.
Keywords: web-scale discovery system, discovery system, information system success model, flow theory, academic library
Procedia PDF Downloads 104
837 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor
Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro
Abstract:
Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles, and other areas. In such control systems, control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning that ultimately improve the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: PID position form (1 DOF) and PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method which allows a compromise between the theoretical optimal control and the realization that most closely matches it. The compared control systems' performance is evaluated through simulations in the Simulink platform, in which each of the system's hardware components is modeled as accurately as possible. The criteria by which the control systems are compared are reference tracking and disturbance rejection.
In this investigation, accurate tracking of the reference signal is considered particularly important for a position control system because of the frequency and suddenness with which the control signal can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected to ensure reference tracking. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, due to the nature and the advantage that state space provides for modeling MIMO systems, such controllers are expected to be easy to tune for disturbance rejection, assuming that their designer is experienced. An in-depth multi-dimensional analysis of preliminary research results indicates that the state feedback control method is more satisfactory, but the PID control method is easier to implement in most control applications.
Keywords: control, DC motor, discrete PID, discrete state feedback
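The position-form (1 DOF) discrete PID evaluated above can be sketched directly; the gains and the crude first-order motor model below are illustrative assumptions, not the article's Simulink model:

```python
class DiscretePID:
    """Position-form discrete PID:
    u[k] = Kp*e[k] + Ki*T*sum(e[0..k]) + Kd*(e[k]-e[k-1])/T."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                      # rectangular integration
        derivative = (error - self.prev_error) / self.dt      # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Close the loop around a crude first-order velocity / integrating position plant.
pid = DiscretePID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
position, velocity = 0.0, 0.0
for _ in range(3000):                       # 30 s of simulated time
    u = pid.update(1.0, position)           # track a unit step reference
    velocity += (u - velocity) * 0.05       # illustrative motor dynamics
    position += velocity * 0.01
```

The speed-form (2 DOF) variant favored in the article's results differs mainly in weighting the setpoint separately in the proportional and derivative terms.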
Procedia PDF Downloads 268
836 Numerical Investigation of Gas Leakage in RCSW-Soil Combinations
Authors: Mahmoud Y. M. Ahmed, Ahmed Konsowa, Mostafa Sami, Ayman Mosallam
Abstract:
The Fukushima nuclear accident (Japan, 2011) drew attention to the issue of gas leakage from hazardous facilities through building boundaries. The rapidly increasing investments in nuclear stations have made the ability to predict, and prevent, gas leakage a rather crucial issue both environmentally and economically. Leakage monitoring for underground facilities is complicated by the combination of a Reinforced Concrete Shear Wall (RCSW) and soil. In the framework of recent research conducted by the authors, the gas insulation capabilities of the RCSW-soil combination were investigated via lab-scale experimental work. Despite their accuracy, experimental investigations are expensive, time-consuming, hazardous, and lack flexibility. Numerically simulating the gas leakage as a fluid flow problem based on a Computational Fluid Dynamics (CFD) modeling approach can provide a potential alternative. This novel implementation of the CFD approach is the topic of the present paper. The paper discusses the aspects of modeling the gas flow through porous media that resemble the RCSW, both isolated and combined with normal soil. A commercial CFD package is utilized in simulating this fluid flow problem. A fixed RCSW layer thickness is proposed, air is taken as the leaking gas, and the soil layer is represented as clean sand with variable properties. The variable sand properties include sand layer thickness, fine fraction ratio, and moisture content. The CFD simulation results largely reproduce the experimental findings: a soil layer attached next to a cracked reinforced concrete section plays a significant role in reducing the gas leakage from that cracked section, and this role is strongly dependent on the soil specifications.
Keywords: RCSW, gas leakage, pressure decay method, hazardous underground facilities, CFD
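The paper relies on a full CFD package, but the headline effect (a soil layer backing the wall cuts the leakage) can be illustrated with a one-dimensional Darcy model of porous layers in series. All thicknesses and permeabilities below are invented for illustration, not the study's calibrated values:

```python
def leakage_flux(dp, mu, layers):
    """Steady 1-D Darcy flux through porous layers in series.

    Each layer is (thickness_m, permeability_m2); resistances add, so
    q = dp / (mu * sum(L_i / k_i))   [m/s, superficial velocity].
    """
    resistance = sum(thickness / k for thickness, k in layers)
    return dp / (mu * resistance)

MU_AIR = 1.8e-5  # Pa*s, dynamic viscosity of air

# Hypothetical properties: a cracked RCSW layer alone vs. backed by sand.
rcsw = (0.3, 1e-14)                                   # 0.3 m wall
wall_only = leakage_flux(1000.0, MU_AIR, [rcsw])
wall_and_sand = leakage_flux(1000.0, MU_AIR, [rcsw, (0.5, 1e-12)])
```

Because the layer resistances add, any soil backing strictly lowers the flux, and a finer or moister (less permeable) sand lowers it more, consistent with the reported dependence on soil specifications.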
Procedia PDF Downloads 419
835 Prospectivity Mapping of Orogenic Lode Gold Deposits Using Fuzzy Models: A Case Study of Saqqez Area, Northwestern Iran
Authors: Fanous Mohammadi, Majid H. Tangestani, Mohammad H. Tayebi
Abstract:
This research aims to evaluate and compare Geographical Information Systems (GIS)-based fuzzy models for producing orogenic gold prospectivity maps in the Saqqez area, NW of Iran. Gold occurrences are hosted in sericite schist and mafic to felsic meta-volcanic rocks in this area and are associated with hydrothermal alterations that extend over ductile to brittle shear zones. The predictor maps, which represent the pre- (source/trigger/pathway), syn- (deposition/physical/chemical traps) and post-mineralization (preservation/distribution of indicator minerals) subsystems for gold mineralization, were generated using empirical understandings of the specifications of known orogenic gold deposits and gold mineral systems and were then pre-processed and integrated to produce mineral prospectivity maps. Five fuzzy logic operators, including AND, OR, Fuzzy Algebraic Product (FAP), Fuzzy Algebraic Sum (FAS), and GAMMA, were applied to the predictor maps in order to find the most efficient prediction model. Prediction-Area (P-A) plots and field observations were used to assess and evaluate the accuracy of prediction models. Mineral prospectivity maps generated by AND, OR, FAP, and FAS operators were inaccurate and, therefore, unable to pinpoint the exact location of discovered gold occurrences. The GAMMA operator, on the other hand, produced acceptable results and identified potentially economic target sites. The P-A plot revealed that 68 percent of known orogenic gold deposits are found in high and very high potential regions. The GAMMA operator was shown to be useful in predicting and defining cost-effective target sites for orogenic gold deposits, as well as optimizing mineral deposit exploitation.
Keywords: mineral prospectivity mapping, fuzzy logic, GIS, orogenic gold deposit, Saqqez, Iran
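The five fuzzy overlay operators compared in this abstract have standard closed forms, which make the result easy to see: FAP is decreasive, FAS is increasive, and GAMMA interpolates between them. A minimal sketch (the membership values and the gamma exponent are illustrative, not the study's calibrated parameters):

```python
from math import prod  # Python 3.8+

def fuzzy_and(x):
    return min(x)

def fuzzy_or(x):
    return max(x)

def fuzzy_fap(x):
    """Fuzzy algebraic product: decreasive (output <= every input)."""
    return prod(x)

def fuzzy_fas(x):
    """Fuzzy algebraic sum: increasive (output >= every input)."""
    return 1 - prod(1 - v for v in x)

def fuzzy_gamma(x, gamma=0.9):
    """GAMMA operator: FAS**gamma * FAP**(1-gamma), a tunable compromise
    between the increasive FAS and the decreasive FAP."""
    return fuzzy_fas(x) ** gamma * fuzzy_fap(x) ** (1 - gamma)

# Fuzzified predictor-map memberships at one grid cell (illustrative numbers).
memberships = [0.8, 0.6, 0.9]
```

Applied cell by cell across the predictor maps, GAMMA avoids both the washed-out highs of FAS and the near-zero products of FAP, which is consistent with it being the only operator here that produced acceptable target maps.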
Procedia PDF Downloads 124
834 Plasmodium knowlesi Zoonotic Malaria: An Emerging Challenge of Health Problems in Thailand
Authors: Surachart Koyadun
Abstract:
Currently, Plasmodium knowlesi malaria has spread to almost all countries in Southeast Asia. This research aimed to 1) describe the epidemiology of P. knowlesi malaria, 2) examine the clinical symptoms of P. knowlesi malaria patients, 3) analyze the ecology, animal reservoirs, and entomology of P. knowlesi malaria, and 4) summarize the diagnosis, blood parasites, and treatment of P. knowlesi malaria. The study design was a case report combined with retrospective descriptive survey research. A total of 34 study subjects were patients with a confirmed diagnosis of P. knowlesi malaria who received treatment at hospitals and vector-borne disease control units in Songkhla Province during 2021-2022. The epidemiological results revealed that the majority of the subjects were male, had a history of staying overnight in the forest before becoming sick, acquired the infection in the forest, and mostly became sick during the summer. The average length of time from the onset of illness until receiving a blood test was 3.8 days. The average length of hospital stay was 4 days. Patients were treated with chloroquine phosphate, primaquine, artesunate, quinine, and dihydroartemisinin-piperaquine (40 mg DHA-320 mg PPQ). One death was seen among the 34 P. knowlesi malaria patients. All remaining patients recovered and responded to treatment. All symptoms improved after drug administration. No treatment failures were found. Analyses of the ecological, zoonotic, and entomological data revealed an association between infected patients and forested areas with monkey reservoirs and mosquito vectors.
This study recommends that the Polymerase Chain Reaction (PCR) method be used in conjunction with the thick/thin blood film test and the blood parasite (parasitaemia) test to establish the specificity of the infection and the accuracy of diagnosis, leading to timely treatment and effective disease control.
Keywords: human malaria, Plasmodium knowlesi, zoonotic disease, diagnosis and treatment, epidemiology, ecology
Procedia PDF Downloads 28
833 A Dynamic Cardiac Single Photon Emission Computed Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve
Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick
Abstract:
Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique that is based on time-varying information of radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. A standard rest and a standard stress radionuclide dose of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. The acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and the myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve (CFR) was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated with dynamic PET. Results: The range of territorial MBF in the LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF estimated with PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001), but the corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for angiographically abnormal than for normal territories (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, p < 0.001). Conclusions: The visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach such as measuring CFR using dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.
Keywords: dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-tetrofosmin, coronary flow reserve
Procedia PDF Downloads 152
832 Data Mining in Healthcare for Predictive Analytics
Authors: Ruzanna Muradyan
Abstract:
Medical data mining is a crucial field in contemporary healthcare that offers cutting-edge tactics with enormous potential to transform patient care. This abstract examines how sophisticated data mining techniques could transform the healthcare industry, with a special focus on how they might improve patient outcomes. Healthcare data repositories have dynamically evolved, producing a rich tapestry of different, multi-dimensional information that includes genetic profiles, lifestyle markers, electronic health records, and more. By utilizing data mining techniques inside this vast library, a variety of prospects for precision medicine, predictive analytics, and insight production become visible. Predictive modeling for illness prediction, risk stratification, and therapy efficacy evaluations are important points of focus. Healthcare providers may use this abundance of data to tailor treatment plans, identify high-risk patient populations, and forecast disease trajectories by applying machine learning algorithms and predictive analytics. Better patient outcomes, more efficient use of resources, and early treatments are made possible by this proactive strategy. Furthermore, data mining techniques act as catalysts to reveal complex relationships between apparently unrelated data pieces, providing enhanced insights into the cause of disease, genetic susceptibilities, and environmental factors. Healthcare practitioners can get practical insights that guide disease prevention, customized patient counseling, and focused therapies by analyzing these associations. The abstract explores the problems and ethical issues that come with using data mining techniques in the healthcare industry. In order to properly use these approaches, it is essential to find a balance between data privacy, security issues, and the interpretability of complex models. 
Finally, this abstract demonstrates the revolutionary power of modern data mining methodologies in transforming the healthcare sector. Healthcare practitioners and researchers can uncover unique insights, enhance clinical decision-making, and ultimately elevate patient care to unprecedented levels of precision and efficacy by employing cutting-edge methodologies.
Keywords: data mining, healthcare, patient care, predictive analytics, precision medicine, electronic health records, machine learning, predictive modeling, disease prognosis, risk stratification, treatment efficacy, genetic profiles, precision health
Procedia PDF Downloads 63
831 Determinants of Utilization of Information and Communication Technology by Lecturers at Kenya Medical Training College, Nairobi
Authors: Agnes Anyango Andollo, Jane Achieng Achola
Abstract:
The use of Information and Communication Technologies (ICTs) has become one of the driving forces in the facilitation of learning in most colleges. The ability to effectively harness the technology varies from college to college. The study objective was to determine the lecturers' personal attributes, institutional attributes, and policies that influence the utilization of ICT by lecturers. A cross-sectional survey design was employed to empirically investigate the extent to which lecturers' personal and institutional attributes and policies influence the utilization of ICT to facilitate learning. The target population of the study was 295 lecturers who facilitate learning at KMTC-Nairobi. A structured self-administered questionnaire was given to the lecturers. Quantitative data were scrutinized for completeness, accuracy, and uniformity, then coded. Data were analyzed in frequencies and percentages using the Statistical Package for the Social Sciences (SPSS) version 19. A total of 155 completed questionnaires were obtained from the respondents and subjected to analysis. The study found that 93 (60%) of the respondents were male while 62 (40%) were female. Individuals' educational level, age, gender, and educational experience had the greatest impact on the use of ICT. Lecturers' own beliefs, values, ideas, and thinking had a moderate impact on the use of ICT. Institutional support, through the provision of resources for ICT-related training such as internet, computers, laptops, and projectors, had a moderate impact (p = 0.049 at the 5% significance level) on the use of ICT. The study concluded that institutional attributes and ICT policy, including a mandatory policy on the use of ICT by lecturers to facilitate learning, were key to the utilization of ICT by lecturers at KMTC Nairobi.
It is recommended that policies be put in place to provide technical support to lecturers who encounter problems while utilizing ICT, and that a mechanism be established to make the use of ICT in teaching and learning mandatory.
Keywords: policy, computer education, medical training institutions, ICTs
Procedia PDF Downloads 359
830 Optimizing The Residential Design Process Using Automated Technologies
Authors: Martin Georgiev, Milena Nanova, Damyan Damov
Abstract:
Architects, engineers, and developers need to analyse and integrate a wide spectrum of data in different formats if they want to produce viable residential developments. Usually, this data comes from a number of different sources and is not well structured. The main objective of this research project is to provide parametric tools, working with real geodesic data, that can generate residential solutions. Various codes, regulations, and design constraints are described by variables and prioritized. In this way, we establish a common workflow for architects, geodesists, and other professionals involved in the building and investment process. This collaborative medium ensures that the generated design variants conform to various requirements, contributing to a more streamlined and informed decision-making process. The quantification of distinctive characteristics inherent to typical residential structures allows a systematic evaluation of the generated variants, focusing on factors crucial to designers, such as daylight simulation, circulation analysis, space utilization, view orientation, etc. Integrating real geodesic data offers a holistic view of the built environment, enhancing the accuracy and relevance of the design solutions. The use of generative algorithms and parametric models offers high productivity and flexibility in the design variants, and can be integrated into more conventional CAD and BIM workflows. Experts from different specialties can join their efforts, sharing a common digital workspace. In conclusion, our research demonstrates that a generative parametric approach based on real geodesic data and collaborative decision-making can be introduced in the early phases of the design process.
This gives the designers powerful tools to explore diverse design possibilities, significantly improving the qualities of the building investment during its entire lifecycle.
Keywords: architectural design, residential buildings, urban development, geodesic data, generative design, parametric models, workflow optimization
Procedia PDF Downloads 55
829 Dosimetric Dependence on the Collimator Angle in Prostate Volumetric Modulated Arc Therapy
Authors: Muhammad Isa Khan, Jalil Ur Rehman, Muhammad Afzal Khan Rao, James Chow
Abstract:
Purpose: This study investigates the dose-volume variations in the planning target volume (PTV) and organs-at-risk (OARs) using different collimator angles for smart arc prostate volumetric modulated arc therapy (VMAT). Awareness of the collimator angle for PTV coverage and OAR sparing is essential for the planner, because the optimization contains numerous treatment constraints, producing a complex, unstable, and computationally challenging problem in its search for an optimal plan in a rational time. Materials and Methods: Single-arc VMAT plans with the collimator angle varied systematically (0°-90°) were performed on a Harold phantom, and a new treatment plan was optimized for each collimator angle. We analyzed the conformity index (CI), homogeneity index (HI), gradient index (GI), monitor units (MUs), dose-volume histogram, and mean and maximum doses to the PTV. We also explored the OARs (e.g., bladder, rectum, and femoral heads) via the dose-volume criteria in the treatment plan (e.g., D30%, D50%, V30Gy, and V38Gy of the bladder and rectum; D5%, V14Gy, and V22Gy of the femoral heads), the dose-volume histogram, and the mean and maximum doses for smart arc VMAT at different collimator angles. Results: No significant difference was found in VMAT optimization at any of the studied collimator angles. However, if 0.5% accuracy is concerned, a collimator angle of 45° provides a higher CI and lower HI, and a collimator angle of 15° also provides lower HI values, like 45°. A collimator angle of 75° was established as good for rectum and right femur sparing, while angles of 90° and 30° were found good for rectum and left femur sparing, respectively. The PTV dose coverage statistics for each plan are comparatively independent of the collimator angle.
Conclusion: This study gives the planner the freedom to choose any collimator angle from 0° to 90° for PTV coverage, and to select a suitable collimator angle to spare the OARs.
Keywords: VMAT, dose-volume histogram, collimator angle, organs-at-risk
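The CI and HI compared across collimator angles have several definitions in the literature; the abstract does not state which were used. A minimal sketch using two common choices, the ICRU 83 homogeneity index and the RTOG conformity index, computed from a toy dose array and target mask (all numbers illustrative):

```python
import numpy as np

def homogeneity_index(target_doses):
    """ICRU 83 HI: (D2% - D98%) / D50% over the target; lower is more homogeneous.
    D2% (near-max) is the dose received by the hottest 2% of target voxels."""
    d2, d50, d98 = np.percentile(target_doses, [98, 50, 2])
    return (d2 - d98) / d50

def conformity_index(dose, target_mask, prescription):
    """RTOG CI: volume enclosed by the prescription isodose divided by the
    target volume; 1.0 is ideal, >1 indicates spill into normal tissue."""
    v_ri = np.count_nonzero(dose >= prescription)
    return v_ri / np.count_nonzero(target_mask)

# Toy 1-D voxel model: 40 target voxels at 78 Gy, 10 hot non-target voxels.
dose = np.full(100, 10.0)
dose[:50] = 78.0
target = np.zeros(100, dtype=bool)
target[:40] = True

hi = homogeneity_index(dose[target])
ci = conformity_index(dose, target, 78.0)
```

With perfectly uniform target dose the HI is 0, while the prescription isodose spilling beyond the target drives the CI above 1, which is the trade-off the collimator-angle comparison probes.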
Procedia PDF Downloads 512
828 Magnetic Navigation in Underwater Networks
Authors: Kumar Divyendra
Abstract:
Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring, marine wildlife management, etc. A typical UWSN system consists of a set of sensors deployed randomly underwater which communicate with each other using acoustic links. RF communication does not work underwater, and GPS is not available underwater either. Additionally, Autonomous Underwater Vehicles (AUVs) are deployed to collect data from some special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward it to the AUVs using optical links when an AUV is in range. This helps reduce the number of hops covered by data packets and helps conserve energy. We consider the three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater. They attach themselves to the surface using a rod and can only move upwards or downwards using a pump-and-bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation. An AUV intending to move closer to a node with given coordinates moves hop by hop through nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple different approaches, such as Inertial Navigation Systems (INS), Doppler Velocity Logs (DVL), computer vision-based navigation, etc., have been proposed. These systems have their own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that makes use of the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume under the UWSN.
The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field, which provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates. We make use of this model within our work. We combine this with the hop-by-hop movement described earlier so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to prove the effectiveness of our model with respect to other methods described in the literature.Keywords: clustering, deep learning, network backbone, parallel computing
Procedia PDF Downloads 99
827 Dynamic Control Theory: A Behavioral Modeling Approach to Demand Forecasting amongst Office Workers Engaged in a Competition on Energy Shifting
Authors: Akaash Tawade, Manan Khattar, Lucas Spangher, Costas J. Spanos
Abstract:
Many grids are increasing the share of renewable energy in their generation mix, which makes generation less controllable. Buildings, which consume nearly 33% of all energy, are a key target for demand response: i.e., mechanisms for demand to meet supply. Understanding the behavior of office workers is a start towards developing demand response for one sector of building technology. The literature notes that dynamic computational modeling can be predictive of individual action, especially given that occupant behavior is traditionally abstracted away from demand forecasting. Recent work founded on Social Cognitive Theory (SCT) has provided a promising conceptual basis for modeling behavior, personal states, and environment using control-theoretic principles. Here, an adapted linear dynamical system of latent states and exogenous inputs is proposed to simulate energy demand amongst office workers engaged in a social energy-shifting game. The energy-shifting competition is implemented in an office in Singapore that is connected to a minigrid of buildings with a consistent 'price signal.' This signal is translated into a 'points signal' by a reinforcement learning (RL) algorithm to influence participant energy use. The dynamic model functions at the intersection of the points signal, baseline energy consumption trends, and SCT behavioral inputs to simulate future outcomes. This study endeavors to analyze how the dynamic model trains an RL agent and, subsequently, the degree of accuracy to which load deferability can be simulated. The results offer a generalizable behavioral model for energy competitions that provides a framework for further research on transfer learning for RL and, more broadly, transactive control.Keywords: energy demand forecasting, social cognitive behavioral modeling, social game, transfer learning
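A linear dynamical system of latent states with exogenous inputs, as described above, takes the form x[t+1] = A x[t] + B u[t] with readout y[t] = C x[t], where x holds latent behavioral states, u stacks the exogenous inputs (e.g., the points signal and baseline trend), and y is the simulated demand. A minimal simulation sketch; the matrix names, shapes, and values are our illustrative assumptions, not the study's fitted model:

```python
import numpy as np

def simulate_demand(A, B, C, x0, u, steps):
    """Roll out a linear dynamical system:
        x[t+1] = A x[t] + B u[t]   (latent behavioral states)
        y[t]   = C x[t]            (observed energy demand)
    A: state transition, B: exogenous-input map (points signal,
    baseline trend), C: readout from latent state to demand."""
    x = np.asarray(x0, dtype=float)
    ys = []
    for t in range(steps):
        ys.append(C @ x)      # demand observed at time t
        x = A @ x + B @ u[t]  # latent state update
    return np.array(ys)
```

An RL agent choosing the points signal u[t] can then be evaluated against the simulated demand trajectory y.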
Procedia PDF Downloads 109
826 Chemical Life Cycle Alternative Assessment as a Green Chemical Substitution Framework: A Feasibility Study
Authors: Sami Ayad, Mengshan Lee
Abstract:
The Sustainable Development Goals (SDGs) were designed to be the best possible blueprint to achieve peace, prosperity, and, overall, a better and more sustainable future for the Earth and all its people, and such a blueprint is needed more than ever. The SDGs face many hurdles that could prevent them from becoming a reality; one such hurdle, arguably, is the chemical pollution and unintended chemical impacts generated through the production of the various goods and resources that we consume. Chemical Alternatives Assessment has proven to be a viable solution for chemical pollution management in terms of filtering out hazardous chemicals in favor of greener alternatives. However, current substitution practice lacks crucial quantitative datasets (exposures and life cycle impacts) to ensure no unintended trade-offs occur in the substitution process. A Chemical Life Cycle Alternative Assessment (CLiCAA) framework is proposed as a reliable and replicable alternative to Life Cycle Based Alternative Assessment (LCAA), as it integrates chemical molecular structure analysis and the Chemical Life Cycle Collaborative (CLiCC) web-based tool to fill in the data gaps that the former frameworks suffer from. The CLiCAA framework consists of four filtering layers, the first two being mandatory and the final two being optional assessment and data-extrapolation steps. Each layer covers relevant impact categories for each chemical, ranging from human to environmental impacts, which are assessed and aggregated into unique scores for overall comparable results, even with little to no data. 
A feasibility study will demonstrate the efficiency and accuracy of CLiCAA while bridging both cancer potency and exposure limit data, with the aim of providing the necessary categorical impact information to every firm possible, especially those disadvantaged in terms of research and resource management.Keywords: chemical alternative assessment, LCA, LCAA, CLiCC, CLiCAA, chemical substitution framework, cancer potency data, chemical molecular structure analysis
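The layered aggregation idea can be sketched as a weighted scoring function that tolerates missing categories, a stand-in for the framework's optional assessment and data-extrapolation layers. The category names, weights, and normalization below are purely illustrative assumptions, not the CLiCAA scoring rules:

```python
def clicaa_score(chemical, weights):
    """Aggregate a chemical's per-category impact values (lower = greener)
    into one comparable score. Categories with missing data are skipped
    and the score is renormalized over the weights actually used, so
    chemicals with data gaps remain comparable."""
    total, used = 0.0, 0.0
    for category, weight in weights.items():
        value = chemical.get(category)
        if value is not None:
            total += weight * value
            used += weight
    if used == 0:
        raise ValueError("no impact data available for this chemical")
    return total / used
```

Two candidate substitutes can then be ranked by score even when one of them lacks, say, exposure data.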
Procedia PDF Downloads 92
825 Optimizing Detection Methods for THz Bio-imaging Applications
Authors: C. Bolakis, I. S. Karanasiou, D. Grbovic, G. Karunasiri, N. Uzunoglu
Abstract:
A new approach for efficient detection of THz radiation in biomedical imaging applications is proposed. A double-layered absorber, consisting of a 32 nm thick aluminum (Al) metallic layer on a glass medium (SiO2) of 1 mm thickness, was fabricated and used to design a fine-tuned absorber through a theoretical and finite element modeling process. The results indicate that the proposed low-cost, double-layered absorber can be tuned via the metal layer's sheet resistance and the thickness of various glass media, taking advantage of the varying absorption of thin metal films in the desired THz domain (6 to 10 THz). It was found that the composite absorber could absorb up to 86% (exceeding the 50% previously shown to be the highest achievable with a single thin metal layer) and reflect less than 1% of the incident THz power. This approach will enable monitoring of the transmission coefficient (the THz transmission 'fingerprint') of the biosample with high accuracy, while also making the proposed double-layered absorber a good candidate for a microbolometer pixel's active element. Based on these promising results, a more sophisticated and effective double-layered absorber is under development. The glass medium has been substituted with diluted poly-Si, with twofold results: an absorption factor of 96% was reached, and high TCR (temperature coefficient of resistance) properties were acquired. In addition, these results and properties were generalized over the active frequency spectrum. Specifically, through the development of a theoretical equation taking as input any arbitrary frequency in the IR spectrum (0.3 to 405.4 THz) and giving as output the appropriate thickness of the poly-Si medium, the double-layered absorber retains the ability to absorb 96% and reflect less than 1% of the incident power. 
As a result, through this post-optimization process and spread-spectrum frequency adjustment, the microbolometer detector's efficiency could be further improved.Keywords: bio-imaging, fine-tuned absorber, fingerprint, microbolometer
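The 50% single-layer ceiling cited above follows from the standard thin-film (Tinkham) boundary conditions: a free-standing conducting film absorbs at most half the incident power, peaking when its sheet resistance equals half the free-space impedance, which is what a substrate-backed double layer circumvents. A short sketch of this textbook calculation (not the authors' finite-element model; `n_sub` is the substrate's refractive index):

```python
Z0 = 376.73  # free-space impedance, ohms

def thin_film_absorption(Rs, n_sub=1.0):
    """Absorptance of a thin conducting film of sheet resistance Rs
    at the interface between vacuum and a medium of index n_sub,
    using thin-film (Tinkham) boundary conditions."""
    g = Z0 / Rs                      # normalized sheet conductance
    denom = 1.0 + n_sub + g
    r = (1.0 - n_sub - g) / denom    # amplitude reflection coefficient
    t = 2.0 / denom                  # amplitude transmission coefficient
    R = r * r                        # reflected power fraction
    T = n_sub * t * t                # transmitted power fraction
    return 1.0 - R - T               # absorbed power fraction
```

For n_sub = 1 this reduces to A = 4g/(2+g)^2, which is maximized at g = 2, i.e. Rs = Z0/2 ≈ 188 Ω/sq, giving exactly 50%.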
Procedia PDF Downloads 348