Search results for: one side class algorithm
1226 Effects of Fe Addition and Process Parameters on the Wear and Corrosion Characteristics of Icosahedral Al-Cu-Fe Coatings on Ti-6Al-4V Alloy
Authors: Olawale S. Fatoba, Stephen A. Akinlabi, Esther T. Akinlabi, Rezvan Gharehbaghi
Abstract:
The performance requirements of material surfaces under wear and corrosion environments cannot be fulfilled by conventional surface modifications and coatings. Therefore, different industrial sectors need an alternative technique for enhanced surface properties. Titanium and its alloys possess poor tribological properties, which limit their use in certain industries. This paper focuses on the effect of hybrid Al-Cu-Fe coatings on a grade five titanium alloy using the laser metal deposition (LMD) process. Icosahedral Al-Cu-Fe quasicrystals are a relatively new class of materials which exhibit an unusual atomic structure and useful physical and chemical properties. A 3 kW continuous-wave ytterbium laser system (YLS) attached to a KUKA robot, which controls the movement of the cladding process, was utilized for the fabrication of the coatings. The titanium cladded surfaces were investigated for their hardness, corrosion and tribological behaviour at different laser processing conditions. The samples were cut into corrosion coupons, immersed in 3.65% NaCl solution at 28 °C, and studied using Electrochemical Impedance Spectroscopy (EIS) and Linear Polarization (LP) techniques. The cross-sectional view of the samples was analysed. It was found that the geometrical properties of the deposits, such as width, height and the Heat Affected Zone (HAZ), of each sample increased remarkably with increasing laser power due to the laser-material interaction. It was observed that higher amounts of aluminium and titanium were present in the formation of the composite. The indentation testing reveals that, for both scanning speeds of 0.8 m/min and 1 m/min, the mean hardness value decreases with increasing laser power. The low coefficient of friction, excellent wear resistance and high microhardness were attributed to the formation of hard intermetallic compounds (TiCu, Ti2Cu, Ti3Al, Al3Ti) produced through the in situ metallurgical reactions during the LMD process. The load-bearing capability of the substrate was improved due to the excellent wear resistance of the coatings. The cladded layer showed a uniform, crack-free surface due to optimized laser process parameters, which led to the refinement of the coatings.
Keywords: Al-Cu-Fe coating, corrosion, intermetallics, laser metal deposition, Ti-6Al-4V alloy, wear resistance
Procedia PDF Downloads 178
1225 Open Source Cloud Managed Enterprise WiFi
Authors: James Skon, Irina Beshentseva, Michelle Polak
Abstract:
WiFi solutions come in two major classes. The first is Small Office/Home Office (SOHO) WiFi, characterized by inexpensive WiFi routers with one or two service set identifiers (SSIDs) and a single shared passphrase. These access points provide no significant user management or monitoring, and no aggregation of monitoring and control for multiple routers. The other class is managed enterprise WiFi solutions, which involve expensive Access Points (APs) along with (also costly) local or cloud-based management components. These solutions typically provide portal-based login, per-user virtual local area networks (VLANs), and sophisticated monitoring and control across a large group of APs. The cost of deploying and managing such managed enterprise solutions is typically about tenfold that of inexpensive consumer APs. Low-revenue organizations, such as schools, non-profits, non-governmental organizations (NGOs), small businesses, and even homes, cannot easily afford quality enterprise WiFi solutions, though they may need to provide quality WiFi access to their population. Relying on the available lower-cost WiFi solutions can significantly reduce their ability to provide reliable, secure network access. This project explored and created a new approach for providing secure managed enterprise WiFi based on low-cost hardware combined with both new and existing (but modified) open-source software. The solution provides a cloud-based management interface which allows organizations to aggregate the configuration and management of small, medium and large WiFi solutions. It utilizes a novel approach for user management, giving each user a unique passphrase. It provides unlimited SSIDs across an unlimited number of WiFi zones, and the ability to place each user (and all their devices) on their own VLAN. With proper configuration, it can even provide user-local services. It also allows for users' usage and quality of service to be monitored, and for users to be added, enabled, and disabled at will. As noted above, the ultimate goal is to free organizations with limited resources from the expense of a commercial enterprise WiFi solution, while providing them with most of the qualities of such a more expensive managed solution at a fraction of the cost.
Keywords: wifi, enterprise, cloud, managed
Procedia PDF Downloads 97
1224 Analysis of Urban Rail Transit Station's Accessibility Reliability: A Case Study of Hangzhou Metro, China
Authors: Jin-Qu Chen, Jie Liu, Yong Yin, Zi-Qi Ju, Yu-Yao Wu
Abstract:
Increases in travel fare and station failures have a huge impact on passengers' travel. The accessibility reliability of Urban Rail Transit (URT) stations under increasing travel fare and station failure is analyzed in this paper. Firstly, the passengers' travel paths are reconstructed based on stochastic user equilibrium and Automatic Fare Collection (AFC) data. Secondly, station importance is calculated by combining the LeaderRank algorithm and the Ratio of Station Affected Passenger Volume (RSAPV), and station accessibility evaluation indicators are then proposed based on the analysis of passengers' travel characteristics. Thirdly, station accessibility under different scenarios is measured, and the rate of accessibility change is proposed as the station accessibility reliability indicator. Finally, the accessibility of Hangzhou metro stations is analyzed by the formulated models. The result shows that Jinjiang station and Liangzhu station are the most important and the most convenient stations in the Hangzhou metro, respectively. Station failure, and the combination of a travel fare increase with station failure, have a huge impact on station accessibility, whereas a travel fare increase alone does not. Stations on Hangzhou metro Line 1 have relatively poor accessibility reliability, and Fengqi Road station's accessibility reliability is the weakest. For the Hangzhou metro operations department, constructing new metro lines around Line 1 and preferentially protecting Line 1's stations can effectively improve the accessibility reliability of the Hangzhou metro.
Keywords: automatic fare collection data, AFC, station’s accessibility reliability, stochastic user equilibrium, urban rail transit, URT
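The LeaderRank step used for station importance can be illustrated with a short sketch. The following is a minimal Python implementation of the standard LeaderRank algorithm on a toy undirected station graph; the adjacency matrix, network size, and convergence settings are illustrative assumptions and are not data from the study.

```python
import numpy as np

def leaderrank(adj, tol=1e-8, max_iter=1000):
    """LeaderRank on a 0/1 adjacency matrix: add a ground node linked
    bidirectionally to every node, iterate a uniform random walk, then
    redistribute the ground node's score equally among ordinary nodes."""
    n = adj.shape[0]
    ext = np.zeros((n + 1, n + 1))
    ext[:n, :n] = adj
    ext[n, :n] = 1.0          # ground node -> every station
    ext[:n, n] = 1.0          # every station -> ground node
    out_deg = ext.sum(axis=1)
    scores = np.ones(n + 1)
    scores[n] = 0.0           # ground node starts with zero score
    for _ in range(max_iter):
        new = ext.T @ (scores / out_deg)   # each node spreads its score over its links
        if np.abs(new - scores).sum() < tol:
            scores = new
            break
        scores = new
    return scores[:n] + scores[n] / n      # spread ground score evenly

# Toy example: 5 stations on a small line with one transfer link (illustrative).
adj = np.array([[0, 1, 0, 0, 0],
                [1, 0, 1, 1, 0],
                [0, 1, 0, 1, 0],
                [0, 1, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
print(leaderrank(adj))  # higher score = more important station
```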
Procedia PDF Downloads 135
1223 A Picture is worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels
Authors: Tal Remez, Or Litany, Alex Bronstein
Abstract:
The pursuit of smaller pixel sizes at ever increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has been recently embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high dynamic range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient hardware-friendly real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.Keywords: binary pixels, maximum likelihood, neural networks, sparse coding
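As a rough illustration of combining a data-fitting term with a sparse synthesis prior, the sketch below runs plain ISTA (iterative shrinkage-thresholding) on a linear model with a quadratic data term. This is a simplification: the paper's actual formulation uses a maximum-likelihood term for binary measurements and a hardware-friendly approximation of the inverse operator, neither of which is reproduced here; the sensing operator, dictionary, and step size are illustrative assumptions.

```python
import numpy as np

def ista(A, D, y, lam, step, n_iter=500):
    """Minimize 0.5*||A @ D @ z - y||^2 + lam*||z||_1 over sparse codes z,
    then return the synthesized signal x = D @ z."""
    z = np.zeros(D.shape[1])
    M = A @ D
    for _ in range(n_iter):
        grad = M.T @ (M @ z - y)                                   # quadratic data term
        z = z - step * grad                                        # gradient step
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return D @ z

# Illustrative sizes: 64 measurements of a 32-sample sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 32)) / 8.0    # stand-in sensing operator
D = np.eye(32)                             # trivial synthesis dictionary
x_true = np.zeros(32); x_true[[3, 17, 25]] = [1.0, -0.5, 0.8]
y = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = ista(A, D, y, lam=0.05, step=0.05)
print(np.round(x_hat, 2))
```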
Procedia PDF Downloads 201
1222 Examining Geometric Thinking Behaviours of Undergraduates in Online Geometry Course
Authors: Peter Akayuure
Abstract:
Geometry is considered an important strand in mathematics due to its wide-ranging utilitarian value and because it serves as a building block for understanding other aspects of undergraduate mathematics, including algebra and calculus. Matters regarding students’ geometric thinking have therefore long been pursued by mathematics researchers and educators globally via different theoretical lenses, curriculum reform efforts, and innovative instructional practices. However, so far, studies remain inconclusive about the instructional platforms that effectively promote geometric thinking. At the University of Education, Winneba, an undergraduate geometry course was designed and delivered on the UEW Learning Management System (LMS) using the Moodle platform. This study utilizes van Hiele’s theoretical lens to examine the entry and exit geometric thinking behaviours of prospective teachers who took the undergraduate geometry course on the LMS platform. The study was a descriptive survey that involved an intact class of 280 first-year students enrolled to pursue a bachelor's in mathematics education at the university. The van Hiele Geometric Thinking Test was used to assess participants’ entry and exit behaviours, while semi-structured interviews were used to obtain data for triangulation. Data were analysed descriptively and displayed in tables and charts. An independent t-test was used to test for significant differences in geometric thinking behaviours between those who entered the university with a diploma certificate and those who entered with a senior high school certificate. The results show that on entry, more than 70% of the prospective teachers operated within the visualization level of van Hiele’s geometric thinking. Less than 20% reached the analysis and abstraction levels, and no participant reached the deduction and rigor levels. On exit, participants’ geometric thinking levels increased markedly across levels, but the difference from entry was not significant and might have occurred by chance. The geometric thinking behaviours of those enrolled with diploma certificates did not differ significantly from those enrolled directly from senior high school. The study recommends that the design principles and delivery of the undergraduate geometry course via the LMS should be structured and tackled using van Hiele’s geometric thinking levels to serve as a means of bridging the existing learning gaps of undergraduate students.
Keywords: geometric thinking, van Hiele’s, UEW learning management system, undergraduate geometry
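The independent-samples comparison mentioned above can be reproduced in a few lines. Here is a minimal sketch using SciPy with made-up score vectors standing in for the diploma-entry and senior-high-entry groups; the numbers are illustrative, not the study's data.

```python
from scipy import stats

# Hypothetical van Hiele level scores for the two entry groups (illustrative only).
diploma_entry = [2, 1, 2, 3, 1, 2, 2, 1, 3, 2]
senior_high_entry = [1, 2, 1, 2, 2, 1, 3, 2, 1, 2]

t_stat, p_value = stats.ttest_ind(diploma_entry, senior_high_entry, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 would mirror the finding of no significant difference.
```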
Procedia PDF Downloads 128
1221 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer
Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved
Abstract:
Background: Skin cancer is now a pressing concern in the field of medical science, and the spread of such lesions is drastically affecting the health and well-being of the global village. Methods: The extracted image of the skin tumour cannot be used directly for diagnosis. The stored image contains irregularities, such as around the centre. The approach first locates the foreground of the extracted skin image. Image partitioning (segmentation) models are presented to sort out the disturbance in the picture. Results: After partitioning is completed, feature extraction is performed using a genetic algorithm (GA), and finally, classification is performed between the training and test data to evaluate images at a large scale, which helps doctors make the right prediction. To improve the existing system, we have set our objectives with an analysis. The efficiency of the natural selection process and of histogram enhancement is essential in that respect. To reduce the false-positive rate, the GA is applied with a focus on accuracy. Conclusions: The objective of this work is to improve effectiveness, and the GA accomplishes this task well, bringing down the false-positive rate. The closing portion of the paper touches on the combination of deep learning and medical image processing, which provides superior accuracy. This kind of proportional handling enables reusability without errors.
Keywords: computer-aided system, detection, image segmentation, morphology
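To make the genetic-algorithm step more concrete, here is a minimal, self-contained sketch of a GA selecting a feature subset by maximizing a simple fitness score. The fitness function, feature matrix, and GA settings are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical lesion features (rows = images, cols = candidate features).
X = rng.standard_normal((200, 12))
y = (X[:, 2] - 0.8 * X[:, 7] + 0.3 * rng.standard_normal(200) > 0).astype(int)

def fitness(mask):
    """Score a feature subset via correlation of a naive linear score with labels."""
    if mask.sum() == 0:
        return -1.0
    score = X[:, mask.astype(bool)].sum(axis=1)
    return abs(np.corrcoef(score, y)[0, 1]) - 0.01 * mask.sum()  # penalize subset size

pop = rng.integers(0, 2, size=(30, X.shape[1]))          # random initial population
for gen in range(40):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[::-1][:10]]             # selection: keep the best 10
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])                 # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05              # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```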
Procedia PDF Downloads 150
1220 Developing a Discourse Community of Doctoral Students in a Multicultural Context
Authors: Jinghui Wang, Minjie Xing
Abstract:
The increasing number of international students for doctoral education has brought vitality and diversity to the educational environment in China, and at the same time constituted a new challenge to the English teaching in the higher education as the majority of international students come from developing countries where English is not their first language. To make their contribution to knowledge development and technical innovation, these international doctoral students need to present their research work in English, locally and globally. This study reports an exploratory study with an emphasis on the cognition and construction of academic discourse in the multicultural context. The present study aims to explore ways to better prepare them for international academic exchange in English. Voluntarily, all international doctoral students (n = 81) from 35 countries enrolled in the English Course: Speaking and Writing as a New Scientist, participated in the study. Two research questions were raised: 1) What did these doctoral students say about their cognition and construction of English academic discourses? 2) How did they manage to develop their productive skills in a multicultural context? To answer the research questions, data were collected from self-reports, in-depth interviews, and video-recorded class observations. The major findings of the study suggest that the participants to varying degrees benefitted from the cognition and construction of English academic discourse in the multicultural context. Specifically, 1) The cognition and construction of meta-discourse allowed them to construct their own academic discourses in English; 2) In the light of Swales’ CARS Model, they became sensitive to the “moves” involved in the published papers closely related to their study, and learned to use them in their English academic discourses; 3) Multimodality-driven presentation (multimedia modes) enabled these doctoral student to have their voice heard for technical innovation purposes; 4) Speaking as a new scientist, every doctoral student felt happy and able to serve as an intercultural mediator in the multicultural context, bridging the gap between their home culture and the global culture; and most importantly, 5) most of the participants reported developing an English discourse community among international doctoral students, becoming resourceful and productive in the multicultural context. It is concluded that the cognition and construction of academic discourse in the multicultural context proves to be conducive to the productivity and intercultural citizenship education of international doctoral students.Keywords: academic discourse, international doctoral students, meta-discourse, multicultural context
Procedia PDF Downloads 382
1219 Diagnostic Value of Different Noninvasive Criteria of Latent Myocarditis in Comparison with Myocardial Biopsy
Authors: Olga Blagova, Yuliya Osipova, Evgeniya Kogan, Alexander Nedostup
Abstract:
Purpose: To quantify the value of various clinical, laboratory and instrumental signs in the diagnosis of myocarditis in comparison with morphological studies of the myocardium. Methods: In 100 patients (65 men, 44.7±12.5 years) with 'idiopathic' arrhythmias (n = 20) or dilated cardiomyopathy (DCM, n = 80), 71 endomyocardial biopsies (EMB), 13 intraoperative biopsies, 5 studies of explanted hearts, and 11 autopsies were performed, with virus investigation (real-time PCR) of the blood and myocardium. Anti-heart antibodies (AHA) were also measured, and cardiac CT (n = 45), MRI (n = 25), and coronary angiography (n = 47) were performed. The comparison group included 50 patients (25 men, 53.7±11.7 years) with non-inflammatory heart diseases who underwent open heart surgery. Results: Active/borderline myocarditis was diagnosed in 76.0% of the study group and in 21.6% of the comparison group (p < 0.001). The myocardial viral genome was observed more frequently in the comparison group than in the study group (65.0% vs. 40.2%; p < 0.01). The diagnostic value of the noninvasive markers of myocarditis was evaluated. The panel of anti-heart antibodies had the greatest importance for identifying myocarditis: sensitivity was 81.5%, and the positive and negative predictive values were 75.0% and 60.5%, respectively. The diagnostic value of the non-invasive markers of myocarditis was defined, and a diagnostic algorithm providing an individual assessment of the likelihood of myocarditis was developed. Conclusion: AHA have the greatest significance in the diagnosis of latent myocarditis in patients with 'idiopathic' arrhythmias and DCM. The use of a complex of noninvasive criteria allows estimating the probability of myocarditis and determining the indications for EMB.
Keywords: myocarditis, "idiopathic" arrhythmias, dilated cardiomyopathy, endomyocardial biopsy, viral genome, anti-heart antibodies
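For reference, the reported diagnostic metrics follow directly from a 2×2 table of test results against the biopsy "gold standard". The sketch below shows the standard formulas on a hypothetical confusion table; the counts are made up for illustration and are not the study's data.

```python
# Hypothetical 2x2 table: rows = AHA panel result, columns = biopsy result.
tp, fp = 62, 21   # test positive: true myocarditis / no myocarditis
fn, tn = 14, 23   # test negative: true myocarditis / no myocarditis

sensitivity = tp / (tp + fn)   # proportion of true cases detected
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
      f"PPV={ppv:.1%}, NPV={npv:.1%}")
```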
Procedia PDF Downloads 173
1218 Sustaining the Organizational Performance as Well as Maintaining Employee Satisfaction by Governing Work Life Balance
Authors: I. Gupta, C. Kathpal
Abstract:
Introduction: Time is really the only capital that any human being has, and the only thing he cannot afford to lose. Work-life balance is a contested term that researchers began to study in the 1960s. Work-life balance refers to how people allocate time between their jobs and other pursuits, such as family, hobbies, and community involvement, and includes the mental health and fitness of employees, so that the organization's future goals of retaining employees and earning profits can be achieved. Every organization is primarily concerned with striking a balance between employees' work and their personal lives so that they can contribute their maximum. Aims and Objectives: The aim of the present study is to examine the impact of work-life balance as well as employee satisfaction on organizational performance by evaluating the interrelated factors, in order to maintain the healthy growth of the concerns. Materials and Methods: To realize the aim of the study, an unstructured questionnaire as well as face-to-face interviews were conducted with 100 persons, the majority of whom were males holding top- and middle-level positions in various organizations. The prime source of data was primary collection; however, the study also used the theoretical contributions made in this field by various researchers. Results: The majority of the respondents were males (80%) from the age group of 25-45. The collected data were analyzed through hypothesis-testing statistical techniques such as correlation analysis, simple regression analysis and ANOVA, which rejected the null hypothesis that there is no relation between the work-life interface and organizational performance. The major finding of this study is that work-life balance is directly related to organizational performance. The results show that organizations which work on employee satisfaction earn more. In addition, there is a reduction in turnover rates and absenteeism, as well as an enhancement of the productivity and revenue of corporations. Conclusion: The present study shows that an imbalance between work and life invites many disorders, either mental or physical, which leads to a decline in performance. As a result, not only employees but also organizations suffer, which is clearly shown in the face-to-face interviews conducted with employees. The study does not target a particular class of audience; rather, it brings out benefits for the masses.
Keywords: work-life balance, performance, culture, organization, satisfaction
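The hypothesis tests named above (correlation, simple regression, one-way ANOVA) can be run with standard SciPy routines. The sketch below uses fabricated survey-style scores purely to show the calls, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
work_life_balance = rng.normal(3.5, 0.8, 100)                    # 1-5 scale (illustrative)
performance = 1.2 * work_life_balance + rng.normal(0, 0.6, 100)  # outcome measure
age_group = rng.choice(["25-34", "35-44", "45-54"], size=100)

r, p_corr = stats.pearsonr(work_life_balance, performance)            # correlation analysis
slope, intercept, r_val, p_reg, se = stats.linregress(work_life_balance, performance)
groups = [performance[age_group == g] for g in np.unique(age_group)]
f_stat, p_anova = stats.f_oneway(*groups)                             # one-way ANOVA

print(f"Pearson r={r:.2f} (p={p_corr:.3g}); slope={slope:.2f} (p={p_reg:.3g}); "
      f"ANOVA F={f_stat:.2f} (p={p_anova:.3g})")
```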
Procedia PDF Downloads 118
1217 Using Arellano-Bover/Blundell-Bond Estimator in Dynamic Panel Data Analysis – Case of Finnish Housing Price Dynamics
Authors: Janne Engblom, Elias Oikarinen
Abstract:
A panel dataset is one that follows a given sample of individuals over time, and thus provides multiple observations on each individual in the sample. Panel data models include a variety of fixed and random effects models which form a wide range of linear models. A special case of panel data models are dynamic in nature. A complication regarding a dynamic panel data model that includes the lagged dependent variable is endogeneity bias of estimates. Several approaches have been developed to account for this problem. In this paper, the panel models were estimated using the Arellano-Bover/Blundell-Bond Generalized method of moments (GMM) estimator which is an extension of the Arellano-Bond model where past values and different transformations of past values of the potentially problematic independent variable are used as instruments together with other instrumental variables. The Arellano–Bover/Blundell–Bond estimator augments Arellano–Bond by making an additional assumption that first differences of instrument variables are uncorrelated with the fixed effects. This allows the introduction of more instruments and can dramatically improve efficiency. It builds a system of two equations—the original equation and the transformed one—and is also known as system GMM. In this study, Finnish housing price dynamics were examined empirically by using the Arellano–Bover/Blundell–Bond estimation technique together with ordinary OLS. The aim of the analysis was to provide a comparison between conventional fixed-effects panel data models and dynamic panel data models. The Arellano–Bover/Blundell–Bond estimator is suitable for this analysis for a number of reasons: It is a general estimator designed for situations with 1) a linear functional relationship; 2) one left-hand-side variable that is dynamic, depending on its own past realizations; 3) independent variables that are not strictly exogenous, meaning they are correlated with past and possibly current realizations of the error; 4) fixed individual effects; and 5) heteroskedasticity and autocorrelation within individuals but not across them. Based on data of 14 Finnish cities over 1988-2012 differences of short-run housing price dynamics estimates were considerable when different models and instrumenting were used. Especially, the use of different instrumental variables caused variation of model estimates together with their statistical significance. This was particularly clear when comparing estimates of OLS with different dynamic panel data models. Estimates provided by dynamic panel data models were more in line with theory of housing price dynamics.Keywords: dynamic model, fixed effects, panel data, price dynamics
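The endogeneity problem described above is easiest to see in a simulated dynamic panel. The sketch below is not the Arellano-Bover/Blundell-Bond system GMM estimator itself; it is a much simpler Anderson-Hsiao-style instrumental-variable estimate (instrumenting the lagged difference with the second lag in levels) shown only to illustrate why instrumenting removes the bias of naive OLS. All data are simulated and all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, rho = 200, 10, 0.6          # panels, periods, true persistence

# Simulate y_it = rho*y_{i,t-1} + fixed effect + noise.
alpha = rng.normal(0, 1, N)
y = np.zeros((N, T))
y[:, 0] = alpha + rng.normal(0, 1, N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(0, 1, N)

# Naive pooled OLS of y_it on y_{i,t-1}: biased upward by the fixed effect.
y_t, y_lag = y[:, 1:].ravel(), y[:, :-1].ravel()
rho_ols = (y_lag @ y_t) / (y_lag @ y_lag)

# Anderson-Hsiao IV: difference out the fixed effect, then instrument
# dy_{i,t-1} with the level y_{i,t-2} (valid when errors are serially uncorrelated).
dy_t = (y[:, 2:] - y[:, 1:-1]).ravel()
dy_lag = (y[:, 1:-1] - y[:, :-2]).ravel()
z = y[:, :-2].ravel()
rho_iv = (z @ dy_t) / (z @ dy_lag)

print(f"true rho={rho}, pooled OLS={rho_ols:.3f}, Anderson-Hsiao IV={rho_iv:.3f}")
```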
Procedia PDF Downloads 1508
1216 The Epistemology of Human Rights Cherished in Islamic Law and Its Compatibility with International Law
Authors: Malik Imtiaz Ahmad
Abstract:
Human beings are superior organisms granted the gift of consciousness of life by Almighty God and endowed with an intrinsic legal value in their humanity that shall be guarded and protected with dignity, regardless of cultural, religious, racial, or physical background; every person deserves to be treated equally simply for being human. Islam upholds the essential integrity of humanity and confirms the impact of freedom and accountability on individuality and the open societal sphere, including the moral, economic, and political aspects. Human rights allow people to live with dignity, equality, justice, freedom, and peace. The Kantian approach to morality expresses that ethical actions follow universal moral laws. Hence, human rights are based upon normative approaches setting international standards to promote, guard, and protect the fundamental rights of the people. Islam is a divine religion commanding human rights based upon the principles of social justice; it regulates all facets of the moral and spiritual ethics of Muslims and likewise requires that the lives, safety, security, and property of non-Muslims be respected. Canon law manifests faith and equality within Christianity, regulating communal dignity to build and promote the sanctity of holy life (can. 208 to 223). This concept of community is developed after the insight of the Islamic 'canon law', which is the code of revelation itself and inseparable from the natural part of the salvation of mankind. The etymology and history of human rights are a polemical debate within the purview of Islamic and Western culture. On the other hand, international law is meticulous about the fundamental part of canon law that focuses on communal political, social and economic relationships. The evolving process of human rights is considered to be an exclusive universal thought regarding an open society that forms a legal base for the constitution of international instruments for the protection of human rights, viz. the UDHR. On the other side, Muslim scholars emphasize that human rights revolve around Islamic law. Both traditions urgently need an explanation of contemporary openness in order to bring about a harmonious universal law acceptable and applicable to the international communities, concerning the anthropology of the political, economic, and social aspects of the human being.
Keywords: human rights-based approach (HRBA), human rights in Islam, evolution of universal human rights, conflict in western, Islamic human rights
Procedia PDF Downloads 89
1215 Research on the United Navigation Mechanism of Land, Sea and Air Targets under Multi-Sources Information Fusion
Authors: Rui Liu, Klaus Greve
Abstract:
The navigation information is a kind of dynamic geographic information, and the navigation information system is a kind of special geographic information system. At present, there are many researches on the application of centralized management and cross-integration application of basic geographic information. However, the idea of information integration and sharing is not deeply applied into the research of navigation information service. And the imperfection of navigation target coordination and navigation information sharing mechanism under certain navigation tasks has greatly affected the reliability and scientificity of navigation service such as path planning. Considering this, the project intends to study the multi-source information fusion and multi-objective united navigation information interaction mechanism: first of all, investigate the actual needs of navigation users in different areas, and establish the preliminary navigation information classification and importance level model; and then analyze the characteristics of the remote sensing and GIS vector data, and design the fusion algorithm from the aspect of improving the positioning accuracy and extracting the navigation environment data. At last, the project intends to analyze the feature of navigation information of the land, sea and air navigation targets, and design the united navigation data standard and navigation information sharing model under certain navigation tasks, and establish a test navigation system for united navigation simulation experiment. The aim of this study is to explore the theory of united navigation service and optimize the navigation information service model, which will lay the theory and technology foundation for the united navigation of land, sea and air targets.Keywords: information fusion, united navigation, dynamic path planning, navigation information visualization
Procedia PDF Downloads 288
1214 Change of Bone Density with Treatments of Intravenous Zoledronic Acid in Patients with Osteoporotic Distal Radial Fractures
Authors: Hong Je Kang, Young Chae Choi, Jin Sung Park, Isac Kim
Abstract:
Purpose: Osteoporotic fractures are an important problem among postmenopausal women. When osteoporotic distal radial fractures occur, osteoporosis must be treated to prevent hip and spine fractures. Intravenous injection of zoledronic acid is expected to improve the prevention of osteoporotic fractures. Many articles have reported the effect of intravenous zoledronic acid on the BMD of hip and spine fracture patients or of non-fracture patients with low BMD. However, its effect in patients with distal radial fractures has rarely been reported. Therefore, the authors decided to study the effect of zoledronic acid on BMD score, bone union, and bone turnover markers in patients who underwent volar plating due to osteoporotic distal radial fractures. Materials: From April 2018 to May 2022, postmenopausal women aged 55 years or older who had osteoporotic distal radial fractures and who underwent surgical treatment using volar plate fixation were included. Zoledronic acid (5 mg) was injected intravenously between 3 and 5 days after surgery. BMD scores one year after the operation were compared with the initial scores. Bone turnover markers were measured before surgery, after 3 months, and after 1 year. Radiological follow-up was performed every 2 weeks until bone union and at 1 year postoperatively. Clinical outcome indicators were measured one year after surgery, and the occurrence of side effects was observed. Results: A total of 23 patients were included. The lumbar BMD T-score improved from -2.89±0.2 before surgery to -2.27±0.3 one year after surgery (p=0.012), and the femoral neck BMD T-score from -2.45±0.3 before surgery to -2.36±0.3 after one year (p=0.041); both changes were statistically significant. Among the bone resorption markers, serum CTX-1 was 337.43±10.4 pg/mL before surgery, 160.86±8.7 pg/mL (p=0.022) after three months, and 250.12±12.7 pg/mL (p=0.031) after one year. Urinary NTX-1 was 39.24±2.2 ng/mL before surgery, 24.46±1.2 ng/mL (p=0.014) after three months and 30.35±1.6 ng/mL (p=0.042) after one year. Among the bone formation markers, serum osteocalcin was 13.04±1.1 ng/mL before surgery, 8.84±0.7 ng/mL (p=0.037) after 3 months and 11.1±0.4 ng/mL (p=0.026) after one year. Serum bone-specific ALP was 11.24±0.9 IU/L before surgery, 8.25±0.9 IU/L (p=0.036) after three months, and 10.2±0.9 IU/L (p=0.027) after one year. All changes were statistically significant. All cases showed bone union within an average of 6.91±0.3 weeks without any signs of failure. Complications such as headache, nausea, muscle pain, and fever were found in 5 out of 23 cases (21.7%). Conclusion: When zoledronic acid was used, BMD improved in both the spine and the femoral neck. This may reduce the likelihood and subsequent morbidity of additional osteoporotic fractures. This study is meaningful in that there was no difference in the duration of bone union or in radiological characteristics in patients with distal radial fractures administered intravenous bisphosphonate early after the fracture, and an improvement in BMD and bone turnover indicators was measured.
Keywords: zoledronic acid, BMD, osteoporosis, distal radius
Procedia PDF Downloads 115
1213 Nanoindentation Studies of Metallic Cu-CuZr Composites Synthesized by Accumulative Roll Bonding
Authors: Ehsan Alishahi, Chuang Deng
Abstract:
Materials with microstructural heterogeneity have recently attracted dramatic attention in the materials science community. Although most metals are crystalline, the new class of amorphous alloys, sometimes known as metallic glasses (MGs), exhibits remarkable properties, particularly high mechanical strength and elastic limit. The unique properties of MGs have led to a wide range of studies on developing and characterizing new alloys or composites which meet commercial demands. In spite of the useful properties of MGs, the commercialization of metallic glasses has been limited due to a major drawback: the lack of ductility and a sudden brittle failure mode. Hence, crystalline-amorphous (C-A) composites were introduced around the 2000s as a toughening strategy to improve the ductility of MGs. Despite the considerable progress reported in previous studies, there are still challenges in both the synthesis and the characterization of metallic C-A composites. In this study, accumulative roll bonding (ARB) was used to synthesize bulk crystalline-amorphous composites starting from crystalline Cu-Zr multilayers. Due to the severe plastic deformation state, new CuZr phases were formed during the rolling process, which was reflected in the SEM-EDS analysis. EDS elemental analysis showed variation in the composition of the CuZr phases, from 38-62 and 50-50 to 68-32 at.% Cu-Zr, respectively. Moreover, TEM with electron diffraction analysis indicated the presence of both crystalline and amorphous structures in the newly formed CuZr phases. In addition to the microstructural analysis, the mechanical properties of the synthesized composites were studied using the nanoindentation technique. A Hysitron nanoindentation instrument was used to conduct nanoindentation tests with a cube-corner tip. A maximum load of 5000 µN was applied in load-control mode to measure the elastic modulus and hardness of the different phases. The trend of the results indicated three distinct regimes of hardness and elastic modulus, corresponding to pure Cu, pure Zr, and the newly formed CuZr phases. More specifically, the pure Cu regions showed the lowest values for both nanoindentation hardness and elastic modulus, while the CuZr phases took the highest values. Consequently, pure Zr was in the intermediate range, being harder than pure Cu but softer than the CuZr phases. Overall, it was found that CuZr phases with higher hardness nucleated during the ARB process as a result of the mechanical alloying phenomenon.
Keywords: ARB, crystalline-amorphous composites, mechanical alloying, nanoindentation hardness
Procedia PDF Downloads 550
1212 Fundamental Study on the Growth Mechanism of MoS₂ Quantum Dots: Impact of Reaction Time and Precursor Concentration
Authors: Geetika Sahu, Chanchal Chakraborty, Subhadeep Roy, Souri Banerjee
Abstract:
We aim to investigate the growth mechanism of molybdenum disulfide quantum dots (MoS₂ QDs) under hydrothermal reaction conditions by exploring two important parameters that control the growth process – (i) reaction time and (ii) precursor concentration. This fundamental study will focus on tuning the particle size, which eventually alters the optical and electronic properties of the QDs due to the quantum confinement effect, as well as monitoring the spatial growth of quantum dot sheets prepared through the aggregation of individual quantum dots. Among the mentioned two parameters, the former dictates the duration of aggregation while the latter controls the aggregation rate. The hydrothermally synthesized QDs have been analyzed through morphological and optical tools, and we used fractal analysis to understand the growth process. With increasing reaction time T (at a constant precursor concentration ≈ 73mM), the growth process shows a crossover from a bottom-up to a top-down process at T= 14 hours. A non-monotonic behavior of average QD size ( d ) is observed on the other side of it ( d=7nm at T= 7 hours; d=16nm at T=14 hours; d=2nm at T=30 hours), which is supported by morphological studies like TEM and STEM, as well as optical studies like UV visible and PL spectra. Higher (lower) QD sizes correspond to lower (higher) bandgap and significant redshift (blueshift) in the PL spectra. The fractal dimension ( f) of the QD clusters shows a sudden drop from 1.92 at this particular time T=14 to 1.82 and saturates at this value afterward. This signifies the onset of the fragmentation of the clusters due to the unavailability of active precursors. To validate the role of the precursors that have been claimed, we have carried out photophysical and statistical studies at a constant reaction time (14 hours ) and have varied the precursor concentration instead. We observe a similar non-monotonic behavior in QD size (maximum size at ≈ 73mM) supported by the morphological and optical studies as the precursor concentration varies from 22mM ( d=10nm) to 125mM (d=7nm ). This is in agreement with fractal analysis, where the maximum df of 1.97 is observed at 73 mM which decreases at both higher ( df = 1.67 at 125mM ) and lower concentration ( df = 1.75 at 22mM). This impact of precursor concentration is consistent for all reaction times. The fractal dimension of the QD sheets formed during the seeding and growth process is replicated for different reaction times as well as precursor concentration values through numerical simulations of random walk process on a 2D square lattice.Keywords: aggregation and fragmentation, fractal analysis, optical studies, random walk
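To illustrate the fractal-dimension estimate used for the QD clusters, here is a small, self-contained sketch that generates a random walk on a 2D square lattice and estimates the fractal dimension of the visited set by box counting. Lattice size, walk length, and box sizes are arbitrary choices for demonstration and do not reproduce the simulations in the study.

```python
import numpy as np

rng = np.random.default_rng(42)
L, steps = 256, 200_000

# Random walk on an L x L square lattice (periodic boundaries for simplicity).
grid = np.zeros((L, L), dtype=bool)
x = y = L // 2
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
for dx, dy in moves[rng.integers(0, 4, steps)]:
    x, y = (x + dx) % L, (y + dy) % L
    grid[x, y] = True

# Box counting: count occupied boxes at several box sizes, fit log N vs log(1/eps).
sizes = [2, 4, 8, 16, 32]
counts = []
for s in sizes:
    reshaped = grid.reshape(L // s, s, L // s, s)
    counts.append(reshaped.any(axis=(1, 3)).sum())
slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
print(f"estimated box-counting dimension ~ {slope:.2f}")  # approaches 2 for a long walk
```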
Procedia PDF Downloads 0
1211 A Sensor Placement Methodology for Chemical Plants
Authors: Omid Ataei Nia, Karim Salahshoor
Abstract:
In this paper, a new precise and reliable sensor network methodology is introduced for unit processes and operations using the Constriction Coefficient Particle Swarm Optimization (CPSO) method. CPSO is introduced as a new search engine for optimal sensor network design purposes. Furthermore, a Square Root Unscented Kalman Filter (SRUKF) algorithm is employed as a new data reconciliation technique to enhance the stability and accuracy of the filter. The proposed design procedure incorporates precision, cost, observability, reliability together with importance-of-variables (IVs) as a novel measure in Instrumentation Criteria (IC). To the best of our knowledge, no comprehensive approach has yet been proposed in the literature to take into account the importance of variables in the sensor network design procedure. In this paper, specific weight is assigned to each sensor, measuring a process variable in the sensor network to indicate the importance of that variable over the others to cater to the ultimate sensor network application requirements. A set of distinct scenarios has been conducted to evaluate the performance of the proposed methodology in a simulated Continuous Stirred Tank Reactor (CSTR) as a highly nonlinear process plant benchmark. The obtained results reveal the efficacy of the proposed method, leading to significant improvement in accuracy with respect to other alternative sensor network design approaches and securing the definite allocation of sensors to the most important process variables in sensor network design as a novel achievement.Keywords: constriction coefficient PSO, importance of variable, MRMSE, reliability, sensor network design, square root unscented Kalman filter
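The constriction-coefficient PSO used as the search engine can be sketched compactly. The code below implements Clerc's standard constriction factor (φ = 4.1, χ ≈ 0.729) on a toy continuous objective; the objective, bounds, and swarm settings are placeholders, not the sensor-network cost function of the paper.

```python
import numpy as np

def cpso(objective, dim, bounds, n_particles=30, iters=200, seed=0):
    """Particle swarm optimization with Clerc's constriction coefficient."""
    rng = np.random.default_rng(seed)
    c1 = c2 = 2.05
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))   # ~0.7298

    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy objective standing in for a sensor-placement cost (sphere function).
best_x, best_f = cpso(lambda p: np.sum(p**2), dim=5, bounds=(-10.0, 10.0))
print(best_x, best_f)
```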
Procedia PDF Downloads 160
1210 Predicting the Next Offensive Play Types will be Implemented to Maximize the Defense’s Chances of Success in the National Football League
Authors: Chris Schoborg, Morgan C. Wang
Abstract:
In the realm of the National Football League (NFL), substantial dedication of time and effort is invested by both players and coaches in meticulously analyzing the game footage of their opponents. The primary aim is to anticipate the actions of the opposing team. Defensive players and coaches are especially focused on deciphering their adversaries' intentions to effectively counter their strategies. Acquiring insights into the specific play type and its intended direction on the field would confer a significant competitive advantage. This study establishes pre-snap information as the cornerstone for predicting both the play type (e.g., deep pass, short pass, or run) and its spatial trajectory (right, left, or center). The dataset for this research spans the regular NFL season data for all 32 teams from 2013 to 2022. This dataset is acquired using the nflreadr package, which conveniently extracts play-by-play data from NFL games and imports it into the R environment as structured datasets. In this study, we employ a recently developed machine learning algorithm, XGBoost. The final predictive model achieves an impressive lift of 2.61. This signifies that the presented model is 2.61 times more effective than random guessing—a significant improvement. Such a model has the potential to markedly enhance defensive coaches' ability to formulate game plans and adequately prepare their players, thus mitigating the opposing offense's yardage and point gains.Keywords: lift, NFL, sports analytics, XGBoost
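A compact sketch of the modeling step is shown below: a multiclass XGBoost classifier trained on pre-snap features, with lift computed against random guessing. The feature names, synthetic data, and hyperparameters are illustrative assumptions; they do not reproduce the nflreadr-based dataset or the paper's tuned model.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical pre-snap features: down, distance, yard line, shotgun flag, score diff.
X = np.column_stack([
    rng.integers(1, 5, n),       # down
    rng.integers(1, 15, n),      # yards to go
    rng.integers(1, 100, n),     # yard line
    rng.integers(0, 2, n),       # shotgun formation
    rng.integers(-21, 22, n),    # score differential
])
y = rng.integers(0, 3, n)        # 0 = run, 1 = short pass, 2 = deep pass (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                          objective="multi:softprob", eval_metric="mlogloss")
model.fit(X_tr, y_tr)

acc = (model.predict(X_te) == y_te).mean()
lift = acc / (1.0 / 3.0)         # improvement over random guessing among 3 classes
print(f"accuracy={acc:.3f}, lift over random={lift:.2f}")
# On real play-by-play data with predictive signal, the lift rises well above 1.
```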
Procedia PDF Downloads 56
1209 Non-Linear Assessment of Chromatographic Lipophilicity and Model Ranking of Newly Synthesized Steroid Derivatives
Authors: Milica Karadzic, Lidija Jevric, Sanja Podunavac-Kuzmanovic, Strahinja Kovacevic, Anamarija Mandic, Katarina Penov Gasi, Marija Sakac, Aleksandar Okljesa, Andrea Nikolic
Abstract:
The present paper deals with chromatographic lipophilicity prediction of newly synthesized steroid derivatives. The prediction was achieved using in silico generated molecular descriptors and quantitative structure-retention relationship (QSRR) methodology with the artificial neural networks (ANN) approach. Chromatographic lipophilicity of the investigated compounds was expressed as retention factor value logk. For QSRR modeling, a feedforward back-propagation ANN with gradient descent learning algorithm was applied. Using the novel sum of ranking differences (SRD) method generated ANN models were ranked. The aim was to distinguish the most consistent QSRR model that can be found, and similarity or dissimilarity between the models that could be noticed. In this study, SRD was performed with average values of retention factor value logk as reference values. An excellent correlation between experimentally observed retention factor value logk and values predicted by the ANN was obtained with a correlation coefficient higher than 0.9890. Statistical results show that the established ANN models can be applied for required purpose. This article is based upon work from COST Action (TD1305), supported by COST (European Cooperation in Science and Technology).Keywords: artificial neural networks, liquid chromatography, molecular descriptors, steroids, sum of ranking differences
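The sum of ranking differences (SRD) ranking can be written in a few lines. The sketch below uses the column-wise SRD definition with the row-wise average as the reference ranking, matching the description above; the small matrix of predicted logk values is invented for illustration.

```python
import numpy as np
from scipy.stats import rankdata

def srd(matrix):
    """Rows = compounds, columns = candidate ANN models.
    The reference ranking comes from the row-wise average over all models."""
    ref_rank = rankdata(matrix.mean(axis=1))
    scores = []
    for j in range(matrix.shape[1]):
        model_rank = rankdata(matrix[:, j])
        scores.append(np.abs(model_rank - ref_rank).sum())
    return np.array(scores)  # smaller SRD = closer to the reference, i.e. a better model

# Invented predicted retention factors (logk) for 6 compounds by 4 ANN models.
preds = np.array([
    [0.52, 0.50, 0.55, 0.61],
    [0.78, 0.80, 0.74, 0.69],
    [1.10, 1.08, 1.15, 1.02],
    [0.35, 0.37, 0.30, 0.41],
    [0.95, 0.92, 0.99, 0.88],
    [1.40, 1.43, 1.35, 1.50],
])
print("SRD per model:", srd(preds))
```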
Procedia PDF Downloads 319
1208 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as Normal, Exponential, and inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method. The simulations confirmed that the proposed method provides precise estimates for regression parameters. It's important to note that this approach can be applied to datasets if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher information algorithm as an iteration method to obtain the maximum likelihood estimation (MLE) of regression parameters.Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions
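To illustrate the Fisher scoring iteration referred to above (here in Python rather than Mathematica), the sketch below applies it to a simple Poisson regression, where the scoring update has a particularly clean form. It is a stand-in to show the iteration itself, not the paper's composite-distribution likelihood, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
X = np.column_stack([np.ones(n), rng.normal(0, 1, n)])   # intercept + one predictor
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))

beta = np.zeros(2)
for it in range(25):                    # Fisher scoring: beta += (X'WX)^-1 X'(y - mu)
    mu = np.exp(X @ beta)
    W = mu                              # Fisher information weights for the log link
    score = X.T @ (y - mu)
    info = X.T @ (X * W[:, None])
    step = np.linalg.solve(info, score)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break

print("true:", beta_true, "estimated:", np.round(beta, 3))
```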
Procedia PDF Downloads 34
1207 False Assumptions Made in Cybersecurity Curriculum: K-12
Authors: Nathaniel Evans, Jessica Boersma, Kenneth Kass
Abstract:
With technology and STEM fields growing every day, there is a significant projected shortfall in qualified cybersecurity workers. As such, it is essential to develop a cybersecurity curriculum that builds skills and cultivates interest in cybersecurity early on. With new jobs being created every day and an already significant gap in the job market, it is vital that educators are proactive in introducing a cybersecurity curriculum in which students are able to learn new skills and engage with age-appropriate content. Within this growing world of cybersecurity, students should engage with an age-appropriate technology and cybersecurity curriculum, starting in elementary school (K-5), extending through high school, and ultimately into college. Such practice will provide students with the confidence, skills, and, ultimately, the opportunity to work in the burgeoning information security field. This paper examines educational methods, pedagogical practices, current cybersecurity curricula, and other educational resources, and analyzes them for false assumptions and developmental appropriateness. It also examines and identifies common mistakes in current cyber curricula and lessons and discusses strategies for improvement. Throughout the lessons that were reviewed, several common mistakes appeared repeatedly. These mistakes concerned age appropriateness, the technology resources available, and the consistency of students' skill levels. Many of these lessons were written for the wrong grade levels. The ones written for the elementary level all had activities that assumed that every student in the class could read at grade level and also had background knowledge of the cyber activity at hand, which is not always the case. Another major mistake was that these lessons assumed that all schools had technology resources available to them. Some schools are 1:1, while others are allotted only three computers per classroom, which students have to share. When developing a cyber curriculum, it must be kept in mind that not all schools are the same, and not every classroom is the same. There are many students who are not reading at their grade level or have not had exposure to the digital world. We need to start slowly and ease children into the cyber world. Once they have a better understanding, it will be easier to move forward with these lessons and get the students engaged. With a better understanding of the common mistakes that are being made, a more robust curriculum and lessons can be created that not only spark students' interest in this much-needed career field but also encourage learning while keeping our students safe from cyber-attacks.
Keywords: assumptions, cybersecurity, k-12, teacher
Procedia PDF Downloads 166
1206 Introduction of Para-Sasaki-Like Riemannian Manifolds and Construction of New Einstein Metrics
Authors: Mancho Manev
Abstract:
The concept of almost paracontact Riemannian manifolds (abbr., apcR manifolds) was introduced by I. Sato in 1976 as an analogue of almost contact Riemannian manifolds. The notion of an apcR manifold of type (p,q) was defined by S. Sasaki in 1980, where p and q are respectively the numbers of the multiplicity of the structure eigenvalues 1 and -1. It also has a simple eigenvalue of 0. In our work, we consider (2n+1)-dimensional apcR manifolds of type (n,n), i.e., the paracontact distribution of the studied manifold can be considered as a 2n-dimensional almost paracomplex Riemannian distribution with almost paracomplex structure and structure group O(n) × O(n). The aim of the present study is to introduce a new class of apcR manifolds. Such a manifold is obtained using the construction of a certain Riemannian cone over it, and the resulting manifold is a paraholomorphic paracomplex Riemannian manifold (abbr., phpcR manifold). We call it a para-Sasaki-like Riemannian manifold (abbr., pSlR manifold) and give some explicit examples. We study the structure of pSlR spaces and find that the paracontact form η is closed and each pSlR manifold locally can be considered as a certain product of the real line with a phpcR manifold, which is locally a Riemannian product of two equidimensional Riemannian spaces. We also obtain that the curvature of the pSlR manifolds is completely determined by the curvature of the underlying local phpcR manifold. Moreover, the ξ-directed Ricci curvature is equal to -2n, while in the Sasaki case, it is 2n. Accordingly, the pSlR manifolds can be interpreted as the counterpart of the Sasaki manifolds; the skew-symmetric part of ∇η vanishes, while in the Sasaki case, the symmetric part vanishes. We define a hyperbolic extension of a (complete) phpcR manifold that resembles a certain warped product, and we indicate that it is a (complete) pSlR manifold. In addition, we consider the hyperbolic extension of a phpcR manifold and prove that if the initial manifold is a complete Einstein manifold with negative scalar curvature, then the resulting manifold is a complete Einstein pSlR manifold with negative scalar curvature. In this way, we produce new examples of a complete Einstein Riemannian manifold with negative scalar curvature. Finally, we define and study para contact conformal/homothetic deformations by deriving a subclass that preserves the para-Sasaki-like condition. We then find that if we apply a paracontact homothetic deformation of a pSlR space, we obtain that the Ricci tensor is invariant.Keywords: almost paracontact Riemannian manifolds, Einstein manifolds, holomorphic product manifold, warped product manifold
Procedia PDF Downloads 206
1205 Effects of Foreign-language Learning on Bilinguals' Production in Both Their Languages
Authors: Natalia Kartushina
Abstract:
Foreign (second) language (L2) learning is highly promoted in modern society. Students are encouraged to study abroad (SA) to achieve the most effective learning outcomes. However, L2 learning has side effects for native language (L1) production, as L1 sounds might show a drift from the L1 norms towards those of the L2, and this, even after a short period of L2 learning. L1 assimilatory drift has been attributed to a strong perceptual association between similar L1 and L2 sounds in the mind of L2 leaners; thus, a change in the production of an L2 target leads to the change in the production of the related L1 sound. However, nowadays, it is quite common that speakers acquire two languages from birth, as, for example, it is the case for many bilingual communities (e.g., Basque and Spanish in the Basque Country). Yet, it remains to be established how FL learning affects native production in individuals who have two native languages, i.e., in simultaneous or very early bilinguals. Does FL learning (here a third language, L3) affect bilinguals’ both languages or only one? What factors determine which of the bilinguals’ languages is more susceptible to change? The current study examines the effects of L3 (English) learning on the production of vowels in the two native languages of simultaneous Spanish-Basque bilingual adolescents enrolled into the Erasmus SA English program. Ten bilingual speakers read five Spanish and Basque consonant-vowel-consonant-vowel words two months before their SA and the next day after their arrival back to Spain. Each word contained the target vowel in the stressed syllable and was repeated five times. Acoustic analyses measuring vowel openness (F1) and backness (F2) were performed. Two possible outcomes were considered. First, we predicted that L3 learning would affect the production of only one language and this would be the language that would be used the most in contact with English during the SA period. This prediction stems from the results of recent studies showing that early bilinguals have separate phonological systems for each of their languages; and that late FL learner (as it is the case of our participants), who tend to use their L1 in language-mixing contexts, have more L2-accented L1 speech. The second possibility stated that L3 learning would affect both of the bilinguals’ languages in line with the studies showing that bilinguals’ L1 and L2 phonologies interact and constantly co-influence each other. The results revealed that speakers who used both languages equally often (balanced users) showed an F1 drift in both languages toward the F1 of the English vowel space. Unbalanced speakers, however, showed a drift only in the less used language. The results are discussed in light of recent studies suggesting that the amount of language use is a strong predictor of the authenticity in speech production with less language use leading to more foreign-accented speech and, eventually, to language attrition.Keywords: language-contact, multilingualism, phonetic drift, bilinguals' production
Procedia PDF Downloads 109
1204 Energy Efficiency Improvement of Excavator with Independent Metering Valve by Continuous Mode Changing Considering Engine Fuel Consumption
Authors: Sang-Wook Lee, So-Yeon Jeon, Min-Gi Cho, Dae-Young Shin, Sung-Ho Hwang
Abstract:
The hydraulic system of an excavator gets its working energy from a hydraulic pump connected to the output shaft of the engine. Recently, a main control valve (MCV) composed of several independent metering valves (IMVs) has been introduced for better energy efficiency of the hydraulic system, so that the fuel efficiency of the excavator can be improved. An excavator with IMVs has five operating modes depending on the quantity of regeneration flow. In this system, the hydraulic pump is controlled to supply the demanded flow needed to operate in each mode. Because the regenerated flow supplies energy to the actuators, the hydraulic pump consumes less energy to produce the same motion than one that does not regenerate flow. Horsepower control is applied to the hydraulic pump of the excavator to keep the engine running under a heavy load, and this control reduces the flow of the hydraulic pump. When the excavator is in a complex operation such as loading or unloading soil, the hydraulic pump discharges a small quantity of working fluid at high pressure. In this operation, the engine of the excavator does not run on the optimal operating line (OOL). The engine needs to be operated on the OOL to improve fuel efficiency, and by controlling the hydraulic pump, the engine can be driven on the OOL. By continuous mode changing of the IMVs, the hydraulic pump is controlled so that the engine runs on the OOL. The simulation results of this study show that the fuel efficiency of an excavator with IMVs can be improved by considering the engine OOL and a continuous mode-changing algorithm.
Keywords: continuous mode changing, engine fuel consumption, excavator, fuel efficiency, IMV
Procedia PDF Downloads 385
1203 Design of an Improved Distributed Framework for Intrusion Detection System Based on Artificial Immune System and Neural Network
Authors: Yulin Rao, Zhixuan Li, Burra Venkata Durga Kumar
Abstract:
Intrusion detection refers to monitoring the actions of internal and external intruders on the system and detecting the behaviours that violate security policies in real-time. In intrusion detection, there has been much discussion about the application of neural network technology and artificial immune system (AIS). However, many solutions use static methods (signature-based and stateful protocol analysis) or centralized intrusion detection systems (CIDS), which are unsuitable for real-time intrusion detection systems that need to process large amounts of data and detect unknown intrusions. This article proposes a framework for a distributed intrusion detection system (DIDS) with multi-agents based on the concept of AIS and neural network technology to detect anomalies and intrusions. In this framework, multiple agents are assigned to each host and work together, improving the system's detection efficiency and robustness. The trainer agent in the central server of the framework uses the artificial neural network (ANN) rather than the negative selection algorithm of AIS to generate mature detectors. Mature detectors can distinguish between self-files and non-self-files after learning. Our analyzer agents use genetic algorithms to generate memory cell detectors. This kind of detector will effectively reduce false positive and false negative errors and act quickly on known intrusions.Keywords: artificial immune system, distributed artificial intelligence, multi-agent, intrusion detection system, neural network
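As a rough sketch of the trainer agent's role, the snippet below trains a small feed-forward network (scikit-learn's MLPClassifier) to separate "self" from "non-self" traffic feature vectors, which is the function the mature detectors serve in the proposed framework. The features, synthetic data, and network size are placeholders; the multi-agent distribution logic is not shown.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Hypothetical flow features: duration, bytes sent, bytes received, failed logins.
normal = rng.normal([1.0, 500, 800, 0.1], [0.5, 200, 300, 0.3], size=(1000, 4))
attack = rng.normal([0.2, 5000, 50, 3.0], [0.2, 1500, 40, 1.0], size=(200, 4))
X = np.vstack([normal, attack])
y = np.array([0] * len(normal) + [1] * len(attack))   # 0 = self, 1 = non-self

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0,
                                           stratify=y)
detector = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
detector.fit(X_tr, y_tr)                  # the "mature detector" learned centrally
print("held-out accuracy:", detector.score(X_te, y_te))
```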
Procedia PDF Downloads 1091202 Optimization of the Mechanical Performance of Fused Filament Fabrication Parts
Authors: Iván Rivet, Narges Dialami, Miguel Cervera, Michele Chiumenti
Abstract:
Process parameters in Additive Manufacturing (AM) play a critical role in the mechanical performance of the final component. In order to find the input configuration that guarantees the optimal performance of the printed part, the process-performance relationship must be established. Fused Filament Fabrication (FFF) is selected as the demonstrative AM technology due to its great popularity in industrial manufacturing. A material model that accounts for the different printing patterns present in an FFF part is used. A voxelized mesh is built from the manufacturing toolpaths described in the G-code file. Adaptive Mesh Refinement (AMR) based on an octree strategy is used to reduce the complexity of the mesh while maintaining its accuracy. High-fidelity and cost-efficient Finite Element (FE) simulations are performed, and the influence of key process parameters on the mechanical performance of the component is analyzed. A robust optimization process based on appropriate failure criteria is developed to find the printing direction that leads to the optimal mechanical performance of the component. The Tsai-Wu failure criterion is adopted because of the orthotropic and heterogeneous constitutive nature of FFF components and the difference between their strengths in tension and compression. The optimization loop implements a modified version of an Anomaly Detection (AD) algorithm and uses the computed metrics to obtain the optimal printing direction. The developed methodology is verified with a case study on an industrial demonstrator. Keywords: additive manufacturing, optimization, printing direction, mechanical performance, voxelization
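Because the Tsai-Wu criterion is the failure check inside the optimization loop, a minimal plane-stress version of it is sketched below; the strength values and stresses are illustrative placeholders rather than measured FFF data or the authors' implementation.

```python
# Plane-stress Tsai-Wu failure index for an orthotropic material, as a small
# sketch of the failure check used inside an optimization loop. Strength values
# are illustrative placeholders, not measured FFF data.

import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Return the Tsai-Wu index; values >= 1 indicate predicted failure.
    s1, s2: normal stresses along/across the raster; t12: in-plane shear.
    Xt/Xc, Yt/Yc: tensile/compressive strengths (positive); S: shear strength."""
    F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
    F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
    F12 = -0.5 * math.sqrt(F11 * F22)   # common estimate of the interaction term
    return F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2 + F66*t12**2 + 2*F12*s1*s2

# Example: stresses in MPa at one voxel/integration point of the FE model.
idx = tsai_wu_index(s1=30.0, s2=10.0, t12=8.0,
                    Xt=45.0, Xc=40.0, Yt=30.0, Yc=35.0, S=25.0)
print(f"Tsai-Wu index: {idx:.2f}")   # an optimizer would minimize the worst index
```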
Procedia PDF Downloads 631201 Alteration Quartz-K-feldspar-Apatite-Molybdenite at B Anomaly Prospection with Artificial Neural Network to Determine Molybdenite Economic Deposits in Malala District, Western Sulawesi
Authors: Ahmad Lutfi, Nikolas Dhega
Abstract:
The Malala deposit in northwest Sulawesi is the only known porphyry molybdenum occurrence, and the only source of rhenium, in Indonesia. The neural network method produces results that correspond very closely to those of the knowledge-based fuzzy logic method and the weights-of-evidence method. The method required solid geology, regional fault, airborne magnetic, gamma-ray survey, and GIS data. The interpretation of the network output fits the intuitive notion that a prospective area has characteristics that closely resemble areas known to contain mineral deposits. This contrasts with the weights-of-evidence and fuzzy logic methods, where, for a given grid location, each input-parameter value automatically results in an increase in the prospectivity estimate. In the Malala District, molybdenum anomalies in stream sediments were obtained over an area in excess of 15 km², with the Takudan Fault as the most prominent structure, striking 40° to 60° over a distance of about 30 km. Anomaly B, where mineralization is in most places weak, is developed over an area of 4 km², with a 'shell' up to 50 m thick at the intrusive contact; minor mineralization occurs in the Tinombo Formation. A series of NW-trending, steeply dipping fracture zones, named the East Zone, has an estimated resource of 100 Mt at 0.14% MoS2 and a minimum target of 150 Mt at 0.25%. The Malala porphyries occur as stocks and dykes of predominantly granitic composition; the deposit belongs to the fluorine-poor class of molybdenum deposits and to the plutonic sub-type. Unidirectional solidification textures consist of subparallel, crenulated layers of quartz separated by layers of intrusive material. The molybdenum mineralization is deuteric in nature, and carbonate alteration is dominant. Stage I alteration comprises barren quartz-K-feldspar, while Stage II comprises quartz-K-feldspar-apatite-molybdenite veins, together with disseminated molybdenite associated with primary biotite in the host intrusive. Keywords: molybdenite, Malala, porphyries, anomaly B
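As a hedged sketch of the grid-cell prospectivity mapping described above, the example below trains a small neural network on per-cell evidence values (geology class, fault distance, magnetics, gamma-ray) and scores new cells; the layers, training cells, and labels are synthetic placeholders, not the Malala dataset.

```python
# Sketch of ANN-based prospectivity scoring per grid cell. Inputs and labels
# are synthetic placeholders for evidence layers such as solid geology class,
# distance to regional faults, airborne magnetics, and gamma-ray response.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Rows: training grid cells; columns: four evidence-layer values (normalized).
X_train = rng.random((200, 4))
y_train = (X_train[:, 1] < 0.3).astype(int)   # toy rule: near-fault cells "prospective"

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=1)
model.fit(X_train, y_train)

# Score every cell of a new grid; higher values resemble known deposit settings.
grid_cells = rng.random((5, 4))
prospectivity = model.predict_proba(grid_cells)[:, 1]
print(np.round(prospectivity, 2))
```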
Procedia PDF Downloads 1531200 Comparative Study of Various Treatment Positioning Techniques: A Site-Specific Study - Ca. Breast
Authors: Kamal Kaushik, Dandpani Epili, Ajay G. V., Ashutosh, S. Pradhaan
Abstract:
Introduction: Radiation therapy has come a long way over the decades, from 2-dimensional radiotherapy to intensity-modulated radiation therapy (IMRT) and VMAT. Advanced radiation therapy requires better patient position reproducibility to deliver precise, high-quality treatment, which raises the need for better image guidance technologies for patient positioning. This study presents a two-tattoo simulation with roll correction technique that is comparable to other advanced patient positioning techniques. Objective: This site-specific study aims to compare various treatment positioning techniques used for patients with carcinoma of the breast undergoing radiotherapy. Five positioning methods are compared, namely i) Vacloc with three tattoos, ii) breast board with three tattoos, iii) thermoplastic cast with three fiducials, iv) breast board with thermoplastic mask and three tattoos, and v) breast board with two tattoos (the roll correction method). Methods and material: An all-in-one (AIO) solution was used for immobilization in all positioning techniques. The two-tattoo simulation involves positioning the patient with the help of a thoracic-abdomen wedge, armrest, and knee rest. After proper patient positioning, two tattoos are marked on the treatment side of the patient. Fiducials are then placed at the clinical border markers: (1) the sternal notch (lower border of the clavicle head), (2) 2 cm below the contralateral breast, (3) the midline between markers 1 and 2, and (4) the mid-axillary point on the same axis as marker 3 (markers 3 and 4 should lie on the same axis). During plan implementation, a roll depth correction is applied according to the anterior and lateral positioning tattoos, followed by the shifts required for the isocentre position. The shifts are then verified by SSD on the patient surface, followed by radiographic verification using Cone Beam Computed Tomography (CBCT). Results: When all five positioning techniques were compared, the observed shifts in the vertical, longitudinal, and lateral directions clearly suggest that the average longitudinal shift of the two-tattoo roll correction technique is smaller than that of every other positioning technique, while its vertical and lateral shifts are comparable to those of the other modern positioning techniques. Conclusion: The two-tattoo simulation with roll correction technique provides a better patient setup and can be implemented easily in most radiotherapy centres across developing nations where 3D verification techniques are not available with the delivery units, as the observed shifts are minimal and comparable to those obtained with Vacloc and other modern amenities. Keywords: Ca. breast, breast board, roll correction technique, CBCT
Procedia PDF Downloads 1351199 Rainstorm Characteristics over the Northeastern Region of Thailand: Weather Radar Analysis
Authors: P. Intaracharoen, P. Chantraket, C. Detyothin, S. Kirtsaeng
Abstract:
Radar reflectivity data from the Phimai weather radar station of DRRAA (Department of Royal Rainmaking and Agricultural Aviation) were used to analyse rainstorm characteristics via the Thunderstorm Identification Tracking Analysis and Nowcasting (TITAN) algorithm. The Phimai weather radar station is situated in Nakhon Ratchasima province, northeastern Thailand. Data from 277 days of rainstorm events occurring from May 2016 to May 2017 were used to investigate the temporal distribution characteristics of individual convective rainclouds. The important storm properties, structures, and behaviours were analysed using nine variables: storm number, storm duration, storm volume, storm area, storm top, storm base, storm speed, storm orientation, and maximum storm reflectivity. The rainstorm characteristics were also examined by separating the data into two periods, wet and dry season, following the announcements of the Thai Meteorological Department (TMD), corresponding to the influence of the southwest monsoon (SWM) and the northeast monsoon (NEM). The results show that rainstorms during the SWM period have the greatest potential over the northeastern region of Thailand. SWM rainstorms exceed NEM rainstorms in storm number (404 vs. 140 per day), storm area (34.09 vs. 26.79 km²), and storm volume (95.43 vs. 66.97 km³). The average individual storm duration showed only a minor difference between the SWM and NEM periods (47.6 vs. 48.38 min), and almost all storm durations in both periods were less than 3 hours. Storm velocity did not exceed 15 km/hr (13.34 km/hr for SWM and 10.67 km/hr for NEM). Rainstorm reflectivity showed little difference between the wet and dry seasons (43.08 dBZ for SWM and 43.72 dBZ for NEM), suggesting that rainstorms in both seasons have a similar raindrop size. Keywords: rainstorm characteristics, weather radar, TITAN, Northeastern Thailand
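A minimal sketch of the seasonal summary step, assuming TITAN-style storm records with one row per tracked storm: the properties are grouped by monsoon period and averaged. The records shown are synthetic placeholders, not the Phimai data.

```python
# Group TITAN-style storm records by monsoon period (SWM vs. NEM) and compute
# mean storm properties. The rows below are synthetic placeholder records.

import pandas as pd

storms = pd.DataFrame({
    "season":       ["SWM", "SWM", "SWM", "NEM", "NEM"],
    "duration_min": [42.0, 55.0, 46.0, 50.0, 47.0],
    "area_km2":     [36.0, 30.5, 35.8, 27.1, 26.4],
    "volume_km3":   [98.0, 90.2, 97.5, 68.0, 65.9],
    "speed_kmh":    [14.0, 12.5, 13.6, 10.2, 11.1],
    "max_dbz":      [43.5, 42.8, 43.0, 43.9, 43.5],
})

summary = storms.groupby("season")[["duration_min", "area_km2",
                                    "volume_km3", "speed_kmh", "max_dbz"]].mean()
print(summary.round(2))
```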
Procedia PDF Downloads 1931198 Relay-Augmented Bottleneck Throughput Maximization for Correlated Data Routing: A Game Theoretic Perspective
Authors: Isra Elfatih Salih Edrees, Mehmet Serdar Ufuk Türeli
Abstract:
In this paper, an energy-aware method is presented that integrates energy-efficient relay-augmented techniques for correlated data routing with the goal of maximizing bottleneck throughput in wireless sensor networks. The system tackles the dual challenge of throughput optimization and sensor network energy consumption. A routing metric has been developed that exploits data correlation patterns to enable throughput maximization while minimizing energy consumption. The paper introduces a game-theoretic framework to address the NP-complete optimization problem inherent in throughput-maximizing, correlation-aware routing under energy limitations. By creating an algorithm that blends energy-aware route selection strategies with best response dynamics, this framework provides a local solution. The proposed technique considerably raises the bottleneck throughput for each source in the network while reducing energy consumption by choosing routes that strike a balance between throughput enhancement and energy efficiency. Extensive numerical analyses verify the efficiency of the method. The outcomes demonstrate the significant decrease in energy consumption attained by the energy-efficient relay-augmented bottleneck throughput maximization technique, in addition to confirming the anticipated throughput benefits. Keywords: correlated data aggregation, energy efficiency, game theory, relay-augmented routing, throughput maximization, wireless sensor networks
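As a hedged sketch of the best-response loop implied by the framework, each source below repeatedly switches to the candidate route that maximizes its own utility, taken here as bottleneck throughput minus a simple energy penalty, given the other sources' current choices, until no source wants to deviate; the topology, utility form, and weights are placeholders rather than the paper's model.

```python
# Best-response dynamics for a toy routing game: sources pick routes until a
# stable assignment (no profitable unilateral deviation) is reached.

def bottleneck_throughput(route, load):
    """Bottleneck capacity of a route given how many sources share each link."""
    return min(cap / max(1, load[link]) for link, cap in route)

def energy_cost(route):
    return 0.1 * len(route)   # toy model: cost grows with hop count

def best_response_dynamics(sources, max_rounds=50):
    choice = {s: routes[0] for s, routes in sources.items()}   # arbitrary start
    for _ in range(max_rounds):
        changed = False
        for s, routes in sources.items():
            load = {}
            for other, r in choice.items():
                if other != s:
                    for link, _ in r:
                        load[link] = load.get(link, 0) + 1
            def utility(r):
                shared = {link: load.get(link, 0) + 1 for link, _ in r}
                return bottleneck_throughput(r, shared) - energy_cost(r)
            best = max(routes, key=utility)
            if best is not choice[s]:
                choice[s], changed = best, True
        if not changed:
            break   # equilibrium of the routing game reached
    return choice

# Two sources, each with two candidate routes over links ("name", capacity).
sources = {
    "s1": [[("a", 10.0), ("c", 8.0)], [("b", 6.0), ("c", 8.0)]],
    "s2": [[("a", 10.0), ("d", 9.0)], [("b", 6.0), ("d", 9.0)]],
}
print(best_response_dynamics(sources))
```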
Procedia PDF Downloads 821197 Impact of Alternative Fuel Feeding on Fuel Cell Performance and Durability
Authors: S. Rodosik, J. P. Poirot-Crouvezier, Y. Bultel
Abstract:
With the expansion of the hydrogen economy, Proton Exchange Membrane Fuel Cell (PEMFC) systems are often presented as promising energy converters suitable for transport applications. However, reaching the durability of 5,000 h recommended by the U.S. Department of Energy and decreasing system cost are still major hurdles to their development. In order to increase the system efficiency and simplify the system without affecting the fuel cell lifetime, an architecture called alternative fuel feeding has been developed. It consists of a fuel cell stack divided into two parts that are fed alternately, implemented on a 5-kW system for real-scale testing. The operation strategy can be considered close to Dead End Anode (DEA), with specific modifications to avoid water and nitrogen accumulation in the cells. The two half-stacks are connected in series so that each half-stack can be fed alternately. Accumulated water and nitrogen can be shifted from one half-stack to the other according to the alternating feeding frequency. Thanks to the homogenization of water vapor along the stack, water management was improved. The operating conditions obtained at system scale are close to recirculation, without the need for a pump or an ejector. In a first part, a performance comparison with the DEA strategy was performed. At high temperature and low pressure (80°C, 1.2 bar), the performance of alternative fuel feeding was higher and the system efficiency increased. In a second part, to highlight the benefits of the architecture for fuel cell lifetime, two durability tests lasting up to 1000 h were conducted. A test on the 5-kW system was compared to a reference test performed on a test bench with a shorter stack, conducted with well-controlled operating parameters and a flow-through hydrogen strategy. The durability test is based on the Fuel Cell Dynamic Load Cycle (FC-DLC) protocol, adapted to the system limitations: without OCV steps and with a maximum current density of 0.4 A/cm². In situ local measurements with a segmented S++® plate, performed throughout the tests, showed a more homogeneous current density distribution with alternative fuel feeding than with the flow-through strategy. The tests performed in this work clarified the advantages and drawbacks of this architecture. Alternative fuel feeding appears to be a promising solution to ensure the humidification function at the anode side with a simplified fuel cell system. Keywords: automotive conditions, durability, fuel cell system, proton exchange membrane fuel cell, stack architecture
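Purely as an illustration of the alternating-feed scheduling concept (not the authors' control code), the sketch below toggles the anode inlet between the two half-stacks at a fixed period; the valve functions, names, and timing are hypothetical.

```python
# Toy schedule for alternating anode feeding between two half-stacks so that
# accumulated water/nitrogen is pushed toward the half-stack not being fed.
# Valve functions, names, and the switching period are hypothetical.

import itertools
import time

def open_inlet(half_stack: str) -> None:
    print(f"inlet valve {half_stack}: OPEN")

def close_inlet(half_stack: str) -> None:
    print(f"inlet valve {half_stack}: CLOSED")

def alternate_feed(period_s: float, cycles: int) -> None:
    """Toggle the anode inlet between half-stack A and half-stack B."""
    for half_stack in itertools.islice(itertools.cycle(["A", "B"]), cycles):
        open_inlet(half_stack)                          # feed fresh H2 to this half
        close_inlet("B" if half_stack == "A" else "A")  # the other half is not fed
        time.sleep(period_s)                            # hold until the next switch

# Example: switch every 0.5 s for four switching events (placeholder frequency).
alternate_feed(period_s=0.5, cycles=4)
```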
Procedia PDF Downloads 142