Search results for: arbitrary triangular-z node
170 Indigenous Understandings of Climate Vulnerability in Chile: A Qualitative Approach
Authors: Rosario Carmona
Abstract:
This article discusses the importance of indigenous peoples' participation in climate change mitigation and adaptation. Specifically, it analyses different understandings of climate vulnerability among the diverse actors involved in climate change policies in Chile: indigenous people, state officials, and academics. The data were collected through participant observation and interviews conducted between October 2017 and January 2019 in Chile. Following Karen O'Brien, there are two types of vulnerability: outcome vulnerability and contextual vulnerability. How vulnerability to climate change is understood determines the approach taken, which actors are involved, and which knowledge is considered in addressing it. Because climate change is a very complex phenomenon, it is necessary to transform institutions and their responses. To do so, it is fundamental to consider these two perspectives and different types of knowledge, particularly those of the most vulnerable, such as indigenous people. For centuries, and thanks to a long coexistence with the environment, indigenous societies have elaborated coping strategies, and some of them are already adapting to climate change. Indigenous people in Chile are no exception. Yet indigenous people tend to be excluded from decision-making processes, and indigenous knowledge is frequently seen as subjective and arbitrary in relation to science. Nevertheless, in recent years indigenous knowledge has gained particular relevance in the academic world, and indigenous actors are gaining prominence in international negotiations. Some mechanisms promote their participation (e.g., the Cancun safeguards, World Bank operational policies, REDD+), though these are not free of difficulties, and since 2016 the parties have been working on a Local Communities and Indigenous Peoples Platform. This paper also explores the impact of this process in Chile. Although there is progress in the participation of indigenous people, this participation responds to the operational policies of the funding agencies rather than to a real commitment of the state to this sector. The State of Chile omits any review of the structure that promotes inequality and the exclusion of indigenous people. In this way, climate change policies could become a new mechanism of coloniality that validates a single type of knowledge and leads to new territorial control strategies, which increases vulnerability.
Keywords: indigenous knowledge, climate change, vulnerability, Chile
169 Arabic Light Word Analyser: Roles with Deep Learning Approach
Authors: Mohammed Abu Shquier
Abstract:
This paper introduces a word segmentation method using the novel BP-LSTM-CRF architecture for processing semantic output training. The objective of web morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also addresses the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), all of which justify updating this system. Most Arabic word analysis systems are based on the phonotactic morpho-syntactic analysis of a word transmitted using lexical rules, which are mainly used in MENA language technology tools, without taking into account contextual or semantic morphological implications. It is therefore necessary to have an automatic analysis tool that takes into account the word sense and not only the morpho-syntactic category. Moreover, such systems are also based on statistical/stochastic models. These stochastic models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc. As an extension, we focus on language modeling using Recurrent Neural Networks (RNNs); given that morphological analysis coverage has been very low in Dialectal Arabic, it is important to investigate deeply how dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that dialectal variability can help improve analysis.
Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN
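Since the abstract names the BP-LSTM-CRF architecture without giving details, a minimal sketch of the general BiLSTM-plus-CRF-decoding idea may help. Everything below (the binary tag set, dimensions, toy vocabulary, and PyTorch as the framework) is an assumption, not the authors' implementation; a full version would also train with the CRF forward-algorithm loss rather than emissions alone.

```python
# Minimal BiLSTM segmenter sketch: characters in, B/I boundary tags out.
import torch
import torch.nn as nn

class BiLSTMSegmenter(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, n_tags=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2, bidirectional=True,
                            batch_first=True)
        self.emit = nn.Linear(hidden_dim, n_tags)               # per-char tag scores
        self.trans = nn.Parameter(torch.zeros(n_tags, n_tags))  # CRF transitions

    def forward(self, chars):                  # chars: (1, seq_len) int64
        h, _ = self.lstm(self.embed(chars))
        return self.emit(h).squeeze(0)         # (seq_len, n_tags)

    def viterbi(self, emissions):
        # Best tag path under emission + transition scores (CRF decoding).
        score, back = emissions[0], []
        for e in emissions[1:]:
            total = score.unsqueeze(1) + self.trans + e.unsqueeze(0)
            score, idx = total.max(dim=0)
            back.append(idx)
        path = [int(score.argmax())]
        for idx in reversed(back):
            path.append(int(idx[path[-1]]))
        return path[::-1]                       # 0 = begin word, 1 = inside

model = BiLSTMSegmenter(vocab_size=100)
x = torch.randint(0, 100, (1, 12))             # a dummy 12-character "sentence"
print(model.viterbi(model(x)))
```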
168 Development of Ready Reckoner Charts for Easy, Convenient, and Widespread Use of Horrock’s Apparatus by Field Level Health Functionaries in India
Authors: Gumashta Raghvendra, Gumashta Jyotsna
Abstract:
Aim and Objective of Study: The use of Horrock’s apparatus by a health care worker requires on-site mathematical calculations to estimate the 'volume of water' and the 'amount of bleaching powder' needed, based on the serial number of the first cup showing blue coloration after adding freshly prepared starch-iodide indicator solution. In view of the difficulty of performing two simultaneous calculations, Horrock’s apparatus is not routinely used by health care workers because it is impractical and inconvenient. Material and Methods: Arbitrary use of bleaching powder in wells results in hyper-chlorination or hypo-chlorination of the well, defying the purpose of adequate chlorination or leading to non-usage of well water due to hyper-chlorination. Keeping this in mind, two nomograms have been developed: one to assess the volume of the well using the depth and diameter of the well, and the other to give the quantity of bleaching powder to be added using the number of the cup of Horrock’s apparatus which shows the colour indication. Result and Conclusion: Of the two self-explanatory, interlinked charts thus developed, the first chart bypasses the formula 'πr²h' for water volume (a ready reckoner table with depth of water on the 'X' axis and diameter of well on the 'Y' axis), and the second chart bypasses the formula '2ab/455' (where 'a' is the serial number of the cup and 'b' is the water volume; a ready reckoner table with water volume on the 'X' axis and serial number of cup on the 'Y' axis). Using these two charts, a health care worker can immediately read off the exact requirement of bleaching powder. The ready reckoner charts thus developed will be easy and convenient to use for preventing water-borne diseases caused by hypo-chlorination, especially in rural India and other developing countries.
Keywords: apparatus, bleaching, chlorination, Horrock’s, nomogram
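The two charts follow directly from the formulas quoted above; a minimal sketch of how they could be generated is shown below. The units (metres in, litres and grams out) and the grid values are assumptions for illustration.

```python
# Chart generation sketch for the two ready reckoners described above.
import math

def well_volume_litres(depth_m, diameter_m):
    """Chart 1: water volume, V = pi * r^2 * h (1 m^3 = 1000 L)."""
    return math.pi * (diameter_m / 2) ** 2 * depth_m * 1000

def bleaching_powder_g(cup_number, volume_litres):
    """Chart 2: required bleaching powder, 2ab/455 (a = cup no., b = volume)."""
    return 2 * cup_number * volume_litres / 455

# Chart 1: depth of water on the X axis, diameter of well on the Y axis.
depths = [2, 4, 6, 8, 10]          # m
diameters = [1.0, 1.5, 2.0, 2.5]   # m
for d in diameters:
    print([round(well_volume_litres(h, d)) for h in depths])

# Chart 2: the worker reads the first blue cup (a) against the volume (b).
print(round(bleaching_powder_g(cup_number=3, volume_litres=6000)), "g")
```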
167 Nasopharyngeal Cancer in Children and Adolescents: Experience of Emir Abdelkader Cancer Center of Oran Algeria
Authors: Taleb L., Benarbia M., Brahmi M., Belmiloud H., Boukerche A.
Abstract:
Introduction and purpose of the study: Cavum cancer in children and adolescents is rare and represents 8% of all nasopharyngeal cancers treated in our department. Our objective is to study its epidemiological, clinical, therapeutic, and evolutionary particularities. Material and methods: Retrospective study of 39 patients under 20 years old, treated for undifferentiated non-metastatic carcinoma of the nasopharynx at the Emir Abdelkader Cancer Center between 2014 and 2020. Results and statistical analysis: The median age was 14 years [7-19 years], with a sex ratio of 2.9. The median time to diagnosis was 5.6 months [1 to 14 months], and the circumstances of discovery were dominated by lymph node syndrome in 43.6% of cases (n=17), followed by rhinological syndrome in 30.8% of cases (n=13). The tumor stage was T1 for two patients (5.1%), T2 for 8 (20.5%), T3 for 9 (23.1%), T4 for 20 (51.3%), N0 for 2 (5.1%), N1 for 4 (10.3%), N2 for 28 (71.8%), and N3 for 5 (12.8%). All patients received induction chemotherapy followed by radiotherapy with concomitant cisplatin. The dose of irradiation delivered to the cavum and adenopathies was 66 Gy, with fractionation of 2 Gy per session in 69.2% of cases (n=27) and 1.8 Gy in 30.8% of cases (n=12). With a median follow-up of 51 months (15 to 97 months), the locoregional relapse-free, metastatic relapse-free, specific, and overall survival rates at five years were 91.1%, 73.5%, 66.1%, and 68.4%, respectively. Conclusion: Chemotherapy and radiotherapy treatment of cavum cancer in children and adolescents has allowed excellent locoregional control despite the advanced stage of the disease. However, the frequency of metastatic relapses could justify the possible use of systemic maintenance treatment.
Keywords: cancer, nasopharynx, radiotherapy, chemotherapy, survival
166 A Quantitative Model for Replacement of Medical Equipment Based on Technical and Environmental Factors
Authors: Ghadeer Mohammad Said El-Sheikh, Samer Mohamad Shalhoob
Abstract:
Medical equipment operation state is a valid reflection of a health care organization's performance, as such equipment contributes substantially to the quality of healthcare services on several levels, where quality improvement has become an intrinsic part of the discourse and activities of health care services. In healthcare organizations, clinical and biomedical engineering departments play an essential role in maintaining the safety and efficiency of such equipment. One of the most challenging topics when it comes to such sophisticated equipment is its lifespan, since many factors impact this characteristic over the equipment's life cycle. Many attempts have been made to address this issue, but most existing approaches are essentially arbitrary, and one criticism of approaches that try to estimate the lifetime of medical equipment is that they leave open the question of which environmental factors play into this critical characteristic. Our study aims to address this shortcoming: in addition to the standard technical factors considered in the decision-making process by a clinical engineer in case of medical equipment failure, the dimension of environmental factors is added. The investigations and studies carried out to support the decision-making process of clinical engineers and to assess the lifespan of healthcare equipment in Lebanese society have depended largely on identifying the technical criteria that impact the lifespan of medical equipment, while the environmental factors involved have not received proper attention. The objective of our study is therefore to introduce a new, well-designed plan for evaluating medical equipment along two dimensions. According to this approach, equipment to be replaced or repaired is classified by a systematic method taking into account two essential sets of criteria: the standard identified technical criteria and the added environmental criteria.
Keywords: technical, environmental, healthcare, characteristic of medical equipment
165 Optimizing Detection Methods for THz Bio-imaging Applications
Authors: C. Bolakis, I. S. Karanasiou, D. Grbovic, G. Karunasiri, N. Uzunoglu
Abstract:
A new approach for efficient detection of THz radiation in biomedical imaging applications is proposed. A double-layered absorber, consisting of a 32 nm thick aluminum (Al) metallic layer located on a glass medium (SiO2) of 1 mm thickness, was fabricated and used to design a fine-tuned absorber through a theoretical and finite element modeling process. The results indicate that the proposed low-cost, double-layered absorber can be tuned through the metal layer sheet resistance and the thickness of various glass media, taking advantage of the diversity of absorption of metal films in the desired THz domain (6 to 10 THz). It was found that the composite absorber could absorb up to 86% (a percentage exceeding the 50% previously shown to be the highest achievable with a single thin metal layer) and reflect less than 1% of the incident THz power. This approach will enable monitoring of the transmission coefficient (the THz transmission ''fingerprint'') of the biosample with high accuracy, while also making the proposed double-layered absorber a good candidate for a microbolometer pixel's active element. Building on these promising results, a more sophisticated and effective double-layered absorber is under development. The glass medium has been substituted by diluted poly-Si, with twofold results: an absorption factor of 96% was reached, and high TCR properties were acquired. In addition, these results and properties were generalized over the active frequency spectrum. Specifically, through the development of a theoretical equation taking as input any arbitrary frequency in the IR spectrum (0.3 to 405.4 THz) and giving as output the appropriate thickness of the poly-Si medium, the double-layered absorber retains the ability to absorb 96% and to reflect less than 1% of the incident power. As a result, through this post-optimization process and spread-spectrum frequency adjustment, the microbolometer detector efficiency could be further improved.
Keywords: bio-imaging, fine-tuned absorber, fingerprint, microbolometer
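As an aside, the sheet-resistance tuning mentioned above can be illustrated with the standard thin-conducting-film model. The sketch below is a deliberately simplified picture (a resistive sheet on a semi-infinite dielectric, with n = 1.95 assumed for SiO2 at THz frequencies) and ignores the interference in the finite glass layer that the paper exploits to exceed this simple model's limit.

```python
# Resistive-sheet-on-substrate toy model, not the authors' FEM model.
import numpy as np

Z0 = 376.73     # impedance of free space, ohms
n = 1.95        # assumed refractive index of SiO2 in the THz range

def absorption(Rs):
    x = Z0 / Rs                          # normalized sheet admittance
    r = (1 - n - x) / (1 + n + x)        # amplitude reflection
    t = 2 / (1 + n + x)                  # amplitude transmission
    return 1 - r**2 - n * t**2           # A = 1 - R - T

for Rs in [50.0, 100.0, Z0 / (1 + n), 200.0, 400.0]:   # ohms per square
    print(f"Rs = {Rs:6.1f} ohm/sq -> A = {absorption(Rs):.3f}")
# The maximum of this bare-sheet model is 1/(1+n), about 0.34; the 86-96%
# figures in the abstract rely on the layered structure, not a bare sheet.
```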
164 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore
Authors: Ronal Muresano, Andrea Pagano
Abstract:
Nowadays, mathematical/statistical applications are developed with increasing complexity and accuracy. However, this precision and complexity mean that applications need more computational power to be executed quickly. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, as they allow the inclusion of more parallelism within a node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, data dependencies within the model, etc. These issues become more important when we wish to improve an application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool of the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are done in an automatic and transparent manner, with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of the new optimized version, in which a considerable improvement in execution time has been achieved: the time was reduced by around 96% in the best case tested, between the original serial version and the automatic parallel version.
Keywords: algorithm optimization, bank failures, OpenMP, parallel techniques, statistical tool
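The SYMBOL optimization itself relies on OpenMP inside the tool; as a language-neutral illustration of the underlying idea (independent Monte Carlo trials split across cores), here is a hedged Python analogue. The loss model below is a placeholder, not the SYMBOL equations.

```python
# Parallel Monte Carlo sketch: one worker process per core.
import numpy as np
from multiprocessing import Pool

def simulate_chunk(args):
    seed, n_trials, n_banks = args
    rng = np.random.default_rng(seed)
    # Placeholder loss model: shocks -> per-bank tail losses -> system loss.
    shocks = rng.standard_normal((n_trials, n_banks))
    return np.maximum(shocks - 2.0, 0.0).sum(axis=1)

if __name__ == "__main__":
    n_workers, trials_per_worker, n_banks = 4, 250_000, 100
    jobs = [(seed, trials_per_worker, n_banks) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        losses = np.concatenate(pool.map(simulate_chunk, jobs))
    print("99.9% VaR of system losses:", np.quantile(losses, 0.999))
```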
163 Analyzing the Impact of Global Financial Crisis on Interconnectedness of Asian Stock Markets Using Network Science
Authors: Jitendra Aswani
Abstract:
In the first section of this study, the impact of the Global Financial Crisis (GFC) on the synchronization of fourteen Asian Stock Markets (ASMs), namely those of Hong Kong, India, Thailand, Singapore, Taiwan, Pakistan, Bangladesh, South Korea, Malaysia, Indonesia, Japan, China, the Philippines, and Sri Lanka, is analysed using network science and its metrics, such as node degree, clustering coefficient, and network density. In the second section, by introducing the US stock market into the existing network and developing a Minimum Spanning Tree (MST), the spread of the crisis from the US stock market to the Asian stock markets is explained. The data used for this study are the adjusted closing prices of these indices from 6th January 2000 to 15th September 2013, further divided into three sub-periods: pre-crisis, during the crisis, and post-crisis. Using network analysis, it is found that the Asian stock markets became more interdependent during the crisis than pre- and post-crisis, and that Hong Kong, India, South Korea, and Japan are systemically important stock markets in the Asian region. Failure of, or a shock to, any of these systemically important stock markets can therefore cause contagion to other stock markets of the region. This study is useful for global investors in portfolio management, especially during crisis periods, and also for policy makers in formulating financial regulation norms, by revealing the connections between the stock markets and how the system of these stock markets changes during a crisis and afterwards.
Keywords: global financial crisis, Asian stock markets, network science, Kruskal algorithm
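A minimal sketch of the MST step described above: convert return correlations to the usual distance d = sqrt(2(1 - rho)) and run Kruskal's algorithm with union-find. The returns below are random placeholder data, not the 2000-2013 index series.

```python
# Correlation-distance MST via Kruskal's algorithm.
import numpy as np

rng = np.random.default_rng(0)
markets = ["US", "HK", "IN", "JP", "KR", "SG"]        # illustrative subset
returns = rng.standard_normal((500, len(markets)))    # days x markets
rho = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(2 * (1 - rho))                         # Mantegna distance

edges = sorted((dist[i, j], i, j)
               for i in range(len(markets)) for j in range(i + 1, len(markets)))

parent = list(range(len(markets)))
def find(a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path compression
        a = parent[a]
    return a

mst = []
for d, i, j in edges:                   # take cheapest edge that adds no cycle
    ri, rj = find(i), find(j)
    if ri != rj:
        parent[ri] = rj
        mst.append((markets[i], markets[j], round(float(d), 3)))
print(mst)                              # n-1 edges linking all markets
```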
162 Unveiling the Indonesian Identity through Proverbial Expressions: The Relation of Meaning between Authority and Globalization
Authors: Prima Gusti Yanti, Fairul Zabadi
Abstract:
The purpose of the study is to find out the relation of moral messages to authority and globalization in proverbs. The proverb is one of the many forms of cultural identity of the Indonesian/Malay people and is filled with moral values. The values contained within those proverbs are beneficial not only to society but also to those who hold power in this era of globalization. The method used is qualitative research employing content analysis, carried out by describing and uncovering the forms and meanings of proverbs used within the Indonesian Minangkabau society. Sources for this study's data were extracted from a Minangkabau native speaker in the subdistrict of Tanah Abang, Jakarta. Said sources were retrieved through a series of interviews with the Minangkabau native speaker, whose speech is still adorned with idiomatic expressions. The research findings show that there are 30 proverbs or idiomatic expressions in the Minangkabau language that are often used by its indigenous people. These thirty items contain moral values that are closely interwoven with matters of power and globalization. Analytical results show that fourteen moral values contained within the proverbs reflect a firm connection between rule and power in globalization, such as: responsible, brave, togetherness and consensus, tolerance, politeness, thorough and meticulous, honest and keeping promise, ingenious and learning, care, self-correction, be fair, alert, arbitrary, self-awareness. Structurally, proverbs possess an unchangeably formal construction; symbolically, their meanings are clearly determined by ethnographic communicative factors along with situational and cultural contexts. The values contained within proverbs may be used as a guide in social management, be it between fellow men, between men and nature, or between men and their Creator. Therefore, the meanings and values contained within the morals of proverbs could also serve as counsel for those who rule and hold power, in order to stem the tides of globalization that have already spread into the sectoral, territorial, and educational continuums.
Keywords: continuum, globalization, identity, proverb, rule-power
161 Finite Element Analysis of Steel-Concrete Composite Structures Considering Bond-Slip Effect
Authors: WonHo Lee, Hyo-Gyoung Kwak
Abstract:
A numerical model considering the slip behavior of steel-concrete composite structures is introduced. The model is based on a linear bond stress-slip relation along the interface. A single node is considered at the interface of the steel and concrete members in the finite element analysis, which avoids the analytical problems of models that take double nodes at the interface and adopt spring elements to simulate the partial interaction. The slip behavior is simulated by modifying the material properties of the steel elements contacting concrete according to the derived formulation: a decreased elastic modulus simulates slip occurrence at the interface, and a decreased yield strength simulates the drop in load capacity of the structure. The model is verified by comparing numerical analyses applying this model with experimental studies. Acknowledgment: This research was supported by a grant (13SCIPA01) from the Smart Civil Infrastructure Research Program funded by the Ministry of Land, Infrastructure and Transport (MOLIT) of the Korean government and the Korea Agency for Infrastructure Technology Advancement (KAIA), and financially supported by the Korea Ministry of Land, Infrastructure and Transport (MOLIT) under the U-City Master and Doctor Course Grant Program.
Keywords: bond-slip, composite structure, partial interaction, steel-concrete structure
160 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor
Authors: Feng Tao, Han Ye, Shaoyi Liao
Abstract:
City constructions collapse after earthquakes, and people are buried under the ruins. Search and rescue should be conducted as soon as possible to save them. Therefore, given the complicated environment, irregular aftershocks, and the fact that rescue allows no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. Target localization technology based on RSSI, with its features of low cost and low complexity, has been widely applied to node localization in WSNs (Wireless Sensor Networks). Based on the theory of RSSI transmission and the environmental impact on RSSI, this article conducts experiments in five scenes, and multiple filtering algorithms are applied to the original RSSI values in order to establish, for each scene, the signal propagation model with minimum test error. The target location can then be calculated, through an improved centroid algorithm, from the distances estimated from the signal propagation model. Results show that localization technology based on RSSI is suitable for large-scale node localization. Among the filtering algorithms, the mixed filtering algorithm (the average of average, median, and Gaussian filtering) performs better than any single filtering algorithm, and, using the signal propagation model, the minimum error of the distance between known nodes and the target node across the five scenes is about 3.06 m.
Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI
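A minimal sketch of the pipeline described above: filter raw RSSI samples (mean, median, and a Gaussian-style filter, averaged together), invert a log-distance propagation model, then locate the target with a weighted centroid. The model constants A and n below are assumptions, not the values fitted in the five test scenes.

```python
# RSSI filtering -> distance estimation -> weighted centroid localization.
import numpy as np

A, n_exp = -45.0, 2.7                   # assumed RSSI at 1 m and path-loss exponent

def mixed_filter(samples):
    s = np.asarray(samples, float)
    mu, sigma = s.mean(), s.std() + 1e-9
    gauss = s[np.abs(s - mu) <= sigma].mean()   # keep values within 1 sigma
    return (s.mean() + np.median(s) + gauss) / 3.0

def rssi_to_distance(rssi):
    # Invert the log-distance model RSSI(d) = A - 10 * n * log10(d).
    return 10 ** ((A - rssi) / (10 * n_exp))

anchors = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], float)
raw = [[-71, -74, -69], [-80, -78, -83], [-66, -65, -69], [-84, -88, -85]]

d = np.array([rssi_to_distance(mixed_filter(r)) for r in raw])
w = 1.0 / d                              # closer anchors weigh more
estimate = (anchors * w[:, None]).sum(axis=0) / w.sum()
print("estimated target position:", estimate)
```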
159 Development and Validation of a Carbon Dioxide TDLAS Sensor for Studies on Fermented Dairy Products
Authors: Lorenzo Cocola, Massimo Fedel, Dragiša Savić, Bojana Danilović, Luca Poletto
Abstract:
An instrument for the detection and evaluation of gaseous carbon dioxide in the headspace of closed containers has been developed in the context of the Packsensor Italian-Serbian joint project. The device is based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) with a Wavelength Modulation Spectroscopy (WMS) technique, enabling a non-invasive measurement inside closed containers of fermented dairy products (yogurts and fermented cheese in cups and bottles). The purpose of this instrument is the continuous monitoring of carbon dioxide concentration during incubation and storage over the whole shelf life of the product, in the presence of different microorganisms. The instrument's optical front end has been designed to be integrated into a thermally stabilized incubator. An embedded computer provides processing of spectral artifacts and storage of an arbitrary set of calibration data, allowing a properly calibrated measurement on many samples (cups and bottles) of the different shapes and sizes commonly found in retail distribution. A calibration protocol has been developed so that the instrument can be calibrated in the field, including on containers which are notoriously difficult to seal properly. This calibration protocol is described and evaluated against reference measurements obtained through an industry-standard (sampling) carbon dioxide metering technique. Several sets of validation test measurements on different containers are reported, and two test recordings of carbon dioxide concentration evolution are shown as examples of instrument operation. The first demonstrates the ability to monitor rapid yeast growth in a contaminated sample through the increase of headspace carbon dioxide. The second shows the dissolution transient with a non-saturated liquid medium in the presence of a carbon dioxide rich headspace atmosphere.
Keywords: TDLAS, carbon dioxide, cups, headspace, measurement
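A toy simulation (not the instrument's firmware) of the WMS principle the sensor relies on may clarify the technique: the laser wavelength is modulated sinusoidally across an absorption line, and the second harmonic of the detected intensity is lock-in demodulated; its amplitude scales with gas concentration. The line shape and all numbers below are illustrative assumptions.

```python
# Simulated 2f wavelength-modulation spectroscopy demodulation.
import numpy as np

fs, f_mod, T = 200_000.0, 1_000.0, 0.05     # sample rate, mod freq, duration
t = np.arange(0, T, 1 / fs)
center, width, mod_depth = 0.0, 1.0, 1.2    # line center/width (arbitrary units)

for conc in [0.5, 1.0, 2.0]:                # relative CO2 concentration
    nu = center + mod_depth * np.sin(2 * np.pi * f_mod * t)       # laser detuning
    absorb = conc * 0.05 / (1 + (nu - center) ** 2 / width ** 2)  # Lorentzian line
    signal = 1.0 - absorb                   # detected intensity
    # Lock-in at 2f: multiply by the reference and low-pass (here: the mean).
    ref2f = np.cos(2 * np.pi * 2 * f_mod * t)
    s2f = 2 * np.mean(signal * ref2f)
    print(f"conc {conc:.1f} -> 2f amplitude {abs(s2f):.5f}")
```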
158 A Highly Efficient Broadcast Algorithm for Computer Networks
Authors: Ganesh Nandakumaran, Mehmet Karaata
Abstract:
A wave is a distributed execution, often made up of a broadcast phase followed by a feedback phase, requiring the participation of all the system processes before a particular event called the decision is taken. Wave algorithms with one initiator, such as the 1-wave algorithm, have been shown to be very efficient for broadcasting messages in tree networks. Extensions of this algorithm that broadcast a sequence of waves using a single initiator have been implemented in algorithms such as the m-wave algorithm. However, as the network size increases, having a single initiator adversely affects the message delivery times to nodes further away from the initiator. As a remedy, broadcast waves can be initiated by multiple initiator nodes distributed across the network to reduce the completion time of broadcasts. The waves initiated by one or more initiator processes form a collection of waves covering the entire network. Solutions to global-snapshot, distributed broadcast, and various synchronization problems can be obtained efficiently using waves with multiple concurrent initiators. In this paper, we propose the first stabilizing multi-wave sequence algorithm implementing waves started by multiple initiator processes, such that every process in the network receives at least one sequence of broadcasts. Being stabilizing, the proposed algorithm can withstand transient faults and does not require initialization. We view a fault as transient if it perturbs the configuration of the system but not its program.
Keywords: distributed computing, multi-node broadcast, propagation of information with feedback and cleaning (PFC), stabilization, wave algorithms
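A minimal sketch of a single broadcast wave with feedback, the building block the proposed multi-initiator algorithm composes, run on an assumed tree; stabilization and multiple concurrent initiators are beyond this toy.

```python
# One propagation-of-information-with-feedback wave on a toy tree.
tree = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}

def wave(node, parent=None):
    print(f"broadcast reaches {node}")           # propagation phase
    for child in tree[node]:
        wave(child, node)                        # forward the message downward
    print(f"feedback from {node} to {parent}")   # feedback phase: subtree done

# Initiator 0 starts the wave; the decision event happens only after the
# feedback from every child has returned to the initiator.
wave(0)
```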
157 Measurement Technologies for Advanced Characterization of Magnetic Materials Used in Electric Drives and Automotive Applications
Authors: Lukasz Mierczak, Patrick Denke, Piotr Klimczyk, Stefan Siebert
Abstract:
Due to the high complexity of the magnetization in electrical machines and the influence of manufacturing processes on the magnetic properties of their components, the assessment and prediction of hysteresis and eddy current losses have remained a challenge. In the design process of electric motors and generators, the power losses of stators and rotors are calculated based on the material supplier's data from standard magnetic measurements. This type of data includes neither the additional loss from non-sinusoidal multi-harmonic motor excitation nor the detrimental effects of the residual stress remaining in the motor laminations after manufacturing processes such as punching, housing shrink-fitting, and winding. Moreover, in production, considerable attention is given to measurements of the mechanical dimensions of stator and rotor cores, whereas verification of their magnetic properties is typically neglected, which can lead to inconsistent efficiency of the assembled motors. Therefore, to enable a comprehensive characterization of motor materials and components, Brockhaus Measurements developed a range of in-line and offline measurement technologies for testing their magnetic properties under actual motor operating conditions. Multiple sets of experimental data were obtained to evaluate the influence of various factors, such as elevated temperature, applied and residual stress, and arbitrary magnetization, on the magnetic properties of different grades of non-oriented steel. Measured power loss for the tested samples and stator cores varied significantly, by more than 100%, compared to standard measurement conditions. The quantitative effects of each of the applied measurements were analyzed. This research and the applied Brockhaus measurement methodologies emphasize the requirement for advanced characterization of magnetic materials used in electric drives and automotive applications.
Keywords: magnetic materials, measurement technologies, permanent magnets, stator and rotor cores
156 Efficient Mercury Sorbent: Activated Carbon and Metal Organic Framework Hybrid
Authors: Yongseok Hong, Kurt Louis Solis
Abstract:
In the present study, a hybrid sorbent of the metal organic framework (MOF) UiO-66 and powdered activated carbon (pAC) is synthesized to remove cationic and anionic metals simultaneously. UiO-66 is an octahedron-shaped MOF with a Zr₆O₄(OH)₄ metal node and a 1,4-benzenedicarboxylic acid (BDC) organic linker. Zr-based MOFs are attractive for trace element remediation in wastewaters because Zr is relatively non-toxic compared to other classes of MOF and therefore will not cause secondary pollution. Most remediation studies with UiO-66 target anions such as fluoride, but trace element oxyanions such as arsenic, selenium, and antimony have also been investigated. There have also been studies involving mercury removal by UiO-66 derivatives; however, these require post-synthetic modifications or have lower effective surface areas. Activated carbon is known as a readily available, well-studied, effective adsorbent for metal contaminants. A solvothermal method was employed to prepare the hybrid sorbent from UiO-66 and activated carbon, which can remove mercury and selenite simultaneously. The hybrid sorbent was characterized using FSEM-EDS, FT-IR, XRD, and TGA, and the results showed that UiO-66 and activated carbon were successfully composited. From BET studies, the hybrid sorbent has an S_BET of 1051 m² g⁻¹. Adsorption studies showed maximum adsorption capacities of 204.63 mg g⁻¹ and 168 mg g⁻¹ for Hg(II) and selenite, respectively, following the Langmuir model for both species. Kinetics studies revealed that the Hg uptake of the hybrid follows a pseudo-second-order model with a rate constant of 5.6E-05 g mg⁻¹ min⁻¹, while the selenite uptake follows the simplified Elovich model with α = 2.99 mg g⁻¹ min⁻¹ and β = 0.032 g mg⁻¹.
Keywords: adsorption, flue gas wastewater, mercury, selenite, metal organic framework
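The isotherm and kinetic forms reported above can be written out directly; the sketch below uses the abstract's fitted constants where given, while the Langmuir affinity constant b and the evaluation times are assumptions for illustration.

```python
# Langmuir isotherm, pseudo-second-order, and Elovich kinetic curves.
import numpy as np

def langmuir(Ce, qmax, b):
    """q = qmax * b * Ce / (1 + b * Ce)"""
    return qmax * b * Ce / (1 + b * Ce)

def pseudo_second_order(t, qe, k2=5.6e-5):       # k2 from the abstract
    """Integrated PSO form: q(t) = k2 * qe^2 * t / (1 + k2 * qe * t)"""
    return k2 * qe**2 * t / (1 + k2 * qe * t)

def elovich(t, alpha=2.99, beta=0.032):          # alpha, beta from the abstract
    """Integrated Elovich form: q(t) = (1/beta) * ln(1 + alpha * beta * t)"""
    return np.log(1 + alpha * beta * t) / beta

t = np.array([10.0, 60.0, 240.0, 960.0])         # minutes (assumed times)
print("Hg(II) uptake:", pseudo_second_order(t, qe=204.63).round(1))
print("Selenite uptake:", elovich(t).round(1))
print("Langmuir at Ce=50 mg/L:", round(langmuir(50.0, qmax=204.63, b=0.1), 1))
```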
155 Diagnostic Accuracy of Core Biopsy in Patients Presenting with Axillary Lymphadenopathy and Suspected Non-Breast Malignancy
Authors: Monisha Edirisooriya, Wilma Jack, Dominique Twelves, Jennifer Royds, Fiona Scott, Nicola Mason, Arran Turnbull, J. Michael Dixon
Abstract:
Introduction: Excision biopsy has been the investigation of choice for patients presenting with pathological axillary lymphadenopathy without a breast abnormality. Core biopsy of nodes can provide sufficient tissue for diagnosis and has advantages in terms of morbidity and speed of diagnosis. This study evaluates the diagnostic accuracy of core biopsy in patients presenting with axillary lymphadenopathy. Methods: Between 2009 and 2019, 165 patients referred to the Edinburgh Breast Unit had a total of 179 axillary lymph node core biopsies. Results: 152 (92%) of the 165 initial core biopsies were deemed to contain adequate nodal tissue. Core biopsy correctly established malignancy in 75 of the 78 patients with haematological malignancy (96%) and in all 28 patients with metastatic carcinoma (100%), and correctly diagnosed benign changes in 49 of 57 (86%) patients with benign conditions. There were no false positives and no false negatives. In 67 (85.9%) of the 78 patients with haematological malignancy, there was sufficient material in the first core biopsy to allow the pathologist to make an actionable diagnosis without requesting further tissue sampling prior to treatment. There were no complications of core biopsy. On follow-up, none of the patients with benign cores has been shown to have malignancy in the axilla, and none with lymphoma had their initial disease incorrectly classified. Conclusions: This study shows that core biopsy is now the investigation of choice for patients presenting with axillary lymphadenopathy, even those suspected of having lymphoma.
Keywords: core biopsy, excision biopsy, axillary lymphadenopathy, non-breast malignancy
154 Investigation of Influence of Maize Stover Components and Urea Treatment on Dry Matter Digestibility and Fermentation Kinetics Using in vitro Gas Techniques
Authors: Anon Paserakung, Chaloemphon Muangyen, Suban Foiklang, Yanin Opatpatanakit
Abstract:
Improving the nutritive value and digestibility of maize stover is an alternative way to increase its utilization in ruminants and to reduce the air pollution caused by open burning of maize stover in northern Thailand. In the present study, a 2x3 factorial arrangement in a completely randomized design was conducted to investigate the effects of maize stover component (whole and upper stover; cut above the 5th node) and urea treatment at levels of 0, 3, and 6% DM on dry matter digestibility and fermentation kinetics of maize stover using in vitro gas production. After 21 days of urea treatment, results illustrated that there was no interaction between maize stover component and urea treatment on 48-h in vitro dry matter digestibility (IVDMD). IVDMD was unaffected by maize stover component (P > 0.05); average IVDMD was 55%. However, whole maize stover gave higher cumulative gas production and gas kinetic parameters than upper stover (P < 0.05). Treating maize stover by ensiling with urea resulted in a significant linear increase in IVDMD (P < 0.05): IVDMD increased from 42.6% to 53.9% when the urea concentration was increased from 0 to 3%, and maximum IVDMD (65.1%) was observed when maize stover was ensiled with 6% urea. Maize stover treated with urea at levels of 0, 3, and 6% linearly increased cumulative gas production at 96 h (31.1 vs. 50.5 and 59.1 ml, respectively) and all gas kinetic parameters except the gas production from the immediately soluble fraction (P < 0.05). The results indicate that maize stover treated with 6% urea shows enhanced in vitro dry matter digestibility and fermentation kinetics. This study provides a practical approach to increasing the utilization of maize stover in feeding ruminant animals.
Keywords: maize stover, urea treatment, ruminant feed, gas production
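The gas kinetic parameters mentioned above are commonly obtained by fitting the exponential gas-production model y(t) = a + b(1 - e^(-ct)), where a is the gas from the immediately soluble fraction; a minimal fitting sketch with invented data points (not the paper's measurements) is shown below.

```python
# Fit of the standard exponential gas-production model to toy data.
import numpy as np
from scipy.optimize import curve_fit

def gas_model(t, a, b, c):
    return a + b * (1 - np.exp(-c * t))

t = np.array([2, 4, 8, 12, 24, 48, 72, 96], float)      # incubation hours
y = np.array([3, 7, 14, 20, 33, 47, 55, 59], float)     # cumulative gas, ml

(a, b, c), _ = curve_fit(gas_model, t, y, p0=(1.0, 60.0, 0.05))
print(f"a = {a:.1f} ml, b = {b:.1f} ml, c = {c:.3f} /h")
```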
153 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark
Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos
Abstract:
This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing the effects of each factor beforehand becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between the factors and performance, and to develop linear predictor models for time and cost. Methods: The solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements, and the metrics were analyzed using linear models for screening, ranking, and measuring each factor's impact. Results: Our findings include prediction models and show some non-intuitive results: the small influence of cores, the neutrality of memory and disks with respect to total execution time, and the non-significant impact of input data scale on costs, although it notably impacts execution time.
Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark
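A minimal sketch of this DoE analysis style: code the five factors at two levels, fit a linear predictor by least squares, and rank effects by magnitude. A full 2^5 design and placeholder responses stand in for the paper's 48-cluster fractional design and measured execution times.

```python
# Two-level factorial screening with a linear predictor model.
import numpy as np
from itertools import product

factors = ["data", "nodes", "cores", "memory", "disks"]
X = np.array(list(product([-1, 1], repeat=5)), float)    # full 2^5 design
rng = np.random.default_rng(1)
# Placeholder response: data size and node count dominate, cores barely matter.
y = 100 + 40 * X[:, 0] - 25 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 3, len(X))

A = np.column_stack([np.ones(len(X)), X])                # intercept + main effects
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, c in sorted(zip(factors, coef[1:]), key=lambda p: -abs(p[1])):
    print(f"{name:>6}: effect {2 * c:+.1f}")             # effect = 2 * coefficient
```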
152 Robotic Lingulectomy for Primary Lung Cancer: A Video Presentation
Authors: Abraham J. Rizkalla, Joanne F. Irons, Christopher Q. Cao
Abstract:
Purpose: Lobectomy was considered the standard of care for early-stage non-small cell lung cancer (NSCLC) after the Lung Cancer Study Group trial demonstrated increased locoregional recurrence for sublobar resections. However, there has been heightened interest in segmentectomies for selected patients with peripheral lesions ≤2cm, as investigated by the JCOG0802 and CALGB140503 trials. Minimally invasive robotic surgery facilitates segmentectomies with improved maneuverability and visualization of intersegmental planes using indocyanine green. We hereby present a patient who underwent robotic lingulectomy for an undiagnosed ground-glass opacity. Methodology: This video demonstrates a robotic portal lingulectomy using three 8mm ports and a 12mm port. Stereoscopic direct vision facilitated the identification of the lingular artery and vein, and intra-operative bronchoscopy was performed to confirm the lingular bronchus. The intersegmental plane was identified with indocyanine green and a near-infrared camera. Thorough lymph node sampling was performed in accordance with international standards. Results: The 18mm lesion was successfully excised with clear margins to achieve R0 resection, with no evidence of malignancy in the 8 lymph nodes sampled. Histopathological examination revealed lepidic-predominant adenocarcinoma, pathological stage IA. Conclusion: This video presentation exemplifies the standard approach for robotic portal lingulectomy in appropriately selected patients.
Keywords: lung cancer, robotic segmentectomy, indocyanine green, lingulectomy
151 Using Cyclic Structure to Improve Inference on Network Community Structure
Authors: Behnaz Moradijamei, Michael Higgins
Abstract:
Identifying community structure is a critical task in analyzing social media data sets, which are often modeled by networks. Statistical models such as the stochastic block model have proven able to explain the structure of communities in real-world network data. In this work, we develop a goodness-of-fit test to examine the existence of community structure by using a distinguishing property of networks: cyclic structures are more prevalent within communities than across them. To better understand how communities are shaped by the cyclic structure of the network, rather than just the number of edges, we introduce a novel method for deciding on the existence of communities. We exploit these structures by applying the renewal non-backtracking random walk (RNBRW) to the existing goodness-of-fit test. RNBRW is an important variant of the random walk in which the walk is prohibited from returning to a node in exactly two steps, and terminates and restarts once it completes a cycle. We investigate the use of RNBRW to improve the performance of existing goodness-of-fit tests for community detection algorithms based on the spectral properties of the adjacency matrix. Our proposed test of community structure is based on the probability distribution of eigenvalues of the normalized retracing probability matrix derived from RNBRW. We attempt to make the best use of asymptotic results on this distribution when there is no community structure, i.e., the asymptotic distribution under the null hypothesis. Moreover, we provide a theoretical foundation for our statistic by obtaining the true mean and a tight lower bound for the variance of RNBRW edge weights.
Keywords: hypothesis testing, RNBRW, network inference, community structure
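A minimal sketch of the RNBRW edge-weighting itself, on an assumed toy graph: walk without immediate backtracking and, when the walk closes a cycle, credit the edge that closed it and restart. The paper's test then studies the eigenvalues of the normalized retracing matrix built from such weights.

```python
# Renewal non-backtracking random walk (RNBRW) edge weighting.
import random

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
weights = {}
random.seed(7)

for _ in range(2000):                       # many short renewal walks
    prev, cur = None, random.choice(list(adj))
    visited = {cur}
    while True:
        choices = [n for n in adj[cur] if n != prev]   # no backtracking
        if not choices:
            break                                      # dead end: restart
        nxt = random.choice(choices)
        if nxt in visited:                             # cycle completed
            e = tuple(sorted((cur, nxt)))
            weights[e] = weights.get(e, 0) + 1         # credit retracing edge
            break
        visited.add(nxt)
        prev, cur = cur, nxt

print(sorted(weights.items(), key=lambda kv: -kv[1]))  # cycle-heavy edges first
```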
150 Research on “Three Ports in One” Comprehensive Transportation System of Sea, Land and Airport in Nantong City under the Background of a New Round of Territorial Space Planning
Authors: Ying Sun, Yuxuan Lei
Abstract:
Based on an analysis of the current situation of Nantong's comprehensive transportation system, this study clarifies the interactive relationship between the transportation system and the economy and society, and then proposes a development strategy for the planning and implementation of the "three ports in one" comprehensive transportation system of sea, land, and airport in this round of territorial spatial planning. The research findings are as follows: (1) The comprehensive transportation network system of Nantong City is beginning to take shape, but the lack of unified and complete system planning makes it difficult to establish a "multi-port integration" pattern around transportation hubs. (2) At the Yangtze River Delta level and the Nantong City level, a connected transport node integrating sea, land, and airport should be built into the transportation construction planning so as to effectively follow the guidance of the overall territorial space planning of Nantong City. (3) Nantong's comprehensive transportation system and its economy and society have experienced three interactive development relations at different stages: mutual promotion, geographical separation, and high-level driving. Therefore, the current planning of Nantong's comprehensive transportation system needs to be optimized: the four levels of Nantong City, the Shanghai metropolitan area, the Yangtze River Delta, and each district, county, and city should be comprehensively considered, and the four development strategies of accelerating construction, differentiated development, active docking, and innovative implementation should be adopted.
Keywords: master plan for territorial space, integrated transportation system, Nantong, sea, land and air, "Three ports in one"
149 Laparoscopic Proximal Gastrectomy in Gastroesophageal Junction Tumours
Authors: Ihab Saad Ahmed
Abstract:
Background: For Siewert type I and II gastroesophageal junction (GEJ) tumors, laparoscopic proximal gastrectomy can be performed; it is associated with several perioperative benefits compared with open proximal gastrectomy. The use of laparoscopic proximal gastrectomy (LPG) has become an increasingly popular approach for select tumors. Methods: We describe our technique for LPG, including the preoperative work-up, illustrated images of the main principal steps of the surgery, and our postoperative course. Results: Thirteen patients (nine male, four female) with type I and II GEJ adenocarcinoma had laparoscopic radical proximal gastrectomy and D2 lymphadenectomy. All of our patients received neoadjuvant chemotherapy. Eleven patients had an intrathoracic anastomosis through a mini-thoracotomy (two hand-sewn end-to-end anastomoses, the other nine patients end-to-side using a circular stapler), and two of the patients with an intrathoracic anastomosis had a flap-and-wrap technique; two patients had thoracoscopic esophageal and mediastinal lymph node dissection with a cervical anastomosis. The mean blood loss was 80 ml, and no cases were converted to open surgery. The mean operative time was 250 minutes, and the average number of lymph nodes retrieved was 19-25. No severe complications such as leakage, stenosis, pancreatic fistula, or intra-abdominal abscess were reported. Only one patient presented with empyema 1.5 months after discharge, which was managed conservatively. Conclusion: For carefully selected patients, LPG for GEJ tumor types I and II is a safe and reasonable alternative to the open technique, associated with similar oncologic outcomes and low morbidity. It showed less blood loss and fewer respiratory infections, with similar 1- and 3-year survival rates.
Keywords: LPG (laparoscopic proximal gastrectomy), GEJ (gastroesophageal junction tumour), D2 lymphadenectomy, neoadjuvant CTH
148 150 kVA Multifunction Laboratory Test Unit Based on Power-Frequency Converter
Authors: Bartosz Kedra, Robert Malkowski
Abstract:
This paper provides a description and presentation of a laboratory test unit built on a 150 kVA power-frequency converter and the Simulink Real-Time platform. The assumptions, based on criteria determining which load and generator types may be simulated using the discussed device, are presented, as well as the control algorithm structure. As the laboratory setup contains a transformer with a thyristor-controlled tap changer, a wider scope of setup capabilities is presented. Information is given about the communication interface used, the data maintenance and storage solution, and the Simulink Real-Time features employed, and all measurements are listed and described. The potential for modifications of the laboratory setup is evaluated. For purposes of Rapid Control Prototyping, a dedicated environment, Simulink Real-Time, was used; therefore, the load model Functional Unit Controller is based on a PC with I/O cards and Simulink Real-Time software. Simulink Real-Time was used to create real-time applications directly from Simulink models. In the next step, the applications were loaded on a target computer connected to physical devices, which provided the opportunity to perform Hardware-in-the-Loop (HIL) tests as well as the mentioned Rapid Control Prototyping process. With Simulink Real-Time, Simulink models were extended with I/O card driver blocks, making it possible to generate real-time applications automatically and to perform interactive or automated runs on a dedicated target computer equipped with a real-time kernel, a multicore CPU, and I/O cards. Results of the performed laboratory tests are presented: different load configurations are described, and experimental results are reported. This includes simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, time characteristics of groups of different load units in a chosen area, and arbitrary active and reactive power regulation based on a defined schedule.
Keywords: MATLAB, power converter, Simulink Real-Time, thyristor-controlled tap changer
147 A Prediction Model for Dynamic Responses of Building from Earthquake Based on Evolutionary Learning
Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park
Abstract:
Structural health monitoring based on seismic responses has been performed to prevent seismic damage. Structural seismic damage to a building is caused by instantaneous stress concentration, which is related to the dynamic characteristics of the earthquake. Meanwhile, seismic response analysis to estimate the dynamic responses of a building demands a significantly high computational cost. To prevent the failure of structural members due to the characteristics of the earthquake while avoiding this high computational cost, this paper presents an artificial neural network (ANN) based prediction model for the dynamic responses of a building over a specific time length. From the measured dynamic responses, the input and output nodes of the ANN are formed according to the specific time length and adopted for training. In the model, an evolutionary radial basis function neural network (ERBFNN) is implemented, in which a radial basis function network (RBFN) is integrated with an evolutionary optimization algorithm to find the RBF variables. The effectiveness of the proposed model is verified through an analytical study in which responses from dynamic analysis of a multi-degree-of-freedom system are applied as training data for the ERBFNN.
Keywords: structural health monitoring, dynamic response, artificial neural network, radial basis function network, genetic algorithm
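A minimal sketch (assumptions throughout) of the ERBFNN idea: an evolutionary loop adjusts the RBF centers and width while the output weights are solved by least squares, with training error as fitness. Selection plus Gaussian mutation stands in for the full genetic algorithm, and a 1-D toy function stands in for the measured multi-degree-of-freedom responses.

```python
# Evolutionary RBF network: evolve centers/width, solve weights directly.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 120)[:, None]                 # inputs, shape (N, 1)
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(120)   # toy "response"

def evaluate(genome):
    centers, sigma = genome[:-1], abs(genome[-1]) + 1e-3
    Phi = np.exp(-(X - centers[None, :]) ** 2 / (2 * sigma**2))  # (N, k)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # weights by least squares
    return np.mean((Phi @ w - y) ** 2)               # fitness = training MSE

k, pop_size, gens = 8, 30, 60
pop = rng.uniform(-3, 3, (pop_size, k + 1))          # k centers + 1 width gene
for _ in range(gens):
    scores = np.array([evaluate(p) for p in pop])
    elite = pop[np.argsort(scores)[: pop_size // 3]]               # selection
    children = elite[rng.integers(0, len(elite), pop_size - len(elite))]
    children = children + rng.normal(0, 0.15, children.shape)      # mutation
    pop = np.vstack([elite, children])

best = pop[np.argmin([evaluate(p) for p in pop])]
print("best training MSE:", round(evaluate(best), 5))
```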
146 Investigating Visual Statistical Learning during Aging Using the Eye-Tracking Method
Authors: Zahra Kazemi Saleh, Bénédicte Poulin-Charronnat, Annie Vinter
Abstract:
This study examines the effects of aging on visual statistical learning, using eye-tracking techniques to investigate this cognitive phenomenon. Visual statistical learning is a fundamental brain function that enables the automatic and implicit recognition, processing, and internalization of environmental patterns over time. Some previous research has suggested the robustness of this learning mechanism throughout the aging process, underscoring its importance in the context of education and rehabilitation for the elderly. The study included three distinct groups of participants: 21 young adults (Mage: 19.73), 20 young-old adults (Mage: 67.22), and 17 old-old adults (Mage: 79.34). Participants were exposed to a series of 12 arbitrary black shapes organized into 6 pairs, each with different spatial configurations and orientations (horizontal, vertical, and oblique). These pairs were not explicitly revealed to the participants, who were instructed to passively observe 144 grids presented sequentially on the screen for a total duration of 7 min. In the subsequent test phase, participants performed a two-alternative forced-choice task in which they had to identify the most familiar pair in 48 trials, each consisting of a base pair and a non-base pair. Behavioral analysis using t-tests revealed notable findings. The mean score of the first group was significantly above chance, indicating the presence of visual statistical learning. The second group also performed significantly above chance, confirming the persistence of visual statistical learning in young-old adults. Conversely, the third group, consisting of old-old adults, showed a mean score that was not significantly above chance. This lack of statistical learning in the old-old adult group suggests a decline in this cognitive ability with age. Preliminary eye-tracking results showed a decrease in the number and duration of fixations during the exposure phase across all groups. The main difference was that older participants fixated more often on empty cells of the grid than younger participants, likely due to a decline in the ability to ignore irrelevant information, resulting in a decrease in statistical learning performance.
Keywords: aging, eye tracking, implicit learning, visual statistical learning
145 Application of Finite Volume Method for Numerical Simulation of Contaminant Transfer in a Two-Dimensional Reservoir
Authors: Atousa Ataieyan, Salvador A. Gomez-Lopera, Gennaro Sepede
Abstract:
Today, due to the growing urban population and, consequently, the increasing water demand in cities, the amount of contaminants entering water resources is increasing. This can impose harmful effects on the quality of downstream water. Therefore, predicting the concentration of discharged pollutants at different times and distances in the area of interest is of high importance for carrying out preventative and control measures, as well as for avoiding consumption of contaminated water. In this paper, the concentration distribution of an injected conservative pollutant in a square reservoir containing four symmetric blocks and three sources is simulated using the Finite Volume Method (FVM). For this purpose, after estimating the flow velocity, the classical Advection-Diffusion Equation (ADE) is discretized over the study domain by the Backward Time-Backward Space (BTBS) scheme. The discretized equations for each node are then derived according to the initial condition, the boundary conditions, and the point contaminant sources. Finally, with appropriate time and space steps, a computational code was set up in MATLAB, and the contaminant concentration was obtained at different times and distances. Simulation results show that using the BTBS differencing scheme with the FVM is an appropriate approach for solving the partial differential equation of transport in the case of two-dimensional contaminant transfer in an advective-diffusive flow.
Keywords: BTBS differencing scheme, contaminant concentration, finite volume, mass transfer, water pollution
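A 1-D analogue of the BTBS discretization may make the scheme concrete (the paper solves the 2-D reservoir in MATLAB; this is a Python sketch with assumed grid, velocity, and release values): implicit in time, backward (upwind) in space for advection, one linear solve per time step.

```python
# 1-D advection-diffusion with a Backward Time-Backward Space scheme.
import numpy as np

L, N, D, u = 10.0, 100, 0.05, 0.3           # domain, cells, diffusivity, velocity
dx, dt, steps = L / N, 0.1, 300
C = np.zeros(N)
C[10] = 5.0                                  # initial point release

a = -u * dt / dx - D * dt / dx**2            # coefficient of C[i-1] at n+1
b = 1 + u * dt / dx + 2 * D * dt / dx**2     # coefficient of C[i]   at n+1
c = -D * dt / dx**2                          # coefficient of C[i+1] at n+1

A = np.zeros((N, N))
for i in range(N):
    A[i, i] = b
    if i > 0: A[i, i - 1] = a
    if i < N - 1: A[i, i + 1] = c
A[0, :], A[-1, :] = 0.0, 0.0                 # simple Dirichlet C = 0 at both ends
A[0, 0] = A[-1, -1] = 1.0

for _ in range(steps):                       # one implicit solve per time step
    rhs = C.copy()
    rhs[0] = rhs[-1] = 0.0
    C = np.linalg.solve(A, rhs)

print("peak concentration:", round(C.max(), 4), "at x =", C.argmax() * dx)
```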
144 Dynamic Response around Inclusions in Infinitely Inhomogeneous Media
Authors: Jinlai Bian, Zailin Yang, Guanxixi Jiang, Xinzhu Li
Abstract:
The propagation of elastic waves in an inhomogeneous medium is a classic problem. Earthquakes occur frequently and cause great economic losses and casualties; therefore, to prevent earthquake damage to people and to reduce losses, this paper studies the dynamic response around a circular inclusion in a whole space with inhomogeneous modulus. The inhomogeneity of the medium is reflected in a shear modulus that varies with spatial position, while the density is constant; the method can be applied, for example, to the problem of underground buried pipelines. Stress concentration phenomena are common in aerospace and earthquake engineering, and the dynamic stress concentration factor (DSCF) is one of the main factors leading to material damage; one of the important applications of elastodynamics is to determine the stress concentration in bodies with discontinuities such as cracks, holes, and inclusions. At present, the available methods include the wave function expansion method, the integral transformation method, the integral equation method, and so on. Based on the complex function method, the Helmholtz equation with variable coefficients is standardized by means of the conformal transformation method and the wave function expansion method, and the displacement and stress fields in the whole space with a circular inclusion are solved in the complex coordinate system. The unknown coefficients are determined from the boundary conditions, and the correctness of the method is verified by comparison with existing results. Given the superiority of complex variable function theory for conformal transformation, this method can be extended to study inclusions of arbitrary shape. By solving for the dynamic stress concentration factor around the inclusion, the influence of the inhomogeneity parameters of the medium and of the inclusion-to-matrix wavenumber ratio on the dynamic stress concentration factor is analyzed. The research results can provide a reference for the evaluation of nondestructive testing (NDT), oil exploration, seismic monitoring, and soil-structure interaction.
Keywords: circular inclusions, complex variable function, dynamic stress concentration factor (DSCF), inhomogeneous medium
143 Internet of Things Networks: Denial of Service Detection in Constrained Application Protocol Using Machine Learning Algorithm
Authors: Adamu Abdullahi, On Francisca, Saidu Isah Rambo, G. N. Obunadike, D. T. Chinyio
Abstract:
The paper discusses the potential threat of Denial of Service (DoS) attacks on the Constrained Application Protocol (CoAP) in Internet of Things (IoT) networks. As billions of IoT devices are expected to be connected to the internet in the coming years, the security of these devices is vulnerable to attacks that disrupt their functioning. This research aims to tackle this issue by applying mixed qualitative and quantitative methods for feature selection and extraction, together with clustering algorithms, to detect DoS attacks on CoAP using Machine Learning Algorithms (MLAs). The main objective of the research is to enhance the security scheme for CoAP in the IoT environment by analyzing the nature of DoS attacks and identifying a new set of features for detecting them in the IoT network environment. The aim is to demonstrate the effectiveness of the MLA in detecting DoS attacks and to compare it with conventional intrusion detection systems for securing CoAP in the IoT environment. Findings: The research identifies the appropriate node for detecting DoS attacks in the IoT network environment and demonstrates how to detect the attacks through the MLA. Detection accuracy in both the classification and network simulation environments shows that the k-means algorithm scored the highest percentage in training and testing of the evaluation, and the network simulation platform achieved the highest overall accuracy of 99.93%. This work also reviews conventional intrusion detection systems for securing CoAP in the IoT environment and discusses the DoS security issues associated with CoAP.
Keywords: algorithm, CoAP, DoS, IoT, machine learning
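A minimal sketch of the clustering step described above: k-means over simple per-window traffic features separates a high-rate attack cluster from normal CoAP traffic. The feature choice and the synthetic values are assumptions, not the paper's dataset.

```python
# k-means DoS flagging on synthetic CoAP traffic features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Features per observation window: [requests/s, mean payload bytes].
normal = rng.normal([20, 60], [5, 10], size=(200, 2))
attack = rng.normal([400, 15], [60, 5], size=(20, 2))   # flood-like traffic
Xf = np.vstack([normal, attack])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Xf)
attack_cluster = int(np.argmax(km.cluster_centers_[:, 0]))  # higher request rate
flagged = (km.labels_ == attack_cluster).sum()
print(f"flagged {flagged} of {len(Xf)} windows as DoS-like")
```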
142 Bluetooth Communication Protocol Study for Multi-Sensor Applications
Authors: Joao Garretto, R. J. Yarwood, Vamsi Borra, Frank Li
Abstract:
Bluetooth Low Energy (BLE) has emerged as one of the main wireless communication technologies used in low-power electronics, such as wearables, beacons, and Internet of Things (IoT) devices. BLE's energy efficiency, smartphone interoperability, and Over-the-Air (OTA) capabilities are essential features for ultralow-power devices, which are usually designed with size and cost constraints. Most current research regarding the power analysis of BLE devices focuses on the theoretical aspects of the advertising and scanning cycles, with most results being presented in the form of mathematical models and computer software simulations. Such modeling and simulation are important for understanding the technology, but hardware measurement is essential for understanding how BLE devices behave in real operation. In addition, recent literature focuses mostly on the BLE technology itself, leaving possible applications and their analysis out of scope. In this paper, a coin-cell battery-powered BLE data acquisition device, with a 4-in-1 sensor and one accelerometer, is proposed and evaluated with respect to its power consumption. The device is first evaluated in advertising mode with the sensors turned off completely, followed by power analysis when each of the sensors is individually turned on and transmitting data, and concluding with the power consumption evaluation when both sensors are on and broadcasting the data to a mobile phone. The results presented in this paper are real-time measurements of the electrical current consumption of the BLE device, in which the energy levels demonstrated are matched to the BLE behavior and sensor activity.
Keywords: bluetooth low energy, power analysis, BLE advertising cycle, wireless sensor node
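As a back-of-the-envelope companion to such measurements, average current and coin-cell lifetime follow from the duty cycle of the advertising events; all numbers below are assumptions for illustration, not the paper's measurements.

```python
# Duty-cycle average current and battery life estimate for a BLE node.
sleep_uA, event_mA, event_ms, interval_ms = 2.0, 6.0, 3.0, 1000.0
battery_mAh = 230.0                      # assumed CR2032 capacity

duty = event_ms / interval_ms            # fraction of time in an advertising event
avg_mA = event_mA * duty + (sleep_uA / 1000.0) * (1 - duty)
hours = battery_mAh / avg_mA
print(f"average current {avg_mA * 1000:.1f} uA -> ~{hours / 24:.0f} days")
```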
141 Load Balancing Technique for Energy-Efficiency in Cloud Computing
Authors: Rani Danavath, V. B. Narsimha
Abstract:
Cloud computing is emerging as a new paradigm of large-scale distributed computing. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction; this cloud model is composed of five essential characteristics, three service models, and four deployment models. Load balancing is one of the main challenges in cloud computing: it is required to distribute the dynamic workload across multiple nodes to ensure that no single node is overloaded, and it helps in the optimal utilization of resources, enhancing the performance of the system. The goal of load balancing is to minimize resource consumption and the carbon emission rate, which is a direct need of cloud computing. This motivates the new metrics of energy consumption and carbon emission for energy-efficient load balancing techniques in cloud computing. Existing load balancing techniques mainly focus on reducing overhead, service response time, and improving performance, but none of them have considered energy consumption and carbon emission. Therefore, in this paper we introduce a load balancing technique oriented towards energy efficiency. This energy-efficient load balancing technique can be used to improve the performance of cloud computing by balancing the workload across all the nodes in the cloud with minimum resource utilization, in turn reducing energy consumption and carbon emission to an extent, which will help to achieve green computing.
Keywords: cloud computing, distributed computing, energy efficiency, green computing, load balancing, energy consumption, carbon emission
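A minimal sketch of one energy-aware placement heuristic consistent with the goal described above: assign each task to the node with the smallest marginal power increase under a linear idle/peak model, so load consolidates onto fewer active nodes while no node overloads. The power model and task sizes are assumptions, not the paper's technique.

```python
# Greedy energy-aware task placement under a linear power model.
P_IDLE, P_PEAK, CAP = 100.0, 250.0, 100.0     # watts, watts, CPU units

nodes = [{"load": 0.0} for _ in range(4)]
tasks = [30, 20, 50, 10, 40, 25, 15]

def power(load):
    # Idle power once a node is on, plus load-proportional dynamic power.
    return 0.0 if load == 0 else P_IDLE + (P_PEAK - P_IDLE) * load / CAP

for t in tasks:
    best = min((n for n in nodes if n["load"] + t <= CAP),
               key=lambda n: power(n["load"] + t) - power(n["load"]))
    best["load"] += t                          # smallest marginal energy wins

total = sum(power(n["load"]) for n in nodes)
print([n["load"] for n in nodes], f"total power {total:.0f} W")
```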