Search results for: exponentially weighted moving average
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6153

1833 Effects of Plumage Colour on Measurable Attributes of Indigenous Chickens in North Central Nigeria

Authors: Joseph J. Okoh, Samuel T. Mbap, Tahir Ibrahim, Yusuf P. Mancha

Abstract:

The influence of plumage colour on measurable attributes of 6176 adult indigenous chickens of mixed sex from four states of the North Central Zone of Nigeria, namely Nasarawa, Niger, Benue and Kogi, and the Federal Capital Territory (FCT) Abuja, was assessed. The overall average body weight of the chickens was 1.95 ± 0.03 kg. The body weights of black, white, black/white, brown, black/brown, grey and mottled chickens were 1.87 ± 0.04, 1.94 ± 0.04, 1.95 ± 0.03, 1.93 ± 0.03, 2.01 ± 0.04, 1.96 ± 0.04 and 1.94 ± 0.14 kg respectively. Only body length did not vary by plumage colour. The others, namely body weight and width, shank, comb and breast length, breast height, and beak and wing lengths, varied significantly (p < 0.001). Generally, no colour was outright superior to the others in all body measurements. However, body weight and breast height were both highest in black/brown chickens, which also had the second highest breast length. Body width and shank, beak, comb and wing lengths were highest in grey chickens but lowest in those with white colour and white combinations. Egg quality, on the other hand, was mostly lowest in grey chickens. In selection for genetic improvement in body measurements, black/brown and grey chickens should be favoured. However, in view of the known negative relationship between body weight and egg attributes, selection in favour of grey plumage may produce chickens with poor egg attributes; where egg quality matters, grey chickens should therefore be selected against.

Keywords: body weight, indigenous chicken, measurements, plumage colour

Procedia PDF Downloads 112
1832 Effect of Iron Ore Tailings on the Properties of Fly-ash Cement Concrete

Authors: Sikiru F. Oritola, Abd Latif Saleh, Abd Rahman Mohd Sam, Rozana Zakaria, Mushairry Mustaffar

Abstract:

The strength of concrete varies with the materials used; improper selection of components can therefore yield different strengths, since each material contributes a different property to the concrete. This work studied the effect of using iron ore tailings (IOTs) as a partial replacement for sand on some properties of concrete with fly ash cement as the binder. The sieve analysis and other basic properties of the materials used in producing the concrete samples were first determined. Two brands of fly ash cement were studied. For each brand, five types of concrete sample were produced, denoted HCT0, HCT10, HCT20, HCT30 and HCT40 for the first brand and PCT0, PCT10, PCT20, PCT30 and PCT40 for the second. The percentage of tailings replacing sand was varied from 0% to 40% at 10% intervals. For each concrete sample, the averages of three cube, three cylinder and three prism specimen results were used to determine the compressive strength, splitting tensile strength and flexural strength respectively. A water/cement ratio of 0.54 with a fly ash cement content of 463 kg/m³ was used in preparing the fresh concrete. The slump values for the HCT brand concrete ranged from 152 mm to 75 mm, while those of the PCT brand ranged from 149 mm to 70 mm. Sample PCT30 recorded the highest 28-day compressive strength of 28.12 N/mm², the highest splitting tensile strength of 2.99 N/mm² and the highest flexural strength of 4.99 N/mm². The texture of the iron ore tailings is rough and angular and was therefore able to improve the strength of the fly ash cement concrete. Also, owing to the fineness of the IOTs, more voids in the concrete can be filled, but this reaches its optimum at the 30% replacement level, hence the drop in strength at 40% replacement.

Keywords: concrete strength, fine aggregate, fly ash cement, iron ore tailings

Procedia PDF Downloads 658
1831 Using Deep Learning for the Detection of Faulty RJ45 Connectors on a Radio Base Station

Authors: Djamel Fawzi Hadj Sadok, Marrone Silvério Melo Dantas, Pedro Henrique Dreyer, Gabriel Fonseca Reis de Souza, Daniel Bezerra, Ricardo Souza, Silvia Lins, Judith Kelner

Abstract:

A radio base station (RBS), part of the radio access network, is a particular type of equipment that supports the connection between a wide range of cellular user devices and an operator's network access infrastructure. Nowadays, most RBS maintenance is carried out manually, a time-consuming and costly task. A suitable candidate for RBS maintenance automation is repairing faulty links between devices caused by missing or unplugged connectors. This paper proposes and compares two deep learning solutions to identify attached RJ45 connectors on network ports: connector detection, based on object detection, and connector classification, based on object classification. Connector detection achieved an accuracy of 0.934 and a mean average precision of 0.903; connector classification achieved a maximum accuracy of 0.981 and an AUC of 0.989. Although connector detection was outperformed in this study, this should not be viewed as a general result: connector detection is more flexible in scenarios where there is no precise information about the environment and the possible devices, whereas connector classification requires that information to be well defined.

Keywords: radio base station, maintenance, classification, detection, deep learning, automation

Procedia PDF Downloads 179
1830 Early Depression Detection for Young Adults with a Psychiatric and AI Interdisciplinary Multimodal Framework

Authors: Raymond Xu, Ashley Hua, Andrew Wang, Yuru Lin

Abstract:

During COVID-19, the depression rate increased dramatically, and young adults are the most vulnerable to the mental health effects of the pandemic. Lower-income families have a higher rate of depression diagnosis than the general population but less access to clinics. This research aims to achieve early depression detection at low cost, large scale, and high accuracy through an interdisciplinary approach incorporating clinical practices defined by the American Psychiatric Association (APA) as well as a multimodal AI framework. The proposed approach detects the nine depression symptoms with natural language processing sentiment analysis and a symptom-based lexicon uniquely designed for young adults. The experiments were conducted on multimedia survey results from adolescents and young adults and on unbiased Twitter communications. The result was further aggregated with the facial emotional cues analyzed by a convolutional neural network on the multimedia survey videos. Five experiments, each conducted on 10k data entries, reached consistent results with an average accuracy of 88.31%, higher than existing natural language analysis models. This approach can reach the 300+ million daily active Twitter users and is highly accessible to low-income populations, promoting early depression detection, raising awareness among adolescents and young adults, and revealing complementary cues to assist clinical depression diagnosis.
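
The symptom-based lexicon scoring described above can be sketched as follows. The phrases, symptom labels, and weights below are invented for illustration; they are not the authors' actual lexicon or scoring rules.

```python
# Hypothetical sketch of symptom-based lexicon scoring; the phrases and
# weights are invented, not the lexicon described in the abstract.
SYMPTOM_LEXICON = {
    "hopeless": ("depressed_mood", 1.0),
    "can't sleep": ("insomnia", 0.8),
    "exhausted": ("fatigue", 0.7),
    "worthless": ("worthlessness", 1.0),
}

def score_symptoms(text):
    """Accumulate lexicon evidence per depression-symptom category."""
    text = text.lower()
    scores = {}
    for phrase, (symptom, weight) in SYMPTOM_LEXICON.items():
        if phrase in text:
            scores[symptom] = scores.get(symptom, 0.0) + weight
    return scores

print(score_symptoms("I feel hopeless and exhausted, I can't sleep"))
# {'depressed_mood': 1.0, 'insomnia': 0.8, 'fatigue': 0.7}
```

A real system would also handle negation, misspellings, and per-symptom thresholds before aggregating with the other modalities.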

Keywords: artificial intelligence, COVID-19, depression detection, psychiatric disorder

Procedia PDF Downloads 118
1829 Assessment of E-Readiness in Libraries of Public Sector Universities Khyber Pakhtunkhwa-Pakistan

Authors: Saeed Ullah Jan

Abstract:

This study examined e-readiness in the libraries of public sector universities in Khyber Pakhtunkhwa. Efforts were made to evaluate the availability of human resources, electronic infrastructure, and network services and programs in these libraries. The population of the study was the twenty-seven public sector university libraries of Khyber Pakhtunkhwa. A quantitative approach was adopted, and a questionnaire-based survey was conducted to collect data from the librarian or person in charge of each library. The collected data were analyzed using the Statistical Package for the Social Sciences version 22 (SPSS). The mean score of the knowledge component was below three, indicating that the respondents were only poorly or moderately satisfied with the knowledge resources of their libraries. The satisfaction level of the respondents with the other components, such as electronic infrastructure, network services and programs, and enhancers of the networked world, was likewise rated as average or below. The study suggests that major aspects of the existing public sector university libraries require significant transformation. For this purpose, the government should provide the resources and facilities required to meet the population's informational and recreational demands. The information and communication technology (ICT) infrastructure of public university libraries needs improvement in terms of the availability of computer equipment, databases, network servers, multimedia projectors, digital cameras, uninterruptible power supplies, scanners, and backup devices such as hard discs and Digital Video Discs/Compact Discs.

Keywords: ICT-libraries, e-readiness-libraries, e-readiness-university libraries, e-readiness-Pakistan

Procedia PDF Downloads 71
1828 Strain Based Failure Criterion for Composite Notched Laminates

Authors: Ibrahim A. Elsayed, Mohamed H. Elalfy, Mostafa M. Abdalla

Abstract:

A strain-based failure criterion for composite notched laminates is introduced, in which the most critical stress concentration factor for an anisotropic notched laminate can be related to the failure of the corresponding quasi-isotropic laminate and the anisotropy ratio of the laminate. The proposed criterion simplifies the design of composites to meet notched failure requirements by eliminating the need for detailed specification of the stacking sequence at the preliminary design stage: the designer can design based on the stiffness of the laminate and, at a later stage, select an appropriate stacking sequence to meet the stiffness requirements. The failure strains for the notched laminates are computed using the material's Omni-strain envelope, the region of average strain within which the laminate is safe regardless of ply orientation. In this work, we use Hashin's failure criteria, and the strains around the hole are computed using Savin's analytic solution. A progressive damage analysis study was conducted in which the failure loads for the notched laminates were computed using finite element analysis; the failure strains were then computed and used to estimate the concentration factor. It is found that Savin's analytic solution predicts the same ratio of concentration factors between anisotropic and quasi-isotropic laminates as the more expensive progressive failure analysis.

Keywords: anisotropy ratio, failure criteria, notched laminates, Omni-strain envelope, Savin’s solution

Procedia PDF Downloads 102
1827 Globalization of Pesticide Technology and Sustainable Agriculture

Authors: Gagandeep Kaur

Abstract:

The pesticide industry is a major supplier of agricultural inputs. Pesticides control weeds, fungal diseases and other causes of yield loss in agricultural production. In agribusiness and the agrichemical industry, globalization of markets, competition and innovation are the dominant trends. Because of the tradition of increasing the productivity of agro-systems through generic, universally applicable technologies, innovation in the agrichemical industry is limited. The marketing of agricultural technology must contend with trends such as locally organized forces that envision a regionalized, sustainable agriculture in the future. Agricultural production has changed dramatically over the past century. Before World War II, agricultural production was characterized by low inputs of money, high labor, mixed farming and low yields. Although mineral fertilizers were already applied in the second half of the 19th century, most crops were restricted by local climatic, geological and ecological conditions. After World War II, in the period of reconstruction, political and socioeconomic pressure changed the nature of agricultural production: for a growing population, food security at low prices and securing farmer income at acceptable levels became political priorities. The current European Common Agricultural Policy aims to reduce overproduction, liberalize world trade, and protect landscapes and natural habitats. Farmers have to increase the quality of their production and control costs because of increased competition on the world market. Pesticides should be more effective at lower application doses, less toxic, and pose no threat to groundwater. A major debate is taking place about how, and whether, to mitigate the intensive use of pesticides; this debate is about the future of agriculture, namely sustainable agriculture.
This is possible by moving away from conventional agriculture, which is characterized by high inputs and high yields and in which pesticides support crop production on a wide scale. Moving away from conventional agriculture is possible through the gradual adoption of less disturbing and less polluting agricultural practices at the level of the cropping system. A healthy environment for future crop production requires the maintenance of its chemical, physical and biological properties, and emissions of volatile compounds into the atmosphere must be minimized. Companies limit themselves to a particular interpretation of sustainable development, characterized by technological optimism and production maximization. The main objective of this paper is therefore to present the trends in the pesticide industry and in agricultural production in the era of globalization; the second objective is to analyze sustainable agriculture. Pesticide companies appear to have identified biotechnology as a promising alternative and supplement to the conventional business of selling pesticides, and the agricultural sector is in the process of transforming its conventional mode of operation, with some experts advising farmers to move towards precision farming and others suggesting organic farming. The methodology of the paper is historical and analytical, using both primary and secondary sources.

Keywords: globalization, pesticides, sustainable development, organic farming

Procedia PDF Downloads 85
1826 Open Source Knowledge Management Approach to Manage and Disseminate Distributed Content in a Global Enterprise

Authors: Rahul Thakur, Onkar Chandel

Abstract:

Red Hat is the world leader in providing open source software and solutions. A global enterprise like Red Hat has unique issues connecting employees with content because of distributed offices, multiple teams spread across geographies, multiple languages, and different cultures. Employees of a global company create content that is distributed across departments, teams, regions, and countries. This makes finding the best content difficult, since owners keep iterating on the existing content. When employees are unable to find content, they end up creating it again, duplicating existing material and effort. Employees may also fail to find the relevant content and spend time reviewing obsolete, duplicate, or irrelevant content. On average, a person spends 15 minutes per day in failed searches, which can result in missed business opportunities, employee frustration, and substandard deliverables. The Red Hat Knowledge Management Office (KMO) applied an 'open source strategy' to solve these problems. Under the open source strategy, decisions are taken collectively; the strategy aims at accomplishing common goals with the help of communities. The objectives of this initiative were to save employees' time, give them authentic content, improve their content search experience, avoid duplicate content creation, provide context-based search, improve analytics, improve content management workflows, automate content classification, and automate content upload. This session describes the open source strategy, its applicability to content management, challenges, recommended solutions, and outcomes.

Keywords: content classification, content management, knowledge management, open source

Procedia PDF Downloads 196
1825 Prevalence of Physical Activity Levels and Perceived Benefits of and Barriers to Physical Activity among Jordanian Patients with Coronary Heart Disease: A Cross-Sectional Study

Authors: Eman Ahmed Alsaleh

Abstract:

Background: Many studies published in other countries have identified perceived benefits of and barriers to physical activity among patients with coronary heart disease. Nevertheless, there are no data on this issue for Jordanian patients with coronary heart disease. Objective: This study aimed to describe the prevalence of physical activity and the benefits of and barriers to physical activity as perceived by Jordanian patients with coronary heart disease, and the relationship between physical activity and these perceived benefits and barriers. In addition, it examined the influence of selected sociodemographic and health characteristics on physical activity and on the perceived benefits and barriers. Methods: A cross-sectional design was used with a sample of 400 patients with coronary heart disease, who were given a list of perceived benefits of and barriers to physical activity and asked to what extent they agreed or disagreed with each. Results: Jordanian patients with coronary heart disease perceived various benefits of and barriers to physical activity. Most of the perceived benefits were physiological (mean = 5.7, SD = 0.7). The most substantial perceived barriers were feeling anxious, not having enough time, lack of interest, bad weather, and feeling uncomfortable. Sociodemographic and health characteristics that significantly influenced perceived barriers to physical activity were age, gender, health perception, chest pain frequency, education, job, caring responsibilities, ability to travel alone, smoking, and previous and current physical activity behaviour.
Conclusion: This research demonstrates that patients with coronary heart disease perceive physiological benefits of physical activity as well as motivational, physical health, and environmental barriers to it, which is significant for developing intervention strategies that aim to maximize patients' participation in physical activity and overcome the barriers to it.

Keywords: prevalence, coronary heart disease, physical activity, perceived barriers

Procedia PDF Downloads 98
1824 Prevalence and Factors Associated to Work Accidents in the Construction Sector in Benin: Cases of CFIR – Consulting

Authors: Antoine Vikkey Hinson, Menonli Adjobimey, Gemayel Ahmed Biokou, Rose Mikponhoue

Abstract:

Introduction: The construction industry is a critical concern for health and safety services worldwide; the World Health Organization attributed 1.9 million deaths to work-related disease and injury in 2016. The aim of this study was to determine the prevalence of, and factors associated with, the occurrence of work accidents in a construction company in Benin. Method: This was a descriptive cross-sectional and analytical study. Data analysis was performed with R software 4.1.1. In the multivariate analysis, we performed a binary logistic regression; adjusted odds ratios (ORa) and their 95% confidence intervals [95% CI] were presented for the explanatory variables retained in the final model. The significance threshold for all tests was 5% (p < 0.05). Results: In this study, 472 workers were included; of these, 452 (95.7%) were men, corresponding to a sex ratio of 22.6. The average age of the workers was 33 ± 8.8 years. The workers were mostly laborers (84.7%), and half declared having inadequate personal protective equipment (50.6%, n = 239). The prevalence of work accidents was 50.8%. Collision with rolling stock (25.8%), cuts (16.2%), and stumbling (16.2%) were the main types of work accident on the construction site. Four factors were associated with work accidents: fatigue or exhaustion (ORa: 1.53 [1.03; 2.28]); the use of dangerous tools (ORa: 1.81 [1.22; 2.71]); the various laborers' jobs (ORa: 4.78 [2.62; 9.21]); and seniority in the company of ≥ 4 years (ORa: 2.00 [1.35; 2.96]). Conclusion: This study allowed us to identify the associated factors. It is imperative to implement a rigorous occupational health and safety policy, including continuing safety training for workers and the supply of appropriate work tools and protective equipment.
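
The adjusted odds ratios above come from a multivariate logistic regression. As a simplified illustration, the sketch below computes an unadjusted odds ratio and its 95% Wald confidence interval from a 2x2 exposure table; the counts are invented for the example, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical counts for illustration only.
or_, (lo, hi) = odds_ratio_ci(80, 60, 50, 90)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}; {hi:.2f}]")
# OR = 2.40, 95% CI [1.48; 3.88]
```

An adjusted OR is read off the same way from the exponentiated coefficient of the exposure term in the fitted multivariate model.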

Keywords: prevalence, work accident, associated factors, construction, Benin

Procedia PDF Downloads 39
1823 Hope as a Predictor for Complicated Grief and Anxiety: A Bayesian Structural Equational Modeling Study

Authors: Bo Yan, Amy Y. M. Chow

Abstract:

Bereavement is recognized as a universally challenging experience, so it is important to gather research evidence on protective factors in bereavement, of which hope is one, according to previous coping studies. The present study adds knowledge by investigating whether hope in the first month after a death predicts psychological symptoms, namely complicated grief (CG), anxiety, and depressive symptoms, at the seventh month. The data were collected via one-on-one interview surveys in a longitudinal project with Hong Kong hospice users (sample size 105). Most participants were middle-aged (49 years old on average), female (72%), and without religious affiliation (58%). A Bayesian structural equation modeling (BSEM) analysis was conducted on the longitudinal dataset, controlling for age and gender. The findings show that hope at the first month of bereavement negatively predicts both CG and anxiety symptoms at the seventh month, but not depressive symptoms; the overall model fit is good. These findings suggest assessing hope at the first month of bereavement: hope at the first month after the loss is an excellent predictor of complicated grief and anxiety symptoms at the seventh month. The clear result from this sample encourages cross-cultural research on replicating the model and developing further clinical applications. In particular, early intervention to increase the level of hope has the potential to reduce psychological symptoms and thus improve bereaved persons' wellbeing in the long run.

Keywords: anxiety, complicated grief, depressive symptoms, hope, structural equational modeling

Procedia PDF Downloads 186
1822 Application of Interferometric Techniques for Quality Control Oils Used in the Food Industry

Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich

Abstract:

The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. To assess the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, relying only on the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish changes in their quality. The interferograms were obtained with a Mach-Zehnder interferometer using a 10 mW HeNe laser beam at 632.8 nm. Each interferogram was captured and analyzed, and its full width at half maximum (FWHM) was measured using the Amcap and ImageJ software. The FWHM values were organized into three groups. The averages of the FWHMs of group A show almost linear behavior, so the exposure time is probably not relevant when the oil is kept at constant temperature. Group B exhibits slightly exponential behavior as the temperature rises from 373 K to 393 K. A Student's t-test indicates, at the 95% confidence level (p = 0.05), a variation in the molecular composition of the two samples. Furthermore, we found a correlation between the iodine indexes (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
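
The FWHM measurement at the core of the analysis can be sketched numerically. This is a generic illustration on a synthetic Gaussian profile, not the authors' ImageJ workflow.

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile, with
    linear interpolation at the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # Left crossing: y rises through `half` between i0-1 and i0.
    xl = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    # Right crossing: y falls through `half`, so flip for np.interp.
    xr = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

x = np.linspace(-5, 5, 2001)
sigma = 1.0
y = np.exp(-x**2 / (2 * sigma**2))  # Gaussian: FWHM = 2*sqrt(2 ln 2)*sigma
print(round(fwhm(x, y), 3))  # ≈ 2.355 (theory: 2.3548)
```

On a real interferogram slice, `y` would be the measured intensity profile across a fringe rather than an analytic curve.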

Keywords: food industry, interferometric, oils, quality control

Procedia PDF Downloads 359
1821 Older Consumer’s Willingness to Trust Social Media Advertising: An Australian Case

Authors: Simon J. Wilde, David M. Herold, Michael J. Bryant

Abstract:

Social media networks have become the hotbed of advertising activity, due mainly to their increasing consumer/user base and to the ability of marketers to accurately measure ad exposure and consumer-based insights on such networks. More than half of the world's population now uses social media (4.8 billion people, or 60%), with 150 million new users having come online within the last 12 months (to June 2022). As the use of social media networks has grown, the key business strategies for interacting with these potential customers have matured, especially social media advertising. Unlike traditional media outlets, social media advertising is highly interactive and digital-channel-specific; social media advertisements are precisely targetable, providing marketers with an extremely powerful marketing tool. Yet despite the measurable benefits afforded to businesses engaged in social media advertising, recent controversies (such as the relationship between Facebook and Cambridge Analytica in 2018) have only heightened the role that trust and privacy play within these networks. The purpose of this exploratory paper is to investigate the extent to which social media users trust social media advertising; understanding this relationship will fundamentally assist marketers in better understanding social media interactions and their implications for society. Using a web-based quantitative survey instrument, participants were recruited via a reputable online panel survey site. Respondents represented social media users from all states and territories within Australia, and completed responses were received from a total of 258 social media users, covering all core age demographic groupings, including Gen Z/Millennials (18-45 years, 60.5% of respondents) and Gen X/Boomers (46-66+ years, 39.5% of respondents).
An adapted ADTRUST scale, using 20 items on a 7-point Likert scale, measured trust in social media advertising. The ADTRUST scale has been shown to be a valid measure of trust in advertising within different traditional media, such as broadcast and print media, and more recently the Internet (as a broader platform). The adapted scale was validated through exploratory factor analysis (EFA), resulting in a three-factor solution; the three factors were named reliability, usefulness and affect, and willingness to rely on. Factor scores (weighted measures) were then calculated for these factors: estimates of the scores survey participants would have received on each factor had it been measured directly. The following results were recorded: reliability = 4.68/7; usefulness and affect = 4.53/7; and willingness to rely on = 3.94/7. Further statistical analysis (independent samples t-tests) examined the difference in factor scores with age group (Gen Z/Millennials vs. Gen X/Boomers) as the independent, categorical variable. The difference in mean scores was statistically significant (p < 0.05) across all three factors for these two core age groupings: reliability, 4.90/7 (Gen Z/Millennials) vs. 4.34/7 (Gen X/Boomers); usefulness and affect, 4.85/7 vs. 4.05/7; and willingness to rely on, 4.53/7 vs. 3.03/7. The results clearly indicate that older social media users lack trust in the quality of information conveyed in social media ads when compared with younger, more social-media-savvy consumers. This is especially evident for factor 3 (willingness to rely on), whose underlying variables reflect one's behavioural intent to act on the information conveyed in advertising.
These findings can be useful to marketers, advertisers, and brand managers: the results highlight a critical need to design 'authentic' advertisements on social media sites to better connect with these older users, in an attempt to foster positive behavioural responses from within this large demographic group, whose engagement with social media sites continues to increase year on year.
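
The independent-samples t-test used above can be sketched from summary statistics. The two means and group sizes below follow the reported 'willingness to rely on' scores and sample split (60.5%/39.5% of 258); the standard deviations are invented for illustration, as they are not reported in the abstract.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and degrees of freedom for two independent
    samples summarized by mean, standard deviation, and group size."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Means: willingness-to-rely-on factor scores; SDs (1.2, 1.4) assumed.
t, df = welch_t(4.53, 1.2, 156, 3.03, 1.4, 102)
print(f"t = {t:.2f}, df = {df:.1f}")  # t = 8.89, df = 192.4
```

With a statistic this large, the corresponding p-value is far below 0.05, consistent with the reported significance.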

Keywords: social media advertising, trust, older consumers, online

Procedia PDF Downloads 69
1820 The DAQ Debugger for iFDAQ of the COMPASS Experiment

Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius

Abstract:

In general, state-of-the-art data acquisition systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency and data rate capability. This paper presents the development and deployment of a debugging tool named the DAQ Debugger for the intelligent, FPGA-based data acquisition system (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, with thousands of lines of code, the debugging process is essential to reveal all software issues; unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating its source, and then either correcting the problem or determining a way to work around it. It provides a layer for easy integration into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information needed for deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
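
The signal-handling approach described above can be sketched generically. The iFDAQ itself is a C++/Qt system; this minimal Python illustration only shows the idea of dumping a report in response to a signal without halting the process, and the file name and trigger signal are arbitrary choices for the sketch.

```python
import signal
import faulthandler

REPORT = "daq_report.txt"

def dump_report(signum, frame):
    """Write the stack trace of every thread to a report file without
    stopping the process (a conventional debugger would require a halt)."""
    with open(REPORT, "w") as f:
        f.write(f"report triggered by {signal.Signals(signum).name}\n")
        f.flush()  # keep the header ahead of the fd-level traceback dump
        faulthandler.dump_traceback(file=f)

# SIGUSR1 acts as the "generate a report now" trigger (POSIX only).
signal.signal(signal.SIGUSR1, dump_report)

# For demonstration, trigger the report from within the same process.
signal.raise_signal(signal.SIGUSR1)
print(open(REPORT).read().splitlines()[0])
```

A production tool would additionally hook fatal signals such as SIGSEGV and include process state in the report, as the DAQ Debugger does.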

Keywords: DAQ Debugger, data acquisition system, FPGA, system signals, Qt framework

Procedia PDF Downloads 268
1819 A Neurofeedback Learning Model Using Time-Frequency Analysis for Volleyball Performance Enhancement

Authors: Hamed Yousefi, Farnaz Mohammadi, Niloufar Mirian, Navid Amini

Abstract:

Investigating possible capacities of visual functions where adapted mechanisms can enhance the capability of sports trainees is a promising area of research, not only from the cognitive viewpoint but also in terms of unlimited applications in sports training. In this paper, the visual evoked potential (VEP) and event-related potential (ERP) signals of amateur and trained volleyball players were processed in a pilot study. Two groups of amateur and trained subjects were asked to imagine themselves in the state of receiving a ball while they were shown a simulated volleyball field. The proposed method is based on a set of time-frequency features, extracted from VEP signals using algorithms such as the Gabor filter, the continuous wavelet transform, and a multi-stage wavelet decomposition, that can be indicative of whether a subject is amateur or trained. The linear discriminant classifier achieves accuracy, sensitivity, and specificity of 100% when the average of the repetitions of the signal corresponding to the task is used. The main purpose of this study is to investigate the feasibility of a fast, robust, and reliable feature/model determination as a neurofeedback parameter to be utilized for improving volleyball players’ performance. The proposed measure has potential applications in brain-computer interface technology, where a real-time biomarker is needed.
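A minimal, numpy-only sketch of the feature/classifier pipeline idea (short-time spectral energy features followed by a two-class linear discriminant; the study's actual features use Gabor filters, continuous wavelet transforms and multi-stage wavelet decomposition, which are not reproduced here):

```python
import numpy as np

def tf_features(sig, win=64, hop=32):
    """Average spectral energy per frequency bin from a short-time FFT."""
    frames = [sig[i:i + win] * np.hanning(win)
              for i in range(0, len(sig) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return spec.mean(axis=0)

def fit_lda(X, y):
    """Two-class linear discriminant: returns a 0/1 classifier."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) + np.cov(X1.T)              # pooled within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    b = w @ (m0 + m1) / 2                          # midpoint threshold
    return lambda x: int(w @ x > b)
```

With well-separated classes (e.g. an oscillatory component present in one group's signals), this toy discriminant reaches perfect separation, illustrating how averaged repetitions make the classification easier.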

Keywords: visual evoked potential, time-frequency feature extraction, short-time Fourier transform, event-related spectrum potential classification, linear discriminant analysis

Procedia PDF Downloads 122
1818 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution

Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko

Abstract:

Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which results in the mapping of the optical spectrum into the radio-frequency domain for subsequent digitizing and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) laser-based Light Detection and Ranging systems (LIDARs) suffer from two main disadvantages: a slow and unreliable mechanical spatial scan, and the rather wide linewidth of conventional lasers, which limits speed measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz allowing for in-flight sampling of gun projectiles moving at 150 meters per second, were previously demonstrated. Nevertheless, pump lasers with EDFA amplifiers made the device bulky and expensive. An alternative approach is direct coupling of the laser to a reference microring cavity. Backscattering can tune the laser to the eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect. Moreover, the nonlinearity of the cavity allows solitonic frequency comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on the self-injection locking of semiconductor lasers to high-quality integrated Si3N4 microresonators. We managed to obtain robust 1400-1700 nm comb generation with a 150 GHz or 1 THz line spacing and measured sub-1 kHz Lorentzian widths of stable, MHz-spaced beat notes in a GHz band using two separate chips, each pumped by its own self-injection locked laser.
A deep investigation of the SIL dynamics allowed us to find a turn-key operation regime even for affordable Fabry-Perot multifrequency lasers used as a pump. Importantly, such lasers are usually more powerful than the DFB lasers that were also tested in our experiments. To test the advantages of the proposed technique, we experimentally measured the minimum detectable speed of a reflective object. It has been shown that the narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, with velocity resolution down to 16 nm/s, while the non-SIL diode laser only allowed 160 nm/s with good accuracy. The results obtained are in agreement with the estimations and open up ways to develop LIDARs based on compact and cheap lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.
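The optical-to-RF mapping at the heart of the dual-comb method reduces to simple arithmetic, sketched below (the 150 GHz spacing and MHz-scale repetition-rate difference mirror the figures quoted above; the function names are illustrative):

```python
def beat_notes(n_lines, fr, dfr, offset=0.0):
    """RF beat frequencies from mixing line n of a comb with spacing fr
    against line n of a second comb with spacing fr + dfr: the n-th
    optical line maps to an RF tone at offset + n * dfr."""
    return [offset + n * dfr for n in range(n_lines)]

def compression_factor(fr, dfr):
    """The optical spectrum is compressed into the RF domain by fr/dfr."""
    return fr / dfr
```

For a 150 GHz line spacing and a 1 MHz spacing difference, a THz-wide optical span folds into a GHz-scale RF band of MHz-spaced beat notes, which is what makes direct digitization possible.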

Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking

Procedia PDF Downloads 56
1817 Time to CT in Major Trauma in Coffs Harbour Health Campus - The Australian Rural Centre Experience

Authors: Thampi Rawther, Jack Cecire, Andrew Sutherland

Abstract:

Introduction: CT facilitates the diagnosis of potentially life-threatening injuries and enables early management. There is evidence that reduced CT acquisition time reduces mortality and length of hospital stay. Currently, there are variable recommendations for ideal timing. Indeed, the NHS standard contract for a major trauma service and STAG both recommend immediate access to CT within a maximum time of 60min and appropriate reporting within 60min of the scan. At Coffs Harbour Health Campus (CHHC), a CT radiographer is on site between 8am-11pm. Aim: To investigate the average time to CT at CHHC and assess for any significant relationship between time to CT and injury severity score (ISS) or time of triage. Method: All major trauma calls between Jan 2021-Oct 2021 were audited (N=87). Patients were excluded if they went from the ED directly to theatre. Time to CT is defined as the time between triage and the timestamp on the first CT image. The median and interquartile range were used as measures of central tendency, as the data were not normally distributed, and the chi-square test was used to determine association. Results: The median time to CT was 51.5min (IQR 40-74). We found no relationship between time to CT and ISS (P=0.18) or between time of triage and time to CT (P=0.35). We compared this to other centres such as John Hunter Hospital and Gold Coast Hospital, where the median CT acquisition times were 76min (IQR 52-115) and 43min, respectively. Conclusion: This shows an avenue for improvement, given 35% of CTs were >30min. Furthermore, being proactive and aware of time to CT as an important factor in trauma management can be another avenue for improvement. Based on this, we will re-audit in 12-24 months to assess whether any improvement has been made.
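The central-tendency choice above (median and IQR, since time-to-CT data are not normally distributed) can be computed directly with the standard library; a minimal sketch with made-up times:

```python
import statistics

def summarize_times(times_min):
    """Median and interquartile range (Q1, Q3) for skewed time-to-CT data."""
    q1, _, q3 = statistics.quantiles(times_min, n=4, method="inclusive")
    return statistics.median(times_min), (q1, q3)
```

Using the median/IQR pair rather than mean/SD keeps a handful of very delayed scans from distorting the summary, which is why it suits audit data like this.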

Keywords: imaging, rural surgery, trauma surgery, improvement

Procedia PDF Downloads 93
1816 Evaluating the ‘Assembled Educator’ of a Specialized Postgraduate Engineering Course Using Activity Theory and Genre Ecologies

Authors: Simon Winberg

Abstract:

The landscape of professional postgraduate education is changing: the focus of these programmes is moving from preparing candidates for a life in academia towards training in the expert knowledge and skills needed to support industry. This is especially pronounced in engineering disciplines, where increasingly complex products draw on a depth of knowledge from multiple fields. This connects strongly with the broader notion of Industry 4.0 – where technology and society are being brought together to achieve more powerful and desirable products, but products whose inner workings are also more complex than before. The changes in what we do, and how we do it, have a profound impact on what industry would like universities to provide. One such change is the increased demand for taught doctoral and Masters programmes. These programmes aim to provide skills and training for professionals, to expand their knowledge of state-of-the-art tools and technologies. This paper investigates one such course, namely a Software Defined Radio (SDR) Master’s degree course. The teaching support for this course had to be drawn from an existing pool of academics, none of whom were specialists in this field. The paper focuses on the kind of educator, a ‘hybrid academic’, assembled from available academic staff and bolstered by research. The conceptual framework for this paper combines Activity Theory and Genre Ecology. Activity Theory is used to reason about learning and interactions during the course, and Genre Ecology is used to model the building and sharing of technical knowledge related to using tools and artifacts. Data were obtained from meetings with students and lecturers, logs, project reports, and course evaluations. The findings show how the course, which was initially academically oriented, metamorphosed into a tool-dominant peer-learning structure, largely supported by the sharing of technical tool-based knowledge.
While the academic staff could address gaps in the participants’ fundamental knowledge of radio systems, the participants brought with them extensive specialized knowledge and tool experience which they shared with the class. This created a complicated dynamic in the class, which centered largely on engagements with technology artifacts, such as simulators, from which knowledge was built. The course was characterized by a richness of ‘epistemic objects’, which is to say objects that had knowledge-generating qualities. A significant portion of the course curriculum had to be adapted, and the learning methods changed, to accommodate the dynamic interactions that occurred during classes. This paper explains the SDR Masters course in terms of conflicts and innovations in its activity system, as well as its continually hybridizing genre ecology, to show how the structuring and resource-dependence of the course transformed from its initial ‘traditional’ academic structure to a more entangled arrangement over time. It is hoped that insights from this paper would benefit other educators involved in the design and teaching of similar types of specialized professional postgraduate taught programmes.

Keywords: professional postgraduate education, taught masters, engineering education, software defined radio

Procedia PDF Downloads 74
1815 Pavement Quality Evaluation Using Intelligent Compaction Technology: Overview of Some Case Studies in Oklahoma

Authors: Sagar Ghos, Andrew E. Elaryan, Syed Ashik Ali, Musharraf Zaman, Mohammed Ashiqur Rahman

Abstract:

Achieving the desired density during construction is an important indicator of pavement quality. Insufficient compaction often compromises pavement performance and service life. Intelligent compaction (IC) is an emerging technology for monitoring compaction quality during the construction of asphalt pavements. This paper aims to provide an overview of findings from four case studies in Oklahoma involving the compaction quality of asphalt pavements, namely the SE 44th St project (Project 1), the EOC Turnpike project (Project 2), the Highway 92 project (Project 3), and the 108th Avenue project (Project 4). For this purpose, an IC technology, the intelligent compaction analyzer (ICA), developed at the University of Oklahoma, was used to evaluate compaction quality. The collected data include GPS locations, roller vibrations, roller speed, direction of movement, and temperature of the asphalt mat. The collected data were analyzed using widely used software, VETA. The average densities for Projects 1, 2, 3 and 4 were found to be 89.8%, 91.5%, 90.7% and 87.5%, respectively. The maximum densities were found to be 94.6%, 95.8%, 95.9%, and 89.7% for Projects 1, 2, 3, and 4, respectively. It was observed that the ICA-estimated densities correlated well with the field core densities. The ICA results indicated that at least 90% of the asphalt mats were subjected to at least two roller passes. However, the number of passes required to achieve the desired density (94% to 97%) differed from project to project depending on the underlying layer. The results of these case studies show both opportunities and challenges in using IC for monitoring compaction quality in real time during construction.
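The roller-pass statistic reported above (the share of the mat receiving at least two passes) can be derived from gridded GPS data; a minimal sketch using hypothetical grid-cell identifiers, not the actual ICA/VETA processing:

```python
from collections import Counter

def pass_coverage(cell_hits, min_passes=2):
    """Fraction of mat grid cells covered by at least `min_passes` roller
    passes; `cell_hits` holds one grid-cell id per GPS fix of the drum."""
    counts = Counter(cell_hits)
    return sum(1 for c in counts.values() if c >= min_passes) / len(counts)
```

A threshold of `min_passes=2` reproduces the "at least two roller passes" criterion used in the case studies; raising it would show how coverage drops for stricter pass requirements.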

Keywords: asphalt pavement construction, density, intelligent compaction, intelligent compaction analyzer, intelligent compaction measure value

Procedia PDF Downloads 140
1814 Understanding Tourism Innovation through Fuzzy Measures

Authors: Marcella De Filippo, Delio Colangelo, Luca Farnia

Abstract:

In recent decades, hyper-competition in the tourism scenario has driven the maturation of many businesses, attributing a central role to innovative processes and their dissemination in company management. At the same time, it has created the need to monitor the application of innovations in order to govern and improve the performance of companies and destinations. The study aims to analyze and define innovation in the tourism sector. The research actions have involved, on the one hand, in-depth interviews with experts, identifying innovation in terms of process and product, digitalization, and sustainability policies, and, on the other hand, an evaluation of the interaction between these factors, in terms of substitutability and complementarity in management scenarios, in order to identify which one is essential to be competitive in the global scenario. Fuzzy measures and the Choquet integral were used to elicit the experts’ preferences. This method allows not only the evaluation of the relative importance of each pillar but also, more interestingly, the level of interaction, ranging from complementarity to substitutability, between pairs of factors. The results of the survey are the following: in terms of Shapley values, the experts assert that innovation is the most important factor (32.32), followed by digitalization (31.86), network (20.57) and sustainability (15.25). In terms of interaction indices, given the low degree of consensus among experts, the interaction between pairs of criteria could on average be ignored; however, it is worth noting that innovation and digitalization are the factors for which experts express the highest degree of interaction. Some experts consider them moderately complementary (with a peak of 57.14), while others consider them moderately substitutable (with a peak of -39.58). Another, outlying, example is the interaction between network and digitalization, which one expert considers markedly substitutable (-77.08).
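The elicitation machinery mentioned above (the Choquet integral with respect to a fuzzy measure, and Shapley importance indices) can be sketched for a small criteria set; the two-criterion measure below is an illustrative assumption, not the survey's elicited values:

```python
from itertools import combinations
from math import factorial

def choquet(x, mu):
    """Discrete Choquet integral of scores x (dict criterion -> value)
    w.r.t. fuzzy measure mu (dict frozenset -> weight, mu[frozenset()] = 0)."""
    total, prev = 0.0, 0.0
    remaining = set(x)
    for c in sorted(x, key=x.get):          # sweep criteria by ascending score
        total += (x[c] - prev) * mu[frozenset(remaining)]
        prev = x[c]
        remaining.remove(c)
    return total

def shapley(mu, crits):
    """Shapley importance index of each criterion under fuzzy measure mu."""
    n = len(crits)
    out = {}
    for i in crits:
        others = [c for c in crits if c != i]
        val = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                val += w * (mu[frozenset(S) | {i}] - mu[frozenset(S)])
        out[i] = val
    return out
```

For an additive measure the Choquet integral collapses to a weighted sum and the Shapley values equal the singleton weights; interaction appears exactly when mu is non-additive, which is what the survey's interaction indices capture.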

Keywords: innovation, business model, tourism, fuzzy

Procedia PDF Downloads 249
1813 Hemoglobin Levels at a Standalone Dialysis Unit

Authors: Babu Shersad, Partha Banerjee

Abstract:

Reduction in haemoglobin levels has been implicated as a cause of reduced exercise tolerance and of the cardiovascular complications of chronic renal disease. Trends in haemoglobin levels in patients on haemodialysis could be an indicator of the efficacy of haemodialysis and of quality of life in haemodialysis patients. In the UAE, the rate of growth of the dialysis population is 10 to 15 per cent per year. The primary mode of haemodialysis in the region is based on in-patient, hospital-based haemodialysis units. An increased risk of cardiovascular and cerebrovascular morbidity, as well as mortality, in pre-dialysis chronic renal disease has been reported. However, data on the health burden of haemodialysis in standalone dialysis facilities are very scarce, mainly due to the paucity of ambulatory centres for haemodialysis in the region. AMSA is the first centre to offer standalone dialysis in the UAE, and a study over a one-year period was performed. Patient data were analyzed using a questionnaire for 45 patients with an average of 2.5 dialysis sessions per week. All patients were on chronic haemodialysis as outpatients. Trends in haemoglobin levels as an independent variable were evaluated and interpreted in comparison with other parameters of renal function (creatinine, uric acid, blood pressure and ferritin). The trends indicate an increase in haemoglobin levels with increased supplementation of iron and erythropoietin over time, with a concomitant improvement in the adequacy of haemodialysis. This, in turn, correlates with better patient outcomes and has a direct impact on morbidity and mortality. This is a pilot study, and further studies are indicated so that objective parameters can be studied and validated for haemodialysis in the region.

Keywords: haemodialysis, haemoglobin in haemodialysis, haemodialysis parameters, erythropoietic agents in haemodialysis

Procedia PDF Downloads 268
1812 Air Quality Assessment for a Hot-Spot Station by Neural Network Modelling of the near-Traffic Emission-Immission Interaction

Authors: Tim Steinhaus, Christian Beidl

Abstract:

Urban air quality and climate protection are two major challenges for future mobility systems. Despite the steady reduction of pollutant emissions from vehicles over past decades, the local immission load within cities still partially reaches levels that are considered hazardous to human health. Although traffic-related emissions account for a major part of overall urban pollution, modeling the exact interaction remains challenging. In this paper, a novel approach for determining the emission-immission interaction, based on neural network modeling of the traffic-induced NO2 immission load within a near-traffic hot-spot scenario, is presented. In a detailed sensitivity analysis, the significance of relevant influencing variables on the prevailing NO2 concentration is initially analyzed. Based on this, the generation process of the model is described, in which not only environmental influences but also the vehicle fleet composition, including its associated segment- and certification-specific real driving emission factors, are derived and used as input quantities. The validity of this approach, which has been presented in the past, is re-examined in this paper using updated data on vehicle emissions and recent immission measurement data. Within the framework of a final scenario analysis, the future development of the immission load is forecast for different developments in the vehicle fleet composition. It is shown that immission levels of less than half of today’s yearly average limit values are technically feasible in hot-spot situations.
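A minimal, numpy-only sketch of the modeling idea (a small feed-forward network regressing immission load on influencing variables; the study's actual model, inputs and training data are far richer, and the synthetic variables below are illustrative assumptions):

```python
import numpy as np

def train_nn(X, y, hidden=8, lr=0.01, epochs=500, seed=0):
    """One-hidden-layer regression network (tanh) trained by full-batch
    gradient descent on mean squared error; returns a predictor function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)             # hidden activations
        pred = H @ W2 + b2
        err = pred - y[:, None]              # gradient of MSE/2 w.r.t. pred
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H ** 2)     # backprop through tanh
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel()
```

The same structure scales to the real input set (environmental variables plus fleet-composition emission factors) simply by widening the input layer.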

Keywords: air quality, emission, emission-immission-interaction, immission, NO2, zero impact

Procedia PDF Downloads 113
1811 Joint Replenishment and Heterogeneous Vehicle Routing Problem with Cyclical Schedule

Authors: Ming-Jong Yao, Chin-Sum Shui, Chih-Han Wang

Abstract:

This paper is developed from a real-world decision scenario in which an industrial gas company that applies the Vendor Managed Inventory model supplies liquid oxygen, with a self-operated heterogeneous vehicle fleet, to hospitals in nearby cities. We name the problem the Joint Replenishment and Heterogeneous Vehicle Routing Problem with Cyclical Schedule and formulate it as a mixed-integer non-linear programming problem which simultaneously determines the length of the planning cycle (PC), the length of the replenishment cycle, the replenishment dates for each customer, and the vehicle routes for each day within the PC, such that the average daily operating cost within the PC, including inventory holding cost, setup cost, transportation cost, and overtime labor cost, is minimized. A solution method based on a genetic algorithm, embedded with an encoding and decoding mechanism and local search operators, is then proposed, and a hash function is adopted to avoid repetitive fitness evaluation of identical solutions. Numerical experiments demonstrate that the proposed solution method can effectively solve the problem under different lengths of the PC and numbers of customers. The method is also shown to be effective in determining whether the company should expand the storage capacity of a customer whose demand increases. Sensitivity analysis of the vehicle fleet composition shows that deploying a mixed fleet can reduce the daily operating cost.
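The hash-based reuse of fitness evaluations described above can be sketched with a toy binary GA (the actual problem encodes replenishment cycles and vehicle routes; a generic bit-string objective stands in here, and all parameter values are illustrative):

```python
import random

def ga_minimize(fitness, n_genes, pop_size=30, gens=50, seed=1):
    """Toy GA with a hash table that caches the fitness of identical
    chromosomes, so each distinct solution is evaluated only once."""
    rng = random.Random(seed)
    cache = {}                                   # chromosome tuple -> fitness
    def fit(ch):
        if ch not in cache:
            cache[ch] = fitness(ch)
        return cache[ch]
    pop = [tuple(rng.randint(0, 1) for _ in range(n_genes))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fit)
        parents = pop[:pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)
            child = list(a[:cut] + b[cut:])      # one-point crossover
            if rng.random() < 0.2:               # occasional bit-flip mutation
                j = rng.randrange(n_genes)
                child[j] = 1 - child[j]
            children.append(tuple(child))
        pop = parents + children
    best = min(pop, key=fit)
    return best, fit(best), len(cache)
```

Because elitist GAs regenerate many identical chromosomes across generations, the cache sharply reduces the number of expensive fitness evaluations, which matters when each evaluation involves decoding routes and costs.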

Keywords: cyclic inventory routing problem, joint replenishment, heterogeneous vehicle, genetic algorithm

Procedia PDF Downloads 66
1810 Agritourism Development Mode Study in Rural Area of Boshan China

Authors: Lingfei Sun

Abstract:

Based on the significant value of ecology, strategic planning for ecological civilization construction was mentioned in the 17th and 18th National Congress of the Communist Party of China. How to generate economic value within environmental capacity is not only an economic decision but also a political one. Boshan made full use of its ecology, transforming it into an inexhaustible green resource to benefit people and reflecting the sustainable value of a new agriculture development mode. The Strawberry Harvest Festival and Blueberry Harvest Festival hosted approximately 96,000 and 54,000 leisure tourists, respectively, in 2014. The Kiwi Harvest Festival in August 2014 attracted about 4,600 tourists per day on average, generating daily kiwi sales of 50,000 lbs and daily revenue of 3 million RMB (about 476,000 USD). The purpose of this study is to elaborate the modes of agritourism development by analyzing cases in the rural area of Boshan, China. Interviews with local government officers were used to identify the operating modes of agritourism, and financial data were used to demonstrate the strength of government policy and the improvement in rural incomes. The results indicate that there are mainly three modes, each supported by a case study: the Intensive Mode, the Model Mode and the Mixed Mode. With the boom of tourism, the development of agritourism in Boshan relies on China's agriculture-encouraging policies and the efforts of local government; meanwhile, large-scale cultivation and product differentiation are the crucial elements for the success of rural agritourism projects.

Keywords: agriculture, agritourism, economy, rural area development

Procedia PDF Downloads 289
1809 Validation and Fit of a Biomechanical Bipedal Walking Model for Simulation of Loads Induced by Pedestrians on Footbridges

Authors: Dianelys Vega, Carlos Magluta, Ney Roitman

Abstract:

The simulation of loads induced by walking people in civil engineering structures is still challenging. It has been the focus of considerable research worldwide in recent decades due to the increasing number of reported vibration problems in pedestrian structures. One of the most important keys in the design of slender structures is Human-Structure Interaction (HSI). How moving people interact with structures, and the effect this has on their dynamic responses, is still not well understood. Relying on calibrated pedestrian models that accurately estimate the structural response therefore becomes extremely important. However, because of the complexity of the pedestrian mechanisms, there are still gaps in knowledge, and more reliable models need to be investigated. On this topic, several authors have proposed biodynamic models to represent the pedestrian; whether these models provide a consistent approximation to physical reality still needs to be studied. This work therefore contributes to a better understanding of this phenomenon by bringing an experimental validation of a pedestrian walking model and a Human-Structure Interaction model. In this study, a bi-dimensional bipedal walking model was used to represent the pedestrians, along with an interaction model that was applied to a prototype footbridge. The numerical models were implemented in MATLAB. In parallel, experimental tests were conducted in the Structures Laboratory of COPPE (LabEst), at the Federal University of Rio de Janeiro. Test subjects were asked to walk at different speeds over instrumented force platforms to measure the walking force, while an accelerometer placed at the waist of each subject simultaneously measured the acceleration of the center of mass. By fitting the step force and the center of mass acceleration through successive numerical simulations, the model parameters were estimated.
In addition, experimental data from a pedestrian walking on a flexible structure were used to validate the interaction model presented, through comparison of the measured and simulated structural responses at mid-span. It was found that the pedestrian model was able to adequately reproduce the ground reaction force and the center of mass acceleration for normal and slow walking speeds, being less efficient for faster speeds. Numerical simulations showed that biomechanical parameters such as leg stiffness and damping affect the ground reaction force, and that the higher the walking speed, the greater the leg length of the model. Besides, the interaction model was also capable of estimating the structural response with good approximation, which remained in the same order of magnitude as the measured response. Some differences in the frequency spectra were observed, which are presumed to be due to the perfectly periodic loading representation, which neglects intra-subject variability. In conclusion, this work showed that the bipedal walking model can be used to represent walking pedestrians, since it was efficient in reproducing the center of mass movement and the ground reaction forces produced by humans. Furthermore, although more experimental validation is required, the interaction model also seems to be a useful framework for estimating the dynamic response of structures under loads induced by walking pedestrians.
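A much-simplified, illustrative version of such a model is a lumped mass on a spring-damper leg; the sketch below integrates one stance phase and returns the simulated ground reaction force (the parameter values are plausible assumptions, not the fitted values from the study, and the real bipedal model has two legs and horizontal dynamics):

```python
def stance_grf(m=70.0, k=20000.0, c=200.0, v0=1.0, dt=1e-4, max_steps=100000):
    """Ground reaction force of a point mass m on a spring-damper leg
    (stiffness k, damping c) from touchdown (compression x=0, rate v0)
    until lift-off (leg force drops to zero). Explicit Euler integration."""
    g = 9.81
    x, v = 0.0, v0               # leg compression (m) and compression rate (m/s)
    ts, fs = [], []
    for step in range(max_steps):
        f = k * x + c * v        # leg force = ground reaction force
        ts.append(step * dt)
        fs.append(f)
        if f <= 0.0 and step > 0:
            break                # the leg can only push: lift-off
        a = g - f / m            # Newton: m * x'' = m*g - F_leg
        v += a * dt
        x += v * dt
    return ts, fs
```

Even this toy model reproduces the qualitative observation above: the leg stiffness k and damping c directly shape the peak and duration of the simulated ground reaction force.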

Keywords: biodynamic models, bipedal walking models, human induced loads, human structure interaction

Procedia PDF Downloads 117
1808 The Research of the Relationship between Triathlon Competition Results with Physical Fitness Performance

Authors: Chen Chan Wei

Abstract:

The purpose of this study was to investigate the impact of 1500m swim time, 10000m run time, VO2 max, and body fat on Olympic distance triathlon competition performance. The subjects were thirteen college triathletes with endurance training, with an average age, height and weight of 20.61±1.04 years (mean ± SD), 171.76±8.54 cm and 65.32±8.14 kg, respectively. All subjects were required to complete a 1500m swim test, a 10000m run test, VO2 max and body fat measurements, and to participate in an Olympic distance triathlon competition. First, the 1500m swim test was conducted in a standardized 50m pool with a depth of 2m, and the 10000m run test on a standardized 400m track. After three days, VO2 max was tested with the MetaMax 3B and body fat was measured with a DEXA machine. After two weeks, all 13 subjects competed in the Olympic distance triathlon at the 2016 New Taipei City Asian Cup. The relationships between the 1500m swim, 10000m run, VO2 max and body fat tests and Olympic distance triathlon performance were evaluated using Pearson's product-moment correlation. The results show that the 10000m run and body fat had significant positive correlations with Olympic distance triathlon performance (r=.830, .768), while VO2 max had a significant negative correlation with performance (r=-.735). In conclusion, for improved non-draft Olympic distance triathlon performance, triathletes should focus more on running than on swimming training, and VO2 max can be measured to predict triathlon performance. Managing body fat can also improve Olympic distance triathlon performance. In addition, swimming performance was not significantly correlated with Olympic distance triathlon performance, possibly because the 2016 New Taipei City Asian Cup age-group race was not a drafting competition. The swim is the shortest component of Olympic distance triathlons; therefore, in a non-draft competition, swimming ability is not significantly correlated with overall performance.
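Pearson's product-moment correlation, used above to relate each test result to race performance, is straightforward to compute; a minimal standard-library sketch:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))  # co-deviation sum
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Note that with race time as the performance measure, a positive r (as for 10000m run time) means slower runners post slower triathlon times, while a negative r (as for VO2 max) means fitter athletes finish faster.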

Keywords: triathletes, olympic, non-drafting, correlation

Procedia PDF Downloads 238
1807 Polymorphisms of Calpastatin Gene and Its Association with Growth Traits in Indonesian Thin Tail Sheep

Authors: Muhammad Ihsan Andi Dagong, Cece Sumantri, Ronny Rachman Noor, Rachmat Herman, Mohamad Yamin

Abstract:

Calpastatin is involved in various physiological processes in the body, such as protein turnover, growth, and myoblast fusion and migration. Thus, calpastatin gene (CAST) diversity allegedly has an association with growth, and the gene is a potential candidate for growth traits. This study aims to identify the association between the genetic diversity of the CAST gene and growth properties such as body dimensions (morphometrics), body weight and daily weight gain in sheep. A total of 157 head of Thin Tail Sheep (TTS) were reared intensively for fattening purposes under uniform environmental conditions. All the sheep were male and were maintained for 3 months. The growth parameters measured were: daily body weight gain (ADG) (g/head/day), body weight (kg), body length (cm), chest circumference (cm) and height (cm). All the sheep were genotyped using the PCR-SSCP (single strand conformational polymorphism) method. The CAST gene locus at the intron 5 - exon 6 fragment was amplified, yielding PCR products with a predicted length of about 254 bp. The sheep were then stratified based on their CAST genotypes. The results showed that no association was found between CAST gene variation and morphometric traits or body weight, but there was a significant association with daily body weight gain (ADG) in the sheep observed. The CAST-23 and CAST-33 genotypes had higher average daily gain than the other genotypes. The CAST-23 and CAST-33 genotypes, which carry the CAST-2 and CAST-3 alleles, are potentially useful in selection for growth traits in TTS sheep.

Keywords: body weight, calpastatin, genotype, growth trait, thin tail sheep

Procedia PDF Downloads 302
1806 Epidemiological-Anatomopathological-Immunohistochemical Profile of Gastric Cancer throughout Eastern Algeria

Authors: S. Tebibel, R. L. Bouchouka, C. Mechati, S. Messaoudi

Abstract:

Stomach cancer, or gastric cancer, is an aggressive cancer with significant geographic disparity. The decrease in frequency is attributed to refrigeration, which has several beneficial consequences: increased consumption of fresh fruits and vegetables, reduced consumption of salt, which was widely used as a food preservative, and less contamination of food by carcinogenic compounds. Infection with Helicobacter pylori is responsible for progressive inflammatory changes in the gastric mucosa, usually evolving into stomach cancer in 80% of cases. Methodology: This epidemiological and analytical study concerns 65 patients (46 men and 19 women) with gastric adenocarcinomas, with an average age of 56.5 years and a male predominance (sex ratio 2.4). Results and Discussion: In this series, the clinical symptoms are dominated by epigastralgia (72.31%), vomiting (27.69%), and weight loss (24.62%). The FOGD (oeso-gastro-duodenal fibroscopy) performed in the 65 patients revealed a predominance of the antro-pyloric localization in 19 cases (i.e., 29.23%) and an ulcerative budding appearance in 33 subjects (50.77%). Histologically, moderately differentiated adenocarcinoma was found in 30.77% of patients, followed by well-differentiated adenocarcinoma in 26.15% of patients. The immunohistochemical study revealed positive labeling of half of the T cells by the anti-CD3 antibody, and diffuse, intense positive labeling by the anti-CD20 antibody, with the presence of CD20-positive lymphoepithelial lesions compatible with a low-grade MALT non-Hodgkin's lymphoma. Conclusion: This framework of analysis revealed some risk factors for gastric cancer, such as food, hygiene, Helicobacter pylori infection, smoking and family history.

Keywords: cancer, Helicobacter pylori, immunohistochemistry, stomach

Procedia PDF Downloads 110
1805 Floating Building Potential for Adaptation to Rising Sea Levels: Development of a Performance Based Building Design Framework

Authors: Livia Calcagni

Abstract:

Most of the largest cities in the world are located in areas that are vulnerable to coastal erosion and flooding, both linked to climate change and rising sea levels (RSL). Nevertheless, more and more people are moving to these vulnerable areas as cities keep growing. Architects, engineers, and policy makers are called to rethink the way we live and to provide timely and adequate responses, not only by investigating measures to improve the urban fabric, but also by developing strategies capable of planning for change and exploring unusual and resilient frontiers of living, such as floating architecture. Since the beginning of the 21st century, we have seen dynamic growth in water-based architecture. At the same time, the shortage of land available for urban development has also led to reclaiming the seabed and building floating structures. In light of these considerations, the time is ripe to consider floating architecture not only as a full-fledged building typology but especially as a full-fledged adaptation solution for RSL. Currently, there is no global international legal framework for urban development on water, and there is no structured performance based building design (PBBD) approach for floating architecture in most countries, let alone national regulatory systems. Thus, the research intends to identify the technological, morphological, functional, economic, and managerial requirements that must be considered in the development of the PBBD framework, conceived as a meta-design tool. As floating urban development is most likely to take place as an extension of coastal areas, the needs and design criteria are closer to those of the urban environment than to those of the offshore industry. Therefore, the identification and categorization of parameters takes urban-architectural guidelines and regulations as the starting point, drawing the missing aspects, such as hydrodynamics, from the offshore and shipping regulatory frameworks.
This study is carried out through an evidence-based assessment of performance guidelines and regulatory systems that are in effect in different countries around the world, addressing on-land and on-water architecture as well as the offshore and shipping industries. It involves evidence-based research and logical argumentation methods. Overall, this paper highlights how inhabiting water is not only a viable response to the problem of RSL, and thus a resilient frontier for urban development, but also a response to energy insecurity, clean water and food shortages, environmental concerns, and urbanization, in line with Blue Economy principles and the 2030 Agenda. Moreover, the discipline of architecture is presented as a fertile field for investigating solutions to cope with climate change and its effects on life safety and quality. Future research involves the development of a decision support system as an information tool to guide the user through the decision-making process, emphasizing the logical interaction between the different potential choices, based on the PBBD.

Keywords: adaptation measures, floating architecture, performance based building design, resilient architecture, rising sea levels

Procedia PDF Downloads 74
1804 FSO Performance under High Solar Irradiation: Case Study Qatar

Authors: Syed Jawad Hussain, Abir Touati, Farid Touati

Abstract:

Free-Space Optics (FSO) is a wireless technology that enables the optical transmission of data through the air. FSO is emerging as a promising alternative or complementary technology to fiber-optic and wireless radio-frequency (RF) links due to its high bandwidth, robustness to EMI, and operation in unregulated spectrum. These systems are envisioned to be an essential part of future-generation heterogeneous communication networks. Despite the vibrant advantages of FSO technology and the variety of its applications, its widespread adoption has been hampered by rather disappointing link reliability for long-range links, due to atmospheric turbulence-induced fading and sensitivity to detrimental climate conditions. Qatar, with modest cloud coverage, high concentrations of airborne dust, and high relative humidity, lies in a virtually rainless sunny belt with a typical daily average solar radiation exceeding 6 kWh/m² and 80-90% clear skies throughout the year. The specific objective of this work is to study, for the first time in Qatar, the effect of solar irradiation on the deliverability of the FSO link. In order to analyze the transport medium, we ported an embedded Linux kernel onto a Field Programmable Gate Array (FPGA) and designed a network sniffer application that runs on the FPGA. We installed new FSO terminals and configured and aligned them successively. In the reporting period, we carried out measurements and related them to weather conditions.

Keywords: free space optics, solar irradiation, field programmable gate array, FSO outage

Procedia PDF Downloads 347