Search results for: cohesion metrics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 838

208 Analysis of the Variation on Earth Pressure by Addition of Construction Demolition Waste (C&D Waste) In Black Cotton Soil

Authors: Nirav Jadav, M. G. Vanza

Abstract:

Black cotton soils mainly exhibit swelling and shrinkage in response to moisture variations. This property causes cracks to develop in structures resting on these soils, making the structures unstable. Soil stabilization is a technique to enhance the geotechnical characteristics of black cotton soils by changing their properties. Due to rapid growth in the construction industry, a large amount of waste material is generated every day, which poses a disposal problem. If this waste material can be utilized for soil stabilization, the disposal problem is mitigated. The test results show that the strength of black cotton soil increased with the use of C&D waste material. This study determines various index and engineering properties of the soil and compares them for different proportions of soil and C&D waste. To determine the properties of the soil and the C&D waste, various tests were carried out: sieve analysis, hydrometer test, specific gravity test, Atterberg's limit tests, standard Proctor test, and triaxial unconsolidated undrained (UU) test. The study also accounts for the changes in active and passive pressure due to the addition of C&D waste, and presents the efficacy of using C&D waste as a stabilizing material mixed with backfill soil behind retaining walls. The standard Proctor test was conducted at proportions S1W0 (soil = 100%, waste = 0%), S7W1 (soil = 87.5%, waste = 12.5%), S3W1, S5W3, and S1W1. Of these, S5W3 showed optimum results, so this proportion, along with S1W0, was considered for the triaxial UU test. When 37.5% of the soil is replaced by C&D waste, the optimum moisture content (OMC) decreases by 11.48% and remains constant with further addition of C&D waste, while the maximum dry density (MDD) increases by 9.27% and reduces with further addition. The strength tests show that cohesion decreased by 162% and the internal friction angle increased by 49.4% compared to the virgin soil. The study focuses on the potential use of C&D waste as a stabilizing material in retaining wall backfill. The active earth pressure decreases, and the passive earth pressure increases, in the S5W3 mixture compared to the S1W0 mixture at the same depth.

Keywords: black cotton soil, construction demolition waste, compaction test, strength test

Procedia PDF Downloads 55
207 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures for modeling time-to-event data. For complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that removes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ metrics appropriate for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayers perceptrons (MLPs)

Procedia PDF Downloads 59
206 A Meaning-Making Approach to Understand the Relationship between the Physical Built Environment of the Heritage Sites including the Intangible Values and the Design Development of the Public Open Spaces: Case Study Liverpool Pier Head

Authors: May Newisar, Richard Kingston, Philip Black

Abstract:

Heritage-led regeneration has been considered one of the cornerstones of the economic and social revival of historic towns and cities in the UK. However, this approach has proved deficient in the development of the Liverpool World Heritage site, due to the conflict between sustaining the tangible and intangible values and achieving the intended economic development. Accordingly, the development of such areas is influenced by a top-down approach that treats heritage as a consumable experience and urban regeneration as its economic engine, neglecting the characteristics and values of heritage sites as well as the design criteria for public open spaces that overlap with them. Currently, knowledge regarding the relationship between the physical built environment of heritage sites, including their intangible values, and the design development of public open spaces is limited. Public open spaces have been studied from different perspectives, such as increasing walkability, serving as a source of social cohesion, providing a good quality of life, and understanding users' perception, while heritage sites have been discussed mainly in terms of how to maintain the physical environment, understanding the sources of threats, and how the sites should be protected, in addition to users' experiences and motivations for visiting such areas. Furthermore, new approaches, such as the historic urban landscape approach, have tried to overcome this gap by focusing on the entire human environment with all its tangible and intangible qualities. This research, however, aims to understand the relationship between heritage sites and public open spaces and how the overlap in the design and development of both could be used as a quality to enhance heritage sites and improve users' experience.
A meaning-making approach will be used to understand and articulate how the development of the Liverpool World Heritage site and its value could influence and shape the design of the Pier Head public open space, in order to attract different kinds of tourists and serve as a tool for economic development. Consequently, this will help bridge the gap between planning and conservation area policies through an understanding of how flexible the system is in adopting alternative approaches for the design and development strategies of those areas.

Keywords: historic urban landscape, environmental psychology, urban governance, identity

Procedia PDF Downloads 104
205 The Role of Urban Agriculture in Enhancing Food Supply and Export Potential: A Case Study of Neishabour, Iran

Authors: Mohammadreza Mojtahedi

Abstract:

Rapid urbanization presents multifaceted challenges, including environmental degradation and public health concerns. As urban sprawl continues to be inevitable, it becomes essential to devise strategies to alleviate its pressures on natural ecosystems and to elevate socio-economic benchmarks within cities. This research investigates the economic contributions of urban agriculture, emphasizing its pivotal role in food provisioning and export potential. Adopting a descriptive-analytical approach, field survey data was primarily collected via questionnaires. The tool's validity was affirmed by expert opinions, and its reliability was secured by achieving a Cronbach's alpha score over 0.70 on 30 preliminary questionnaires. The research covers Neishabour's population of 264,375, from which a sample size of 384 was extracted via Cochran's formula. Findings reveal the significance of urban agriculture in food supply and its potential for exports, underlined by a p-value < 0.05. Neishabour's urban farming can augment the export of organic commodities, fruits, vegetables, and ornamental plants, and foster product branding. Moreover, it supports the provision of fresh produce, bolstering dietary quality. Urban agriculture further impacts urban development metrics: enhancing environmental quality, job opportunities, income levels, and aesthetics, while promoting rainwater utilization. Popular cultivations include peaches, Damask roses, and poultry, tailored to available spaces. Structural equation modeling indicates urban agriculture's overarching influence, accounting for 56% of variance, predominantly in food sufficiency and export proficiency.
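The sample size reported above follows from Cochran's formula. A minimal sketch reproducing the abstract's figure of 384 respondents from a population of 264,375, assuming the conventional 95% confidence level, 5% margin of error, and p = 0.5 (none of these parameters are stated in the abstract):

```python
import math

def cochran_sample_size(z=1.96, p=0.5, e=0.05, population=None):
    """Cochran's formula; applies the finite-population correction when a population size is given."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)  # infinite-population estimate
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)  # finite-population correction
    return math.ceil(n0)

# Neishabour: population 264,375, 95% confidence (z = 1.96), 5% margin of error
print(cochran_sample_size(population=264_375))  # 384
```

Without the finite-population correction the formula gives 385; the correction for N = 264,375 brings it to 384, matching the reported sample.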

Keywords: urban agriculture, food supply, export potential, urban development, environmental health, structural equation modeling

Procedia PDF Downloads 35
204 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer

Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom

Abstract:

Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, but the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer. These techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches: Multi-Layer Perceptron (MLP) and Convolutional Neural Network (CNN). We also consider five machine learning algorithms: Decision Tree (C4.5), Naïve Bayesian (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and XGBoost (eXtreme Gradient Boosting), applied to the Breast Cancer Wisconsin Diagnostic dataset. We have carried out the process of evaluating and comparing the classifiers, which involves selecting appropriate metrics to evaluate classifier performance and an appropriate tool to quantify it. The main purpose of the study is predicting and diagnosing breast cancer by applying the mentioned algorithms, and identifying the most effective one with respect to confusion matrix, accuracy, and precision. We found that CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment using the Python programming language.
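The evaluation criteria named above (confusion matrix, accuracy, precision) reduce to simple counts. A sketch with hypothetical confusion-matrix entries, chosen only so that the accuracy reproduces the reported 0.982456 (112 of 114 correct); the actual per-class counts are not given in the abstract:

```python
def accuracy(tp, fp, fn, tn):
    # fraction of all predictions that are correct
    return (tp + tn) / (tp + fp + fn + tn)

def precision(tp, fp):
    # fraction of positive predictions that are truly positive
    return tp / (tp + fp)

# hypothetical counts: 112 of 114 test samples classified correctly
tp, fp, fn, tn = 40, 1, 1, 72
print(round(accuracy(tp, fp, fn, tn), 6))  # 0.982456
print(round(precision(tp, fp), 3))         # 0.976
```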

Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN

Procedia PDF Downloads 46
203 Hand Movements and the Effect of Using Smart Teaching Aids: Quality of Writing Styles Outcomes of Pupils with Dysgraphia

Authors: Sadeq Al Yaari, Muhammad Alkhunayn, Sajedah Al Yaari, Adham Al Yaari, Ayman Al Yaari, Montaha Al Yaari, Ayah Al Yaari, Fatehi Eissa

Abstract:

Dysgraphia is a neurological disorder of written expression that impairs writing ability and fine motor skills, resulting primarily in problems relating not only to handwriting but also to writing coherence and cohesion. We investigate the properties of smart writing technology to highlight some unique features of its effects on the academic performance of pupils with dysgraphia. In Amis, dysgraphic pupils face problems expressing their ideas in writing when ordinary writing aids are the default strategy. The Amis data suggest a possible connection between the available writing aids and pupils' writing improvement, and therefore the expression and comprehension of texts. A group of thirteen dysgraphic pupils was placed in a regular primary school classroom, with twenty-one pupils recruited as a control group. To ensure the validity, reliability, and accountability of the research, both groups studied writing courses for two semesters, of which the first was equipped with smart writing aids while the second took place in an ordinary classroom. Two pre-tests were administered at the beginning of the first two semesters, and two post-tests at the end of both semesters. The tests examined pupils' ability to write coherent, cohesive, and expressive texts. The dysgraphic group, which received the treatment of a writing course with smart technology in the first semester, produced significantly greater gains in written expression than in the ordinary classroom, and its performance was better than that of the control group in the second semester. The current study concludes that using smart teaching aids is a 'must' for both teaching and learning with dysgraphia. Furthermore, it is demonstrated that for young pupils with dysgraphia, expressive tasks are more challenging than coherence and cohesion tasks.
The study therefore supports the literature suggesting a role for smart educational aids in writing, and indicates that smart writing techniques may be an efficient addition to regular educational practices, notably in special educational institutions and speech-language therapy facilities. However, further research is needed on prompting adults with dysgraphia, more often than is done for older adults without dysgraphia, to complete other productive and/or written-skills tasks.

Keywords: smart technology, writing aids, pupils with dysgraphia, hands’ movement

Procedia PDF Downloads 9
202 Evaluation of NASA POWER and CRU Precipitation and Temperature Datasets over a Desert-prone Yobe River Basin: An Investigation of the Impact of Drought in the North-East Arid Zone of Nigeria

Authors: Yusuf Dawa Sidi, Abdulrahman Bulama Bizi

Abstract:

The most dependable and precise source of climate data is often gauge observation. However, long-term records of gauge observations are unavailable in many regions around the world. In recent years, a number of gridded climate datasets with high spatial and temporal resolutions have emerged as viable alternatives to gauge-based measurements. It is crucial, however, to thoroughly evaluate their performance prior to utilising them in hydroclimatic applications. Therefore, this study aims to assess the effectiveness of the NASA Prediction of Worldwide Energy Resources (NASA POWER) and Climatic Research Unit (CRU) datasets in accurately estimating precipitation and temperature patterns within the dry region of Nigeria from 1990 to 2020. The study employs widely used statistical metrics and the Standardised Precipitation Index (SPI) to capture the monthly variability of precipitation and temperature and the inter-annual anomalies in rainfall. The findings suggest that CRU exhibited superior performance compared to NASA POWER for monthly precipitation and minimum and maximum temperatures, demonstrating a high correlation and much lower error values for both RMSE and MAE. Nevertheless, NASA POWER exhibited moderate agreement with gauge observations in replicating monthly precipitation. The analysis of the SPI reveals that the CRU product outperforms NASA POWER in reflecting inter-annual variations in rainfall anomalies. The findings of this study indicate that the CRU gridded product is the more favourable gridded precipitation product for this region.
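The error metrics used in the comparison, RMSE and MAE, can be computed directly from paired gauge and gridded series. A minimal sketch with hypothetical monthly precipitation values; the study's actual data are not reproduced here:

```python
import math

def rmse(obs, est):
    # root mean square error between observed and estimated series
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, est)) / len(obs))

def mae(obs, est):
    # mean absolute error between observed and estimated series
    return sum(abs(o - s) for o, s in zip(obs, est)) / len(obs)

# hypothetical gauge vs. gridded monthly precipitation (mm)
gauge   = [0.0, 2.1, 15.3, 60.2, 120.5, 30.0]
gridded = [0.5, 1.8, 12.0, 65.0, 110.0, 33.0]
print(round(rmse(gauge, gridded), 2), round(mae(gauge, gridded), 2))  # 5.06 3.73
```

RMSE penalises large misses (such as the 10.5 mm discrepancy above) more heavily than MAE, which is why the two are usually reported together.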

Keywords: CRU, climate change, precipitation, SPI, temperature

Procedia PDF Downloads 50
201 Implementing a Hospitalist Co-Management Service in Orthopaedic Surgery

Authors: Diane Ghanem, Whitney Kagabo, Rebecca Engels, Uma Srikumaran, Babar Shafiq

Abstract:

Hospitalist co-management of orthopaedic surgery patients is a growing trend across the country. It was created as a collaborative effort to provide overarching care to patients, with the goal of improving their postoperative care and decreasing in-hospital medical complications. The aim of this project is to provide a guide for implementing and optimizing a hospitalist co-management service in orthopaedic surgery. Key leaders from the hospitalist team, the orthopaedic team, and the quality, safety, and service team were identified. Multiple meetings were convened to discuss the co-management service and determine the necessary building blocks of an efficient and well-designed co-management framework. After meticulous deliberation, a consensus was reached on the final service agreement, and a written guide was drafted. Fundamental features of the service include the identification of service stakeholders and leaders, frequent consensus meetings, a well-defined framework with goals, program metrics, and unified commands, and a regular satisfaction assessment to update and improve the program. Identified pearls for co-managing orthopaedic surgery patients are standardization, timing, adequate patient selection, and two-way feedback between hospitalists and orthopaedic surgeons to optimize the protocols. Developing a service agreement is a constant work in progress, with meetings, discussions, revisions, and multiple piloting attempts before implementation. It is a partnership created to provide hospitals with a streamlined admission process in which at-risk patients are identified early and patient care is optimized regardless of the number or nature of medical comorbidities. A well-established hospitalist co-management service can increase patient care quality and safety, as well as health care value.

Keywords: co-management, hospitalist co-management, implementation, orthopaedic surgery, quality improvement

Procedia PDF Downloads 57
200 People's Perspective on Water Commons in Trans-Boundary Water Governance: A Case Study from Nepal

Authors: Sristi Silwal

Abstract:

South Asian rivers support ecosystems and sustain the well-being of thousands of riparian communities. Rivers, however, are also sources of conflict between countries and one of the contested issues between governments of the region. Governments have signed treaties to harness some of the rivers, but their provisions have not been successful in improving the quality of life of those who depend on water as a common property resource. This paper presents a case study of the status of the water commons along the lower command areas of the Koshi, Gandak, and Mahakali rivers. Nepal and India signed treaties for the development and management of these rivers in 1928, 1954, and 1966. The study investigated the perceptions of the local community on climate-induced disasters, provisions of the treaties such as water for irrigation, participation in decision-making, and the specific impacts on women, and it looked at how the local community coped with adversities. The study showed that the common pool resources are gradually getting degraded and flood events are increasing, while communities blame the 'other state' and the state administration for exacerbating these ills. The level of awareness about the provisions of the existing treaties is poor. The ongoing approach to trans-boundary water management has taken inadequate cognizance of these realities, as the dominant narrative perpetuates cooperation between the governments. The paper argues that ongoing discourses on trans-boundary water development and management need to use new metrics that take cognizance of the condition of the commons and of the people who depend on them for sustenance. In the absence of such narratives, the scale of degradation will increase, making those already marginalized more vulnerable to the impacts of global climate change.

Keywords: climate change vulnerability, conflict, cooperation, water commons

Procedia PDF Downloads 208
199 Accuracy of Peak Demand Estimates for Office Buildings Using Quick Energy Simulation Tool

Authors: Mahdiyeh Zafaranchi, Ethan S. Cantor, William T. Riddell, Jess W. Everett

Abstract:

The New Jersey Department of Military and Veterans Affairs (NJ DMAVA) operates over 50 facilities throughout the state of New Jersey, U.S. NJ DMAVA is under a mandate to move toward decarbonization, which will eventually include eliminating the use of natural gas and other fossil fuels for heating. At the same time, the organization requires increased resiliency against electric grid disruption. These competing goals necessitate adopting on-site renewables such as photovoltaic and geothermal power, as well as implementing power control strategies through microgrids. Planning for these changes requires a detailed understanding of current and future electricity use on yearly, monthly, and shorter time scales, as well as a breakdown of consumption by heating, ventilation, and air conditioning (HVAC) equipment. This paper discusses case studies of two buildings that were simulated using the QUick Energy Simulation Tool (eQUEST). Both buildings use electricity from the grid and photovoltaics; one building also uses natural gas. While electricity use data are available in hourly intervals and natural gas data in monthly intervals, the simulations were developed using monthly and yearly totals. This approach was chosen to reflect the information available for most NJ DMAVA facilities. Once completed, the simulation results are compared to metrics recommended by several organizations for validating energy use simulations. In addition to yearly and monthly totals, the simulated peak demands are compared to actual monthly peak demand values. The simulations resulted in monthly peak demand values that were within 30% of the measured values. These benchmarks will help to assess future energy planning efforts for NJ DMAVA.
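The validation criterion above (simulated monthly peak demand within 30% of measured) amounts to a per-month percent-error check. A sketch with hypothetical peak-demand values; the study's actual meter data are not shown in the abstract:

```python
def percent_error(simulated, measured):
    # absolute percent deviation of the simulation from the meter reading
    return abs(simulated - measured) / measured * 100

# hypothetical monthly peak demand in kW (measured vs. eQUEST-simulated)
measured  = [310, 295, 340, 420]
simulated = [280, 330, 390, 500]
errors = [percent_error(s, m) for s, m in zip(simulated, measured)]
print(all(e <= 30 for e in errors))  # True: every month is within the 30% band
```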

Keywords: building energy modeling, eQUEST, peak demand, smart meters

Procedia PDF Downloads 45
198 Transit-Oriented Development as a Tool for Building Social Capital

Authors: Suneet Jagdev

Abstract:

Rapid urbanization has resulted in informal settlements on the periphery of nearly all big cities in the developing world, due to the lack of affordable housing options in the city. Residents of these communities have to travel long distances to get to work or to search for jobs, and women, children, and elderly people are excluded from urban opportunities. Affordable and safe public transport can help them expand their possibilities. The aim of this research is to identify social capital as another important element of livable cities, one that can be protected and nurtured through transit-oriented development as a tool providing real resources that can help transit-oriented communities become self-sustainable. Social capital refers to the collective value of all social networks and the inclinations that arise from these networks to do things for each other. It is one of the key components responsible for building and maintaining democracy. Public spaces, pedestrian amenities, and social equity are the other essential parts of transit-oriented development models analyzed in this research. The data have been collected through the analysis of several case studies: the urban design strategies implemented, their impact on perception and on the community's experience, and, finally, how these focused on social capital. The case studies have been evaluated on several metrics, namely ecological, financial, energy consumption, etc. A questionnaire and other tools were designed to collect data to analyze the research objective and reflect the dimension of social capital. The results of the questionnaire indicated that almost all participants have a positive attitude towards building social capital with the aid of transit-oriented development. Statistical data on the identified key motivators against demographic characteristics have been generated based on the case studies used for the paper.
The findings suggest that there is a direct relation between urbanization, transit-oriented developments, and social capital.

Keywords: better opportunities, low-income settlements, social capital, social inclusion, transit oriented development

Procedia PDF Downloads 307
197 Usability Evaluation of a Self-Report Mobile App for COVID-19 Symptoms: Supporting Health Monitoring in the Work Context

Authors: Kevin Montanez, Patricia Garcia

Abstract:

The confinement and restrictions adopted to avoid an exponential spread of COVID-19 have negatively impacted the Peruvian economy. In this context, industries offering essential products could continue operating, but they had to follow safety protocols and implement strategies to ensure employee health. In view of increasing internet access and mobile phone ownership, "Alerta Temprana", a mobile app, was developed for self-reporting COVID-19 symptoms in the work context. In this study, the usability of "Alerta Temprana" was evaluated from the perspective of health monitors and workers. In addition to reporting the metrics related to the usability of the application, the utility of the system is also evaluated from the monitors' perspective. In this descriptive study, the participants used the mobile app for two months. Afterwards, the System Usability Scale (SUS) questionnaire was answered by the workers and monitors, and a usefulness questionnaire with open questions was also used for the monitors. The data related to the use of the application was collected over one month. Descriptive statistics and bivariate analysis were used. The workers rated the application as good (70.39); for the monitors, usability was excellent (83.0). The most important feature for the monitors was the emails generated by the application. The average interaction per user was 30 seconds, and a total of 6,172 self-reports were sent. Finally, a statistically significant association was found between the acceptability scale and the work area. The results of this study suggest that Alerta Temprana has the potential to be used for surveillance and health monitoring in any face-to-face work context. Participants reported a high degree of ease of use. However, since the SUS alone cannot diagnose specific usability issues, we suggest using an additional standard usability questionnaire to improve "Alerta Temprana" for future use.
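The usability scores above (70.39 for workers, 83.0 for monitors) come from the standard SUS scoring rule: odd-numbered items contribute (score - 1), even-numbered items contribute (5 - score), and the raw sum is multiplied by 2.5. A sketch for a single hypothetical respondent:

```python
def sus_score(responses):
    """Standard SUS scoring for the 10-item questionnaire (each item rated 1-5)."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5  # scales the 0-40 raw sum to the 0-100 SUS range

# hypothetical single respondent's answers to items 1-10
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # 80.0
```

The reported group scores would be the mean of such per-respondent values; a score around 68 is commonly treated as average, which is consistent with 70.39 being read as "good" and 83.0 as "excellent".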

Keywords: public health in informatics, mobile app, usability, self-report

Procedia PDF Downloads 84
196 The Crossroads of Corruption and Terrorism in the Global South

Authors: Stephen M. Magu

Abstract:

The 9/11 and Christmas Day bombing attacks in the United States are mostly associated with the inability of intelligence agencies to connect the dots in intelligence that was already available. The 1998, 2002, 2013, and several 2014 terrorist attacks in Kenya, on the other hand, were probably driven by a completely different dynamic: the invisible hand of corruption. The World Bank and Transparency International annually compute the Worldwide Governance Indicators and the Corruption Perceptions Index, respectively. What is perhaps not adequately captured in these corruption metrics is the impact of corruption on terrorism. The World Bank data includes variables such as control of corruption, (estimates of) government effectiveness, political stability and absence of violence/terrorism, regulatory quality, rule of law, and voice and accountability. TI's CPI does not include measures related to terrorism, but it is plausible to expect some impact of corruption on terrorism. This paper, by examining the incidence, frequency, and total number of terrorist attacks that have occurred, especially since 1990, and further examining the specific cases of Kenya and Nigeria, argues that in addition to its major effects on governance, corruption has an even more frightening impact: facilitating the violation of security mechanisms, to the extent that foreign nationals can easily obtain identification that enables them to perpetrate major attacks targeting powerful countries' interests in countries with weak corruption-fighting mechanisms. The paper aims to model interactions that demonstrate the cost-benefit analysis and show agents' seemingly rational calculations to be non-rational, given the ultimate impact.
It argues that the eradication of corruption is not just a matter of a better business environment, but is implicit in national security, and that for anti-corruption crusaders this is an argument more potent than the economic cost-of-doing-business argument.

Keywords: corruption, global south, identification, passports, terrorism

Procedia PDF Downloads 397
195 A Systematic Review on Factors/Predictors and Outcomes of Parental Distress in Childhood Acute Lymphoblastic Leukemia

Authors: Ana Ferraz, Martim Santos, M. Graça Pereira

Abstract:

Distress among parents of children with acute lymphoblastic leukemia (ALL) is common during treatment and can persist several years post-diagnosis, impacting the adjustment of both the children and the parents themselves. Current evidence is needed to examine the scope and nature of parental distress in childhood ALL. This review focused on associated variables, predictors, and outcomes of parental distress following their child's ALL diagnosis. The PubMed, Web of Science, and PsycINFO databases were searched for English and Spanish papers published from 1983 to 2021. The PRISMA statement was followed, and papers were evaluated through a standardized methodological quality assessment tool (NHLBI). Of the 28 papers included, 16 were evaluated as fair, eight as good, and four as poor. Regarding results, 11 papers reported subgroup differences, and 15 found potential predictors of parental distress, including sociodemographic, psychosocial, psychological, family, health, and ALL-specific variables. Significant correlations were found between parental distress and social support, illness cognitions, and resilience, along with contradictory results regarding the impact of sociodemographic variables on parental distress. Family cohesion and caregiver burden were associated with distress, and the use of healthy coping strategies was associated with less anxiety. Caregiver strain contributed to distress, and the overall impact of illness positively predicted anxiety in mothers and somatization in fathers. Differences in parental distress were found regarding risk group, time since diagnosis, and treatment phase. Thirteen papers explored the consequences of parental distress on psychological, family, health, and social/educational outcomes. Parental distress was the most important predictor of family strain. Significant correlations were found between parental distress at diagnosis and the later psychological adjustment of the parents themselves and their children.
Most papers reported correlations between parental distress and children's adjustment and quality of life, although a few studies reported no association. Correlations between maternal depression and child participation in education and social life were also found. Longitudinal studies are needed to better understand parental distress and, in particular, its consequences on health outcomes. Future interventions should focus mainly on distress reduction and psychological adjustment, in both parents and children, over time.

Keywords: childhood acute lymphoblastic leukemia, family, parental distress, psychological adjustment, quality of life

Procedia PDF Downloads 87
194 Maximization of Lifetime for Wireless Sensor Networks Based on Energy Efficient Clustering Algorithm

Authors: Frodouard Minani

Abstract:

Over the last decade, wireless sensor networks (WSNs) have been used in many areas, such as health care, agriculture, defense, military, and disaster-hit areas. Wireless sensor networks consist of a base station (BS) and a number of wireless sensors that monitor temperature, pressure, and motion under different environmental conditions. The key parameter in designing a protocol for wireless sensor networks is energy efficiency: energy is the scarcest resource of sensor nodes, and it determines their lifetime. Maximizing sensor node lifetime is therefore an important issue in the design of applications and protocols for wireless sensor networks. Clustering sensor nodes is an effective topology-control approach for achieving this goal. In this paper, the researcher presents an energy-efficient protocol to prolong the network lifetime based on an energy-efficient clustering algorithm. The Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol used to lower energy consumption and improve the lifetime of wireless sensor networks. The proposed system maximizes the lifetime of the wireless sensor network by choosing the farthest cluster head (CH) instead of the closest CH and by forming the clusters with respect to parameter metrics such as node density, residual energy, and the distance between clusters (inter-cluster distance). In this paper, comparisons between the proposed protocol and comparative protocols in different scenarios have been made, and the simulation results show that the proposed protocol outperforms the comparative protocols in various scenarios.
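The classic LEACH election rule mentioned above can be sketched in a few lines. This is a minimal illustration of the standard probabilistic threshold T(n), not the authors' modified farthest-CH protocol; the value p = 0.05 used below is an illustrative choice.

```python
import random

def leach_threshold(p, r):
    """Classic LEACH cluster-head election threshold T(n).

    p: desired fraction of cluster heads per round (e.g. 0.05)
    r: current round number (0-based)
    Nodes that already served as CH in the current 1/p-round epoch are
    meant to sit out (threshold 0); eligibility is handled by the caller.
    """
    period = round(1 / p)                 # length of one election epoch
    return p / (1 - p * (r % period))

def elect_cluster_heads(node_ids, p, r, eligible):
    """Each eligible node becomes CH if a uniform draw falls below T(n)."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if n in eligible and random.random() < t]
```

In the first round of an epoch the threshold equals p itself; by the last round it rises to 1, so every node that has not yet served is forced to become a cluster head.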

Keywords: base station, clustering algorithm, energy efficient, sensors, wireless sensor networks

Procedia PDF Downloads 111
193 Adding Business Value in Enterprise Applications through Quality Metrics Using Agile

Authors: Afshan Saad, Muhammad Saad, Shah Muhammad Emaduddin

Abstract:

Nowadays the business environment is so fast-paced that continuous improvement has become a major factor in the survival of an enterprise. This holds for civil engineering and even more so in the fast-paced world of information technology and software engineering. Agile methodologies, such as Scrum, include a dedicated step in the process that targets the improvement of the development process and the software products. Central to process improvement is gathering data that allows you to assess the state of the process and its products. From this status data, you can plan actions for improvement and also evaluate the success of those actions. This study builds a model that measures the software quality of the development process. Software quality depends on the functional and structural quality of the software products; beyond that, the quality of the development process is also vital to improving software quality. Functional quality covers adherence to user requirements, while structural quality addresses the structure of the software product's source code with respect to its maintainability. Process quality relates to the consistency and predictability of the development process. The software quality model is applied in a business context by gathering the data for the software metrics in the model. To evaluate the software quality model, we analyze the data and present it to the people involved in the agile software development process. The results from the application and the user feedback suggest that the model enables a reasonable assessment of software quality and that it can be used to support the continuous improvement of the development process and software products.

Keywords: agile SDLC tools, agile software development, business value, enterprise applications, IBM, IBM Rational Team Concert, RTC, software quality, software metrics

Procedia PDF Downloads 146
192 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data

Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar

Abstract:

It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. However, better predictions remain the foundation of all science; therefore, the development of accurate, robust, and reliable forecasting methods is very important. Numerous forecasting methods have been proposed and studied in the literature. Two major forecasting methods still dominate, Box-Jenkins ARIMA and Exponential Smoothing (ES), and new methods continue to be derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing is still one of the most practically relevant forecasting methods available due to its simplicity, robustness, and accuracy as an automatic forecasting procedure, as shown especially in the famous M-Competitions. Despite its success and widespread use in many areas, ES models have some shortcomings that negatively affect the accuracy of forecasts. Therefore, a new forecasting method, called the ATA method, is proposed in this study to cope with these shortcomings. This new method is obtained from traditional ES models by modifying the smoothing parameters; both methods therefore have similar structural forms, and ATA can easily be adapted to each of the individual ES models, yet it has many advantages due to its innovative new weighting scheme. In this paper, the focus is on modeling the trend component and handling seasonality patterns by utilizing classical decomposition. The ATA method is therefore expanded to higher-order ES methods for additive, multiplicative, additive damped, and multiplicative damped trend components. The proposed models are called ATA trended models, and their predictive performance is compared to that of their counterpart ES models on the M3 competition data set, since it is still the most recent and comprehensive time-series data collection available. It is shown that the models outperform their counterparts in almost all settings, and when model selection is carried out among these trended models, ATA outperforms all of the competitors in the M3 competition for both short-term and long-term forecasting horizons when the models' forecasting accuracies are compared based on popular error metrics.
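As a concrete baseline, the additive-trend ES model that the ATA trended models modify can be sketched as follows. This is standard Holt linear-trend smoothing with fixed parameters; the ATA method replaces these fixed smoothing weights with time-varying ones, which is not reproduced here.

```python
def holt_additive(y, alpha, beta):
    """Holt's additive-trend exponential smoothing, the classic ES form
    with a trend component. Returns one-step-ahead forecasts for y[1:]
    plus the final level and trend estimates.

    alpha: level smoothing weight in (0, 1)
    beta:  trend smoothing weight in (0, 1)
    """
    level, trend = y[0], y[1] - y[0]      # simple initialization
    forecasts = []
    for t in range(1, len(y)):
        forecasts.append(level + trend)   # one-step-ahead forecast
        new_level = alpha * y[t] + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return forecasts, level, trend
```

On a perfectly linear series, the method locks onto the slope and forecasts it exactly, which is the behavior the damped variants then attenuate for long horizons.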

Keywords: accuracy, exponential smoothing, forecasting, initial value

Procedia PDF Downloads 157
191 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification

Authors: Oumaima Khlifati, Khadija Baba

Abstract:

Pavement distress is the main factor responsible for the deterioration of road structure durability, damage to vehicles, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The auscultation of pavement distress has been based on manual surveys, which are extremely time-consuming, labor-intensive, and require domain expertise. Therefore, automatic distress detection is needed to reduce the cost of manual inspection and avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification using a Deep Convolutional Neural Network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to obtain the highest accuracy for our model, we adjust the structural optimization hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, which include batch size and learning rate. The optimization of the model is executed by checking all feasible combinations and selecting the best-performing one. After the model is optimized, its performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.
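The exhaustive "check all feasible combinations" search described above can be sketched generically. The search space, the ranges, and the toy_score stand-in for actual model training are illustrative assumptions, not the paper's configuration.

```python
from itertools import product

# Hypothetical search space mirroring the hyperparameters named in the
# abstract; the real ranges and the scoring function are assumptions.
search_space = {
    "num_conv_blocks": [2, 3],
    "filters": [32, 64],
    "kernel_size": [3, 5],
    "batch_size": [16, 32],
    "learning_rate": [1e-3, 1e-4],
}

def grid_search(space, score_fn):
    """Evaluate every hyperparameter combination; return (best_score, config)."""
    keys = list(space)
    best = None
    for values in product(*(space[k] for k in keys)):
        config = dict(zip(keys, values))
        score = score_fn(config)          # would be a full train/validate run
        if best is None or score > best[0]:
            best = (score, config)
    return best

# Stand-in scorer: favours more filters and the larger learning rate.
toy_score = lambda c: c["filters"] / 64 - abs(c["learning_rate"] - 1e-3)
best_score, best_config = grid_search(search_space, toy_score)
```

With real training in place of toy_score, each call is expensive, which is why exhaustive search is only feasible for small spaces like this 32-combination one.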

Keywords: distress pavement, hyperparameters, automatic classification, deep learning

Procedia PDF Downloads 60
190 Performance Evaluation of a Very High-Resolution Satellite Telescope

Authors: Walid A. Attia, Taher M. Bazan, Fawzy Eltohamy, Mahmoud Fathy

Abstract:

System performance evaluation is an essential stage in the design of high-resolution satellite telescopes prior to the development process. In this paper, a system performance evaluation of a very high-resolution satellite telescope is investigated. The evaluated system has a Korsch optical design. This design has been discussed in another paper with respect to the three-mirror anastigmat (TMA) design, and the former configuration showed better results. The investigated system is based on the Korsch optical design integrated with a time-delay and integration charge-coupled device (TDI-CCD) sensor to achieve a ground sampling distance (GSD) of 25 cm. The key performance metrics considered are the spatial resolution, the signal-to-noise ratio (SNR), and the total modulation transfer function (MTF) of the system. In addition, the national image interpretability rating scale (NIIRS) metric is assessed to predict the image quality according to the modified general image quality equation (GIQE). Based on the orbital, optical, and detector parameters, the estimated GSD is found to be 25 cm. The SNR has been analyzed at different illumination conditions of target albedos and sun and sensor angles. The system MTF has been computed including diffraction, aberration, optical manufacturing, smear, and detector sampling as the main contributors for evaluating the MTF. Finally, the system performance evaluation results show that the computed MTF value is around 0.08 at the Nyquist frequency, the SNR is 130 at an albedo of 0.2 with nadir viewing angles, and the predicted NIIRS is on the order of 6.5, which implies very good system image quality.
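Two of the computations above lend themselves to a short sketch: the diffraction-limited MTF of a circular aperture and the cascading of component MTFs into a system MTF. The formulas are standard optics; the numbers below are illustrative, not the telescope's actual parameters.

```python
import math

def diffraction_mtf(f, f_cutoff):
    """Diffraction-limited MTF of a circular aperture at spatial frequency f.
    f_cutoff = 1 / (lambda * F#) is the optical cutoff frequency."""
    if f >= f_cutoff:
        return 0.0
    x = f / f_cutoff
    return (2 / math.pi) * (math.acos(x) - x * math.sqrt(1 - x * x))

def system_mtf(*components):
    """Cascaded system MTF: the product of the component MTFs
    (diffraction, aberrations, manufacturing, smear, detector sampling)."""
    result = 1.0
    for m in components:
        result *= m
    return result
```

Because the system MTF is a product of terms that are each below one, even modest individual contributors can drive the value at Nyquist down to figures like the 0.08 reported above.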

Keywords: modulation transfer function, national image interpretability rating scale, signal to noise ratio, satellite telescope performance evaluation

Procedia PDF Downloads 358
189 An Automated Stock Investment System Using Machine Learning Techniques: An Application in Australia

Authors: Carol Anne Hargreaves

Abstract:

A key issue in stock investment is how to select representative features for stock selection. The first objective of this paper is to determine whether an automated stock investment system, using machine learning techniques, can identify a portfolio of growth stocks that are highly likely to provide returns better than the stock market index. The second objective is to identify the technical features that best characterize whether a stock's price is likely to go up, and to identify the most important factors and their contribution to predicting the likelihood of the stock price going up. Unsupervised machine learning techniques, such as cluster analysis, were applied to the stock data to identify a cluster of stocks likely to go up in price (portfolio 1). Next, principal component analysis was used to select stocks rated high on components one and two (portfolio 2). Thirdly, a supervised machine learning technique, logistic regression, was used to select stocks with a high probability of their price going up (portfolio 3). The predictive models were validated with metrics such as sensitivity (recall), specificity, and overall accuracy. All accuracy measures were above 70%. All portfolios outperformed the market by more than eight times. The top three stocks were selected for each of the three portfolios and traded in the market for one month. After one month, the return for each portfolio was computed and compared with the stock market index returns. The returns were 23.87% for the principal component analysis portfolio, 11.65% for the logistic regression portfolio, and 8.88% for the K-means cluster portfolio, while the stock market returned 0.38%. This study confirms that an automated stock investment system using machine learning techniques can identify top-performing stock portfolios that outperform the stock market.
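The third portfolio step, ranking stocks by a fitted logistic model's probability of a price rise and keeping the top three, can be sketched as follows. The tickers, features, and coefficients are illustrative stand-ins; the paper's fitted model is not reproduced.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def rank_stocks(stocks, weights, bias=0.0, top_n=3):
    """Score each stock with a logistic model P(price up) and return the
    top_n tickers. In practice `weights` and `bias` come from training on
    labeled historical data; here they are assumed for illustration."""
    scored = []
    for ticker, features in stocks.items():
        z = bias + sum(w * x for w, x in zip(weights, features))
        scored.append((sigmoid(z), ticker))
    scored.sort(reverse=True)             # highest probability first
    return [ticker for _, ticker in scored[:top_n]]

# Hypothetical momentum/volume features for four stocks.
stocks = {"AAA": [0.8, 0.2], "BBB": [0.1, 0.9],
          "CCC": [0.9, 0.7], "DDD": [-0.5, 0.1]}
portfolio = rank_stocks(stocks, weights=[2.0, 1.0])
```

The same top-n selection applies to the cluster and PCA portfolios, with the logistic probability replaced by cluster membership or component scores.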

Keywords: machine learning, stock market trading, logistic regression, cluster analysis, factor analysis, decision trees, neural networks, automated stock investment system

Procedia PDF Downloads 131
188 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite

Authors: F. Lazzeri, I. Reiter

Abstract:

Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there is a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools to analyze data and share insights. Our results show that the Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity, and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile, and ARIMA) will be presented, and results and performance metrics discussed.

Keywords: time-series, features engineering methods for forecasting, energy demand forecasting, Azure Machine Learning

Procedia PDF Downloads 276
187 Project Progress Prediction in Software Development Integrating Time Prediction Algorithms and Large Language Modeling

Authors: Dong Wu, Michael Grenn

Abstract:

Managing software projects effectively is crucial for meeting deadlines, ensuring quality, and managing resources well. Traditional methods often struggle to predict project timelines accurately due to uncertain schedules and complex data. This study addresses these challenges by combining time prediction algorithms with Large Language Models (LLMs). It makes use of real-world software project data to construct and validate a model. The model takes detailed project progress data, such as task completion dynamics, team interaction, and development metrics, as input and outputs predictions of project timelines. To evaluate the effectiveness of this model, a comprehensive methodology is employed, involving simulations and practical applications in a variety of real-world software project scenarios. This multifaceted evaluation strategy is designed to validate the model's role in enhancing forecast accuracy and elevating overall management efficiency, particularly in complex software project environments. The results indicate that the integration of time prediction algorithms with LLMs has the potential to optimize software project progress management, and they suggest the effectiveness of the method in practical applications. In conclusion, this study demonstrates that integrating time prediction algorithms with LLMs can significantly improve the predictive accuracy and efficiency of software project management. This offers an advanced project management tool for the industry, with the potential to improve operational efficiency, optimize resource allocation, and ensure timely project completion.

Keywords: software project management, time prediction algorithms, large language models (LLMS), forecast accuracy, project progress prediction

Procedia PDF Downloads 51
186 Evaluation of IMERG Performance at Estimating the Rainfall Properties through Convective and Stratiform Rain Events in a Semi-Arid Region of Mexico

Authors: Eric Muñoz de la Torre, Julián González Trinidad, Efrén González Ramírez

Abstract:

Rain varies greatly in its duration, intensity, and spatial coverage, and it is important to have sub-daily rainfall data for various applications, including risk prevention. However, ground measurements are limited by the low and irregular density of rain gauges. An alternative to this problem is Satellite Precipitation Products (SPPs), which use passive microwave and infrared sensors to estimate rainfall; however, these SPPs have to be validated before their application. The aim of this study is to evaluate the performance of the IMERG (Integrated Multi-satellitE Retrievals for Global Precipitation Measurement) final run V06B SPP in a semi-arid region of Mexico, using sub-daily data from 4 automatic rain gauges (pluviographs) for October 2019 and June to September 2021, applying the Minimum Inter-event Time (MIT) criterion to separate unique rain events with a dry period of 10 hrs, for the purpose of evaluating the rainfall properties (depth, duration, and intensity). Point-to-pixel analysis and continuous, categorical, and volumetric statistical metrics were used. Results show that IMERG is capable of estimating rainfall depth with a slight overestimation but is unable to identify the real duration and intensity of rain events, showing large overestimations and underestimations, respectively. The study zone presented 80 to 85% convective rain events; the rest were stratiform rain events, classified by the depth magnitude variation between IMERG pixels and pluviographs. IMERG showed poorer performance at detecting the former but performed well at estimating stratiform rain events, which originate from cold fronts.
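The Minimum Inter-event Time criterion used above to delimit unique rain events amounts to a simple grouping pass over the gauge record. A minimal sketch, assuming record times in hours and the 10 h dry-period threshold from the abstract:

```python
def separate_events(record_times_h, mit_h=10.0):
    """Split a sorted sequence of rainfall record times (in hours) into
    independent events using the Minimum Inter-event Time (MIT) criterion:
    a dry gap of at least mit_h hours starts a new event."""
    events = []
    for t in record_times_h:
        if events and t - events[-1][-1] < mit_h:
            events[-1].append(t)       # continues the current event
        else:
            events.append([t])         # dry period exceeded: new event
    return events
```

Once events are delimited this way in both the pluviograph and the co-located IMERG pixel series, their depth, duration, and mean intensity can be compared event by event.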

Keywords: IMERG, rainfall, rain gauge, remote sensing, statistical evaluation

Procedia PDF Downloads 40
185 Optimizing Energy Efficiency: Leveraging Big Data Analytics and AWS Services for Buildings and Industries

Authors: Gaurav Kumar Sinha

Abstract:

In an era marked by increasing concerns about energy sustainability, this research endeavors to address the pressing challenge of energy consumption in buildings and industries. This study delves into the transformative potential of AWS services in optimizing energy efficiency. The research is founded on the recognition that effective management of energy consumption is imperative for both environmental conservation and economic viability. Buildings and industries account for a substantial portion of global energy use, making it crucial to develop advanced techniques for analysis and reduction. This study sets out to explore the integration of AWS services with big data analytics to provide innovative solutions for energy consumption analysis. Leveraging AWS's cloud computing capabilities, scalable infrastructure, and data analytics tools, the research aims to develop efficient methods for collecting, processing, and analyzing energy data from diverse sources. The core focus is on creating predictive models and real-time monitoring systems that enable proactive energy management. By harnessing AWS's machine learning and data analytics capabilities, the research seeks to identify patterns, anomalies, and optimization opportunities within energy consumption data. Furthermore, this study aims to propose actionable recommendations for reducing energy consumption in buildings and industries. By combining AWS services with metrics-driven insights, the research strives to facilitate the implementation of energy-efficient practices, ultimately leading to reduced carbon emissions and cost savings. The integration of AWS services not only enhances the analytical capabilities but also offers scalable solutions that can be customized for different building and industrial contexts. The research also recognizes the potential for AWS-powered solutions to promote sustainable practices and support environmental stewardship.

Keywords: energy consumption analysis, big data analytics, AWS services, energy efficiency

Procedia PDF Downloads 38
184 Semi-Automatic Segmentation of Mitochondria on Transmission Electron Microscopy Images Using Live-Wire and Surface Dragging Methods

Authors: Mahdieh Farzin Asanjan, Erkan Unal Mumcuoglu

Abstract:

Mitochondria are cytoplasmic organelles of the cell which have a significant role in a variety of cellular metabolic functions. Mitochondria act as the power plants of the cell and are surrounded by two membranes. Significant morphological alterations are often due to changes in mitochondrial functions. A powerful technique for studying the three-dimensional (3D) structure of mitochondria and its alterations in disease states is electron microscope tomography. Detection of mitochondria in electron microscopy images is a challenging problem due to the presence of various subcellular structures and imaging artifacts. Another challenge is that each image typically contains more than one mitochondrion. Hand segmentation of mitochondria is tedious and time-consuming and also requires special knowledge about mitochondria. Fully automatic segmentation methods lead to over-segmentation, and mitochondria are not segmented properly. Therefore, semi-automatic segmentation methods with minimum manual effort are required to edit the results of fully automatic segmentation methods. Here, two editing tools were implemented by applying spline-based surface dragging and interactive live-wire segmentation. These editing tools were applied separately to the results of fully automatic segmentation. A 3D extension of these tools was also studied and tested. Dice coefficients in 2D and 3D for surface dragging using splines were 0.93 and 0.92; for the live-wire method, they were 0.94 and 0.91, respectively. The root mean square symmetric surface distance values in 2D and 3D were 0.69 and 0.93 for surface dragging, and 0.60 and 2.11 for the live-wire tool. Comparing the results of these editing tools with those of the automatic segmentation method shows that the editing tools led to better results, more similar to the ground truth image, but the required time was higher than the hand-segmentation time.
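The Dice coefficient used above to score the editing tools follows directly from its definition; a minimal sketch on binary masks represented as sets of pixel (or voxel) coordinates:

```python
def dice_coefficient(seg, ref):
    """Dice similarity between two binary masks given as coordinate sets:
    2|A ∩ B| / (|A| + |B|). Ranges from 0 (disjoint) to 1 (identical)."""
    if not seg and not ref:
        return 1.0                     # two empty masks agree trivially
    return 2 * len(seg & ref) / (len(seg) + len(ref))

# Two 4-pixel masks sharing 3 pixels: Dice = 2*3 / (4+4) = 0.75
a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(0, 1), (1, 0), (1, 1), (2, 2)}
```

The same overlap-based definition extends unchanged to 3D by using voxel coordinates, which is how the 2D and 3D scores above are comparable.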

Keywords: medical image segmentation, semi-automatic methods, transmission electron microscopy, surface dragging using splines, live-wire

Procedia PDF Downloads 142
183 Towards an Enhanced Quality of IPTV Media Server Architecture over Software Defined Networking

Authors: Esmeralda Hysenbelliu

Abstract:

The aim of this paper is to present an enhanced QoE (Quality of Experience) IPTV SDN-based media streaming server architecture for configuring, controlling, managing, and provisioning the improved delivery of the IPTV service application with low cost, low bandwidth, and high security. Furthermore, a virtual QoE IPTV SDN-based topology is given to provide an improved IPTV service based on QoE control and management of multimedia service functionalities. Inside the OpenFlow SDN controller, service load-balancing systems are enabled with high flexibility and efficiency, based on the load-balance module and on a GeoIP service. These two load-balancing systems greatly improve IPTV end-users' Quality of Experience (QoE) through optimal management of resources. Through the key functionalities of the OpenFlow SDN controller, this approach produced several important features and opportunities for overcoming the critical QoE metrics for the IPTV service, such as achieving a very fast zapping time (channel switching time) of less than 0.1 seconds. The approach enables an easy and powerful transcoding system via the FFmpeg encoder, with the ability to customize stream dimensions, bitrates, latency management, and maximum transfer rates, ensuring delivery of IPTV streaming services (audio and video) with high flexibility, low bandwidth, and the required performance. Unlike other architectures, this QoE IPTV SDN-based media streaming architecture provides the possibility of channel exchange between several IPTV service providers all over the world. This new functionality brings many benefits, such as increasing the number of TV channels received by end-users at low cost, decreasing stream failure time (channel failure time below 0.1 seconds), and improving the quality of the streaming services.

Keywords: improved quality of experience (QoE), OpenFlow SDN controller, IPTV service application, softwarization

Procedia PDF Downloads 122
182 Design and Validation of the 'Teachers' Resilience Scale' for Assessing Protective Factors

Authors: Athena Daniilidou, Maria Platsidou

Abstract:

Resilience is considered to greatly affect the personal and occupational wellbeing and efficacy of individuals; therefore, it has been widely studied in the social and behavioral sciences. Given its significance, several scales have been created to assess the resilience of children and adults. However, most of these scales focus on examining only the internal protective or risk factors that affect the levels of resilience. The aim of the present study is to create a reliable scale that assesses both the internal and the external protective factors that affect Greek teachers' levels of resilience. Participants were 136 secondary school teachers (89 females, 47 males) from urban areas of Greece. The Connor-Davidson Resilience Scale (CD-RISC) and the Resilience Scale for Adults (RSA) were used to collect the data. First, exploratory factor analysis was employed to investigate the inner structure of each scale. For both scales, the analyses revealed a factor solution different from the ones proposed by the creators. That prompted us to create a scale combining the best-fitting subscales of the CD-RISC and the RSA. To this end, the items of the four factors with the best fit and highest reliability were used to create the 'Teachers' Resilience Scale'. Exploratory factor analysis revealed that the scale assesses the following protective/risk factors: Personal Competence and Strength (9 items, α=.83), Family Cohesion Spiritual Influences (7 items, α=.80), Social Competence and Peer Support (7 items, α=.78), and Spiritual Influence (3 items, α=.58). This four-factor model explained 49.50% of the total variance. In the next step, a confirmatory factor analysis was performed on the 26 items of the derived scale to test the above factor solution. The fit of the model to the data was good (χ²/df = 1.245, CFI = .921, GFI = .829, SRMR = .074, RMSEA = .043, 90% CI = .026–.056), indicating that the proposed scale can validly measure the aforementioned four aspects of teachers' resilience, thus confirming its factorial validity. Finally, analyses of variance were performed to check for individual differences in the levels of teachers' resilience in relation to gender, age, marital status, level of studies, and teaching specialty. Results were consistent with previous findings, thus providing an indication of discriminant validity for the instrument. This scale has the advantage of assessing both the internal and the external protective factors of resilience in a brief yet comprehensive way, since it consists of 26 items instead of the combined 58 of the CD-RISC and RSA scales. Its factorial inner structure is supported by the relevant literature on resilience, as it captures the major protective factors of resilience identified in previous studies.
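The internal-consistency values (Cronbach's α) reported for each subscale can be computed as sketched below. The two-item example data are illustrative, not the study's responses.

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: list of k item-score lists, one per scale item, all over the
    same respondents. alpha = k/(k-1) * (1 - sum of item variances /
    variance of the respondents' total scores).
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]   # per-respondent sums
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))
```

Values like the α=.58 of the three-item Spiritual Influence factor are partly a consequence of this formula rewarding longer subscales, which is why short factors tend to show lower α.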

Keywords: protective factors, resilience, scale development, teachers

Procedia PDF Downloads 277
181 From Vegetarian to Cannibal: A Literary Analysis of a Journey of Innocence in ‘Life of Pi’

Authors: Visvaganthie Moodley

Abstract:

Language use and aesthetic appreciation are integral to meaning-making in prose, as they are in poetry. However, in comparison to poetic analysis, literary analysis of prose that focuses on linguistics and stylistics is somewhat scarce, as it generally requires the study of lengthy texts. Nevertheless, the effect of linguistic and stylistic features in prose, consciously designed by authors to create specific effects and convey preconceived messages, is drawing increasing attention from linguists and literary experts. A close examination of language use in prose can, among a host of literary purposes, convey emotive and cognitive values and contribute to interpretations of how fictional characters are represented to the imaginative reader. This paper provides a literary analysis of Yann Martel's narrative of Pi, a 14-year-old Indian boy who survived the wreck of a Japanese cargo ship, by focusing on his 227-day journey of tribulations, alongside a Bengal tiger, on a lifeboat. The study favours a pluralistic approach blending literary criticism, linguistic analysis, and stylistic description. It adopts Leech and Short's (2007) broad framework of linguistic and stylistic categories (lexical categories, grammatical categories, figures of speech, and context and cohesion) as well as a range of other relevant linguistic phenomena to show how the narrator, Pi, and the author influence the reader's interpretations of Pi's character. Such interpretations are made through the lens of Freud's psychoanalytical theory (which focuses on the interplay of the instinctual id, the ego, and the moralistic superego) and Blake's philosophy of innocence and experience (the two contrary states of the human soul). The paper traces Pi's transformation from animal-loving, God-fearing vegetarian to brutal animal slayer and cannibal in his journey of survival. By closely examining the linguistic and stylistic features of the narrative, it argues that, despite evidence of butchery and cannibalism, Pi's gruesome behaviour is motivated by extreme physiological and psychological duress and not by intentional malice. Finally, the paper concludes that the voices of the narrator, Pi, and of the author, Martel, act as powerful persuasive agents in influencing the reader to respond with a sincere flow of sympathy for Pi and to judge him as having retained his innocence in his instinctual need for survival.

Keywords: foregrounding, innocence and experience, lexis, literary analysis, psychoanalytical lens, style

Procedia PDF Downloads 138
180 Quality of Service Based Routing Algorithm for Real Time Applications in MANETs Using Ant Colony and Fuzzy Logic

Authors: Farahnaz Karami

Abstract:

Routing is an important and challenging task in mobile ad hoc networks due to node mobility, lack of central control, unstable links, and limited resources. Ant colony optimization has been found to be an attractive technique for routing in Mobile Ad Hoc Networks (MANETs). However, existing swarm-intelligence-based routing protocols find an optimal path by considering only one or two route selection metrics, without considering correlations among such parameters, which makes them unsuitable on their own for routing real-time applications. Fuzzy logic can combine multiple route selection parameters containing uncertain or imprecise information, but it does not naturally provide the multipath routing needed for load balancing. The objective of this paper is to design a routing algorithm using fuzzy logic and ant colony optimization that can solve some of the routing problems in mobile ad hoc networks, such as optimizing node energy consumption to increase network lifetime, reducing link failure rates to increase packet delivery reliability, and providing load balancing to optimize available bandwidth. In the proposed algorithm, path information is delivered to the fuzzy inference system by ants. Based on the available path information and the parameters required for quality of service (QoS), the fuzzy cost of each path is calculated and the optimal paths are selected. The NS2.35 simulation tool is used, and the results are compared and evaluated against the newest QoS-based algorithms in MANETs according to the packet delivery ratio, end-to-end delay, and routing overhead ratio criteria. The simulation results show significant improvement in the performance of these networks in terms of decreased end-to-end delay and routing overhead ratio, as well as increased packet delivery ratio.
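One way a fuzzy cost might combine several QoS parameters into a single path score is sketched below with triangular membership functions. The membership ranges and weights are assumptions made for illustration, not the paper's inference rules.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_path_cost(delay_ms, residual_energy_j, bandwidth_mbps):
    """Illustrative crisp cost combining three QoS memberships.
    Lower cost = better path; all ranges/weights are assumed values."""
    good_delay = tri(delay_ms, 0, 20, 100)               # low delay is good
    good_energy = tri(residual_energy_j, 0.2, 1.0, 1.8)  # high residual energy
    good_bw = tri(bandwidth_mbps, 0, 10, 20)             # ample bandwidth
    goodness = 0.4 * good_delay + 0.3 * good_energy + 0.3 * good_bw
    return 1.0 - goodness
```

A full fuzzy inference system would apply a rule base and defuzzification rather than a fixed weighted sum, but the weighted-membership form above already captures how correlated parameters are folded into one comparable cost per path.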

Keywords: mobile ad hoc networks, routing, quality of service, ant colony, fuzzy logic

Procedia PDF Downloads 36
179 Improving Chest X-Ray Disease Detection with Enhanced Data Augmentation Using Novel Approach of Diverse Conditional Wasserstein Generative Adversarial Networks

Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Daniyal Haider, Xiaodong Yang

Abstract:

Chest X-rays (CXRs) are instrumental in detecting and monitoring a wide array of diseases, including viral infections such as COVID-19, tuberculosis, pneumonia, lung cancer, and various cardiac and pulmonary conditions. To enhance diagnostic accuracy, artificial intelligence (AI) algorithms, particularly deep learning models such as Convolutional Neural Networks (CNNs), are employed. However, these models demand a large and varied dataset to attain optimal precision. Generative Adversarial Networks (GANs) can create new data to supplement an existing dataset and thereby improve the accuracy of deep learning models. Nevertheless, GANs have limitations, including instability, poor convergence, and difficulty distinguishing authentic from fabricated data. To overcome these challenges and advance the detection and classification of normal and abnormal CXR images, this study introduces a technique called DCWGAN (Diverse Conditional Wasserstein GAN) for generating synthetic CXR images. The study evaluates the effectiveness of the DCWGAN technique using the ResNet50 model and compares its results with those obtained using the traditional GAN approach. The findings reveal that the ResNet50 model trained on the DCWGAN-generated dataset outperformed the model trained on the classic GAN-generated dataset: the ResNet50 model using DCWGAN synthetic images achieved an accuracy of 0.961, precision of 0.955, recall of 0.970, and an F1 measure of 0.963. These results indicate promising potential for the early detection of diseases in CXR images using this approach.
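The stability improvement a Wasserstein GAN brings can be illustrated with a minimal numeric sketch of the critic objective with gradient penalty (WGAN-GP), the idea underlying conditional Wasserstein GANs like the one described here. The critic is deliberately a 1-D linear function so its input gradient is exact, and the class label is appended to mimic conditioning; everything below is an illustrative assumption, not the paper's actual model.

```python
def critic(x, label, w=(1.5, 0.2)):
    """Toy linear 'critic' scoring a sample conditioned on its class label."""
    return w[0] * x + w[1] * label

def wgan_gp_loss(reals, fakes, label, lam=10.0, w=(1.5, 0.2)):
    """Critic loss: E[D(fake)] - E[D(real)] + lambda * gradient penalty."""
    d_real = sum(critic(x, label, w) for x in reals) / len(reals)
    d_fake = sum(critic(x, label, w) for x in fakes) / len(fakes)
    # For a linear critic the gradient w.r.t. any interpolated input is
    # exactly w[0], so the penalty pushes its norm towards 1 (Lipschitz-1),
    # which is what keeps Wasserstein training stable.
    penalty = (abs(w[0]) - 1.0) ** 2
    return d_fake - d_real + lam * penalty

reals = [1.0, 1.2, 0.9]   # stand-ins for scores of real CXR features
fakes = [0.1, 0.3, 0.2]   # stand-ins for scores of generated features
loss = wgan_gp_loss(reals, fakes, label=1)
print(loss)
```

Minimizing this loss over the critic's weights (and the negated first term over the generator's) replaces the saturating cross-entropy game of a classic GAN with a smoother distance estimate, which is the convergence advantage the abstract alludes to.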

Keywords: CNN, classification, deep learning, GAN, ResNet50

Procedia PDF Downloads 54