Search results for: commercial real estate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7477

1117 Challenges Brought about by Integrating Multiple Stakeholders into Farm Management Mentorship of Land Reform Beneficiaries in South Africa

Authors: Carlu Van Der Westhuizen

Abstract:

The South African Agricultural Sector is of major socio-economic importance to the country due to its contribution to maintaining stability in food production and food security, providing labour opportunities, eradicating poverty and earning foreign currency. Against this reality, this paper investigates the changes in Land Policies within the South African Agricultural Sector brought about by the new democratically elected government (African National Congress) since its takeover in 1994. The agricultural environment is decidedly dualistic, with 1) a commercial sector, and 2) a subsistence and emerging farmer sector. The future demands and challenges are mostly identified as those of land redistribution and social upliftment. Opportunities that arose from the challenge of change include small-holder participation in the value chain, and the challenge of change in Agriculture, together with the opportunities identified, could serve as a yardstick against which the Sector’s (Agriculture) performance could be measured in future. Unfortunately, despite all Government Policies, Programmes and Projects and the inputs of the Private Sector, the outcomes are, to a large extent, unsuccessful. The urgency of the Land Redistribution Programme is underlined by the fact that, for the period 1994 – 2014, only 7.5% of the 30% land redistribution target was achieved. Another serious concern is that 90% of the Land Redistribution Projects are not in a state of productive use by emerging farmers. Several reasons may be offered for these failures, among others the uncoordinated way in which different stakeholders are involved in a specific farming project. These stakeholders can generally be identified as: - the Government as the policy maker; - the Private Sector, which has the potential to contribute to the sustainable pre- and post-settlement stages of the Programme by providing supporting services in cooperation with Government; - inputs from the communities in rural areas where the settlement takes place; - the landowners as sellers of land (e.g. a Traditional Council); and - the emerging beneficiaries as the receivers of land. Mentorship is mostly the medium through which this support is coordinated. This paper focuses on three scenarios of different types of mentorship (or management support), namely: - the Taung Irrigation Scheme (TIS), where multiple new land beneficiaries were established by sharing irrigation pivots and receiving mentorship support from commodity organisations within a traditional land sharing system; - projects whereby the mentor is a strategic partner (mostly a major agricultural 'cooperative' which also provides inputs to the farmer and is responsible for purchasing/marketing all commodities produced); and - an individual mentor who is a private person focusing mainly on farm management mentorship without direct gain other than a monthly stipend paid to the mentor by Government. Against this introduction, the focus of the study is to investigate the process for the sustainable implementation of Government’s Land Redistribution in South African Agriculture. To achieve this, the research paper is presented under the themes of problem statement, objectives, methodology and limitations, outline of the research process, as well as proposing possible solutions.

Keywords: land reform, role-players, failures, mentorship, management models

Procedia PDF Downloads 271
1116 Discovery of Exoplanets in Kepler Data Using a Graphics Processing Unit Fast Folding Method and a Deep Learning Model

Authors: Kevin Wang, Jian Ge, Yinan Zhao, Kevin Willis

Abstract:

Kepler has discovered over 4000 exoplanets and candidates. However, current transit planet detection techniques based on wavelet analysis and the Box Least Squares (BLS) algorithm have limited sensitivity in detecting small planets with a low signal-to-noise ratio (SNR) and long periods with only 3-4 repeated signals over the mission lifetime of 4 years. This paper presents a novel precise-period transit signal detection methodology based on a new Graphics Processing Unit (GPU) Fast Folding algorithm in conjunction with a Convolutional Neural Network (CNN) to detect low SNR and/or long-period transit planet signals. A comparison with BLS is conducted on both simulated light curves and real data, demonstrating that the new method has higher speed, sensitivity, and reliability. For instance, the new system can detect transits with SNR as low as three, while the performance of BLS drops off quickly around an SNR of 7. Meanwhile, the GPU Fast Folding method folds light curves 25 times faster than BLS, a significant gain that allows exoplanet detection to occur at unprecedented period precision. The new method has been tested on all known transit signals, with 100% confirmation. In addition, it has been successfully applied to the Kepler Objects of Interest (KOI) data and identified a few new Earth-sized, ultra-short-period (USP) exoplanet candidates and habitable planet candidates. The results highlight the promise of GPU Fast Folding as a replacement for the traditional BLS algorithm for finding small and/or long-period habitable and Earth-sized planet candidates in transit data taken with Kepler and other space transit missions such as TESS (Transiting Exoplanet Survey Satellite) and PLATO (PLAnetary Transits and Oscillations of stars).
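
The fast folding idea at the core of the pipeline can be stated compactly: fold the light curve at many trial periods and score how sharply the flux dips at a common phase. A minimal CPU sketch follows; the paper's GPU implementation and CNN vetting stage are not public, so the period grid, binning, and S/N score below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fold(time, flux, period, n_bins=256):
    """Fold a light curve at a trial period and average it in phase bins."""
    phase = (time % period) / period
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    binned = np.bincount(bins, weights=flux, minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    return binned / np.maximum(counts, 1)

def search(time, flux, periods):
    """Score each trial period by the significance of the deepest phase bin."""
    scores = []
    for p in periods:
        prof = fold(time, flux, p)
        scores.append((np.median(prof) - prof.min()) / (prof.std() + 1e-12))
    return periods[int(np.argmax(scores))], max(scores)

# Example: recover a 10-day transit injected into white noise.
rng = np.random.default_rng(0)
t = np.arange(0.0, 1460.0, 0.02)             # ~4 yr at ~30-min cadence
f = 1.0 + 1e-4 * rng.standard_normal(t.size)
f[(t % 10.0) < 0.1] -= 5e-4                  # shallow box-shaped transit
best_p, snr = search(t, f, np.linspace(9.5, 10.5, 2001))
print(best_p, snr)
```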

Keywords: algorithms, astronomy data analysis, deep learning, exoplanet detection methods, small planets, habitable planets, transit photometry

Procedia PDF Downloads 225
1115 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models

Authors: V. Mantey, N. Findlay, I. Maddox

Abstract:

The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in densely populated regions. The primary challenge with detecting small buildings in such regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher-quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality-check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
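
For orientation, a minimal sketch of the kind of fine-tuning the abstract describes, built on the Mask R-CNN architecture named in the keywords. The authors' pipeline, dataset format, and hyperparameters are not published; the data loader, class count, and optimizer settings here are assumptions.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def building_maskrcnn(num_classes=2):  # background + building
    # Start from a generalized pre-trained model, as the abstract describes
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)
    in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)
    return model

model = building_maskrcnn()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# Deliberately training past the usual early-stopping point ("overfitting")
# on a small, open-source vector-labelled sample; the loader is hypothetical.
# for epoch in range(many_epochs):          # no early stopping on val loss
#     for images, targets in loader:
#         losses = model(images, targets)   # dict of losses in train mode
#         total = sum(losses.values())
#         optimizer.zero_grad(); total.backward(); optimizer.step()
```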

Keywords: building detection, disaster relief, mask-RCNN, satellite mapping

Procedia PDF Downloads 169
1114 Interactive Glare Visualization Model for an Architectural Space

Authors: Florina Dutt, Subhajit Das, Matthew Swartz

Abstract:

Lighting design and its impact on indoor comfort conditions are an integral part of good interior design. The impact of lighting in an interior space is manifold, involving many sub-components such as glare, color, tone, luminance, control, energy efficiency, and flexibility. While the other components have been researched and discussed multiple times, this paper discusses research done to understand the glare component from artificial lighting sources in an indoor space. It presents a parametric model that conveys the real-time glare level in an interior space to the designer/architect. Our end users are architects, for whom it is of utmost importance to know what impression the proposed lighting arrangement and proposed furniture layout will have on indoor comfort quality, especially regarding those furniture elements (or surfaces) that strongly reflect light around the space. Essentially, the designer needs to know the ramifications of discomfort glare at an early stage of the design cycle, when changes to the proposed design can still be made and different solution routes considered for the client. Unfortunately, most existing lighting analysis tools perform rigorous computation and analysis on the back end, making it challenging for the designer to assess glare from interior lighting quickly; moreover, many of them do not focus on the glare aspect of artificial light. That is why, in this paper, we explain a novel approach to approximate interior glare data and visualize this data in a color-coded format, expressing the implications of the proposed interior design layout. We focus on making this analysis process fluid and computationally fast, enabling complete user interaction with the capability to vary different ranges of user inputs, adding more degrees of freedom for the user. We test our proposed parametric model on a case study, a computer lab space in our college facility.
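
The abstract does not name the glare metric the model computes; as an illustrative stand-in, the sketch below color-codes the CIE Unified Glare Rating (UGR) for a set of luminaires seen from one viewpoint. The luminaire numbers and the traffic-light thresholds are hypothetical.

```python
import math

def ugr(background_luminance, sources):
    """UGR = 8*log10((0.25/Lb) * sum(L_i^2 * omega_i / p_i^2)).
    sources: list of (luminance cd/m^2, solid angle sr, Guth position index)."""
    s = sum(L * L * omega / (p * p) for L, omega, p in sources)
    return 8.0 * math.log10(0.25 / background_luminance * s)

def color_code(ugr_value):
    """Map a UGR value to a traffic-light bucket for real-time display."""
    if ugr_value <= 19: return "green"   # commonly acceptable for office work
    if ugr_value <= 25: return "amber"
    return "red"

g = ugr(background_luminance=40.0,
        sources=[(2.0e4, 1e-3, 1.2), (1.5e4, 8e-4, 2.0)])
print(round(g, 1), color_code(g))
```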

Keywords: computational geometry, glare impact in interior space, info visualization, parametric lighting analysis

Procedia PDF Downloads 350
1113 Determination of Rare Earth Element Patterns in Uranium Matrix for Nuclear Forensics Application: Method Development for Inductively Coupled Plasma Mass Spectrometry (ICP-MS) Measurements

Authors: Bernadett Henn, Katalin Tálos, Éva Kováss Széles

Abstract:

During the last 50 years, the worldwide spread of nuclear techniques has induced several new problems in the environment and in human life. Nowadays, due to the increasing risk of terrorism worldwide, the potential occurrence of terrorist attacks using weapons of mass destruction containing radioactive or nuclear materials, e.g. dirty bombs, is a real threat. Uranium pellets, for instance, are one of the nuclear materials suitable for making special weapons. Nuclear forensics mainly focuses on determining the origin of confiscated or found nuclear and other radioactive materials which could be used for making any radioactive dispersive device. One of the most important signatures in nuclear forensics for finding the origin of the material is the determination of the rare earth element (REE) pattern in the seized or found radioactive or nuclear samples. The concentration and the normalized pattern of the REE can be used as evidence of uranium origin. The REE are the fourteen lanthanides plus scandium and yttrium, which are mostly found together and at really low concentration in uranium pellets. The problems of REE determination using the ICP-MS technique are the uranium matrix (high concentration of uranium) and the interferences among the lanthanides. In this work, our aim was to develop an effective chemical sample preparation process using extraction chromatography to separate the uranium matrix and the rare earth elements from each other, following some publications found in the literature and modifying them. Secondly, our purpose was the optimization of the ICP-MS measurement process for REE concentrations. During method development, in the first step, a REE model solution was used with two different types of extraction chromatographic resins (LN® and TRU®) in different acidic media to test the lanthanide separation. Uranium matrix was then added to the model solution and tested under the same conditions. The methods were tested and validated using REE UOC (uranium ore concentrate) reference materials. Samples were analyzed by sector field mass spectrometry (ICP-SFMS).
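
For context, REE patterns are conventionally obtained by normalizing each measured concentration to a reference (e.g. chondrite) abundance and plotting the ratios in element order. A minimal sketch, with placeholder reference values and hypothetical sample results rather than data from this work:

```python
ree_order = ["La", "Ce", "Pr", "Nd", "Sm", "Eu", "Gd"]  # truncated for brevity
# Placeholder reference values: replace with published chondrite abundances
chondrite_ppm = {el: 1.0 for el in ree_order}
sample_ppb = {"La": 12.0, "Ce": 25.0, "Pr": 3.1,        # hypothetical ICP-SFMS
              "Nd": 14.0, "Sm": 4.2, "Eu": 1.3, "Gd": 5.0}  # results

pattern = {el: (sample_ppb[el] / 1000.0) / chondrite_ppm[el] for el in ree_order}
for el in ree_order:
    print(f"{el}: {pattern[el]:.4f}")   # plot on a log axis vs. element order
```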

Keywords: extraction chromatography, nuclear forensics, rare earth elements, uranium

Procedia PDF Downloads 309
1112 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure

Authors: Esra Zengin, Sinan Akkar

Abstract:

Reliable and accurate prediction of nonlinear structural response requires specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in the performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match the target spectrum is a commonly used technique for the estimation of nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum such as a scenario-based spectrum derived from ground motion prediction equations, the Uniform Hazard Spectrum (UHS), the Conditional Mean Spectrum (CMS) or the Conditional Spectrum (CS). Different sets of criteria exist among the developed methodologies for selecting and scaling ground motions with the objective of obtaining a robust estimate of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand along with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on the minimization of the error between the scaled median and the target spectrum, while the dispersion of the earthquake shaking is preserved along the period interval. The impact of the spectral variability on the nonlinear response distribution is investigated at the level of inelastic single-degree-of-freedom systems. In order to see the effect of different selection and scaling methodologies on fragility curve estimates, the results are compared with those obtained by the CMS-based scaling methodology. The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.
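
A minimal sketch of the scaling stage as described: one common factor is applied so that the log-median of the scaled spectra matches the log-target over the period band, and because a per-record constant only shifts each log-spectrum, the record-to-record dispersion at every period is preserved exactly. The closed-form factor below is an assumption of this sketch, not necessarily the paper's algorithm.

```python
import numpy as np

def scale_set(sa, target):
    """sa: (n_records, n_periods) spectral accelerations; target: (n_periods,)."""
    log_sa = np.log(sa)
    # Match the set's log-median to the log-target on average over the band
    shift = np.mean(np.log(target) - np.median(log_sa, axis=0))
    return np.full(sa.shape[0], np.exp(shift))   # one factor per record

rng = np.random.default_rng(1)
periods = np.linspace(0.1, 3.0, 30)
sa = np.exp(np.log(0.4 / periods) + 0.3 * rng.standard_normal((20, 30)))
target = 0.5 / periods
scaled = sa * scale_set(sa, target)[:, None]
# Dispersion preserved: the log-standard-deviation is unchanged at all periods
print(np.allclose(np.std(np.log(scaled), axis=0), np.std(np.log(sa), axis=0)))
```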

Keywords: ground motion selection, scaling, uncertainty, fragility curve

Procedia PDF Downloads 583
1111 Real World Cancer Pain Incidence and Treatment in a Day Hospital

Authors: Alexandru Grigorescu, Alexandra Protesanu

Abstract:

Background: Approximately 34-67 percent of cancer patients experience an episode of uncontrolled pain during the course of their disease, depending on the stage. The aim is to provide evidence-based data on pain prevalence, diagnosis and treatment recommendations in an integrative model of medical oncology and palliative care for patients with a cancer diagnosis in a day hospital. Patients and method: Consultation registers and electronic records of 166 patients (Pts) were studied from April 2022 to March 2023. Pts with pain syndrome were selected. The pain was objectified by the visual pain scale. To elucidate the causes of the pain, investigations were carried out: bone scintigraphy, CT scan, and PET-CT. The analgesic treatments were represented by weak and strong opioids, radiotherapy, and bisphosphonates. Results: During the mentioned period, 166 oncological patients (74 women and 92 men) were treated in the oncology day hospitalization service. There were 1,500 consultations, 40 of which were only for pain. The neoplastic locations were: gynecological, malignant melanoma, breast, gastric, bronchopulmonary, colorectal, liver, pancreatic, bladder, and kidney. 70 Pts presented pain syndrome. The causes of the pain were bone metastases, compressive tumors, and post-surgical status. Drug treatment: 47 Pts received Tramadol, of whom 10 switched to a major opioid (Oxycodonum, Morphine sulfate); 20 Pts were treated with Oxycodonum as the first intention. In 5 patients opioid rotation was necessary, 20 Pts received palliative radiotherapy, and 10 Pts were treated with bisphosphonates. 2 Pts required a neurosurgery consultation for an antalgic intervention. 5 Pts had significant adverse reactions to morphine. All patients and their families were counselled by a medical oncologist and a psychologist on lifestyle change. Conclusions: The prevalence of pain was similar to that described in the literature. In most cases, the pain could be managed in the day hospital. Weak and strong opioids represented the main pain therapy. Palliative radiotherapy was the second most effective therapy. Treatment with bisphosphonates was useful. Surgical interventions were rarely indicated. Discussions with patients and their families regarding lifestyle change were important.

Keywords: cancer pain, opioids, medical oncology, palliative care

Procedia PDF Downloads 66
1110 Community Development and Empowerment

Authors: Shahin Marjan Nanaje

Abstract:

The present century is a time in which social workers face complicated issues in their areas of work. Focusing entirely on bringing change to the lives of those who live on the margins or in poverty has caused us to forget to look at ourselves and change the way we address issues. There appears to be a new area of need that social workers should respond to. To address the issues and needs of a community, both individually and as a group, we need a new method of dialogue as a tool for reaching collaboration. Social workers, as the link between community, organizations and government, play multiple roles. They need to focus on communication with a new ability: transferring the narratives of the community to organizations and government and vice versa. This is not only a matter of language; it is about changing the dialogue. Migration to the big cities by job seekers trying to survive has created its own issues and difficulties, and therefore new needs. Collaboration is required not only between the government and non-government sectors but also, in a new way, among government, non-government organizations and communities. To reach this collaboration we need healthy, productive and meaningful dialogue, and in this new collaboration there will be no hierarchy between members. The methodology selected by the researcher focused on observation in the first place and used a questionnaire in the second place. The research lasted three months and included home visits, group discussions and communal narrations, which brought enough evidence to understand the real needs of the community. The randomly selected sample included 70 immigrant families who work as sweepers in a slum community in Bangalore, Karnataka. The results reveal that there is a gap between what a community is and what organizations, government and members of society apart from this community think about it. Consequently, it is learnt that to supply any service or bring any change to a slum community, we need to apply new skills of dialogue and mutual understanding before providing any services. Also, to bring change to the lives of marginal groups at large, we need collaboration, as their challenges are collective and need to be addressed by different groups together. The outcome of the research helped the researcher to see the area of need for a new method of dialogue and collaboration, as well as a framework for both, which are the main focus of the paper. The researcher also used observation experience from ten NGOs and their activities to create the framework for dialogue and collaboration.

Keywords: collaboration, dialogue, community development, empowerment

Procedia PDF Downloads 588
1109 Role of Higher Education Commission (HEC) in Strengthening the Academia and Industry Relationships: The Case of Pakistan

Authors: Shah Awan, Fahad Sultan, Shahid Jan Kakakhel

Abstract:

Higher education in the 21st century has been faced with game-changing developments impacting teaching and learning and also strengthening the academia-industry relationship. The academia-industry relationship plays a key role in economic development in developed, developing and emerging economies: the partnership not only fosters innovation but also provides real-world experience of theoretical knowledge. For this purpose, the paper assesses the role of the HEC in Pakistan and discusses the way academia and industry contribute to improving the Pakistani economy. Successive studies have reported the importance of innovation and technology, of research and development initiatives in public sector universities, and of the role of the Higher Education Commission in strengthening the academia-industry relationship to improve performance and minimize failure. The paper presents the results of semi-structured interviews conducted with 26 staff members of two public sector universities, the Higher Education Commission, and managers from the corporate sector. The study shows that public sector universities in a developing economy like Pakistan face several barriers to establishing successful collaboration between universities and industry. According to the participants interviewed, the HEC provides an insufficient road map for improving organisational capabilities to facilitate and enhance performance. The results of this study demonstrate that the HEC has to embrace and internalize support for industry and public sector universities to compete in the era of globalization. Publication of this research paper will help the higher education sector to further strengthen research through industry-university collaboration. The research findings corroborate those of Dooley and Kirk, who highlight the features of university-industry collaboration. Enhanced communication has implications for the quality of the product and of human resources. Crucially for developing economies, a feasible organisational design and framework are essential for the university-industry relationship.

Keywords: higher education commission, role, academia and industry relationship, Pakistan

Procedia PDF Downloads 467
1108 The Consequence of Being Perceived as An 'Immodest Woman': The Kuwaiti Criminal Justice System’s Response to Allegations of Sexual Violence

Authors: Eiman Alqattan

Abstract:

This paper examines the Kuwaiti criminal justice system’s responses to allegations of sexual violence against women during the pre-trial process, suggesting that the system in Kuwait is affected by an ethos that is male-dominated and patriarchal, and which results in prejudicial, unfair, and unequal treatment of female victims of serious sexual offenses. Data derived from qualitative, semi-structured, face-to-face interviews with four main groups of criminal justice system personnel in Kuwait (prosecutors, police investigators, police officers, and investigators) reveal the characteristics of a complaint of sexual violence that contribute to cases being either sent to court or dismissed. This paper will suggest that Arab cultural views of women appear to influence and even shape the views, perceptions, and conduct of the interviewed Kuwaiti criminal justice system personnel regarding complaints of sexual violence made by citizens. Data from the interviews show how the image of the ‘modest woman’ that exists within Arabic cultural views and norms greatly contributes to shaping the characteristics of what the majority of the interviewed officials considered to be a ‘credible’ allegation of sexual violence. In addition, it is clear that the interviewees’ definitions of ‘modesty’ varied. Yet the problem is not only about the stereotypical perceptions of complainants or the consequences of those perceptions for the decision to send the case to court. These perceptions also affected the behaviour of criminal justice system personnel towards citizen complainants. When complainants’ allegations were questioned, investigators went as far as abusing the women verbally or physically, often in order to force them to withdraw the so-called ‘false’ complaint so as to protect the ‘real’ victim: the ‘innocent defendant’. The presentation will discuss these police approaches to women and the techniques used in assessing the credibility of their accusations, including how they differ depending on whether the complainant was under or over 21 years old.

Keywords: criminal justice system, law and Arab culture, modest woman, sexual violence

Procedia PDF Downloads 296
1107 Dynamic Model for Forecasting Rainfall Induced Landslides

Authors: R. Premasiri, W. A. H. A. Abeygunasekara, S. M. Hewavidana, T. Jananthan, R. M. S. Madawala, K. Vaheeshan

Abstract:

Forecasting the potential for disastrous events such as landslides has become one of the major necessities in the current world. Landslides in Sri Lanka are found to be triggered mostly by intense rainfall events. The study area is the landslide near the Gerandiella waterfall, located by the 41st kilometre post on the Nuwara Eliya-Gampala main road in Kotmale Division in Sri Lanka. The landslide endangers the entire Kotmale town beneath the slope. A Geographic Information System (GIS) platform is very useful when it comes to emulating real-world processes; such models are used in a wide array of applications, ranging from simple evaluations to forecasts of future events. This project investigates the possibility of developing a dynamic model to map the spatial distribution of slope stability. The model incorporates several theoretical models, including the infinite slope model, the Green-Ampt infiltration model and a perched groundwater flow model. A series of rainfall values can be fed to the model as the main input to simulate the dynamics of slope stability. A hydrological model developed using GIS is used to quantify the perched water table height, which is one of the most critical parameters affecting slope stability. The infinite slope stability model is used to quantify the degree of slope stability in terms of the factor of safety. The DEM was built with the use of digitized contour data. Stratigraphy was modeled in Surfer using borehole data and resistivity images. Data available from rainfall gauges and piezometers were used in calibrating the model: during calibration, the parameters were adjusted until a good fit between the simulated groundwater levels and the piezometer readings was obtained. Equipped with predicted rainfall values, this model can be used to forecast the slope dynamics of the area of interest, so that the stability of slopes prone to rainfall-induced landslides can be investigated along the temporal dimension.
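
A minimal sketch of the infinite slope factor of safety used in such a model chain, with the perched water table height h (supplied by the hydrological model) as the rainfall-driven input; the soil parameters are illustrative, not the Kotmale site values.

```python
import math

def factor_of_safety(c, phi_deg, slope_deg, z, h,
                     gamma=19e3, gamma_w=9.81e3):
    """
    c: effective cohesion (Pa); phi_deg: friction angle; slope_deg: slope angle;
    z: failure-plane depth (m); h: perched water table height above plane (m).
    FS = [c + (gamma*z - gamma_w*h) * cos^2(beta) * tan(phi)]
         / (gamma * z * sin(beta) * cos(beta))
    """
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma * z - gamma_w * h) * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# A rising perched water table (e.g. during an intense rainfall series)
# drives FS below 1, i.e. toward failure:
for h in (0.0, 0.5, 1.0, 1.5):
    print(h, round(factor_of_safety(c=5e3, phi_deg=30, slope_deg=35, z=2.0, h=h), 2))
```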

Keywords: factor of safety, geographic information system, hydrological model, slope stability

Procedia PDF Downloads 423
1106 Cost Efficient Receiver Tube Technology for Eco-Friendly Concentrated Solar Thermal Applications

Authors: M. Shiva Prasad, S. R. Atchuta, T. Vijayaraghavan, S. Sakthivel

Abstract:

The world is in need of efficient energy conversion technologies which are affordable, accessible, and sustainable with an eco-friendly nature. Solar energy is one of the cornerstones of the world’s economic growth because of its abundance and zero carbon pollution. Among the various solar energy conversion technologies, solar thermal technology has attracted substantial renewed interest due to its diversity and compatibility with various applications. Solar thermal systems employ concentrators, tracking systems and heat engines for electricity generation, which leads to high cost and complexity in comparison with photovoltaics; however, their compatibility with thermal energy storage and dispatchable electricity creates a tremendous attraction. Apart from that, employing a cost-effective solar selective receiver tube in a concentrating solar thermal (CST) system improves the energy conversion efficiency and directly reduces the cost of the technology. In addition, the development of solar receiver tubes by low-cost methods which can offer high optical properties and corrosion resistance in an open-air atmosphere would be beneficial for low and medium temperature applications. In this regard, our work opens up an approach which has the potential to achieve cost-effective energy conversion. We have developed a highly selective tandem absorber coating through a facile wet chemical route combining chemical oxidation, sol-gel, and nanoparticle coating methods. The developed tandem absorber coating has a gradient refractive index nature on stainless steel (SS 304) and exhibits high optical performance (α ≥ 0.95 & ε ≤ 0.14). The first absorber layer (Cr-Mn-Fe oxides) was developed by controlled oxidation of SS 304 in a chemical bath reactor. A second composite layer of ZrO2-SiO2 was applied on the chemically oxidized substrate by a sol-gel dip-coating method to serve as an optical enhancing and corrosion resistant layer. Finally, an antireflective layer (MgF2) was deposited on the second layer to achieve > 95% absorption. The developed tandem layer exhibited good thermal stability up to 250 °C in open-air atmospheric conditions and superior corrosion resistance (withstanding > 200 h in the salt spray test (ASTM B117)). After the successful development of a coating with the targeted properties at laboratory scale, a prototype 1 m tube was demonstrated with excellent uniformity and reproducibility. Moreover, it has been validated under standard laboratory test conditions as well as in field conditions in comparison with a commercial receiver tube. The presented strategy can be widely adapted to develop highly selective coatings for a variety of CST applications ranging from hot water and solar desalination to industrial process heat and power generation. The high-performance, cost-effective medium-temperature receiver tube technology has attracted many industries, and the technology has recently been transferred to Indian industry.

Keywords: concentrated solar thermal system, solar selective coating, tandem absorber, ultralow refractive index

Procedia PDF Downloads 89
1105 Exploring Regularity Results in the Context of Extremely Degenerate Elliptic Equations

Authors: Zahid Ullah, Atlas Khan

Abstract:

This research endeavors to explore the regularity properties associated with a specific class of equations, namely extremely degenerate elliptic equations. These equations hold significance in understanding complex physical systems like porous media flow, with applications spanning various branches of mathematics. The focus is on unraveling and analyzing regularity results to gain insights into the smoothness of solutions for these highly degenerate equations. Elliptic equations, fundamental in expressing and understanding diverse physical phenomena through partial differential equations (PDEs), are particularly adept at modeling steady-state and equilibrium behaviors. However, within the realm of elliptic equations, the subset of extremely degenerate cases presents a level of complexity that challenges traditional analytical methods, necessitating a deeper exploration of mathematical theory. While elliptic equations are celebrated for their versatility in capturing smooth and continuous behaviors across different disciplines, the introduction of degeneracy adds a layer of intricacy. Extremely degenerate elliptic equations are characterized by coefficients approaching singular behavior, posing non-trivial challenges in establishing classical solutions. Still, the exploration of extremely degenerate cases remains uncharted territory, requiring a profound understanding of mathematical structures and their implications. The motivation behind this research lies in addressing gaps in the current understanding of regularity properties within solutions to extremely degenerate elliptic equations. The study of extreme degeneracy is prompted by its prevalence in real-world applications, where physical phenomena often exhibit characteristics defying conventional mathematical modeling. Whether examining porous media flow or highly anisotropic materials, comprehending the regularity of solutions becomes crucial. Through this research, the aim is to contribute not only to the theoretical foundations of mathematics but also to the practical applicability of mathematical models in diverse scientific fields.
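
The abstract does not write down a model problem; for orientation, two standard examples of degenerate elliptic equations (illustrative, not necessarily the exact class studied here) are a divergence-form operator with a vanishing weight and the p-Laplacian:

```latex
% Degeneracy: the weight w may vanish, so uniform ellipticity fails.
\[
  -\,\operatorname{div}\bigl(w(x)\,\nabla u\bigr) = f \quad \text{in } \Omega,
  \qquad 0 \le w(x) \le \Lambda,
\]
% The p-Laplacian degenerates where the gradient vanishes (p > 2).
\[
  -\,\operatorname{div}\bigl(|\nabla u|^{p-2}\,\nabla u\bigr) = f,
  \qquad p > 2.
\]
% Regularity results then ask when weak solutions u (or their gradients)
% remain Hölder continuous despite the sets where w = 0 or \nabla u = 0.
```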

Keywords: elliptic equations, extremely degenerate, regularity results, partial differential equations, mathematical modeling, porous media flow

Procedia PDF Downloads 73
1104 The Crossroad of Identities in Wajdi Mouawad's 'Littoral': A Rhizomatic Approach of Identity Reconstruction through Theatre and Performance

Authors: Mai Hussein

Abstract:

'Littoral' is an original voice in Québécois theatre, spanning the cultural gaps that can exist between the playwright’s native Lebanon, North America, Quebec, and Europe. Littoral is a 'crossroad' of cultures and themes, a 'bridge' connecting cultures and languages. It represents a new form of theatrical writing that combines the verbal, the vocal and the pantomimic, calling upon the stage to question the real and to engage characters in a quest, in a journey of mourning, of reconstructing identity and a collective memory despite ruins and wars. A theatre of witness, a theatre denouncing the irrationality of racism and war, a theatre 'performing' the symptoms of the stress disorders of characters passing from resistance and anger to reconciliation, and a theatre giving voice to the silenced victims: these are some of the pillars that this play has to offer. In this corrida between life and death, identity appears as a work-in-progress shaped in the presence of the Self and the Other. This trajectory re-opens the door wide to questions, interrogations, and reflections, showing how this play sits at the nexus of the contemporary preoccupations of the 21st century: the importance of memory, the search for meaning, the pursuit of the infinite. It also shows how a play can create bridges between languages, cultures, societies, and movements; to what extent it mediates between words and silence; and how it bridges the gaps between the textual and the performative while investigating the power of intermediality to confront racism and segregation. It also underlines the centrality of the confrontation between cultures, languages, and writing and representation techniques in challenging the characters in their quest to restructure their shattered, yet intertwined, identities. The goal of this theatre would then be to invite everyone involved in the process on a journey of self-discovery away from their comfort zone. Everyone will have to explore the liminal space, to read between the lines of the written text as well as between the text and the performance, to explore the gaps and the tensions that exist between what is said and what is played, between the 'parole' and the performative body.

Keywords: identity, memory, performance, testimony, trauma

Procedia PDF Downloads 115
1103 Offline Parameter Identification and State-of-Charge Estimation for Healthy and Aged Electric Vehicle Batteries Based on the Combined Model

Authors: Xiaowei Zhang, Min Xu, Saeid Habibi, Fengjun Yan, Ryan Ahmed

Abstract:

Recently, Electric Vehicles (EVs) have received extensive consideration since they offer a more sustainable and greener transportation alternative compared to fossil-fuel propelled vehicles. Lithium-Ion (Li-ion) batteries are increasingly being deployed in EVs because of their high energy density, high cell-level voltage, and low rate of self-discharge. Since Li-ion batteries represent the most expensive component in the EV powertrain, accurate monitoring and control strategies must be executed to ensure their prolonged lifespan. The Battery Management System (BMS) has to accurately estimate parameters such as the battery State-of-Charge (SOC), State-of-Health (SOH), and Remaining Useful Life (RUL). In order for the BMS to estimate these parameters, an accurate and control-oriented battery model has to work collaboratively with a robust state and parameter estimation strategy. Since battery physical parameters, such as the internal resistance and diffusion coefficient, change depending on the battery state-of-life (SOL), the BMS has to be adaptive to accommodate this change. In this paper, an extensive battery aging study has been conducted over a 12-month period on 5.4 Ah, 3.7 V lithium polymer cells. Instead of using fixed charging/discharging aging cycles at a fixed C-rate, a set of real-world driving scenarios has been used to age the cells. The test has been interrupted at every 5% of capacity degradation by a set of reference performance tests to assess the battery degradation and track model parameters. As the battery ages, the combined model parameters are optimized and tracked in an offline mode over the batteries’ entire lifespan. Based on the optimized model, a state and parameter estimation strategy based on the Extended Kalman Filter (EKF) and the relatively new Smooth Variable Structure Filter (SVSF) has been applied to estimate the SOC at various states of life.
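
A minimal sketch of SOC estimation with an EKF on one common form of the combined model (after Plett): v = K0 − R·i − K1/z − K2·z + K3·ln z + K4·ln(1−z), with the SOC z propagated by coulomb counting. The K values, noise covariances, and capacity below are illustrative assumptions, not the parameters identified from the paper's aging tests.

```python
import math

K0, K1, K2, K3, K4 = 3.8, 0.01, 0.2, 0.05, -0.05
R_INT, CAP_AS, DT = 0.02, 5.4 * 3600.0, 1.0       # ohm, amp-seconds, s

def v_model(z, i):
    return K0 - R_INT * i - K1 / z - K2 * z + K3 * math.log(z) + K4 * math.log(1 - z)

def dv_dz(z):  # Jacobian of the measurement model w.r.t. SOC
    return K1 / z**2 - K2 + K3 / z - K4 / (1 - z)

def ekf_step(z, P, i, v_meas, Q=1e-7, Rm=1e-3):
    # Predict: coulomb counting (discharge current positive)
    z_pred = z - DT * i / CAP_AS
    P_pred = P + Q
    # Update: linearize the voltage model around the predicted SOC
    H = dv_dz(z_pred)
    K = P_pred * H / (H * P_pred * H + Rm)
    z_new = z_pred + K * (v_meas - v_model(z_pred, i))
    return min(max(z_new, 1e-3), 1 - 1e-3), (1 - K * H) * P_pred

# Example: constant ~1C discharge with a deliberately biased initial guess
z_true, z_est, P = 0.9, 0.7, 0.1
for _ in range(600):
    z_true -= DT * 5.4 / CAP_AS
    z_est, P = ekf_step(z_est, P, 5.4, v_model(z_true, 5.4))
print(round(z_true, 3), round(z_est, 3))   # estimate converges to the truth
```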

Keywords: lithium-ion batteries, genetic algorithm optimization, battery aging test, parameter identification

Procedia PDF Downloads 268
1102 Climate Change and Health in Policies

Authors: Corinne Kowalski, Lea de Jong, Rainer Sauerborn, Niamh Herlihy, Anneliese Depoux, Jale Tosun

Abstract:

Climate change is considered one of the biggest threats to human health of the 21st century. The link between climate change and health has received relatively little attention in the media, in research and in policy-making. A long-term and broad overview of how health is represented in legislation on climate change is missing from the legislative literature. It is unknown if or how the argument for health is referred to in legal clauses addressing climate change in national and European legislation. Integrating science-based evidence on the impacts of climate change on health into policies could be a key step in inciting the political and societal changes necessary to decelerate global warming. This may also drive the implementation of new strategies to mitigate the consequences for health systems. To provide an overview of this issue, we are analyzing the Global Climate Legislation Database provided by the Grantham Research Institute on Climate Change and the Environment, an institution established in 2008 at the London School of Economics and Political Science. The database consists of (updated as of 1 January 2015) climate change legislation in 99 countries around the world and offers relevant information about the state of climate-related policies. We will use the database to systematically analyze the 829 identified pieces of legislation to identify how health is represented in climate change legislation. We are conducting explorative research on national and supranational legislation and anticipate health to be addressed in various forms. The goal is to highlight how often, in what specific terms, and which aspects of health or health risks of climate change are mentioned in the various pieces of legislation. The position and recurrence of mentions of health are also of importance. Data will be extracted with complete quotation of the sentences which mention health, which will allow a second, qualitative stage to analyze which aspects of health are represented and in what context. This study is part of an interdisciplinary project called 4CHealth that confronts results of the research done on scientific, political and press literature to better understand how knowledge on climate change and health circulates within those different fields and whether and how it is translated into real-world change.

Keywords: climate change, explorative research, health, policies

Procedia PDF Downloads 365
1101 Gene Expressions in Left Ventricle Heart Tissue of Rat after 150 MeV Proton Irradiation

Authors: R. Fardid, R. Coppes

Abstract:

Introduction: In mediastinal radiotherapy, and to a lesser extent also in total-body irradiation (TBI), radiation exposure may lead to the development of cardiac diseases. Radiation-induced heart disease is dose-dependent and is characterized by a loss of cardiac function associated with progressive degeneration of heart cells. We aimed to determine the in-vivo radiation effects on fibronectin, Col1a1, Col1a2, galectin and TGFb1 gene expression levels in the left ventricle heart tissue of rats after irradiation. Material and method: Four non-treated adult Wistar rats were selected as the control group (group A). In group B, 4 adult Wistar rats were irradiated locally in the heart only with a single dose of 20 Gy from a 150 MeV proton beam. In the heart-plus-lung irradiated group (group C), 4 adult rats were irradiated over 50% of the lung laterally in addition to the heart irradiation described for group B. At 8 weeks after irradiation, the animals were sacrificed and the left ventricle was dropped in liquid nitrogen for RNA extraction with the Absolutely RNA® Miniprep Kit (Stratagene, Cat. no. 400800). cDNA was synthesized using M-MLV reverse transcriptase (Life Technologies, Cat. no. 28025-013). We used a Bio-Rad iQ5 Real-Time PCR machine for qPCR with the relative standard curve method. Results: We found that fibronectin gene expression in group C significantly increased compared to the control group, but it did not show a significant change in group B compared to group A. The mRNA expression levels of Col1a1 and Col1a2 did not show any significant changes between the normal and irradiated groups. Galectin expression significantly increased only in group C compared to group A. TGFb1 expression showed significant enhancement in group C, more than in group B, compared to group A. Conclusion: In summary, we can say that 20 Gy of proton exposure of heart tissue may lead to detectable damage in heart cells and may disturb their function as components of the heart tissue structure at the molecular level.
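
For context, a minimal sketch of the relative standard curve method used for the qPCR quantification: fit Ct against log dilution for a standard series, interpolate unknown Ct values, and normalize to a reference gene. All Ct values are hypothetical, not data from the study.

```python
import numpy as np

def standard_curve(log_qty, ct):
    slope, intercept = np.polyfit(log_qty, ct, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0   # ~1.0 means 100% efficient
    return slope, intercept, efficiency

def quantity(ct, slope, intercept):
    return 10 ** ((ct - intercept) / slope)

# Hypothetical 5-point, 10-fold dilution series for the target gene
dil = np.array([0.0, -1.0, -2.0, -3.0, -4.0])   # log10(relative quantity)
cts = np.array([18.1, 21.5, 24.9, 28.2, 31.6])
s, b, eff = standard_curve(dil, cts)

target_q = quantity(25.3, s, b)   # unknown sample, target gene
ref_q = quantity(20.1, s, b)      # same sample, reference gene (a matching
                                  # reference curve is assumed for brevity)
print(f"efficiency={eff:.2f}, normalized expression={target_q / ref_q:.3f}")
```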

Keywords: gene expression, heart damage, proton irradiation, radiotherapy

Procedia PDF Downloads 489
1100 Soybean Lecithin Based Reverse Micellar Extraction of Pectinase from Synthetic Solution

Authors: Sivananth Murugesan, I. Regupathi, B. Vishwas Prabhu, Ankit Devatwal, Vishnu Sivan Pillai

Abstract:

Pectinase is an important enzyme with a wide range of applications, including textile processing and bioscouring of cotton fibers, coffee and tea fermentation, purification of plant viruses, oil extraction, etc. Selectively separating and purifying pectinase from fermentation broth and recovering the enzyme from process streams for reuse are costly steps in most enzyme-based industries, and it is difficult to identify a suitable medium that enhances enzyme activity while retaining the enzyme's characteristics during such processes. The cost-effective, selective separation of enzymes through modified liquid-liquid extraction is of current research interest worldwide. Reverse micellar extraction, a globally acclaimed liquid-liquid extraction technique, is well known for the separation and purification of solutes, offering high solute specificity and partitioning, ease of operation and recycling of the extractants used. Surfactants added to an apolar solvent above the critical micelle concentration form micelles; the addition of water to this micellar phase in turn forms reverse micelles, or water-in-oil emulsions. Electrostatic interaction plays a major role in the separation/purification of solutes using reverse micelles, and these interaction parameters can be altered by changing the pH or by adding cosolvents, surfactants, and electrolytes or non-electrolytes. Even though many chemical-based commercial surfactants have been utilized for this purpose, biosurfactants are more suitable for the purification of enzymes used in food applications. The present work focused on the partitioning of pectinase from a synthetic aqueous solution into the reverse micelle phase formed by a biosurfactant, soybean lecithin, dissolved in chloroform. The critical micelle concentration of the soybean lecithin/chloroform solution was identified through refractive index and density measurements. Surfactant concentrations above and below the critical micelle concentration were considered to study their effect on enzyme activity and on enzyme partitioning within the reverse micelle phase. The effects of pH and electrolyte salts on the partitioning behavior were studied by varying the system pH and the concentration of different salts during the forward and back extraction steps. It was observed that lower concentrations of soybean lecithin enhanced the enzyme activity within the water core of the reverse micelle while maximizing extraction efficiency. A maximum pectinase yield of 85% with a partitioning coefficient of 5.7 was achieved at pH 4.8 during forward extraction, and an 88% yield with a partitioning coefficient of 7.1 was observed during back extraction at pH 5.0. However, the addition of salt decreased the enzyme activity, and especially at higher salt concentrations enzyme activity declined drastically during both the forward and back extraction steps. The results proved that reverse micelles formed by soybean lecithin and chloroform may be used for the extraction of pectinase from aqueous solutions. Further, the reverse micelles can be considered nanoreactors to enhance enzyme activity and maximize substrate utilization at optimized conditions, paving the way to process intensification and scale-down.
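
As a consistency check on the figures quoted above (assuming, as a simplification, equal reverse-micellar and aqueous phase volumes), partition coefficient and yield are related by:

```latex
\[
  K = \frac{[E]_{\mathrm{rm}}}{[E]_{\mathrm{aq}}}, \qquad
  Y = \frac{K\,V_{\mathrm{rm}}}{K\,V_{\mathrm{rm}} + V_{\mathrm{aq}}}
    \;\stackrel{V_{\mathrm{rm}} = V_{\mathrm{aq}}}{=}\; \frac{K}{K+1},
\]
\[
  \frac{5.7}{5.7+1} \approx 0.85, \qquad \frac{7.1}{7.1+1} \approx 0.88,
\]
```

which reproduces the reported forward- and back-extraction yields of 85% and 88%.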

Keywords: pectinase, reverse micelles, soybean lecithin, selective partitioning

Procedia PDF Downloads 372
1099 Seek First to Regulate, Then to Understand: The Case for Preemptive Regulation of Robots

Authors: Catherine McWhorter

Abstract:

Robotics is a fast-evolving field lacking comprehensive and harm-mitigating regulation; it also lacks critical data on how human-robot interaction (HRI) may affect human psychology. As most anthropomorphic robots are intended as substitutes for humans, this paper asserts that the commercial robotics industry should be preemptively regulated at the federal level such that robots capable of embodying a victim role in criminal scenarios ('vicbots') are prohibited until clinical studies determine their effects on the user and society. The results of these studies should then inform more permanent legislation that strives to mitigate risks of harm without infringing upon fundamental rights or stifling innovation. This paper explores these concepts through the lens of the sex robot industry, which offers some of the most realistic, interactive, and customizable robots for sale today. From approximately 2010 until 2017, some sex robot producers, such as True Companion, actively promoted 'vicbot' culture with personalities like 'Frigid Farrah' and 'Young Yoko' but received significant public backlash for fetishizing rape and pedophilia. Today, 'Frigid Farrah' and 'Young Yoko' appear to have vanished; sexbot producers have replaced preprogrammed vicbot personalities with one generic, customizable personality. According to the manufacturer ainidoll.com, when asked, there is only one thing the user won't be able to program the sexbot to do – '…give you drama'. The ability to customize vicbot personas remains possible with today's generic-personality sexbots and may undermine the intent of some current legislative efforts. The current debate on the effects of vicbots indicates a lack of consensus: some scholars suggest vicbots may reduce the rate of actual sex crimes, some suggest that vicbots will, in fact, create sex criminals, while others cite their potential for rehabilitation. Vicbots may have value in some instances when prescribed by medical professionals, but the overall uncertainty and lack of data further underscore the need for preemptive regulation and clinical research. Existing literature on exposure to media violence and its effects on prosocial behavior, human aggression, and addiction may serve as launch points for specific studies into the hyperrealism of vicbots. Of course, the customization, anthropomorphism and artificial intelligence of sexbots, and therefore of more mainstream robots, will continue to evolve. The existing sexbot industry offers an opportunity to preemptively regulate and to research answers to these and many more questions before this type of technology becomes even more advanced and mainstream. Robots pose complicated moral, ethical, and legal challenges, most of which are beyond the scope of this paper. By examining the possibility of custom vicbots via the sexbot industry and reviewing existing literature on regulation, media violence, and vicbot user effects, this paper strives to underscore the need for preemptive federal regulation prohibiting vicbot capabilities in robots, while advocating for further research into the potential for user and societal harm.

Keywords: human-robot interaction effects, regulation, research, robots

Procedia PDF Downloads 206
1098 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel

Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler

Abstract:

Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. The Type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential in reducing overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as of the influence of different design parameters on mechanical performance. Given the type of materials and manufacturing processes by which Type IV pressure vessels are manufactured, their design and optimization are a nuanced subject. The manifold of stacking sequence and fiber orientation variation possibilities has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high-dimensional. Each variation of design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup and simulation process can be very time consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process for different tank designs with varying parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Worth mentioning, the model of the composite overwrap is automatically generated using the Abaqus-Python scripting interface. The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling, and is calculated and implemented using analytical methods. Subsequently, the different composite layups are simulated as axisymmetric models to limit the computational complexity and reduce the calculation time. Finally, the results are evaluated and compared with regard to ultimate tank strength. By automatically modeling, evaluating and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical performance of the pressure vessel is highly dependent on the composite layup, which requires a large number of simulations; consequently, automating the simulation process provides a rapid way to compare the various designs and obtain an indication of the optimum one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties with few preliminary configuration steps for further case analysis. Subsequently, machine learning could, for example, be used to obtain the optimum directly from the data pool without running further simulations.
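
A minimal sketch of a layup-sweep driver in the spirit of the automation described above. The per-design model script written by build_script() is a placeholder for the authors' Abaqus-Python overwrap-generation code; only the "abaqus cae noGUI=<script>" batch invocation is standard Abaqus usage, and the angle/layer grids are hypothetical.

```python
import itertools
import shutil
import subprocess
from pathlib import Path

WINDING_ANGLES = [12, 20, 35, 55, 90]   # deg, illustrative design grid
LAYER_COUNTS = [24, 28, 32]

def build_script(design_id, angles, n_layers):
    """Emit a (placeholder) Abaqus model-generation script for one layup."""
    body = (
        "# auto-generated Abaqus model script (placeholder)\n"
        f"ANGLES = {angles}\nN_LAYERS = {n_layers}\n"
        "# ... create axisymmetric vessel, apply layup, submit job ...\n"
    )
    path = Path(f"design_{design_id}.py")
    path.write_text(body)
    return path

for i, (angle, n) in enumerate(itertools.product(WINDING_ANGLES, LAYER_COUNTS)):
    script = build_script(i, [angle, 90 - angle], n)
    if shutil.which("abaqus"):
        # Batch mode; results would be harvested from the resulting .odb
        subprocess.run(["abaqus", "cae", f"noGUI={script}"], check=False)
    else:
        print(f"would run: abaqus cae noGUI={script}")
```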

Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process

Procedia PDF Downloads 135
1097 Quantitative Analysis of Contract Variations Impact on Infrastructure Project Performance

Authors: Soheila Sadeghi

Abstract:

Infrastructure projects often encounter contract variations that can deviate significantly from the original tender estimates, leading to cost overruns, schedule delays, and financial implications. This research aims to quantitatively assess the impact of contract variations on project performance by conducting an in-depth analysis of a comprehensive dataset from the Regional Airport Car Park project. The dataset includes tender budget, contract quantities, rates, claims, and revenue data, providing a unique opportunity to investigate the effects of variations on project outcomes. The study focuses on 21 specific variations identified in the dataset, which represent changes or additions to the project scope. The research methodology involves establishing a baseline for the project's planned cost and scope by examining the tender budget and contract quantities. Each variation is then analyzed in detail, comparing the actual quantities and rates against the tender estimates to determine their impact on project cost and schedule. The claims data is utilized to track the progress of work and identify deviations from the planned schedule. The statistical analysis is performed in R: time series analysis is applied to the claims data to track progress and detect variations from the planned schedule, and regression analysis is utilized to investigate the relationship between variations and project performance indicators, such as cost overruns and schedule delays. The research findings highlight the significance of effective variation management in construction projects. The analysis reveals that variations can have a substantial impact on project cost, schedule, and financial outcomes. The study identifies the specific variations that had the most significant influence on the Regional Airport Car Park project's performance, such as PV03 (additional fill, road base gravel, spray seal, and asphalt), PV06 (extension to the commercial car park), and PV07 (additional box out and general fill). These variations contributed to increased costs, schedule delays, and changes in the project's revenue profile. The study also examines the effectiveness of project management practices in managing variations and mitigating their impact. The research suggests that proactive risk management, thorough scope definition, and effective communication among project stakeholders can help minimize the negative consequences of variations. The findings emphasize the importance of establishing clear procedures for identifying, assessing, and managing variations throughout the project lifecycle. The outcomes of this research contribute to the body of knowledge in construction project management by demonstrating the value of analyzing tender, contract, claims, and revenue data in variation impact assessment. However, the research acknowledges the limitations imposed by the dataset, particularly the absence of detailed contract and tender documents. This constraint restricts the depth of analysis possible in investigating the root causes and full extent of the variations' impact on the project. Future research could build upon this study by incorporating more comprehensive data sources to further explore the dynamics of variations in construction projects.
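
The authors worked in R and the raw data are not published; as an illustration of the regression step described above, here is a sketch (in Python, with invented quantities attached to the variation labels from the abstract) relating variation value to schedule delay:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-variation extract; names and numbers are illustrative only
df = pd.DataFrame({
    "item": ["PV03", "PV06", "PV07", "PV10", "PV12", "PV15"],
    "tender_qty": [100.0, 40.0, 60.0, 25.0, 80.0, 10.0],
    "actual_qty": [180.0, 95.0, 120.0, 30.0, 85.0, 12.0],
    "rate": [50.0, 120.0, 35.0, 200.0, 15.0, 400.0],
    "delay_days": [12, 18, 9, 2, 1, 1],
})
df["variation_value"] = (df["actual_qty"] - df["tender_qty"]) * df["rate"]

X = sm.add_constant(df[["variation_value"]])
fit = sm.OLS(df["delay_days"], X).fit()
print(fit.params)   # slope: days of delay per unit of variation value

# Flag the largest contributors, analogous to PV03/PV06/PV07 in the study
print(df.nlargest(3, "variation_value")[["item", "variation_value"]])
```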

Keywords: contract variation impact, quantitative analysis, project performance, claims analysis

Procedia PDF Downloads 40
1096 Novel Hole-Bar Standard Design and Inter-Comparison for Geometric Errors Identification on Machine-Tool

Authors: F. Viprey, H. Nouira, S. Lavernhe, C. Tournier

Abstract:

Manufacturing of freeform parts is now commonly achieved on 5-axis machine tools. The geometrical quality of the freeform parts depends in particular on the accuracy of the multi-axis structural loop, which is composed of several component assemblies maintaining the relative positioning between the tool and the workpiece. Therefore, to reach high geometric quality of the freeform parts, the geometric errors of the 5-axis machine should be evaluated and compensated, which requires mastering the deviations between the tool and the workpiece (volumetric accuracy). In this study, a novel hole-bar design was developed and used for the characterization of the geometric errors of an RRTTT 5-axis machine tool. The hole-bar standard is made of Invar, selected for its low sensitivity to thermal drift. The proposed design allows one to extract three intrinsic parameters: one linear positioning error and two straightness errors. These parameters can be obtained by measuring the cylindricity of 12 holes (bores) and 11 cylinders located on a perpendicular plane. By mathematical analysis, twelve 3D point coordinates can be identified, each corresponding to the intersection of a hole axis with the least-squares plane passing through two perpendicular neighbouring cylinder axes. The hole-bar was calibrated using a precision CMM at LNE, traceable to the SI metre definition. The reversal technique was applied in order to separate the form errors of the hole-bar from the motion errors of the mechanical guiding systems. An inter-comparison was additionally conducted between four NMIs (National Metrology Institutes) within the EMRP IND62: JRP-TIM project. Afterwards, the hole-bar was integrated into the RRTTT 5-axis machine tool to identify its volumetric errors. Measurements were carried out in real time, combining raw data acquired by the Renishaw RMP600 touch probe with the linear and rotary encoders. The geometric errors of the 5-axis machine were also evaluated by an accurate laser tracer interferometer system, and the results were compared to those obtained with the hole-bar.
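
The geometric construction described above can be sketched with numpy: fit the least-squares plane through points sampled on two neighbouring cylinder axes, then intersect each hole axis with it. This is an illustrative reconstruction, not the authors' code; all data are synthetic.

    import numpy as np

    def lsq_plane(points):
        """Least-squares plane through 3D points: returns (centroid, unit normal)."""
        c = points.mean(axis=0)
        # The normal is the singular vector for the smallest singular value
        _, _, vt = np.linalg.svd(points - c)
        return c, vt[-1]

    def axis_plane_intersection(p0, d, c, n):
        """Intersection of the line p0 + t*d (a hole axis) with the plane (c, n)."""
        t = np.dot(c - p0, n) / np.dot(d, n)
        return p0 + t * d

    # Synthetic example: points sampled on two perpendicular cylinder axes
    pts = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [0, 1, 0.01], [0, 2, -0.01]])
    c, n = lsq_plane(pts)
    print(axis_plane_intersection(np.array([0.5, 0.5, 1.0]), np.array([0.0, 0.0, -1.0]), c, n))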

Keywords: volumetric errors, CMM, 3D hole-bar, inter-comparison

Procedia PDF Downloads 384
1095 PLO-AIM: Potential-Based Lane Organization in Autonomous Intersection Management

Authors: Berk Ecer, Ebru Akcapinar Sezer

Abstract:

Traditional management models of intersections, such as no-light intersections or signalized intersections, are not the most effective way of passing vehicles through intersections when the vehicles are intelligent. To this end, Dresner and Stone proposed a new intersection control model called Autonomous Intersection Management (AIM). In their AIM simulation, they examined the problem from a multi-agent perspective, demonstrating that intelligent intersection control can be made more efficient than existing control mechanisms. In this study, autonomous intersection management is investigated further. We extend their work with a potential-based lane organization layer. In order to distribute vehicles evenly across lanes, this layer triggers vehicles to analyze nearby lanes and change lanes when another lane offers an advantage. This behavior can be observed in real life, where drivers change lanes based on intuition; the basic intuition for selecting the correct lane is to choose a less crowded one in order to reduce delay. We model that behavior without any change to the AIM workflow. Experiment results show that intersection performance is directly connected to the distribution of vehicles across the lanes of the roads entering the intersection. We see the advantage of handling lane management with a potential-based approach in performance metrics such as average intersection delay and average travel time. Therefore, lane management and intersection management are problems that need to be handled together. This study shows that the lane through which vehicles enter the intersection is an effective parameter for intersection management; our study draws attention to this parameter and suggests a solution for it. We observed that regulating AIM's inputs, namely the vehicles in each lane, is itself an effective contribution to intersection management. The PLO-AIM model outperforms AIM in evaluation metrics such as average intersection delay and average travel time for realistic traffic rates between 600 and 1300 vehicles/hour per lane. The proposed model reduced average travel time by 0.2% to 17.3% and average intersection delay by 1.6% to 17.1% in 4-lane and 6-lane scenarios.
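
The paper does not spell out its potential function; the following toy sketch (names, capacity, and the hysteresis threshold are invented) captures the stated intuition that a less crowded lane has lower potential:

    def lane_potential(count, capacity):
        """Toy potential: grows with lane occupancy (crowded lanes repel vehicles)."""
        return count / capacity

    def choose_lane(current, lane_counts, capacity=20, advantage=0.1):
        """Switch only when another lane's potential is clearly lower."""
        potentials = [lane_potential(c, capacity) for c in lane_counts]
        best = min(range(len(potentials)), key=potentials.__getitem__)
        # Hysteresis avoids oscillating lane changes for marginal gains
        return best if potentials[current] - potentials[best] > advantage else current

    print(choose_lane(0, [12, 5, 11]))  # -> 1: the middle lane is clearly emptier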

Keywords: AIM project, autonomous intersection management, lane organization, potential-based approach

Procedia PDF Downloads 139
1094 Formulation and Evaluation of Antioxidant Cream Containing Nepalese Medicinal Plants

Authors: Ajaya Acharya, Prem Narayan Paudel, Rajendra Gyawali

Abstract:

Due to their strong tyrosinase inhibition and antioxidant effects, green tea and licorice are valuable skin-care ingredients in cosmetics. However, data on adding essential oils to green tea and licorice in a cream formulation to examine antioxidant activities are limited. The purpose of this study was to develop a phytocosmetic cream using crude aqueous extracts of green tea and licorice loaded with essential oils, and to assess its antioxidant and tyrosinase inhibitory characteristics. To select the best concentration for the cream formulation, the aqueous plant extracts were prepared, evaluated, and correlated in terms of total phenolic content (TPC), total flavonoid content (TFC), and 2,2-diphenyl-1-picrylhydrazyl (DPPH) scavenging activity. Moreover, O. tenuiflorum and O. basilicum essential oils were extracted and added to the cream formulation. The spreadability profile, water washability, centrifugation test, and organoleptic characteristics of the formulated oil-in-water cream were all satisfactory. The cream exhibited a non-Newtonian rheological profile and a pH range of 6.353 ± 0.065 to 6.467 ± 0.050 over 0, 1, 2, and 3 months at normal room temperature. The 50% inhibition concentrations shown by the herbal cream were 13.764 ± 0.153 µg/ml, 301.445 ± 1.709 µg/ml, and 8.082 ± 0.055 µg/ml for DPPH scavenging activity, ferric (Fe³⁺) reducing antioxidant power (FRAP), and 2,2'-azinobis-3-ethylbenzothiazoline-6-sulfonic acid (ABTS) radical scavenging activity, respectively; those of standard ascorbic acid were 6.716 ± 0.077 µg/ml, 171.604 ± 1.551 µg/ml, and 5.645 ± 0.034 µg/ml, showing that the formulated cream had strong antioxidant characteristics. The formulated herbal cream, with a 50% tyrosinase inhibition concentration of 22.254 ± 0.369 µg/ml against 12.535 ± 0.098 µg/ml for standard kojic acid, demonstrated a satisfactory tyrosinase inhibition profile for skin-whitening use. The herbal cream remained stable in physical and chemical parameters over 0, 1, 2, and 3 months under both real-time and accelerated stability conditions.
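
For readers unfamiliar with the reported figures, a 50% inhibition concentration (IC50) can be estimated from a dose-response series; a minimal sketch with invented DPPH data follows (the authors' exact fitting procedure is not stated):

    import numpy as np

    def ic50(concs, inhibition):
        """Interpolate the concentration giving 50% inhibition.
        concs in ug/ml; inhibition in %, both ascending."""
        return float(np.interp(50.0, inhibition, concs))

    concs = np.array([2.0, 5.0, 10.0, 20.0, 40.0])    # ug/ml (hypothetical)
    inhib = np.array([18.0, 32.0, 44.0, 58.0, 75.0])  # % scavenging (hypothetical)
    print(f"IC50 ~ {ic50(concs, inhib):.1f} ug/ml")   # lower IC50 = stronger antioxidant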

Keywords: crude extracts, antioxidant, tyrosinase inhibition, green tea polyphenols

Procedia PDF Downloads 21
1093 Healthcare-SignNet: Advanced Video Classification for Medical Sign Language Recognition Using CNN and RNN Models

Authors: Chithra A. V., Somoshree Datta, Sandeep Nithyanandan

Abstract:

Sign Language Recognition (SLR) is the process of interpreting and translating sign language into spoken or written language using technological systems. It involves recognizing the hand gestures, facial expressions, and body movements that make up sign language communication. The primary goal of SLR is to facilitate communication between hearing- and speech-impaired communities and those who do not understand sign language. Due to increased awareness and greater recognition of the rights and needs of the hearing- and speech-impaired community, sign language recognition has gained significant importance over the past 10 years. Technological advancements in Artificial Intelligence and Machine Learning have made it more practical and feasible to create accurate SLR systems. This paper presents a distinct approach to SLR by framing it as a video classification problem solved with Deep Learning (DL), using a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This research targets the integration of sign language recognition into healthcare settings, aiming to improve communication between medical professionals and patients with hearing impairments. The spatial features of each video frame are extracted using a CNN, which captures essential elements such as hand shapes, movements, and facial expressions. These features are then fed into an RNN that learns the temporal dependencies and patterns inherent in sign language sequences. The INCLUDE dataset has been enhanced with additional videos from the healthcare domain, and the model is evaluated on this augmented set. Our model achieves 91% accuracy, representing state-of-the-art performance in this domain. The results highlight the effectiveness of treating SLR as a video classification task with a CNN-RNN architecture. This approach not only improves recognition accuracy but also offers a scalable solution for real-time SLR applications, significantly advancing the field of accessible communication technologies.
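
The paper does not publish its architecture in detail; a minimal PyTorch sketch of the CNN-RNN pattern it describes (per-frame spatial features feeding an LSTM) might look as follows, with all layer sizes chosen arbitrarily:

    import torch
    import torch.nn as nn

    class CNNRNNClassifier(nn.Module):
        """Per-frame CNN features fed to an LSTM; the last hidden state classifies the sign."""
        def __init__(self, num_classes, feat_dim=128, hidden=256):
            super().__init__()
            self.cnn = nn.Sequential(                       # spatial encoder, applied per frame
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )
            self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)  # temporal model
            self.head = nn.Linear(hidden, num_classes)

        def forward(self, video):                           # video: (B, T, 3, H, W)
            b, t = video.shape[:2]
            feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
            _, (h, _) = self.rnn(feats)
            return self.head(h[-1])

    # Smoke test on a dummy clip: batch of 2 videos, 16 frames of 64x64 RGB
    logits = CNNRNNClassifier(num_classes=50)(torch.randn(2, 16, 3, 64, 64))
    print(logits.shape)  # torch.Size([2, 50])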

Keywords: sign language recognition, deep learning, convolutional neural network, recurrent neural network

Procedia PDF Downloads 28
1092 The Computational Psycholinguistic Situational-Fuzzy Self-Controlled Brain and Mind System Under Uncertainty

Authors: Ben Khayut, Lina Fabri, Maya Avikhana

Abstract:

Models of modern Artificial Narrow Intelligence (ANI) cannot: a) function independently and continuously without human intelligence, which is needed to retrain and reprogram the ANI models, and b) think, understand, be conscious, cognize, or infer in a state of uncertainty or under changing situations and environmental objects. To eliminate these shortcomings and build a new generation of Artificial Intelligence systems, the paper proposes a conception, model, and method of a Computational Psycholinguistic Cognitive Situational-Fuzzy Self-Controlled Brain and Mind System Under Uncertainty (CPCSFSCBMSUU). The system uses a neural network as its computational memory, operates under uncertainty, and activates its functions by perceiving and identifying real objects, applying fuzzy situational control, forming images of these objects, and modeling the psychological, linguistic, cognitive, and neural values of their properties and features; the meanings of these values are identified, interpreted, generated, and formed taking into account the identified subject area, using the data, information, knowledge, and images accumulated in the Memory. The functioning of the CPCSFSCBMSUU is carried out by its subsystems for: fuzzy situational control of all processes; computational perception; identification of reactions and actions; psycholinguistic cognitive fuzzy logical inference; decision making; reasoning; systems thinking; planning; awareness; consciousness; cognition; intuition; wisdom; analysis and processing of psycholinguistic, subject, visual, signal, sound, and other objects; accumulation and use of data, information, and knowledge in the Memory; and communication and interaction with other computing systems, robots, and humans in order to solve joint tasks. To investigate the functional processes of the proposed system, the principles of situational control, fuzzy logic, psycholinguistics, informatics, and modern Data Science were applied. The proposed self-controlled Brain and Mind System is intended for use as a plug-in in multilingual subject applications.
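
The abstract gives no formal specification; to make the term "fuzzy situational control" concrete, here is a toy Mamdani-style inference step (membership functions, rules, and universes are all invented for illustration):

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    # Hypothetical situational variable: perceived uncertainty of the situation (0..1)
    u = 0.7
    low, high = tri(u, -0.5, 0.0, 0.6), tri(u, 0.4, 1.0, 1.5)

    # Two rules over a control universe (0..1): low uncertainty -> act, high -> gather data
    x = np.linspace(0.0, 1.0, 101)
    act    = np.minimum(low,  tri(x, 0.5, 1.0, 1.5))
    gather = np.minimum(high, tri(x, -0.5, 0.0, 0.5))
    agg = np.maximum(act, gather)                  # max-aggregation of rule outputs

    # Centroid defuzzification gives a crisp act-vs-gather control value
    crisp = float((x * agg).sum() / agg.sum())
    print(f"control output = {crisp:.2f}")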

Keywords: computational brain, mind, psycholinguistic, system, under uncertainty

Procedia PDF Downloads 177
1091 Teaching English in Low Resource-Environments: Problems and Prospects

Authors: Gift Chidi-Onwuta, Iwe Nkem Nkechinyere, Chikamadu Christabelle Chinyere

Abstract:

The teaching of English is a resource-driven activity that requires resource-rich classroom settings for the delivery of effective lessons and the acquisition of the interpersonal skills needed for integration into a target-language environment. However, throughout the world, English is often taught in low-resource classrooms. This paper aims to reveal the common problems associated with teaching English in low-resource environments and the prospects for teachers who find themselves in such under-equipped teaching settings. A self-structured and validated questionnaire combining closed-ended, open-ended, and scaled items was administered to teachers across five countries: Nigeria, Cameroon, Iraq, Turkey, and Sudan. The study adopts situational language teaching theory (SLTT), which emphasizes a performance-improvement imperative. We incline to this model because it maintains that learning must be fun and enjoyable, like playing a favorite sport, just as in real life; since teaching resources are what make learning engaging, we found this model apt for the current study. The study sourced teachers' perceptions of the accessibility and functionality of teaching material resources, the nature of teaching outcomes in resource-poor environments, their levels of involvement in improvisation, and the prospects associated with resource limitations. Data were analysed using percentages and presented in frequency tables. Results showed that a large number of teachers across these nations do not have access to sufficient productive resource materials to aid effective English language teaching. The findings indicate that teaching outcomes are affected by low material resources; however, the results also show certain advantages to teaching English with limited resources: flexibility and autonomy with students, and creativity and innovation among teachers. Results further revealed group work, story, critical-thinking strategies, flex, cardboards and flashcards, dictation, and dramatization as common strategies and materials adopted by teachers to overcome resource-related challenges in classrooms.

Keywords: teaching materials, low-resource environments, English language teaching, situational language theory

Procedia PDF Downloads 131
1090 Railway Process Automation to Ensure Human Safety with the Aid of IoT and Image Processing

Authors: K. S. Vedasingha, K. K. M. T. Perera, K. I. Hathurusinghe, H. W. I. Akalanka, Nelum Chathuranga Amarasena, Nalaka R. Dissanayake

Abstract:

Railways provide one of the most convenient and economically beneficial modes of transportation and have long been among the most popular. Analysis of past data reveals a considerable number of railway accidents that have cost precious lives and damaged national economies. Several major issues need to be addressed in the railways of South Asian countries in particular, as these countries fall into the developing category. The goal of this research is to minimize the causes of railway level-crossing accidents by developing a railway process automation system, since level crossings are high-risk areas prone to accidents and safety at these places is of utmost significance. This paper describes the implementation methodology and the outcomes of the study. The main purpose of the system is to ensure human safety by using Internet of Things (IoT) and image processing techniques. The system can detect the current location of the train and close the railway gate automatically; it can also perform the same action through a decision-making system that uses past data. Notably, both processes run in parallel: if the IoT-based process fails to close the railway gate due to a technical or network failure, the system identifies the current location and closes the gate through the decision-making system, which is a distinctive feature. The proposed system introduces two further features to reduce the causes of railway accidents: railway track crack detection and motion detection, both of which play a significant role in reducing risk. Moreover, the system is capable of detecting rule violations at a level crossing by using sensors. The proposed system was implemented as a prototype and tested with real-world scenarios, achieving above 90% accuracy.
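
The abstract describes two parallel paths, live IoT detection and a data-driven fallback; a toy sketch of that decision follows (the threshold, staleness window, and all names are hypothetical):

    import time
    from dataclasses import dataclass
    from typing import Optional

    GATE_CLOSE_KM = 1.5   # hypothetical distance at which the gate must close

    @dataclass
    class TrainReport:
        position_km: float  # last reported distance of the train from the crossing
        timestamp: float    # when the IoT report was received

    def should_close_gate(live: Optional[TrainReport], predicted_km: float) -> bool:
        """Use live IoT data when fresh; otherwise fall back to the history-based
        prediction so a network failure cannot leave the gate open."""
        if live is not None and time.time() - live.timestamp < 5.0:
            return live.position_km <= GATE_CLOSE_KM
        return predicted_km <= GATE_CLOSE_KM

    print(should_close_gate(None, predicted_km=1.2))  # -> True via the fallback path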

Keywords: crack detection, decision-making, image processing, Internet of Things, motion detection, prototype, sensors

Procedia PDF Downloads 177
1089 Investigation of Contact Pressure Distribution at Expanded Polystyrene Geofoam Interfaces Using Tactile Sensors

Authors: Chen Liu, Dawit Negussey

Abstract:

EPS (expanded polystyrene) geofoam, used as a lightweight material in geotechnical applications, is made of pre-expanded resin beads that form fused cellular micro-structures. The strength and deformation properties of geofoam blocks are determined by unconfined compression of small test samples between rigid loading plates, where applied loads are presumed to be supported uniformly over the entire mating end areas. Predictions of field performance on the basis of such laboratory tests widely over-estimate actual post-construction settlements and exaggerate predictions of long-term creep deformations. This investigation examined the development of contact pressures at a large number of discrete points, at low and high strain levels, for different densities of geofoam. The development of pressure patterns for fine and coarse interface material textures, as well as for molding-skin and hot-wire-cut geofoam surfaces, was examined. The lab testing showed that I-Scan tactile sensors are useful for detailed observation of contact pressures at many discrete points simultaneously. At a low strain level (1%), the lower-density EPS block shows less variation in localized stress distribution than higher-density EPS. At a high strain level (10%), the dense geofoam reached the sensor cut-off limit. The imprint and pressure patterns for different interface textures can be distinguished with tactile sensing, and the pressure sensing system can be used in many fields requiring real-time pressure detection. The findings provide a better understanding of EPS geofoam behavior and are anticipated to guide improvements in design methods, performance prediction, and the rapid construction of critical transportation infrastructure with geofoam in geotechnical applications.
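
By way of illustration only (values are synthetic, not the study's data), the non-uniformity that tactile sensing reveals can be summarized from a single pressure frame like so:

    import numpy as np

    # Synthetic stand-in for one I-Scan frame: an 8x8 grid of contact pressures (kPa)
    rng = np.random.default_rng(0)
    frame = rng.normal(loc=40.0, scale=8.0, size=(8, 8)).clip(min=0)

    mean_p = frame.mean()
    cv = frame.std() / mean_p     # coefficient of variation: overall non-uniformity
    peak = frame.max() / mean_p   # localized stress concentration relative to the mean
    print(f"mean = {mean_p:.1f} kPa, CV = {cv:.2f}, peak/mean = {peak:.2f}")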

Keywords: geofoam, pressure distribution, tactile pressure sensors, interface

Procedia PDF Downloads 173
1088 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN) Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises the Beatles and Queen datasets, sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (approximately 0 (anechoic), 1, 2, and 3 s) and four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will decrease dramatically under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios and recording distances, as well as under reverberant conditions with varying recording distance. LNCC showed performance as high as state-of-the-art Mel Frequency Cepstral Coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system.
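
The exact LNQT definition is the paper's contribution; the sketch below only illustrates the two ingredients named, quarter-tone-spaced triangular filters and a local normalization, under assumptions of our own (a moving-average normalization across bands and an arbitrary frequency range):

    import numpy as np

    def quarter_tone_centers(f_min=65.4, n=96):
        """Quarter-tone ladder: 24 steps per octave, starting near C2 (assumed)."""
        return f_min * 2.0 ** (np.arange(n) / 24.0)

    def triangular_bank(fft_freqs, centers):
        """Triangular filters, each spanning its two neighbouring quarter tones."""
        bank = np.zeros((len(centers), len(fft_freqs)))
        for i in range(1, len(centers) - 1):
            lo, c, hi = centers[i - 1], centers[i], centers[i + 1]
            rise = (fft_freqs - lo) / (c - lo)
            fall = (hi - fft_freqs) / (hi - c)
            bank[i] = np.clip(np.minimum(rise, fall), 0.0, None)
        return bank

    def locally_normalize(band_energies, width=5, eps=1e-8):
        """Divide each band by the average energy of its local neighbourhood."""
        local = np.convolve(band_energies, np.ones(width) / width, mode="same")
        return band_energies / (local + eps)

    fft_freqs = np.linspace(0.0, 8000.0, 1025)   # bins of a 2048-point FFT at 16 kHz
    bank = triangular_bank(fft_freqs, quarter_tone_centers())
    power = np.abs(np.random.randn(1025)) ** 2   # stand-in power spectrum of one frame
    lnqt_frame = locally_normalize(bank @ power)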

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 232