Search results for: three step search

714 In vitro Establishment and Characterization of Oral Squamous Cell Carcinoma Derived Cancer Stem-Like Cells

Authors: Varsha Salian, Shama Rao, N. Narendra, B. Mohana Kumar

Abstract:

Evolving evidence proposes the existence of a highly tumorigenic subpopulation of undifferentiated, self-renewing cancer stem cells responsible for resistance to conventional anti-cancer therapy, recurrence, metastasis and heterogeneous tumor formation. Importantly, the mechanisms exploited by cancer stem cells to resist chemotherapy remain poorly understood. Oral squamous cell carcinoma (OSCC) is one of the most frequently diagnosed cancer types in India and is commonly associated with alcohol and tobacco use. Therefore, the isolation and in vitro characterization of cancer stem-like cells from patients with OSCC is a critical step to advance the understanding of chemoresistance processes and to design therapeutic strategies. Accordingly, the present study aimed to establish and characterize cancer stem-like cells in vitro from OSCC. Primary cultures of cancer stem-like cell lines were established from tissue biopsies of patients with clinical evidence of an ulceroproliferative lesion and histopathological confirmation of OSCC. Cell viability, assessed by the trypan blue exclusion assay, exceeded 95% at passage 1 (P1), P2 and P3. The replication rate was determined by plating cells in 12-well plates and counting them at various time points of culture. Cells showed marked proliferative activity, with an average doubling time of less than 20 hrs. After being cultured for 10 to 14 days, cancer stem-like cells gradually aggregated and formed sphere-like bodies. More spheroid bodies were observed when cells were cultured in DMEM/F-12 under low serum conditions. Interestingly, cells with higher proliferative activity had a tendency to form more sphere-like bodies. Expression of specific markers, including membrane proteins and cell enzymes such as CD24, CD29, CD44, CD133, and aldehyde dehydrogenase 1 (ALDH1), is being explored for further characterization of the cancer stem-like cells. To summarize, the establishment of OSCC-derived cancer stem-like cells may provide scope for a better understanding of the causes of recurrence and metastasis in oral epithelial malignancies. In particular, identification and characterization studies of cancer stem-like cells in the Indian population are lacking, underscoring the need for such studies in a population where alcohol consumption and tobacco chewing are major risk habits.

Keywords: cancer stem-like cells, characterization, in vitro, oral squamous cell carcinoma

Procedia PDF Downloads 221
713 Application of Lattice Boltzmann Method to Different Boundary Conditions in a Two Dimensional Enclosure

Authors: Jean Yves Trepanier, Sami Ammar, Sagnik Banik

Abstract:

The Lattice Boltzmann Method is advantageous for simulating complex boundary conditions and solving for fluid flow parameters through streaming and collision processes. This paper studies three different test cases in a confined domain using the Lattice Boltzmann model. 1. An SRT (Single Relaxation Time) approach is used to simulate lid-driven cavity flow for different Reynolds numbers (100, 400 and 1000) with a domain aspect ratio of 1, i.e., a square cavity. A moment-based boundary condition is used for more accurate results. 2. A thermal lattice BGK (Bhatnagar-Gross-Krook) model is developed for Rayleigh-Benard convection for two test cases - horizontal and vertical temperature differences - considered separately for a Boussinesq incompressible fluid. The Rayleigh number is varied for both test cases (10^3 ≤ Ra ≤ 10^6), keeping the Prandtl number at 0.71. A stability criterion with a precise forcing scheme is used for a greater level of accuracy. 3. The phase change problem governed by the heat-conduction equation is studied using the enthalpy-based Lattice Boltzmann model with a single iteration for each time step, thus reducing the computational time. A double distribution function approach with a D2Q9 (density) model and a D2Q5 (temperature) model is used for two different test cases - conduction-dominated melting and convection-dominated melting. The solidification process is also simulated using the enthalpy-based method with a single distribution function using the D2Q5 model to provide a better understanding of the heat transport phenomenon. The domain for the test cases has an aspect ratio of 2, with some exceptions for a square cavity. An approximate velocity scale is chosen to ensure that the simulations are within the incompressible regime. Different parameters such as velocities, temperature, and Nusselt number are calculated for a comparative study with the existing literature. The simulated results demonstrate excellent agreement with existing benchmark solutions within an error limit of ± 0.05, which indicates the viability of this method for complex fluid flow problems.
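
As a rough illustration of the streaming-collision logic the abstract relies on, the sketch below performs repeated D2Q9 SRT (BGK) collision and streaming passes for a lid-driven-cavity-style setup. The grid size and relaxation time are illustrative assumptions, and the boundary treatment (the paper uses moment-based boundary conditions) is omitted, so this is a minimal sketch rather than the authors' implementation.

```python
# Minimal sketch of D2Q9 SRT (BGK) collision-streaming cycles; grid size and
# relaxation time are assumptions, and wall/lid boundaries are left out, so
# streaming here is effectively periodic.
import numpy as np

nx, ny = 64, 64            # lattice nodes (assumption)
tau = 0.6                  # single relaxation time (assumption)

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order truncated Maxwell-Boltzmann equilibrium distribution."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((nx, ny))                    # start from rest, unit density
ux = np.zeros((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for step in range(1000):
    # macroscopic moments
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: relax every population toward its local equilibrium
    f -= (f - equilibrium(rho, ux, uy)) / tau
    # streaming: shift each population along its lattice velocity
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    # wall and moving-lid boundary conditions would be applied here
```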

Keywords: BGK, Nusselt, Prandtl, Rayleigh, SRT

Procedia PDF Downloads 128
712 Accurate Mass Segmentation Using U-Net Deep Learning Architecture for Improved Cancer Detection

Authors: Ali Hamza

Abstract:

Accurate segmentation of breast ultrasound images is of paramount importance in enhancing the diagnostic capabilities of breast cancer detection. This study presents an approach utilizing the U-Net architecture for segmenting breast ultrasound images, aimed at improving the accuracy and reliability of mass identification within the breast tissue. The proposed method encompasses a multi-stage process. Initially, preprocessing techniques are employed to refine image quality and diminish noise interference. Subsequently, the U-Net architecture, a deep learning convolutional neural network (CNN), is employed for pixel-wise segmentation of regions of interest corresponding to potential breast masses. The U-Net's distinctive architecture, characterized by a contracting and an expansive pathway, enables accurate boundary delineation and detailed feature extraction. To evaluate the effectiveness of the proposed approach, an extensive dataset of breast ultrasound images encompassing diverse cases is employed. Quantitative performance metrics such as the Dice coefficient, Jaccard index, sensitivity, specificity, and Hausdorff distance are used to comprehensively assess the segmentation accuracy. Comparative analyses against traditional segmentation methods showcase the superiority of the U-Net architecture in capturing intricate details and accurately segmenting breast masses. The outcomes of this study emphasize the potential of the U-Net-based segmentation approach in bolstering breast ultrasound image analysis. The method's ability to reliably pinpoint mass boundaries holds promise for aiding radiologists in precise diagnosis and treatment planning. However, further validation and integration within clinical workflows are necessary to ascertain its practical clinical utility and facilitate seamless adoption by healthcare professionals. In conclusion, leveraging the U-Net architecture for breast ultrasound image segmentation provides a robust framework that can significantly enhance diagnostic accuracy and advance the field of breast cancer detection. This approach represents a pivotal step towards empowering medical professionals with a more potent tool for early and accurate breast cancer diagnosis.
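
For readers unfamiliar with the overlap metrics named above, the sketch below computes the Dice coefficient and Jaccard index for binary segmentation masks. The random masks and the 0.5 threshold are illustrative only and do not come from the study's dataset.

```python
# Dice and Jaccard (IoU) for binary masks; the masks below are synthetic.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    """IoU = |A∩B| / |A∪B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# toy example: a predicted probability map thresholded at 0.5 vs. a ground truth
pred_mask = np.random.rand(256, 256) > 0.5
gt_mask = np.random.rand(256, 256) > 0.5
print("Dice:", dice_coefficient(pred_mask, gt_mask))
print("IoU :", jaccard_index(pred_mask, gt_mask))
```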

Keywords: image segmentation, U-Net, deep learning, breast cancer detection, diagnostic accuracy, mass identification, convolutional neural network

Procedia PDF Downloads 84
711 Modelling Soil Inherent Wind Erodibility Using Artificial Intelligence and Hybrid Techniques

Authors: Abbas Ahmadi, Bijan Raie, Mohammad Reza Neyshabouri, Mohammad Ali Ghorbani, Farrokh Asadzadeh

Abstract:

In recent years, vast areas of Urmia Lake in Dasht-e-Tabriz have dried up, exposing saline sediments on the lake's coastal surface that are highly susceptible to wind erosion. This study was conducted to investigate wind erosion and its relationship to soil physicochemical properties, and to model wind erodibility (WE) using artificial intelligence techniques. For this purpose, 96 soil samples were collected from the 0-5 cm depth across 414,000 hectares using a stratified random sampling method. To measure WE, all samples (<8 mm) were exposed to 5 different wind velocities (9.5, 11, 12.5, 14.1 and 15 m s-1 at a height of 20 cm) in a wind tunnel, and the relationship of WE with soil physicochemical properties was evaluated. According to the results, WE varied within the range of 9.98-76.69 (g m-2 min-1)/(m s-1) with a mean of 10.21 and a coefficient of variation of 94.5%, showing a relatively high variation in the studied area. WE was significantly (P<0.01) affected by soil physical properties, including mean weight diameter, erodible fraction (secondary particles smaller than 0.85 mm) and the percentage of the secondary particle size classes 2-4.75, 1.7-2 and 0.1-0.25 mm. Results showed that the mean weight diameter, erodible fraction and percentage of the 0.1-0.25 mm size class demonstrated the strongest relationships with WE (coefficients of determination of 0.69, 0.67 and 0.68, respectively). This study also compared the efficiency of multiple linear regression (MLR), gene expression programming (GEP), an artificial neural network (MLP), an artificial neural network based on a genetic algorithm (MLP-GA) and an artificial neural network based on the whale optimization algorithm (MLP-WOA) in predicting soil wind erodibility in Dasht-e-Tabriz. Among the 32 measured soil variables, the percentages of fine sand, the 1.7-2.0 and 0.1-0.25 mm size classes (secondary particles) and organic carbon were selected as the model inputs by step-wise regression. Findings showed MLP-WOA to be the most powerful artificial intelligence technique (R2=0.87, NSE=0.87, ME=0.11 and RMSE=2.9) for predicting soil wind erodibility in the study area, followed by MLP-GA, MLP, GEP and MLR; the differences between these methods were significant according to the MGN test. Based on the above findings, MLP-WOA may be used as a promising method to predict soil wind erodibility in the study area.
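
The goodness-of-fit statistics quoted above (R2, NSE, ME, RMSE) can be computed as in the short sketch below. The observed and predicted erodibility values are made up for illustration; they are not data from the study.

```python
# Goodness-of-fit statistics for comparing measured and predicted wind
# erodibility; the example arrays are illustrative only.
import numpy as np

def evaluation_metrics(observed, predicted):
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residual = observed - predicted
    r2 = np.corrcoef(observed, predicted)[0, 1] ** 2     # coefficient of determination
    nse = 1.0 - np.sum(residual**2) / np.sum((observed - observed.mean())**2)
    me = residual.mean()                                 # mean error (bias)
    rmse = np.sqrt(np.mean(residual**2))
    return {"R2": r2, "NSE": nse, "ME": me, "RMSE": rmse}

# toy usage with made-up erodibility values in (g m-2 min-1)/(m s-1)
obs = [12.1, 9.8, 15.4, 7.6, 11.0]
pred = [11.5, 10.2, 14.8, 8.1, 10.6]
print(evaluation_metrics(obs, pred))
```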

Keywords: wind erosion, erodible fraction, gene expression programming, artificial neural network

Procedia PDF Downloads 71
710 Investigation of Deep Eutectic Solvents for Microwave Assisted Extraction and Headspace Gas Chromatographic Determination of Hexanal in Fat-Rich Food

Authors: Birute Bugelyte, Ingrida Jurkute, Vida Vickackaite

Abstract:

The most complicated step in the determination of volatile compounds in complex matrices is the separation of the analytes from the matrix. Traditional analyte separation methods (liquid extraction, Soxhlet extraction) require a lot of time and labour; moreover, there is a risk of losing the volatile analytes. In recent years, headspace gas chromatography has been used to determine volatile compounds. To date, traditional extraction solvents have been used in headspace gas chromatography. As a rule, such solvents are rather volatile; therefore, a large amount of solvent vapour enters the headspace together with the analyte. Because of that, the determination sensitivity of the analyte is reduced, and a huge solvent peak in the chromatogram can overlap with the peaks of the analytes. The sensitivity is also limited by the fact that the sample cannot be heated above the boiling point of the solvent. In 2018, it was suggested that traditional headspace gas chromatographic solvents be replaced with non-volatile, eco-friendly, biodegradable, inexpensive, and easy-to-prepare deep eutectic solvents (DESs). Generally, deep eutectic solvents have low vapour pressure, a relatively wide liquid range, and a much lower melting point than that of any of their individual components. These features make DESs very attractive as matrix media for application in headspace gas chromatography. Also, DESs are polar compounds, so they can be applied for microwave assisted extraction. The aim of this work was to investigate the possibility of applying deep eutectic solvents for microwave assisted extraction and headspace gas chromatographic determination of hexanal in fat-rich food. Hexanal is considered one of the most suitable indicators of the degree of lipid oxidation, as it is the main secondary oxidation product of linoleic acid, which is one of the principal fatty acids of many edible oils. Eight hydrophilic and hydrophobic deep eutectic solvents were synthesized, and the influence of temperature and microwaves on their headspace gas chromatographic behaviour was investigated. Using the most suitable DES, microwave assisted extraction conditions and headspace gas chromatographic conditions were optimized for the determination of hexanal in potato chips. Under the optimized conditions, the quality parameters of the developed technique were determined. The proposed technique was applied to the determination of hexanal in potato chips and other fat-rich food.

Keywords: deep eutectic solvents, headspace gas chromatography, hexanal, microwave assisted extraction

Procedia PDF Downloads 195
709 Surge in U.S. Citizens' Expatriation: Testing Structural Equation Modeling to Explain the Underlying Policy Rationale

Authors: Marco Sewald

Abstract:

Compared to the past, the number of Americans renouncing U.S. citizenship has risen. Even though these numbers are small compared to immigration figures, U.S. citizen expatriations have historically been much lower, making the uptick worrisome. In addition, the published lists and numbers from the U.S. government seem incomplete, with many cases not counted. Different branches of the U.S. government report different numbers, and no one seems to know exactly how big the real number is, even though the IRS and the FBI both track and/or publish numbers of Americans who renounce. While there is no single explanation, anecdotal evidence suggests this uptick is caused by global tax law and the increased compliance burdens imposed by U.S. lawmakers on U.S. citizens abroad. Within a research project, the question arose as to why a constantly growing number of U.S. citizens are expatriating – the answers are believed to help explain the underlying governmental policy rationale leading to such activities. Since it is impossible to locate former U.S. citizens to conduct a survey on their reasons, and the U.S. government does not comment on the reasons given within the process of expatriation, the chosen methodology is Structural Equation Modeling (SEM), in a first step re-using current surveys conducted by different researchers among U.S. citizens residing abroad during the last years. These surveys question the personal situation in the context of tax, compliance, citizenship and the likelihood of repatriating to the U.S. In general, SEM allows: (1) Representing, estimating and validating a theoretical model with linear (unidirectional or not) relationships. (2) Modeling causal relationships between multiple predictors (exogenous) and multiple dependent variables (endogenous). (3) Including unobservable latent variables. (4) Modeling measurement error: the degree to which observable variables describe latent variables. Moreover, SEM is very appealing since the results can be represented either by matrix equations or graphically. Results: the observed variables (items) of the construct are caused by various latent variables. The given surveys delivered high correlations, and it is therefore impossible to identify the distinct effect of each indicator on the latent variable – which was one desired result. Since every SEM comprises two parts, (1) a measurement model (outer model) and (2) a structural model (inner model), it seems necessary to extend the given data by conducting additional research and surveys to validate the outer model and gain the desired results.
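
The two SEM parts mentioned above can be stated compactly in standard LISREL-style notation; this is a generic formulation, not the variables or parameter estimates of this particular study.

```latex
% Measurement (outer) and structural (inner) model in generic SEM notation.
\[
\begin{aligned}
\mathbf{x} &= \Lambda_x \boldsymbol{\xi} + \boldsymbol{\delta}, \qquad
\mathbf{y} = \Lambda_y \boldsymbol{\eta} + \boldsymbol{\varepsilon}
&& \text{(measurement / outer model)}\\
\boldsymbol{\eta} &= B\,\boldsymbol{\eta} + \Gamma\,\boldsymbol{\xi} + \boldsymbol{\zeta}
&& \text{(structural / inner model)}
\end{aligned}
\]
```

Here the observed items x and y load on the latent exogenous and endogenous variables ξ and η through the loading matrices Λ, while B and Γ carry the structural relationships and δ, ε, ζ capture measurement and equation errors.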

Keywords: expatriation of U. S. citizens, SEM, structural equation modeling, validating

Procedia PDF Downloads 221
708 Depictions of Human Cannibalism and the Challenge They Pose to the Understanding of Animal Rights

Authors: Desmond F. Bellamy

Abstract:

Discourses about animal rights usually assume an ontological abyss between human and animal. This supposition of non-animality allows us to utilise and exploit non-humans, particularly those with commercial value, with little regard for their rights or interests. We can and do confine them, inflict painful treatments such as castration and branding, and slaughter them at an age determined only by financial considerations. This paper explores the way images and texts depicting human cannibalism reflect this deprivation of rights back onto our species and examines how this offers new perspectives on our granting or withholding of rights to farmed animals. The animals we eat – sheep, pigs, cows, chickens and a small handful of other species – are during processing de-animalised, turned into commodities, and made unrecognisable as formerly living beings. To do the same to a human requires the cannibal to enact another step – humans must first be considered as animals before they can be commodified or de-animalised. Different iterations of cannibalism in a selection of fiction and non-fiction texts will be considered: survivalism (necessitated by catastrophe or dystopian social collapse), the primitive savage of colonial discourses, and the inhuman psychopath. Each type of cannibalism shows alternative ways humans can be animalised and thereby dispossessed of both their human and animal rights. Human rights, summarised in the UN Universal Declaration of Human Rights as ‘life, liberty, and security of person’, are stubbornly denied to many humans, and are refused to virtually all farmed non-humans. How might this paradigm be transformed by seeing the animal victim replaced by an animalised human? People are fascinated as well as repulsed by cannibalism, as demonstrated by the upsurge of films on the subject in the last few decades. Cannibalism is, at its most basic, about envisaging and treating humans as objects: meat. It is on the dinner plate that the abyss between human and ‘animal’ is most challenged. We grasp at a conscious level that we are a species of animal and may become, if in the wrong place (e.g., shark-infested water), ‘just food’. Culturally, however, strong traditions insist that humans are much more than ‘just meat’ and deserve a better fate than torment and death. The billions of animals on death row awaiting human consumption would ask the same if they could. Depictions of cannibalism demonstrate in graphic ways that humans are animals, made of meat, and that we can also be butchered and eaten. These depictions of us as having the same fleshiness as non-human animals remind us that they have the same capacities for pain and pleasure as we do. Depictions of cannibalism, therefore, unconsciously aid in deconstructing the human/animal binary and give a unique glimpse into the often unnoticed repudiation of animal rights.

Keywords: animal rights, cannibalism, human/animal binary, objectification

Procedia PDF Downloads 138
707 Gender Equality in the Workplace in Iran - Strategies and Successes Against Systematic Bias

Authors: Leila Sadeghi

Abstract:

Gender equality is a critical concern in the workplace, particularly in Iran, where legal and social barriers contribute to significant disparities. This abstract presents a case study of Dahi Bondad Co., a company based in Tehran, Iran that recognized the urgency of addressing the gender gap within its organization. Through a comprehensive investigation, the company identified issues related to biased recruitment, pay disparities, promotion biases, internal barriers, and everyday boundaries. This abstract highlights the strategies implemented by Dahi Bondad Co. to combat these challenges and foster gender equality. The company revised its recruitment policies, eliminated gender-specific language in job advertisements, and implemented blind resume screening to ensure equal opportunities for all applicants. Comprehensive pay equity analyses were conducted, leading to salary adjustments based on qualifications and experience to rectify pay disparities. Clear and transparent promotion criteria were established, and training programs were provided to decision-makers to raise awareness about unconscious biases. Additionally, mentorship and coaching programs were introduced to support female employees in overcoming self-limiting beliefs and imposter syndrome. At the same time, practical workshops and gamification techniques were employed to boost confidence and encourage women to step out of their comfort zones. The company also recognized the importance of dress codes and allowed optional hijab-wearing, respecting local traditions while promoting individual freedom. As a result of these strategies, Dahi Bondad Co. successfully fostered a more equitable and empowering work environment, leading to increased job satisfaction for both male and female employees within a short timeframe. This case study serves as an example of practical approaches that human resource managers can adopt to address gender inequality in the workplace, providing valuable insights for organizations seeking to promote gender equality in similar contexts.

Keywords: gender equality, human resource strategies, legal barrier, social barrier, successful result, successful strategies, workplace in Iran

Procedia PDF Downloads 67
706 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators

Authors: Guenther Schuh, Michael Riesener, Frederic Diels

Abstract:

Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. Frequently, some of the functional requirements remain unknown until late stages of the product development. A way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer-oriented and marketable products. There are first approaches for combined, hybrid models comprising deterministic-normative methods like the Stage-Gate process and empirical-adaptive development methods like SCRUM on a project management level. However, the question of which development scopes can preferably be realized with either empirical-adaptive or deterministic-normative approaches remains largely unconsidered. In this context, a development scope constitutes a self-contained section of the overall development objective. Therefore, this paper focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors like a company’s technology ability, the prototype manufacturability and the potential solution space, as well as external factors like market accuracy, relevance and volatility, are analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First of all, each internal and external factor is rated in terms of its importance for the overall development task. Secondly, each requirement is evaluated against every internal and external factor with respect to its suitability for empirical-adaptive development. Finally, the total sums of the internal and external sides are combined into the Agile-Indicator. Thus, the Agile-Indicator constitutes a company-specific and application-related criterion, on which the allocation of empirical-adaptive and deterministic-normative development scopes can be based. In a last step, this indicator is used for a specific clustering of development scopes by application of the fuzzy c-means (FCM) clustering algorithm. The FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements that are subject to market uncertainty into empirical-adaptive or deterministic-normative development scopes.
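
A minimal sketch of this three-step logic is given below: factor weights and per-requirement ratings are combined into an indicator, and a small from-scratch fuzzy c-means pass groups requirements into two clusters (empirical-adaptive vs. deterministic-normative). All factor names, weights, ratings and the fuzzifier m are hypothetical; the paper's actual rating scheme and parameters are not reproduced here.

```python
# Hypothetical Agile-Indicator composition followed by a basic fuzzy c-means
# clustering of the resulting scores. Numbers are illustrative only.
import numpy as np

# step 1: importance weights of internal / external factors (assumption)
w_int = np.array([0.4, 0.35, 0.25])   # e.g. technology ability, prototypability, solution space
w_ext = np.array([0.5, 0.3, 0.2])     # e.g. market accuracy, relevance, volatility

# step 2: per-requirement suitability ratings for empirical-adaptive work (1..5)
ratings_int = np.array([[4, 3, 5], [2, 2, 1], [5, 4, 4], [1, 2, 2]])
ratings_ext = np.array([[5, 4, 3], [1, 2, 2], [4, 5, 4], [2, 1, 3]])

# step 3: compose the indicator from the weighted internal and external sides
agile_indicator = ratings_int @ w_int + ratings_ext @ w_ext
X = agile_indicator.reshape(-1, 1)

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        p = 2.0 / (m - 1.0)
        U = 1.0 / (d**p * np.sum(d**(-p), axis=1, keepdims=True))
    return centers, U

centers, memberships = fuzzy_c_means(X)
print("Agile-Indicator per requirement:", agile_indicator)
print("Cluster memberships:\n", memberships)
```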

Keywords: agile, highly iterative development, agile-indicator, product development

Procedia PDF Downloads 246
705 The Impact of External Technology Acquisition and Exploitation on Firms' Process Innovation Performance

Authors: Thammanoon Charmjuree, Yuosre F. Badir, Umar Safdar

Abstract:

There is a consensus among innovation scholars that knowledge is a vital antecedent of a firm’s innovation, e.g., process innovation. Recently, there has been an increasing amount of attention to more open approaches to innovation. This open model emphasizes the use of purposive flows of knowledge across the organization’s boundaries. Firms adopt an open innovation strategy to improve their innovation performance by bringing knowledge into the organization (inbound open innovation) to accelerate internal innovation or by transferring knowledge outside (outbound open innovation) to expand the markets for external use of innovation. Reviewing open innovation research reveals the following. First, the majority of existing studies have focused on inbound open innovation and less on outbound open innovation. Second, limited research has considered the possible interaction between both and how this interaction may impact the firm’s innovation performance. Third, scholars have focused mainly on the impact of open innovation strategy on product innovation and less on process innovation. Therefore, our knowledge of the relationship between firms’ inbound and outbound open innovation and how these two impact process innovation is still limited. This study focuses on the firm’s external technology acquisition (ETA), external technology exploitation (ETE) and the firm’s process innovation performance. ETA represents inbound openness, in which firms rely on the acquisition and absorption of external technologies to complement their technology portfolios. ETE, on the other hand, refers to commercializing technology assets exclusively or in addition to their internal application. This study hypothesized that both ETA and ETE have a positive relationship with process innovation performance and that ETE fully mediates the relationship between ETA and process innovation performance, i.e., ETA has a positive impact on ETE, and, in turn, ETE has a positive impact on process innovation performance. This study empirically explored these hypotheses in software development firms in Thailand. These firms were randomly selected from a list of software firms registered with the Department of Business Development, Ministry of Commerce of Thailand. Questionnaires were sent to 1689 firms. After follow-ups and periodic reminders, we obtained 329 (19.48%) completed usable questionnaires. Structural equation modeling (SEM) was used to analyze the data. The analysis of the responses from 329 firms provides support for our three hypotheses: First, the firm’s ETA has a positive impact on its process innovation performance. Second, the firm’s ETA has a positive impact on its ETE. Third, the firm’s ETE fully mediates the relationship between the firm’s ETA and its process innovation performance. This study fills a gap in the open innovation literature by examining the relationship between inbound (ETA) and outbound (ETE) open innovation and suggests that, in order to benefit from the promises of openness, firms must engage in both. The study goes one step further by explaining the mechanism through which ETA influences process innovation performance.
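
To make the mediation logic concrete, the sketch below estimates the ETA → ETE → performance chain with ordinary least squares on synthetic data. The study itself used SEM; this regression-based decomposition, the effect sizes and the noise levels are illustrative assumptions, not the authors' model or results.

```python
# Illustrative regression-based mediation check (ETA -> ETE -> performance)
# on synthetic data; only the sample size mirrors the reported n = 329.
import numpy as np

rng = np.random.default_rng(42)
n = 329
eta = rng.normal(size=n)                              # external technology acquisition
ete = 0.6 * eta + rng.normal(scale=0.8, size=n)       # assumed mediator path
perf = 0.5 * ete + rng.normal(scale=0.8, size=n)      # assumed outcome path

def ols(y, *xs):
    """OLS coefficients (intercept first) of y on the given regressors."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(ete, eta)[1]                      # ETA -> ETE
b, c_prime = ols(perf, ete, eta)[1:3]     # ETE -> perf and direct ETA -> perf
total = ols(perf, eta)[1]                 # total effect of ETA on perf
print(f"indirect a*b = {a*b:.3f}, direct c' = {c_prime:.3f}, total c = {total:.3f}")
# full mediation corresponds to c' close to 0 with a*b carrying the total effect
```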

Keywords: process innovation performance, external technology acquisition, external technology exploitation, open innovation

Procedia PDF Downloads 202
704 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through the application of predictive quality, the great potential for saving necessary quality control can be exploited by means of the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes. As a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is made more difficult by this data situation. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the costs of eliminating errors increase significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
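
The business-understanding question above can be framed as in the sketch below: the same synthetic test outcome is treated once as a continuous regression target (leakage volume flow) and once as a binary classification target (inspection pass/fail), and baseline models are compared. The data, feature count, leakage threshold and model choice are assumptions for illustration, not Bosch Rexroth data or the methods actually used in the paper.

```python
# Compare a regression and a classification framing of the same quality
# prediction task on synthetic data (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                          # synthetic process features
leakage = X @ rng.normal(size=8) + rng.normal(scale=2.0, size=2000)
passed = (leakage < np.quantile(leakage, 0.9)).astype(int)   # assumed spec limit

X_tr, X_te, y_reg_tr, y_reg_te, y_clf_tr, y_clf_te = train_test_split(
    X, leakage, passed, test_size=0.3, random_state=0)

reg = RandomForestRegressor(random_state=0).fit(X_tr, y_reg_tr)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_clf_tr)

print("regression  R2      :", r2_score(y_reg_te, reg.predict(X_te)))
print("classification acc. :", accuracy_score(y_clf_te, clf.predict(X_te)))
```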

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 144
703 Emoji, the Language of the Future: An Analysis of the Usage and Understanding of Emoji across User-Groups

Authors: Sakshi Bhalla

Abstract:

On the one hand, given their seemingly simplistic, near-universal usage and understanding, emoji are dismissed as a potential step back in the evolution of communication. On the other, their effectiveness, pervasiveness, and adaptability across and within contexts are undeniable. In this study, the responses of 40 people (categorized by age) were recorded based on a uniform two-part questionnaire in which they were required to a) identify the meaning of 15 emoji placed in isolation, and b) interpret the meaning of the same 15 emoji when placed in a context-defining posting on Twitter. The second set of responses was studied on the basis of deviation from the responses that identified the emoji in isolation, as well as from the originally intended meaning ascribed to each emoji. Based on an analysis of these results, it was found that each of the five age categories uses, understands and perceives emoji differently, which could be attributed to the degree of exposure they have undergone. For example, in the case of the youngest category (aged < 20), it was observed that they were the least accurate at correctly identifying emoji in isolation (~55%). Further, their proclivity to change their response with respect to the context was also the lowest (~31%). However, an analysis of their individual responses showed that these first-borns of social media seem to have reached a point where emoji no longer inspire their most literal meanings. The meaning and implication of these emoji have evolved to imply their context-derived meanings, even when placed in isolation. These trends carry forward meaningfully for the other four groups as well. In the case of the oldest category (aged > 35), however, the trends indicated inaccuracy and, therefore, a higher proclivity to change their responses. When studied as a continuum, the responses indicate that slowly and steadily, emoji are evolving from pictograms to ideograms. That is to suggest that they do not just indicate a one-to-one relation between a singular form and a singular meaning. In fact, they communicate increasingly complicated ideas. This is much like the evolution of ancient hieroglyphics on papyrus reed or cuneiform on Sumerian clay tablets, which evolved from simple pictograms to progressively more complex ideograms. This evolution within communication runs parallel to, and is contingent on, the evolution of the platforms that carry it. What’s astounding is the capacity of humans to leverage different platforms to facilitate such changes. Twitterese, as it is now called, is one of the instances where language is adapting to the demands of the digital world. That it does not have a spoken component or an ostensible grammar, and lacks standardization of use and meaning, as some might suggest, may seem like impediments to qualifying it as the 'language' of the digital world. However, that kind of declarative remains a function of time, and time alone.

Keywords: communication, emoji, language, Twitter

Procedia PDF Downloads 95
702 Understanding the Relationship between Community and the Preservation of Cultural Landscape - Focusing on Organically Evolved Landscapes

Authors: Adhithy Menon E., Biju C. A.

Abstract:

The concept of preserving heritage monuments was first introduced to the public in the 1960s. In the 1990s, the concept of cultural landscapes gained importance, emphasizing the role of culture and heritage in the context of the landscape. This paper is primarily concerned with the second category of cultural landscapes, organically evolved landscapes, as they represent a complex network of the tangible, the intangible, and the environment, and the connections they share with the communities in which they are situated. The United Nations Educational, Scientific, and Cultural Organization has identified 39 cultural sites as being in danger, including the Iranian city of Bam and the historic city of Zabid in Yemen. To ensure their protection in the future, it is necessary to conduct a detailed analysis of the factors contributing to this degradation. An analysis of selected cultural landscapes from around the world is conducted to determine which parameters cause their degradation. The paper follows the objectives of understanding cultural landscapes and their importance for development; examining various criteria for identifying cultural landscapes, their various classifications, as well as agencies that focus on their protection; identifying and analyzing the parameters contributing to the deterioration of cultural landscapes based on literature and case studies (the cultural landscapes of Sintra, Rio de Janeiro, and Varanasi); and, as a final step, developing strategies to enhance deteriorating cultural landscapes based on these parameters. The major findings of the study concern the impact of community on the parameters derived - integrity (natural factors, natural disasters, demolition of structures, deterioration of materials), authenticity (living elements, sense of place, building techniques, religious context, artistic expression), public participation (revenue, dependence on locale), awareness (demolition of structures, resource management), disaster management, environmental impact, and maintenance of the cultural landscape (linkages with other sites, dependence on locale, revenue, resource management). The parameters of authenticity, public participation, awareness, and maintenance of the cultural landscape are directly related to the community in which the cultural landscape is located. Therefore, by focusing on the community and addressing the parameters identified, the deterioration curve of cultural landscapes can be altered.

Keywords: community, cultural landscapes, heritage, organically evolved, public participation

Procedia PDF Downloads 87
701 Symphony of Healing: Exploring Music and Art Therapy’s Impact on Chemotherapy Patients with Cancer

Authors: Sunidhi Sood, Drashti Narendrakumar Shah, Aakarsh Sharma, Nirali Harsh Panchal, Maria Karizhenskaia

Abstract:

Cancer is a global health concern, causing a significant number of deaths, with chemotherapy being a standard treatment method. However, chemotherapy often induces side effects that profoundly impact the physical and emotional well-being of patients, lowering their overall quality of life (QoL). This research aims to investigate the potential of music and art therapy as holistic adjunctive therapies for cancer patients undergoing chemotherapy, offering non-pharmacological support. This is achieved through a comprehensive review of existing literature with a focus on the following themes: stress and anxiety alleviation, emotional expression and coping skill development, transformative changes, and pain management with mood upliftment. A systematic search was conducted using Medline, Google Scholar, and the St. Lawrence College Library, considering original, peer-reviewed research papers published from 2014 to 2023. The review solely incorporated studies focusing on the impact of music and art therapy on the health and overall well-being of cancer patients undergoing chemotherapy in North America. The findings from 16 studies involving pediatric oncology patients, females affected by breast cancer, and general oncology patients show that music and art therapies significantly reduce anxiety (standardized mean difference: -1.10) and improve perceived stress (median change: -4.0) and the overall quality of life of cancer patients undergoing chemotherapy. Furthermore, music therapy has demonstrated the potential to decrease anxiety, depression, and pain during infusion treatments (average changes in resilience scale: 3.4 and 4.83 for instrumental and vocal music therapy, respectively). These data call for consideration of the integration of music and art therapy into supportive care programs for cancer patients undergoing chemotherapy. Moreover, they provide guidance to healthcare professionals and policymakers, facilitating the development of patient-centered strategies for cancer care in Canada. Further research is needed in collaboration with qualified therapists to examine applicability and to explore and evaluate patients’ perceptions and expectations in order to optimize the therapeutic benefits and overall patient experience. In conclusion, integrating music and art therapy into cancer care promises to substantially enhance the well-being and psychosocial state of patients undergoing chemotherapy. However, due to the small sample sizes considered in existing studies, further research is needed to bridge the knowledge gap and ensure a comprehensive, patient-centered approach, ultimately enhancing the quality of life (QoL) for individuals facing the challenges of cancer treatment.

Keywords: anxiety, cancer, chemotherapy, depression, music and art therapy, pain management, quality of life

Procedia PDF Downloads 74
700 Arc Interruption Design for DC High Current/Low SC Fuses via Simulation

Authors: Ali Kadivar, Kaveh Niayesh

Abstract:

This report summarizes a simulation-based approach to estimate the current interruption behavior of a fuse element utilized in a DC network protecting battery banks under different stresses. Due to the internal resistance of the batteries, the short circuit current is very close to the nominal current, which makes the fuse design tricky. The base configuration considered in this report consists of five fuse units in parallel. The simulations are performed using a multi-physics software package, COMSOL® 5.6, and the necessary material parameters have been calculated using two other software packages. The first phase of the simulation starts with the heating of the fuse elements resulting from the current flow through the fusing element. In this phase, the heat transfer between the metallic strip and the adjacent materials results in melting and evaporation of the filler and housing before the aluminum strip is evaporated and the current flow in the evaporated strip is cut off, or an arc is eventually initiated. The initiated arc starts to expand, so the entire metallic strip is ablated, and a long arc of around 20 mm is created within the first 3 milliseconds after arc initiation (v_elongation = 6.6 m/s). The final stage of the simulation is related to the arc simulation and its interaction with the external circuitry. Because of the strong ablation of the filler material and the venting of the arc caused by the melting and evaporation of the filler and housing before an arc initiates, the arc is assumed to burn in almost pure ablated material. To be able to model this arc precisely, one more step related to the derivation of the transport coefficients of the plasma in ablated urethane was necessary. The results indicate that an arc current interruption, in this case, will not be achieved within the first tens of milliseconds. In a further study, considering two series elements, the arc was interrupted within a few milliseconds. A very important aspect in this context is the potential impact of the many broken strips parallel to the one where the arc occurs. The generated arcing voltage is also applied to the other broken strips connected in parallel with the arcing path. As the gap between the other strips is very small, a large voltage of a few hundred volts generated during the current interruption may eventually lead to a breakdown of another gap. As two arcs in parallel are not stable, one of the arcs will extinguish, and the total current will be carried by one single arc again. This process may be repeated several times if the generated voltage is very large. The ultimate result would be that the current interruption may be delayed.

Keywords: DC network, high current / low SC fuses, FEM simulation, parallel fuses

Procedia PDF Downloads 65
699 Relaxor Ferroelectric Lead-Free Na₀.₅₂K₀.₄₄Li₀.₀₄Nb₀.₈₄Ta₀.₁₀Sb₀.₀₆O₃ Ceramic: Giant Electromechanical Response with Intrinsic Polarization and Resistive Leakage Analyses

Authors: Abid Hussain, Binay Kumar

Abstract:

Environment-friendly lead-free Na₀.₅₂K₀.₄₄Li₀.₀₄Nb₀.₈₄Ta₀.₁₀Sb₀.₀₆O₃ (NKLNTS) ceramic was synthesized by the solid-state reaction method in search of a potential candidate to replace lead-based ceramics such as PbZrO₃-PbTiO₃ (PZT) and Pb(Mg₁/₃Nb₂/₃)O₃-PbTiO₃ (PMN-PT) for various applications. The ceramic was calcined at 850 ᵒC and sintered at 1090 ᵒC. The powder X-Ray Diffraction (XRD) pattern revealed the formation of a pure perovskite phase with tetragonal symmetry and space group P4mm in the synthesized ceramic. The surface morphology of the ceramic was studied using the Field Emission Scanning Electron Microscopy (FESEM) technique. Well-defined grains with a homogeneous microstructure were observed, with an average grain size of ~ 0.6 µm. A very large value of the piezoelectric charge coefficient (d₃₃ ~ 754 pm/V) was obtained for the synthesized ceramic, which indicated its potential for use in transducers and actuators. In dielectric measurements, a high ferroelectric to paraelectric phase transition temperature (Tm ~ 305 ᵒC), a high maximum dielectric permittivity of ~ 2110 (at 1 kHz) and a small dielectric loss (< 0.6) were obtained, which suggested the utility of the NKLNTS ceramic in high-temperature ferroelectric devices. Also, the degree of diffuseness (γ) was found to be 1.61, which confirmed relaxor ferroelectric behavior in the NKLNTS ceramic. The P-E hysteresis loop was traced, and the spontaneous polarization was found to be ~ 11 μC/cm² at room temperature. The pyroelectric coefficient was found to be very high (p ∼ 1870 μCm⁻² ᵒC⁻¹), indicating the applicability of the ceramic in pyroelectric detector applications, including fire and burglar alarms, infrared imaging, etc. The NKLNTS ceramic showed fatigue-free behavior over 10⁷ switching cycles. A remanent hysteresis task was performed to determine the true-remanent (or intrinsic) polarization of the NKLNTS ceramic by eliminating non-switchable components; it showed that a major portion (83.10%) of the remanent polarization (Pr) is switchable, which makes NKLNTS ceramic a suitable material for memory switching device applications. A Time-Dependent Compensated (TDC) hysteresis task was carried out, which revealed the resistive-leakage-free nature of the ceramic. The performance of the NKLNTS ceramic was found to be superior to that of many lead-based piezoceramics, and hence it can effectively replace them for use in piezoelectric, pyroelectric and long-duration ferroelectric applications.

Keywords: dielectric properties, ferroelectric properties , lead free ceramic, piezoelectric property, solid state reaction, true-remanent polarization

Procedia PDF Downloads 136
698 Bandgap Engineering of CsMAPbI3-xBrx Quantum Dots for Intermediate Band Solar Cell

Authors: Deborah Eric, Abbas Ahmad Khan

Abstract:

Lead halide perovskite quantum dots have attracted immense scientific and technological interest for photovoltaic applications because of their remarkable optoelectronic properties. In this paper, we have simulated CsMAPbI3-xBrx based quantum dots to implement their use in intermediate band solar cells (IBSC). These types of materials exhibit optical and electrical properties distinct from their bulk counterparts due to quantum confinement. The conceptual framework provides a route to analyze the electronic properties of quantum dots. The quantum dot layer optimizes the position and bandwidth of the IB that lies in the forbidden region of the conventional bandgap. A three-dimensional MAPbI3 quantum dot (QD) with spherical, cubic, and conical geometries has been embedded in the CsPbBr3 matrix. Bound-state wavefunctions give rise to a miniband, which results in the formation of the IB. If there is more than one miniband, then there is a possibility of having more than one IB. Optimization of the QD size results in more IBs in the forbidden region. The one-band time-independent Schrödinger equation, using the effective mass approximation with a step potential barrier, is solved to compute the electronic states. The envelope function approximation with the BenDaniel-Duke boundary condition is used in combination with the Schrödinger equation for the calculation of eigenenergies, and the eigenenergies of the quasi-bound states are obtained from an eigenvalue study. The transfer matrix method is used to study the quantum tunneling of the MAPbI3 QD through neighbor barriers of CsPbI3. Electronic states are computed using the Schrödinger equation with the effective mass approximation by considering the quantum dot and wetting layer assembly. The results have shown that varying the quantum dot size affects the energy pinning of the QD. Changes in the ground, first, and second state energies have been observed. The QD wavefunction is non-zero at the center and decays exponentially to zero at the boundaries. Quasi-bound states are characterized by envelope functions. It has been observed that conical quantum dots have the maximum ground state energy at a small radius. Increasing the wetting layer thickness exhibits energy signatures similar to the bulk material for each QD size.
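
The sketch below illustrates the kind of one-band effective-mass calculation described above: a finite-difference solve of the time-independent Schrödinger equation for a 1D well with a step potential barrier. The well width, barrier height and effective mass are illustrative values rather than the MAPbI3/CsPbBr3 parameters of the paper, and the constant-mass form neglects the BenDaniel-Duke position-dependent-mass correction for simplicity.

```python
# 1D finite-difference Schrodinger solver with effective mass and a step
# potential; all material parameters below are assumptions for illustration.
import numpy as np

hbar = 1.054571817e-34        # J s
m0 = 9.1093837015e-31         # kg
eV = 1.602176634e-19          # J

m_eff = 0.2 * m0              # assumed effective mass
L = 60e-9                     # simulation box length (m)
well = 8e-9                   # well (dot) width (m), assumption
V0 = 0.4 * eV                 # step barrier height (J), assumption
N = 1000                      # grid points

x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
V = np.where(np.abs(x) < well/2, 0.0, V0)   # 0 inside the dot, V0 outside

# H = -(hbar^2 / 2 m*) d2/dx2 + V discretised on a uniform grid
kin = hbar**2 / (2*m_eff*dx**2)
H = np.diag(2*kin + V) - kin*np.eye(N, k=1) - kin*np.eye(N, k=-1)
energies, states = np.linalg.eigh(H)

bound = energies[energies < V0] / eV        # quasi-bound levels below the barrier
print("lowest bound-state energies (eV):", bound[:3])
```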

Keywords: perovskite, intermediate bandgap, quantum dots, miniband formation

Procedia PDF Downloads 163
697 Biodegradation of Phenazine-1-Carboxylic Acid by Rhodanobacter sp. PCA2 Proceeds via Decarboxylation and Cleavage of Nitrogen-Containing Ring

Authors: Miaomiao Zhang, Sabrina Beckmann, Haluk Ertan, Rocky Chau, Mike Manefield

Abstract:

Phenazines are a large class of nitrogen-containing aromatic heterocyclic compounds, which are almost exclusively produced by bacteria from diverse genera, including Pseudomonas and Streptomyces. Phenazine-1-carboxylic acid (PCA), one of the ‘core’ phenazines, is converted from chorismic acid before being modified into other phenazine derivatives in different cells. Phenazines have attracted enormous interest because of their multiple roles in biocontrol, bacterial interaction, biofilm formation and the fitness of their producers. However, in spite of their ecological importance, degradation as part of the fate of phenazines has so far received only very limited attention. Here, to isolate PCA-degrading bacteria, 200 mg L-1 PCA was supplied as the sole carbon, nitrogen and energy source in a minimal mineral medium. Quantitative PCR and reverse-transcription PCR were employed to study the abundance and activity, respectively, of the functional gene MFORT 16269 in PCA degradation. Intermediates and products of PCA degradation were identified with LC-MS/MS. After enrichment and isolation, a PCA-degrading strain was selected from soil and designated Rhodanobacter sp. PCA2 based on full 16S rRNA sequencing. As determined by HPLC, strain PCA2 consumed 200 mg L-1 (836 µM) PCA at a rate of 17.4 µM h-1, accompanied by a significant increase in cell yield from 1.92 × 10⁵ to 3.11 × 10⁶ cells per mL. Strain PCA2 was capable of degrading other phenazines as well, including phenazine (4.27 µM h-1), pyocyanin (2.72 µM h-1), neutral red (1.30 µM h-1) and 1-hydroxyphenazine (0.55 µM h-1). Moreover, during the incubation, transcript copies of the MFORT 16269 gene increased significantly from 2.13 × 10⁶ to 8.82 × 10⁷ copies mL-1, which was 2.77 times faster than the increase in the corresponding gene copy number (2.20 × 10⁶ to 3.32 × 10⁷ copies mL-1), indicating that the MFORT 16269 gene was activated and played a role in PCA degradation. As analyzed by LC-MS/MS, decarboxylation of the ring structure was determined to be the first step of PCA degradation, followed by cleavage of the nitrogen-containing ring by a dioxygenase, which converted phenazine to nitrosobenzene. Subsequently, phenylhydroxylamine was detected after incubation for two days and was then transformed into aniline and catechol. Additionally, genomic and proteomic analyses were also carried out for strain PCA2. Overall, the findings presented here show that the newly isolated strain Rhodanobacter sp. PCA2 is capable of degrading phenazines through decarboxylation and cleavage of the nitrogen-containing ring, during which the MFORT 16269 gene is activated and plays an important role.

Keywords: decarboxylation, MFORT16269 gene, phenazine-1-carboxylic acid degradation, Rhodanobacter sp. PCA2

Procedia PDF Downloads 223
696 Attributable Mortality of Nosocomial Infection: A Nested Case Control Study in Tunisia

Authors: S. Ben Fredj, H. Ghali, M. Ben Rejeb, S. Layouni, S. Khefacha, L. Dhidah, H. Said

Abstract:

Background: The Intensive Care Unit (ICU) provides continuous care and uses a high level of treatment technologies. Although developed country hospitals allocate only 5–10% of beds to critical care areas, approximately 20% of nosocomial infections (NI) occur among patients treated in ICUs, whereas in developing countries the situation is even less well documented. The aim of our study was to assess mortality rates in ICUs and to determine their predictive factors. Methods: We carried out a nested case-control study in a 630-bed public tertiary care hospital in Eastern Tunisia. We included in the study all patients hospitalized for more than two days in the surgical or medical ICU during the entire period of the surveillance. Cases were patients who died before ICU discharge, whereas controls were patients who survived to discharge. NIs were diagnosed according to the definitions of the ‘Comité Technique des Infections Nosocomiales et les Infections Liées aux Soins’ (CTINLIS, France). Data collection was based on the protocol of Rea-RAISIN 2009 of the National Institute for Health Watch (InVS, France). Results: Overall, 301 patients were enrolled from the medical and surgical ICUs. The mean age was 44.8 ± 21.3 years. The crude ICU mortality rate was 20.6% (62/301). It was 35.8% for patients who acquired at least one NI during their stay in the ICU and 16.2% for those without any NI, yielding an overall crude excess mortality rate of 19.6% (OR= 2.9, 95% CI, 1.6 to 5.3). The population-attributable fraction due to ICU-NI in patients who died before ICU discharge was 23.46% (95% CI, 13.43%–29.04%). Overall, 62 case patients were compared to 239 control patients for the final analysis. Case patients and control patients differed by age (p=0.003), simplified acute physiology score II (p < 10⁻³), NI (p < 10⁻³), nosocomial pneumonia (p=0.008), infection upon admission (p=0.002), immunosuppression (p=0.006), days of intubation (p < 10⁻³), tracheostomy (p=0.004), days with urinary catheterization (p < 10⁻³), days with CVC (p=0.03), and length of stay in the ICU (p=0.003). Multivariate analysis identified the following factors: age older than 65 years (OR, 5.78 [95% CI, 2.03-16.05], p=0.001), duration of intubation 1-10 days (OR, 6.82 [95% CI, 1.90-24.45], p=0.003), duration of intubation > 10 days (OR, 11.11 [95% CI, 2.85-43.28], p=0.001), duration of CVC 1-7 days (OR, 6.85 [95% CI, 1.71-27.45], p=0.007) and duration of CVC > 7 days (OR, 5.55 [95% CI, 1.70-18.04], p=0.004). Conclusion: While surveillance provides important baseline data, successful trials with more active intervention protocols adopting a multimodal approach for the prevention of nosocomial infection prompted us to consider the feasibility of a similar trial in our context. Therefore, the implementation of an efficient infection control strategy is a crucial step to improve the quality of care.
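
The crude effect measure quoted above can be reproduced approximately with a standard 2x2-table calculation, as sketched below. The cell counts are not reported in the abstract; they are reconstructed roughly from the stated mortality rates (35.8% with NI, 16.2% without, 62 deaths out of 301 patients) and are illustrative only, although they land close to the reported OR of 2.9 (95% CI 1.6 to 5.3).

```python
# Odds ratio with a Wald 95% confidence interval from an approximate 2x2 table
# (counts reconstructed from the reported percentages, not published data).
import math

a, b = 24, 43      # NI and died / NI and survived (approximation)
c, d = 38, 196     # no NI and died / no NI and survived (approximation)

or_hat = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(or_hat) - 1.96 * se_log_or)
hi = math.exp(math.log(or_hat) + 1.96 * se_log_or)
print(f"OR = {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```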

Keywords: intensive care unit, mortality, nosocomial infection, risk factors

Procedia PDF Downloads 406
694 Structural and Morphological Characterization of the Biomass of Aquatic Macrophyte (Egeria densa) Submitted to Thermal Pretreatment

Authors: Joyce Cruz Ferraz Dutra, Marcele Fonseca Passos, Rubens Maciel Filho, Douglas Fernandes Barbin, Gustavo Mockaitis

Abstract:

The search for alternatives to control hunger in the world has generated a major environmental problem. Intensive fish production systems can cause an imbalance in the aquatic environment, triggering the phenomenon of eutrophication. Currently, there are many ways to control the growth of aquatic plants, such as mechanical removal; however, difficulties arise regarding their final destination. Egeria densa is a species of submerged aquatic macrophyte rich in cellulose and with low concentrations of lignin. By applying the concept of second-generation energy, which uses lignocellulose for energy production, the reuse of these aquatic macrophytes (Egeria densa) for biofuel production can become an interesting alternative. In order to make lignocellulosic sugars available for effective fermentation, it is important to use pretreatments to separate the components and modify the structure of the cellulose, thus facilitating the attack of the microorganisms responsible for the fermentation. Therefore, the objective of this work was to evaluate the structural and morphological transformations occurring in the biomass of aquatic macrophytes (E. densa) submitted to a thermal pretreatment. The samples were collected in an intensive fish farm in the lower São Francisco dam, in the northeastern region of Brazil. After collection, the samples were dried in a ventilation oven at 65 °C and milled in a knife mill to 5 mm. A duplicate assay was carried out, comparing the in natura biomass with the thermally pretreated biomass (MT). The MT sample was treated in an autoclave at a temperature of 121 °C and a pressure of 1.1 atm for 30 minutes. After this procedure, the biomass was characterized in terms of degree of crystallinity and morphology, using X-ray diffraction (XRD) and scanning electron microscopy (SEM), respectively. The results showed a decrease of 11% in the crystallinity index (% CI) of the pretreated biomass, indicating structural modification of the cellulose and a greater presence of amorphous structures. Increases in the porosity and surface roughness of the samples were also observed. These results suggest that the biomass may become more accessible to the hydrolytic enzymes of fermenting microorganisms. Therefore, the morphological transformations caused by the thermal pretreatment may be favorable for a subsequent fermentation and, consequently, a higher yield of biofuels. Thus, the use of thermally pretreated aquatic macrophytes (E. densa) can be an environmentally, financially and socially sustainable alternative. In addition, it represents a control measure for the aquatic environment, which can generate income (biogas production) and support the maintenance of fish farming activities in local communities.

Keywords: aquatics macrophyte, biofuels, crystallinity, morphology, pretreatment thermal

Procedia PDF Downloads 330
694 Design and Assessment of Base Isolated Structures under Spectrum-Compatible Bidirectional Earthquakes

Authors: Marco Furinghetti, Alberto Pavese, Michele Rinaldi

Abstract:

Concave Surface Slider devices have been increasingly used in real applications for the seismic protection of both bridge and building structures. Several research activities have been carried out in order to investigate the lateral response of this typology of devices, and a reasonably high level of knowledge has been reached. If a radial analysis is performed, the frictional force is always aligned with the restoring force, whereas under bidirectional seismic events a bi-axial interaction between the directions of motion occurs, due to the step-wise projection of the main frictional force, which is assumed to be aligned with the trajectory of the isolator. Nonetheless, if non-linear time history analyses have to be performed, standard codes provide precise rules for the definition of an averagely spectrum-compatible set of accelerograms in radial conditions, whereas for bidirectional motions different combinations of the single-component spectra can be found. Moreover, software for the adjustment of natural accelerograms is nowadays available, which leads to a higher quality of spectrum-compatibility and to a smaller dispersion of results for radial motions. In this work, a simplified design procedure is defined for building structures base-isolated by means of Concave Surface Slider devices. Different case study structures have been analyzed. In a first stage, the capacity curve was computed by means of non-linear static analyses on the fixed-base structures: inelastic fiber elements were adopted and different direction angles of the lateral forces were studied. Thanks to these results, a linear elastic Finite Element Model was defined, characterized by the same global stiffness as the linear elastic branch of the non-linear capacity curve. Then, non-linear time history analyses were performed on the base-isolated structures by applying seven bidirectional seismic events. The spectrum-compatibility of the bidirectional earthquakes was studied by considering different combinations of the single components and by adjusting single records: thanks to the proposed procedure, the results show a small dispersion and a good agreement with the assumed design values.
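
As a rough illustration of the bi-axial interaction described above, the following sketch evaluates the horizontal force of a single Concave Surface Slider under the usual rigid-slider idealization with a constant friction coefficient: the pendulum restoring force follows the displacement vector, while the frictional force is projected step-wise onto the instantaneous direction of motion. The numerical values and the function name are illustrative assumptions, not parameters of the case study structures.

```python
import numpy as np

def css_device_force(u, v, W, R, mu):
    """Horizontal force returned by a Concave Surface Slider under bidirectional motion.

    u  : 2D displacement vector of the slider [m]
    v  : 2D velocity vector [m/s]
    W  : vertical load carried by the device [N]
    R  : effective radius of curvature [m]
    mu : friction coefficient (assumed constant here)
    """
    restoring = -(W / R) * u                       # pendulum restoring force
    speed = np.linalg.norm(v)
    # friction projected onto the instantaneous direction of motion (trajectory)
    friction = -mu * W * v / speed if speed > 1e-9 else np.zeros(2)
    return restoring + friction

# Example: motion at 45 degrees couples the x and y force components,
# even though the displacement components differ
f = css_device_force(u=np.array([0.10, 0.05]), v=np.array([0.3, 0.3]),
                     W=2.0e6, R=3.0, mu=0.06)
print(f)
```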

Keywords: concave surface slider, spectrum-compatibility, bidirectional earthquake, base isolation

Procedia PDF Downloads 292
693 Strategic Analysis of Energy and Impact Assessment of Microalgae Based Biodiesel and Biogas Production in Outdoor Raceway Pond: A Life Cycle Perspective

Authors: T. Sarat Chandra, M. Maneesh Kumar, S. N. Mudliar, V. S. Chauhan, S. Mukherji, R. Sarada

Abstract:

A life cycle assessment (LCA) of biodiesel production from the freshwater microalga Scenedesmus dimorphus cultivated in an open raceway pond was performed. Various scenarios for biodiesel production were simulated using primary and secondary data. The parameters varied in the modelled scenarios were related to biomass productivity, mode of culture mixing and type of energy source. The process steps included algae cultivation in open raceway ponds, harvesting by chemical flocculation, dewatering by the mechanical drying option (MDO), followed by extraction, reaction and purification. Anaerobic digestion of the defatted algal biomass (DAB) for biogas generation was considered as a co-product allocation, and the energy derived from the DAB was used in the upstream part of the process. The scenarios were analysed for energy demand, emissions and environmental impacts within boundary conditions grounded on a 'cradle to gate' inventory. Across all scenarios, cultivation in the raceway pond was observed to be the most energy-intensive process. The mode of culture mixing and the biomass productivity determined the energy requirements of the cultivation step. Emissions to freshwater were the largest share, contributing 93–97% of total emissions in all scenarios. Global warming potential (GWP) was found to be the major environmental impact, accounting for about 99% of total environmental impacts in all the modelled scenarios. It was noticed that overall emissions and impacts were directly related to energy demand and inversely related to biomass productivity. The geographic location of an energy source affected the environmental impact of a given process. The integration of electricity derived from the defatted algal remnants with the cultivation system resulted in a 2% reduction in overall energy demand. Direct biogas generation from microalgae after harvesting was also analysed; an energy surplus was observed after using part of the energy upstream for biomass production. The results suggest that biogas production from microalgae after harvesting is an environmentally viable and sustainable option compared to biodiesel production.
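
For readers unfamiliar with how such an energy balance is assembled, the sketch below tallies a hypothetical 'cradle to gate' energy demand per process step and applies the roughly 2% credit from DAB-derived electricity mentioned in the abstract. All per-step figures are placeholders, not values from the study.

```python
# Illustrative "cradle to gate" energy balance for one biodiesel scenario.
# All MJ-per-kg-biodiesel figures below are placeholders, not values from the study.
energy_demand = {
    "cultivation (raceway mixing)": 18.0,
    "harvesting (flocculation)": 2.5,
    "dewatering (MDO)": 6.0,
    "extraction/reaction/purification": 5.5,
}
total = sum(energy_demand.values())

# Credit from electricity generated by anaerobic digestion of defatted algal biomass,
# fed back into the upstream of the process (abstract reports ~2% overall reduction).
dab_electricity_credit = 0.02 * total
net_demand = total - dab_electricity_credit

for step, mj in energy_demand.items():
    print(f"{step:35s} {mj:5.1f} MJ ({mj / total:5.1%})")
print(f"net demand after DAB credit: {net_demand:.1f} MJ")
```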

Keywords: biomass productivity, energy demand, energy source, Lifecycle Assessment (LCA), microalgae, open raceway pond

Procedia PDF Downloads 288
692 Flood Mapping Using Height above the Nearest Drainage Model: A Case Study in Fredericton, NB, Canada

Authors: Morteza Esfandiari, Shabnam Jabari, Heather MacGrath, David Coleman

Abstract:

Flooding is a severe issue in many parts of the world, including the city of Fredericton, New Brunswick, Canada. The downtown area of Fredericton is close to the Saint John River, which is susceptible to flooding around May every year. Recently, the frequency of flooding appears to have increased, especially after the downtown area and the surrounding urban/agricultural lands were flooded in two consecutive years, 2018 and 2019. In order to have a clear picture of the flood extent and of the damage to affected areas, it is necessary to use either flood inundation modelling or satellite data. Due to the contingent availability and weather dependency of optical satellites, and the limited data available because of the high cost of hydrodynamic models, it is not always feasible to rely on these sources to generate quality flood maps after or during a catastrophe. Height Above the Nearest Drainage (HAND), a state-of-the-art topo-hydrological index, normalizes the height of a basin based on the relative elevation along the stream network and specifies the gravitational, or relative, drainage potential of an area. HAND is the relative height difference between the stream network and each cell of a Digital Terrain Model (DTM). The stream layer is produced through a multi-step, time-consuming process which does not always result in an optimal representation of the river centerline, depending on the topographic complexity of the region. HAND has been used in numerous case studies with quite acceptable, and sometimes unexpected, results because of natural and human-made features on the surface of the earth. Some of these features may disturb the generated model, and consequently the model may not predict the flow simulation accurately. We propose to include a pre-existing stream layer generated by the province of New Brunswick and to benefit from culvert maps to improve the water flow simulation and, accordingly, the accuracy of the HAND model. By considering these parameters in our processing, we were able to increase the accuracy of the model from nearly 74% to almost 92%. The improved model can be used for generating highly accurate flood maps, which are necessary for future urban planning and flood damage estimation, without any need for satellite imagery or hydrodynamic computations.
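
A minimal sketch of the idea behind HAND is given below. The proper index traces flow paths (e.g. D8 directions) down to the drainage network; this simplified variant instead takes the nearest stream cell by Euclidean distance, which is only an approximation, and the toy DTM and stream mask are illustrative.

```python
import numpy as np
from scipy import ndimage

def simplified_hand(dtm: np.ndarray, stream_mask: np.ndarray) -> np.ndarray:
    """Simplified HAND raster: height of each cell above its nearest stream cell.

    dtm         : 2D array of terrain elevations
    stream_mask : boolean 2D array, True where the drainage network lies

    NOTE: the proper HAND index traces flow paths (e.g. D8) to the drainage
    network; here the 'nearest' stream cell is taken by Euclidean distance.
    """
    # Indices of the nearest stream cell for every DTM cell
    _, (rows, cols) = ndimage.distance_transform_edt(~stream_mask, return_indices=True)
    hand = dtm - dtm[rows, cols]
    return np.clip(hand, 0.0, None)  # cells below their stream cell are set to 0

# Toy example: a tilted plane with a stream along the left column
dtm = np.tile(np.arange(5, dtype=float), (5, 1))
stream = np.zeros_like(dtm, dtype=bool)
stream[:, 0] = True
print(simplified_hand(dtm, stream))
```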

Keywords: HAND, DTM, rapid floodplain, simplified conceptual models

Procedia PDF Downloads 151
691 A Method to Evaluate and Compare Web Information Extractors

Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman

Abstract:

Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data gathered from the Web. Sometimes these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information is exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents; b) it provides a method that relies on statistically sound tests to support the conclusions drawn, whereas previous work does not provide clear guidelines or recommend statistically sound tests, but rather surveys the many features to take into account as well as related work; c) we provide a novel method to compute the performance measures for unsupervised proposals, which would otherwise require the intervention of a user to compute them by using the annotations on the evaluation sets and the information extracted. Our contributions will help researchers in this area make sure that they have advanced the state of the art not only conceptually but also from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas, so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.
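
Although the paper's own method addresses statistically sound comparisons and unsupervised settings, the sketch below shows only the standard precision/recall/F1 measures that such an evaluation typically builds on, applied to hypothetical extracted records; it is not the authors' evaluation method.

```python
def extraction_scores(extracted: set, gold: set) -> dict:
    """Standard precision/recall/F1 for an information extractor,
    comparing the set of extracted records against gold annotations."""
    tp = len(extracted & gold)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical records extracted from a semi-structured listing page
gold = {("title", "Flood Mapping"), ("authors", "M. Esfandiari"), ("downloads", "151")}
extracted = {("title", "Flood Mapping"), ("downloads", "151"), ("year", "2019")}
print(extraction_scores(extracted, gold))  # precision 0.67, recall 0.67, f1 0.67
```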

Keywords: web information extractors, information extraction evaluation method, Google scholar, web

Procedia PDF Downloads 248
690 Sustainable Production of Algae through Nutrient Recovery in the Biofuel Conversion Process

Authors: Bagnoud-Velásquez Mariluz, Damergi Eya, Grandjean Dominique, Frédéric Vogel, Ludwig Christian

Abstract:

The sustainability of algae-to-biofuel processes is seriously affected by the energy-intensive production of fertilizers. Large amounts of nitrogen and phosphorus are required for large-scale production, resulting in many cases in a negative impact on limited mineral resources. In order to realize the algal bioenergy opportunity, it appears crucial to promote processes that recover nutrients and/or make use of renewable sources, including waste. Hydrothermal (HT) conversion is a promising and suitable technology for generating biofuels from microalgae. Besides the fact that water is used as a 'green' reactant and solvent and that no biomass drying is required, the technology offers great potential for nutrient recycling. This study evaluated the possibility of treating the aqueous HT effluent through the growth of microalgae while producing renewable algal biomass. As already demonstrated in previous work by the authors, the HT aqueous product, besides containing N, P and other important nutrients, presents a small fraction of rarely studied organic compounds. Therefore, heteroaromatic compounds extracted from the HT effluent were the target of the present research; they were profiled using GC-MS and LC-MS-MS. The results indicate the presence of cyclic amides, piperazinediones, amines and their derivatives. The most prominent nitrogenous organic compounds (NOCs) in the extracts, namely 2-pyrrolidinone and β-phenylethylamine (β-PEA), were carefully examined for their effect on microalgae. These two substances were prepared at three different concentrations (10, 50 and 150 ppm). The toxicity bioassay used three different microalgal strains: Phaeodactylum tricornutum, Chlorella sorokiniana and Scenedesmus vacuolatus. The confirmed IC50 was ca. 75 ppm in all cases. Experimental conditions were then set up for the growth of microalgae in the aqueous phase by adjusting the nitrogen concentration (the key nutrient for algae) to match that of a known commercial medium. At this dilution, the specific NOC concentrations were lowered to 8.5 mg/L 2-pyrrolidinone, 1 mg/L δ-valerolactam and 0.5 mg/L β-PEA. Growth in the diluted HT solution remained constant, with no evidence of inhibition. An additional ongoing test is addressing the possibility of applying an integrated water cleanup step making use of the existing hydrothermal catalytic facility.
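
The abstract does not describe how the IC50 of ca. 75 ppm was derived; a common approach is to fit a log-logistic (Hill) dose-response curve to growth-inhibition data, as sketched below with hypothetical relative-growth values chosen only so that the fitted IC50 lands near the reported figure.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, slope):
    """Two-parameter log-logistic dose-response: fraction of uninhibited growth."""
    return 1.0 / (1.0 + (conc / ic50) ** slope)

# Hypothetical growth-inhibition data (ppm vs relative growth), not the study's raw data
conc = np.array([10.0, 50.0, 150.0])
rel_growth = np.array([0.95, 0.62, 0.28])

(ic50, slope), _ = curve_fit(hill, conc, rel_growth, p0=[75.0, 1.0])
print(f"estimated IC50 ~ {ic50:.0f} ppm (Hill slope {slope:.2f})")
```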

Keywords: hydrothermal process, microalgae, nitrogenous organic compounds, nutrient recovery, renewable biomass

Procedia PDF Downloads 410
689 Fabrication of Al/Al2O3 Functionally Graded Composites via Centrifugal Method by Using a Polymeric Suspension

Authors: Majid Eslami

Abstract:

Functionally graded materials (FGMs) exhibit heterogeneous microstructures in which the composition and properties change gradually in specified directions. The most common type of FGM consists of a metal in which ceramic particles are distributed with a graded concentration. There are many processing routes for FGMs; an important group of these methods is casting techniques (gravity or centrifugal). However, the main problem of casting a molten metal slurry with dispersed ceramic particles is a destructive chemical reaction between the two phases, which deteriorates the properties of the material. In order to overcome this problem, in the present investigation a suspension of 6061 aluminum and alumina powders in a liquid polymer was used as the starting material and subjected to centrifugal force to make FGMs. The size ranges of these powders were 45–63 and 106–125 μm. The volume percent of alumina in the Al/Al2O3 powder mixture was in the range of 5 to 20%. PMMA (Plexiglas) in different concentrations (20–50 g/L) was dissolved in toluene and used as the suspension liquid. The glass mold containing the suspension of Al/Al2O3 powders in this liquid was rotated at 1700 rpm for different times (4–40 min) while the arm length was kept constant (10 cm) for all the experiments. After curing the polymer, burning out the binder, cold pressing and sintering, cylindrical samples (φ = 22 mm, h = 20 mm) were produced. The density of the samples before and after sintering was quantified by the Archimedes method. The results indicated that, using similarly sized alumina and aluminum powder particles, FGM samples can be produced with rotation times exceeding 7 min. However, using coarse alumina and fine aluminum powders, the samples exhibit a stepped concentration profile, whereas using fine alumina and coarse aluminum results in a relatively uniform concentration of Al2O3 along the sample height. These results are attributed to the effects of the size and density of the different powders on the centrifugal force induced on the particles during rotation. The PMMA concentration and the vol.% of alumina in the suspension did not have any considerable effect on the distribution of alumina particles in the samples. The hardness profiles along the height of the samples were affected by both the alumina vol.% and the porosity content: the presence of alumina particles increased the hardness, while increased porosity reduced it. Therefore, the hardness values did not show the expected gradient within the same sample. Sintering resulted in decreased porosity for all the samples investigated.
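
The size and density dependence invoked to explain the graded versus stepped profiles can be illustrated with a Stokes-regime estimate of the settling velocity of a spherical particle in a centrifugal field, as sketched below. The fluid density and viscosity of the PMMA/toluene suspension are assumed values, and the Stokes assumption itself may not hold quantitatively at 1700 rpm; the sketch only shows how the velocity scales with particle diameter and density difference.

```python
import math

def centrifugal_settling_velocity(d, rho_p, rho_f, mu, rpm, arm_r):
    """Stokes-regime settling velocity of a spherical particle in a centrifugal field.

    d      : particle diameter [m]
    rho_p  : particle density [kg/m^3]
    rho_f  : suspension fluid density [kg/m^3] (assumed value)
    mu     : dynamic viscosity of the fluid [Pa.s] (assumed value)
    rpm    : rotation speed [rev/min]
    arm_r  : arm length / radial position [m]
    """
    omega = 2.0 * math.pi * rpm / 60.0
    return (rho_p - rho_f) * d**2 * omega**2 * arm_r / (18.0 * mu)

# Illustrative comparison at the abstract's 1700 rpm and 10 cm arm length
for label, d, rho in [("fine Al2O3 (54 um)", 54e-6, 3950.0),
                      ("coarse Al (115 um)", 115e-6, 2700.0)]:
    v = centrifugal_settling_velocity(d, rho, rho_f=900.0, mu=5e-3, rpm=1700, arm_r=0.10)
    print(f"{label}: {v*1000:.0f} mm/s")
```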

Keywords: FGM, powder metallurgy, centrifugal method, polymeric suspension

Procedia PDF Downloads 210
688 Effects of Using a Recurrent Adverse Drug Reaction Prevention Program on Safe Use of Medicine among Patients Receiving Services at the Accident and Emergency Department of Songkhla Hospital Thailand

Authors: Thippharat Wongsilarat, Parichat tuntilanon, Chonlakan Prataksitorn

Abstract:

Recurrent adverse drug reactions cause harm ranging from mild to fatal, and affect not only patients but also their relatives and organizations. The objective of this study was to compare the safe use of medicine among patients before and after using the recurrent adverse drug reaction prevention program. This was quasi-experimental research with a target population of 598 patients with a history of drug allergy. Data were collected through an observation form tested for validity by three experts (IOC = 0.87) and analyzed with descriptive statistics (percentages). The research was conducted jointly with a multidisciplinary team to analyze the weak and strong points of the recurrent adverse drug reaction prevention system during the past three years, in which 546, 329, and 498 incidents, respectively, were found. Of these, 379, 279, and 302 incidents, or 69.4, 84.80, and 60.64 percent of the patients with a drug allergy history, respectively, were found to have been caused by an incomplete warning system. In addition, differences were found in the practice of caring for patients with a drug allergy history that did not cover all steps of the patient care process, especially a lack of repeated checking and a lack of communication between the multidisciplinary team members. Therefore, the recurrent adverse drug reaction prevention program was developed with complete warning points in the information technology system, a repeated checking step, and communication among the related multidisciplinary team members, starting from the hospital identity card room, patient history recording officers, nurses, the physicians who prescribe the drugs, and pharmacists. Included in the system were surveillance, nursing, recording, and linking the data to referring units. There was also training concerning adverse drug reactions by pharmacists, monthly meetings to explain the process to practice personnel, creating a safety culture, random checking of practice, motivational encouragement, supervising, controlling, following up, and evaluating the practice. The rate of prescribing drugs to which patients were allergic was 0.08 per 1,000 prescriptions, and the incidence rate of recurrent drug reactions was 0 per 1,000 prescriptions. Surveillance of recurrent adverse drug reactions covering all service-providing points can ensure the safe use of medicine for patients.

Keywords: recurrent drug, adverse reaction, safety, use of medicine

Procedia PDF Downloads 456
687 Support for Refugee Entrepreneurs Through International Aid

Authors: Julien Benomar

Abstract:

The World Bank report published in April 2023, 'Migrants, Refugees and Society', first allows us to distinguish between migrants in search of economic opportunities and refugees who flee a situation of danger and choose their destination based on their immediate need for safety. Within those two categories, the report further distinguishes people whose professional skills are adapted to the labor market of the host country from those whose skills are not. Out of this distinction of four categories, we chose to focus our research on refugees who do not have professional skills adapted to the labor market of the host country. Given that refugees generally have no recourse to public assistance schemes and cannot count on the support of their entourage or a support network, we propose to examine the extent to which external assistance, such as international humanitarian action, is likely to accompany refugees' transition to financial empowerment through entrepreneurship. To this end, we carried out a case study structured in three stages: (i) an exchange with a Non-Governmental Organisation (NGO) active in supporting refugee populations from Congo and Burundi in Rwanda, enabling us to (i.i) jointly define a financial empowerment income and (i.ii) learn about the content of the support measures taken for the beneficiaries of the humanitarian project; (ii) monitoring of the population of 118 beneficiaries, including 73 refugees and 45 Rwandans (reference population); (iii) a participatory analysis to identify the level of performance of the project and areas for improvement. The case study thus involved the staff of an international NGO active in helping refugees in Rwanda since 2015 and the staff of a Luxembourg NGO that has been funding this economic aid through an entrepreneurship project since 2021, and took place over a 48-day period between April and May 2023. The main results are of two types: (i) the need to associate indicators for monitoring the impact of the project on its indirect beneficiaries (the refugee community), and (ii) the identification of success factors making it possible to provide concrete and relevant responses to the constraints encountered. The first result made it possible to identify the following indicators: an indicator of community potential (jobs, training or mentoring promoted by the activity of the entrepreneur), an indicator of social contribution (tax paid by the entrepreneur), an indicator of resilience (savings and loan capacity generated), and finally the impact on social cohesion. The second result showed that, among the 7 success factors tested, the chosen sector of activity and the level of experience in the sector of the future activity are those that stand out most clearly.

Keywords: entrepreneurship, refugees, financial empowerment, international aid

Procedia PDF Downloads 78
686 Targeted Delivery of Docetaxel Drug Using Cetuximab Conjugated Vitamin E TPGS Micelles Increases the Anti-Tumor Efficacy and Inhibit Migration of MDA-MB-231 Triple Negative Breast Cancer

Authors: V. K. Rajaletchumy, S. L. Chia, M. I. Setyawati, M. S. Muthu, S. S. Feng, D. T. Leong

Abstract:

Triple negative breast cancers (TNBC) are among the most aggressive breast cancers, with a high rate of local recurrence and systemic metastasis. TNBCs are insensitive to existing hormonal therapy or targeted therapies such as the use of monoclonal antibodies, due to the lack of the oestrogen receptor (ER) and progesterone receptor (PR) and the absence of overexpression of human epidermal growth factor receptor 2 (HER2) compared with other types of breast cancer. The absence of targeted therapies for the selective delivery of therapeutic agents into tumours led to the search for druggable targets in TNBC. In this study, we developed a targeted micellar system of cetuximab-conjugated micelles of D-α-tocopheryl polyethylene glycol succinate (vitamin E TPGS) for the targeted delivery of docetaxel as a model anticancer drug for the treatment of TNBCs. We examined the efficacy of our micellar system in xenograft models of triple negative breast cancer and explored the effect of the micelles on post-treatment tumours in order to elucidate the mechanism underlying the nanomedicine treatment in oncology. The targeting micelles were found to accumulate preferentially in tumours, compared with normal tissue, immediately after administration. The fluorescence signal at the tumour site gradually increased up to 12 h and was sustained for up to 24 h, reflecting the increasing uptake of the targeting (TPFC) micelles by MDA-MB-231/Luc cells. In comparison, for the non-targeting micelles (TPF), the fluorescence signal was evenly distributed over the body of the mice; only a slight increase in fluorescence in the chest area was observed 24 h post-injection, reflecting moderate uptake of the micelles by the tumour. The successful delivery of docetaxel into the tumour by the targeted micelles (TPDC) produced a greater degree of tumour growth inhibition than Taxotere® after 15 days of treatment. The ex vivo study demonstrated that tumours treated with the targeting micelles exhibit enhanced cell cycle arrest and attenuated proliferation compared with the control and with those treated with the non-targeting micelles. Furthermore, the ex vivo investigation revealed that both the targeting and non-targeting micellar formulations show significant inhibition of cell migration, with migration indices reduced by 0.098- and 0.28-fold, respectively, relative to the control. Overall, both the in vivo and ex vivo data increase our confidence that our micellar formulations effectively targeted and inhibited EGFR-overexpressing MDA-MB-231 tumours.

Keywords: biodegradable polymers, cancer nanotechnology, drug targeting, molecular biomaterials, nanomedicine

Procedia PDF Downloads 281
685 Bioremediation of Phenol in Wastewater Using Polymer-Supported Bacteria

Authors: Areej K. Al-Jwaid, Dmitiry Berllio, Andrew Cundy, Irina Savina, Jonathan L. Caplin

Abstract:

Phenol is a toxic compound that is widely distributed in the environment, including the atmosphere, water and soil, due to the release of effluents from the petrochemical and pharmaceutical industries, coking plants and oil refineries. Moreover, a range of everyday products that use phenol as a raw material may find their way into the environment without prior treatment. The toxicity of phenol affects both human and environmental health, and various physico-chemical methods have been used to remediate phenol contamination. While these techniques are effective, their complexity and high cost have led to a search for alternative strategies to reduce and eliminate high concentrations of phenolic compounds in the environment. Biological treatments are preferable because they are environmentally friendly and cheaper than physico-chemical approaches. Some microorganisms, such as Pseudomonas sp., Rhodococcus sp., Acinetobacter sp. and Bacillus sp., have shown a high ability to degrade phenolic compounds, using them as a sole source of carbon and energy. Immobilisation processes utilising various materials have been used to protect and enhance the viability of cells and to provide structural support for the bacterial cells. The aim of this study is to develop a new approach to the bioremediation of phenol, based on an immobilisation strategy, that can be used in wastewater. Two bacterial species known to degrade phenol (Pseudomonas mendocina and Rhodococcus koreensis) were purchased from the National Collection of Industrial, Food and Marine Bacteria (NCIMB). The two species, and a mixture of them, were immobilised to produce macroporous crosslinked-cell cryogel samples by using four types of cross-linker polymer solution in a cryogelation process. The samples were used in batch culture to degrade phenol at an initial concentration of 50 mg/L, at pH 7.5 ± 0.3 and a temperature of 30 °C. The four types of polymer solution, i. glutaraldehyde (GA), ii. polyvinyl alcohol with glutaraldehyde (PVA+GA), iii. polyvinyl alcohol–aldehyde (PVA-al) and iv. polyethyleneimine–aldehyde (PEI-al), were used at different concentrations, ranging from 0.5 to 1.5%, to crosslink the cells. The results of SEM and rheology analysis indicated that the cell-cryogel samples crosslinked with the four cross-linker polymers formed monolithic macroporous cryogels. The samples were evaluated for their ability to degrade phenol. Macroporous cell cryogels crosslinked with GA and PVA+GA were able to degrade phenol for only one week, while the other samples, crosslinked with a combination of PVA-al and PEI-al at two different concentrations, showed higher stability and remained viable for reuse, degrading phenol (50 mg/L) for five weeks. These initial results indicate that crosslinked cell cryogels are a promising tool for bioremediation strategies, especially for removing high concentrations of phenol from wastewater.

Keywords: bioremediation, crosslinked cells, immobilisation, phenol degradation

Procedia PDF Downloads 234