Search results for: miRNA:mRNA interaction network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8614

1324 Melaninic Discrimination among Primary School Children

Authors: Margherita Cardellini

Abstract:

To our knowledge, dark-skinned children are often victims of discrimination from adults and society, but few studies specifically focus on skin color discrimination directed at children by other children. Even today, the 'color-blind children' ideology is widespread among adults, teachers, and educators, and perhaps also among scholars, who seem wary of studying expressions of racism in childhood. This social and cultural belief lets people think that all children, because of their age and their brief experience of the world, are disinterested in skin color. Sometimes adults think that children are even incapable of perceiving skin colors and that it could be dangerous to talk about melaninic differences with them, because they might finally notice this difference, producing prejudices and racism. Psychology and neurology research projects have shown for many years that even infants are capable of perceiving skin color and ethnic differences by the age of 3 months. Starting from this theoretical framework, we conducted a research project to understand if and how primary school children talk about skin colors, picking up any stereotypes or prejudices. Choosing the focus group as a methodology to stimulate the group dimension and interaction, several stories about episodes of skin color discrimination within their classroom or school emerged. Using the photo elicitation technique, we chose to stimulate talk about the research object, the skin color, asking the children 'the first two things that come into your mind' when they looked at the photographs presented during the focus group, which represented dark- and light-skinned women and men. This paper therefore presents some of these stories about episodes of discrimination, ordered by increasing proximity to the discriminatory act: a story of discrimination that happened within the school, in an after-school daycare, in the classroom, and even episodes of discrimination that children recounted during the focus groups in the presence of the discriminated child. If it is true that the Declaration of the Rights of the Child states that every child should be free from discrimination, it is also true that every adult should protect children from every form of discrimination. How, as adults, can we defend children against discrimination if we cannot admit that even children are potential actors of discrimination? Without awareness, we risk devaluing these episodes, implicitly confident that the only way to fight discrimination is to keep quiet about it. The right not to be discriminated against goes through the right to talk about one's own experiences of discrimination and the right to perceive the unfairness of constant depreciation on the basis of skin color or any element of physical diversity. Intercultural education can act as a spokesperson for this mission, in the belief that difference and plurality can really become elements of potential enrichment for humanity, starting from children.

Keywords: colorism, experiences of discrimination, primary school children, skin color discrimination

Procedia PDF Downloads 192
1323 Detecting Critical Thinking Skills in Written Text Analysis: The Use of Artificial Intelligence in Text Analysis vs. ChatGPT

Authors: Lucilla Crosta, Anthony Edwards

Abstract:

Companies and the marketplace nowadays struggle to find employees with skills adequate to the anticipated growth of their businesses. At least half of workers will need to undertake some form of up-skilling in the next five years in order to remain aligned with the demands of the market. In order to meet these challenges, there is a clear need to explore the potential uses of AI (Artificial Intelligence) based tools in assessing the transversal skills (critical thinking, communication, and soft skills of different types in general) of workers and adult students while empowering them to develop those same skills in a reliable, trustworthy way. Companies seek workers with key transversal skills that can make a difference between workers now and in the future. However, critical thinking seems to be one of the most important of these skills, bringing unexplored ideas and company growth in business contexts. What employers have been reporting for years now is that this skill is lacking in the majority of workers and adult students, and this is particularly visible through their writing. This paper investigates how critical thinking and communication skills are currently developed in Higher Education environments through the use of AI tools at postgraduate level. It analyses the use of a branch of AI, namely Machine Learning and Big Data, and of Neural Network Analysis. It also examines the potential effect of the acquisition of these skills through AI tools and what kind of effects this has on employability. This paper will draw information from researchers and studies both at national (Italy & UK) and international level in Higher Education. The issues associated with the development and use of one specific AI tool, Edulai, will be examined in detail. Finally, comparisons will also be made between these tools and the more recent phenomenon of ChatGPT, and their shortcomings and drawbacks will be analysed.

Keywords: critical thinking, artificial intelligence, higher education, soft skills, ChatGPT

Procedia PDF Downloads 103
1322 Quantum Chemical Investigation of Hydrogen Isotopes Adsorption on Metal Ion Functionalized Linde Type A and Faujasite Type Zeolites

Authors: Gayathri Devi V, Aravamudan Kannan, Amit Sircar

Abstract:

In the inner fuel cycle system of a nuclear fusion reactor, the Hydrogen Isotopes Removal System (HIRS) plays a pivotal role. It enables the effective extraction of the hydrogen isotopes from the breeder purge gas, which helps to maintain the tritium breeding ratio and sustain the fusion reaction. As one of the components of the HIRS, Cryogenic Molecular Sieve Bed (CMSB) columns with zeolite adsorbents are considered for the physisorption of hydrogen isotopes at 1 bar and 77 K. Even though zeolites have good thermal stability and reduced activation properties, making them ideal for use in nuclear reactor applications, their modest capacity for hydrogen isotope adsorption is a cause for concern. In order to enhance the adsorbent capacity in an informed manner, it is helpful to understand the adsorption phenomena at the quantum electronic structure level. Physicochemical modification of the adsorbent material enhances the adsorption capacity through the incorporation of active sites. This may be accomplished through the incorporation of suitable metal ions in the zeolite framework. In this work, molecular hydrogen isotope adsorption on the active sites of functionalized zeolites is investigated in detail using a Density Functional Theory (DFT) study. This involves the utilization of the hybrid Generalized Gradient Approximation (GGA) with dispersion correction to account for the exchange and correlation functional of DFT. The electronic energies, adsorption enthalpy, adsorption free energy, and Highest Occupied Molecular Orbital (HOMO) and Lowest Unoccupied Molecular Orbital (LUMO) energies are computed on the stable 8T zeolite clusters as well as on the periodic structure functionalized with different active sites. The characteristics of the dihydrogen bond with the active metal sites and the isotopic effects are also studied in detail. Validation studies with DFT are also presented for the adsorption of hydrogen on metal-ion-functionalized zeolites. The ab initio screening analysis gives insights into the mechanism of hydrogen interaction with the zeolites under study and also the effect of the metal ion on adsorption. This detailed study provides guidelines for selecting the appropriate metal ions that may be incorporated in the zeolite framework for effective adsorption of hydrogen isotopes in the HIRS.
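
For reference, the adsorption energetics reported in DFT studies of this kind are typically obtained as differences between the total energy of the adsorbed complex and those of its isolated fragments, with zero-point and thermal corrections added for the enthalpy and free energy. A generic sketch of these relations (the symbols are introduced here for illustration and are not taken from the paper):

```latex
\begin{align}
\Delta E_{\mathrm{ads}} &= E_{\mathrm{complex}} - E_{\mathrm{zeolite+M}} - E_{\mathrm{H_2}} \\
\Delta H_{\mathrm{ads}}(T) &= \Delta E_{\mathrm{ads}} + \Delta \mathrm{ZPE} + \Delta E_{\mathrm{thermal}}(T) \\
\Delta G_{\mathrm{ads}}(T) &= \Delta H_{\mathrm{ads}}(T) - T\, \Delta S_{\mathrm{ads}}
\end{align}
```

Here "complex" denotes the metal-functionalized zeolite cluster with adsorbed H2 and "zeolite+M" the bare functionalized cluster. Isotope effects (H2 vs. D2 vs. T2) enter mainly through the mass-dependent zero-point energy term, which is why the ZPE correction is central when comparing hydrogen isotopes.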

Keywords: adsorption enthalpy, functionalized zeolites, hydrogen isotopes, nuclear fusion, physisorption

Procedia PDF Downloads 178
1321 Targeting T6SS of Klebsiella pneumoniae for Assessment of Immune Response in Mice for Therapeutic Lead Development

Authors: Sweta Pandey, Samridhi Dhyani, Susmita Chaudhuri

Abstract:

The bacterium Klebsiella pneumoniae is a global threat to human health due to the increase in multi-drug resistance among strains. The hypervirulent strains of Klebsiella pneumoniae are a major concern due to their association with life-threatening infections in healthy populations. One of the major virulence factors of hypervirulent strains of Klebsiella pneumoniae is the T6SS (Type VI secretion system), which is mainly involved in microbial antagonism and mediates interaction with host eukaryotic cells during infection. T6SS mediates some of the crucial steps for establishing infection by the bacterium, such as cell adherence, invasion, and subsequent in vivo colonisation. The antibacterial activity and cell invasion properties of the T6SS are major requirements for the establishment of K. pneumoniae infection within the gut. The T6SS can therefore be an appropriate target for developing therapeutics. The T6SS consists of an inner tube comprising hexamers of the Hcp (haemolysin co-regulated protein) protein; at the top of this tube sits VgrG (valine-glycine repeat protein G), and the tip of the machinery consists of PAAR domain-containing proteins, which act as a delivery system for bacterial effectors. For this study, an immune response to recombinant VgrG protein was generated to establish this protein as a potential immunogen for the development of therapeutic leads. The immunogenicity of the selected protein was determined by predicting the B cell epitopes with the BCEP analysis tool. The gene sequence for multiple domains of the VgrG protein (phage_base_V, T6SS_Vgr, DUF2345) was selected and cloned in the pMAL vector in E. coli. The construct was subcloned and expressed as a 203-residue fusion protein with a maltose-binding protein (MBP) tag to enhance the solubility and purification of this protein. The purified recombinant VgrG fusion protein was used for mouse immunisation. The antiserum showed reactivity with the recombinant VgrG in ELISA and western blot. The immunised mice were challenged with K. pneumoniae and showed bacterial clearance. The recombinant VgrG protein can further be used for studying the downstream signalling of VgrG in mice during infection and for therapeutic mAb development to eradicate K. pneumoniae infections.

Keywords: immune response, Klebsiella pneumoniae, multi-drug resistance, recombinant protein expression, T6SS, VgrG

Procedia PDF Downloads 97
1320 Application of a Confirmatory Composite Model for Assessing the Extent of Agricultural Digitalization: A Case of Proactive Land Acquisition Strategy (PLAS) Farmers in South Africa

Authors: Mazwane S., Makhura M. N., Ginege A.

Abstract:

Digitalization in South Africa has received considerable attention from policymakers. The support for the development of the digital economy by the South African government has been demonstrated through the enactment of various national policies and strategies. This study sought to develop an index of agricultural digitalization by applying confirmatory composite analysis (CCA). Another aim was to determine the factors that affect the development of digitalization on PLAS farms. Data on the indicators of the three dimensions of digitalization were collected from 300 Proactive Land Acquisition Strategy (PLAS) farms in South Africa using semi-structured questionnaires. Confirmatory composite analysis (CCA) was employed to reduce the items into three digitalization dimensions and ultimately into a digitalization index. Standardized digitalization index scores were extracted and fitted to a linear regression model to determine the factors affecting digitalization development. The results revealed that the model shows practical validity and can be used to measure digitalization development, as the measures of fit (geodesic distance, standardized root mean square residual, and squared Euclidean distance) were all below their respective 95% quantiles of the bootstrap discrepancies (HI95 values). Therefore, digitalization is an emergent variable that can be measured using CCA. The average level of digitalization on PLAS farms was 0.2 and varied significantly across provinces. The factors that significantly influence digitalization development on PLAS land reform farms were age, gender, farm type, network type, and cellular data type. This should enable researchers and policymakers to understand the level of digitalization and patterns of development, as well as correctly attribute digitalization development to the contributing factors.
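
As a hedged illustration of the second-stage analysis described above (regressing the standardized digitalization index scores on farm and farmer characteristics), a minimal sketch in Python follows; the variable names, synthetic data, and use of ordinary least squares are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical farm-level data: in the study these would be the standardized
# digitalization index scores extracted from the CCA model plus farm/farmer traits.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "digitalization_index": rng.normal(0, 1, n),
    "age": rng.integers(25, 70, n),
    "gender": rng.choice(["male", "female"], n),
    "farm_type": rng.choice(["crop", "livestock", "mixed"], n),
    "network_type": rng.choice(["3G", "4G"], n),
    "cellular_data_type": rng.choice(["prepaid", "contract"], n),
})

y = df["digitalization_index"]
X = pd.get_dummies(df.drop(columns="digitalization_index"), drop_first=True)  # encode categorical factors
X = sm.add_constant(X).astype(float)

model = sm.OLS(y, X).fit()  # second-stage regression: which characteristics drive the index?
print(model.summary())
```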

Keywords: agriculture, digitalization, confirmatory composite model, land reform, proactive land acquisition strategy, South Africa

Procedia PDF Downloads 57
1319 An Evolutionary Perspective on the Role of Extrinsic Noise in Filtering Transcript Variability in Small RNA Regulation in Bacteria

Authors: Rinat Arbel-Goren, Joel Stavans

Abstract:

Cell-to-cell variations in transcript or protein abundance, called noise, may give rise to phenotypic variability between isogenic cells, enhancing the probability of survival under stress conditions. These variations may be introduced by post-transcriptional regulatory processes such as the stoichiometric degradation of target transcripts by non-coding small RNAs in bacteria. We study the iron homeostasis network in Escherichia coli, in which the small RNA RyhB regulates the expression of various targets, as a model system. Using fluorescence reporter genes to detect protein levels and single-molecule fluorescence in situ hybridization to monitor transcript levels in individual cells allows us to compare noise at both the transcript and protein levels. The experimental results and computer simulations show that extrinsic noise, acting through a feed-forward loop configuration, buffers the increase in variability introduced at the transcript level by iron deprivation, illuminating the important role that extrinsic noise plays during stress. Surprisingly, extrinsic noise also decouples the fluctuations of two different targets, in spite of RyhB being a common upstream factor degrading both. Thus, phenotypic variability increases under stress conditions by the decoupling of target fluctuations in the same cell rather than by an increase in the noise of each. We also present preliminary results on the adaptation of cells to prolonged iron deprivation in order to shed light on the evolutionary role of post-transcriptional downregulation by small RNAs.
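
For context, cell-to-cell variability of the kind discussed above is commonly quantified by the squared coefficient of variation and decomposed into intrinsic and extrinsic components; a standard formulation (generic, not specific to this study) is:

```latex
\eta^{2}_{\mathrm{tot}} \;=\; \frac{\sigma^{2}_{x}}{\langle x \rangle^{2}} \;=\; \eta^{2}_{\mathrm{int}} + \eta^{2}_{\mathrm{ext}}
```

where x is the transcript or protein copy number per cell, the intrinsic term captures the stochasticity of expression of the individual gene, and the extrinsic term captures fluctuations in shared upstream factors, such as a common regulator like RyhB.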

Keywords: cell-to-cell variability, Escherichia coli, noise, single-molecule fluorescence in situ hybridization (smFISH), transcript

Procedia PDF Downloads 160
1318 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a specific feature vector of different dimensions from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, an algorithm generally reduces the image details by pooling. This operation overlooks the details of greatest concern to forensic experts. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with the known data of the same person's frontal ID photos. Downscaling and manual handling were performed on the testing images. The results supported that the facial recognition algorithms based on deep learning detected structural and morphological information and rarely focused on specific markers such as stains and moles. Overall performance, the distributions of genuine scores and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and forensic experts. Experiments showed that the biometric systems were skilled at distinguishing category features, and forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of the objective method of facial comparison and provides a novel method for human-machine collaboration in this field.
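
For reference, the likelihood-ratio framework referred to above evaluates a comparison score under two competing propositions, and score-level fusion can be expressed as a weighted combination; a generic formulation (the fusion weight w is introduced here only for illustration) is:

```latex
\mathrm{LR}(s) \;=\; \frac{f\!\left(s \mid H_{\mathrm{same\ source}}\right)}{f\!\left(s \mid H_{\mathrm{different\ source}}\right)},
\qquad
s_{\mathrm{fused}} \;=\; w\, s_{\mathrm{machine}} + (1 - w)\, s_{\mathrm{expert}}, \quad 0 \le w \le 1.
```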

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 125
1317 Obtainment of Systems with Efavirenz and Lamellar Double Hydroxide as an Alternative for Solubility Improvement of the Drug

Authors: Danilo A. F. Fontes, Magaly A. M.Lyra, Maria L. C. Moura, Leslie R. M. Ferraz, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim, Giovanna C. R. M. Schver, Ping I. Lee, Severino Alves-Júnior, José L. Soares-Sobrinho, Pedro J. Rolim-Neto

Abstract:

Efavirenz (EFV) is a first-choice drug in antiretroviral therapy, with high efficacy in the treatment of infection by the Human Immunodeficiency Virus, which causes Acquired Immune Deficiency Syndrome (AIDS). EFV has low solubility in water, resulting in a decreased dissolution rate and, consequently, reduced bioavailability. Among the technological alternatives to increase solubility, Lamellar Double Hydroxides (LDH) have been applied in the development of systems with poorly water-soluble drugs. The use of analytical techniques such as X-Ray Diffraction (XRD), Infrared Spectroscopy (IR), and Differential Scanning Calorimetry (DSC) allowed the elucidation of the drug's interaction with the lamellar compounds. The objective of this work was to develop and characterize binary systems of EFV and LDH in order to increase the solubility of the drug. The LDH-CaAl was synthesized by co-precipitation from salt solutions of calcium nitrate and aluminum nitrate in basic medium. The EFV-LDH systems and their physical mixtures (PM) were obtained at different concentrations (5-60% of EFV) using the solvent technique described by Takahashi & Yamaguchi (1991). The characterization of the systems and the PMs was performed by XRD, IR, DSC, and a dissolution test under non-sink conditions. The results showed improvements in the solubility of EFV when associated with LDH, due to a possible change in its crystal structure and the formation of an amorphous material. From the DSC results, one could see that the endothermic peak at 173°C, the temperature corresponding to the melting of EFV in its crystalline form, was present in the PM results. For the EFV-LDH systems (with 5, 10, and 30% drug loading), this peak was not observed. XRD profiles of the PMs showed well-defined peaks for EFV. Analyzing the XRD patterns of the systems, it was found that the profiles of all the systems showed complete attenuation of the characteristic peaks of the crystalline form of EFV. The IR technique showed that, in the results for the PMs, there was the appearance of one band and the overlap of other bands, while the IR results of the systems with 5, 10, and 30% drug loading showed the disappearance of some bands and a few others with reduced intensity. The dissolution test under non-sink conditions showed that the systems with 5, 10, and 30% drug loading promoted a great increase in the solubility of EFV, but the system with 10% drug loading was the only one that could keep a substantial amount of drug in solution at different pHs.

Keywords: Efavirenz, lamellar double hydroxides, pharmaceutical technology, solubility

Procedia PDF Downloads 578
1316 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters

Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev

Abstract:

Humanity is confronted more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain possible early signals about events which are occurring or may occur and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the Internet are developed. Information in Romanian is of special interest to us. In order to obtain the mentioned tools, we follow several steps, divided into a preparatory stage and a processing stage. Throughout the first stage, we manually collected over 724 news articles and classified them into 10 categories of social disasters, constituting more than 150 thousand words. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the process of classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets has been used. We deal with the problem of evacuating inhabitants in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique are used to determine the dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended by adding time, the analysis modules of PIPE, such as Generalized Stochastic Petri Net (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis, have been used. These modules helped us to obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to the system's dynamics.
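
As an illustration of the first step described above (flagging texts related to social disasters with a controlled vocabulary), a minimal sketch in Python follows; the category names, keywords, and matching rule are assumptions for illustration, not the authors' implementation.

```python
import re
from collections import Counter

# Hypothetical controlled vocabulary: category -> keywords
# (in practice roughly 300 terms, including Romanian vocabulary).
VOCABULARY = {
    "flood": ["flood", "inundation", "overflow"],
    "fire": ["fire", "blaze", "arson"],
    "epidemic": ["epidemic", "outbreak", "contagion"],
}

def classify(text, min_hits=2):
    """Return disaster categories whose keywords appear at least `min_hits` times."""
    tokens = Counter(re.findall(r"[a-zăâîșț]+", text.lower()))
    hits = {cat: sum(tokens[w] for w in words) for cat, words in VOCABULARY.items()}
    return [cat for cat, n in hits.items() if n >= min_hits]

print(classify("The river overflow caused a severe flood in the northern district."))
# -> ['flood']
```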

Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters

Procedia PDF Downloads 195
1315 GNSS-Aided Photogrammetry for Digital Mapping

Authors: Muhammad Usman Akram

Abstract:

This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site which is to be used in future planning and development (P&D) or for further examination, exploration, research, and inspection. Surveying and mapping in hard-to-access and hazardous areas are very difficult using traditional techniques and methodologies; they are also time consuming, labor intensive, and less precise, with limited data. In comparison, the advanced technique saves manpower and provides more precise output with a wide variety of data sets. In this experiment, the aerial photogrammetry technique is used, where a UAV flies over an area, captures geocoded images, and produces a three-dimensional model (3-D model). The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as Ground Control Points (GCPs) using the Differential Global Positioning System (DGPS) in PPK or RTK mode. The raw data collected by the UAV and DGPS are then processed in digital image processing programs and computer-aided design software. As output, we obtain a dense point cloud, a Digital Elevation Model (DEM), and an orthophoto. The imagery is converted into geospatial data by digitizing over the orthophoto, and the DEM is further converted into a Digital Terrain Model (DTM) for contour generation or a digital surface. As a result, we obtain a digital map of the surveyed area. In conclusion, we compared the processed data with exact measurements taken on site. The error is accepted if it does not exceed the survey accuracy limits set by the concerned institutions.
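
For reference, the ground sampling distance mentioned above follows from the flight altitude and camera geometry through the standard photogrammetric relation below (the numerical values are illustrative only, not taken from this survey):

```latex
\mathrm{GSD} \;=\; \frac{H \cdot p}{f}
\qquad \text{e.g.} \qquad
\frac{100\,\mathrm{m} \times 4.5\,\mu\mathrm{m}}{9\,\mathrm{mm}} \;=\; 5\ \mathrm{cm/pixel},
```

where H is the flight altitude above ground, p is the physical pixel size of the sensor, and f is the focal length; flying lower or using a longer focal length yields a finer GSD.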

Keywords: photogrammetry, post processing kinematics, real time kinematics, manual data inquiry

Procedia PDF Downloads 23
1314 Investigation of Different Machine Learning Algorithms in Large-Scale Land Cover Mapping within the Google Earth Engine

Authors: Amin Naboureh, Ainong Li, Jinhu Bian, Guangbin Lei, Hamid Ebrahimy

Abstract:

Large-scale land cover mapping has become a new challenge in the land change and remote sensing fields because it involves a large volume of data. Moreover, selecting the right classification method, especially when there are different types of landscapes in the study area, is quite difficult. This paper is an attempt to compare the performance of different machine learning (ML) algorithms for generating a land cover map of the China-Central Asia-West Asia Corridor, which is considered one of the main parts of the Belt and Road Initiative (BRI) project. The cloud-based Google Earth Engine (GEE) platform was used for generating a land cover map of the study area from Landsat-8 images (2017) by applying three frequently used ML algorithms: random forest (RF), support vector machine (SVM), and artificial neural network (ANN). The selected ML algorithms (RF, SVM, and ANN) were trained and tested using reference data obtained from the MODIS yearly land cover product and very high-resolution satellite images. The findings of the study illustrated that, among the three frequently used ML algorithms, RF, with 91% overall accuracy, had the best result in producing a land cover map for the China-Central Asia-West Asia Corridor, whereas ANN showed the worst result, with 85% overall accuracy. The great performance of GEE in applying different ML algorithms and handling huge volumes of remotely sensed data in the present study showed that it could also help researchers generate reliable long-term land cover change maps. The findings of this research are of great importance for decision-makers and the BRI's authorities in strategic land use planning.
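
As a hedged illustration of the workflow described above, a minimal sketch using the Google Earth Engine Python API follows; the region, reference asset, collection ID, band list, and class property are placeholders to be adapted, not the authors' exact configuration.

```python
import ee
ee.Initialize()

# Placeholder region and reference points (the latter would carry a 'landcover' property).
region = ee.Geometry.Rectangle([60.0, 30.0, 80.0, 45.0])
reference = ee.FeatureCollection("users/example/reference_points")

# Landsat-8 surface reflectance composite for 2017 (collection and band IDs assumed).
bands = ["SR_B2", "SR_B3", "SR_B4", "SR_B5", "SR_B6", "SR_B7"]
image = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
         .filterBounds(region)
         .filterDate("2017-01-01", "2017-12-31")
         .median()
         .select(bands))

# Sample the composite at the reference points and train a random forest classifier.
training = image.sampleRegions(collection=reference, properties=["landcover"], scale=30)
classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=training, classProperty="landcover", inputProperties=bands)

classified = image.classify(classifier)  # land cover map; SVM/ANN classifiers can be swapped in analogously
```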

Keywords: land cover, google earth engine, machine learning, remote sensing

Procedia PDF Downloads 111
1313 Impact of Charging PHEV at Different Penetration Levels on Power System Network

Authors: M. R. Ahmad, I. Musirin, M. M. Othman, N. A. Rahmat

Abstract:

The Plug-in Hybrid Electric Vehicle (PHEV) has gained immense popularity in recent years. PHEVs offer numerous advantages compared to conventional internal-combustion engine (ICE) vehicles. Millions of PHEVs are estimated to be on the road in the USA by 2020. Uncoordinated PHEV charging is believed to cause severe impacts on the power grid, i.e., feeder, line, and transformer overloads and voltage drops. Nevertheless, an improper PHEV data model used in such studies may render their findings inappropriate. Although smart charging has become more attractive to researchers in recent years, its implementation is not yet attainable on the street due to its requirements for physical infrastructure readiness and technology advancement. As a first step, it is best to study the impact of charging PHEVs based on real vehicle travel data from the National Household Travel Survey (NHTS) and at the present charging rate. Due to the current lack of charging stations on the street, charging PHEVs at home is the best option and has been considered in this work. This paper proposes a technique that comprehensively presents the impact of charging PHEVs on power system networks, considering large numbers of PHEV samples with their travel data patterns. A Vehicle Charging Load Profile (VCLP) is developed and implemented in the IEEE 30-bus test system, which represents a portion of the American Electric Power system (Midwestern US). A normalization technique is used to make the profile correspond to real-time loads at all buses. Results from the study indicate that charging PHEVs using opportunity charging will have significant impacts on power system networks, especially when bigger battery capacities (kWh) are used and at higher penetration levels.
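
As an illustration of how a vehicle charging load profile can be aggregated from travel-survey records (home arrival time and daily distance), a minimal Python sketch follows; the energy consumption rate, charger rating, battery size, and sample records are assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical per-vehicle records derived from travel data: (home arrival hour, daily miles).
trips = [(18, 30), (17, 45), (20, 12), (19, 60)]

KWH_PER_MILE = 0.3   # assumed PHEV energy consumption
CHARGER_KW = 3.3     # assumed home charging rate
BATTERY_KWH = 16.0   # assumed usable battery capacity

profile = np.zeros(24)  # aggregate charging demand per hour of day (kW)

for arrival_hour, miles in trips:
    energy = min(miles * KWH_PER_MILE, BATTERY_KWH)  # energy to replenish, capped by battery size
    hours_needed = energy / CHARGER_KW
    hour = arrival_hour
    while hours_needed > 0:                          # uncoordinated charging: start on arrival
        dt = min(1.0, hours_needed)
        profile[hour % 24] += CHARGER_KW * dt
        hours_needed -= dt
        hour += 1

print(profile)  # hourly VCLP-style demand, ready to be normalized onto the bus loads of a test system
```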

Keywords: plug-in hybrid electric vehicle, transportation electrification, impact of charging PHEV, electricity demand profile, load profile

Procedia PDF Downloads 283
1312 Integrating Cyber-Physical Systems toward Advanced Intelligent Industry: Features, Requirements and Challenges

Authors: V. Reyes, P. Ferreira

Abstract:

In response to high levels of competitiveness, industrial systems have evolved to improve productivity. As a consequence, a rapid increase in production volume and, simultaneously, a customization process require lower costs, more variety, and consistent product quality. Reducing production cycle time, enabling customizability, and ensuring continuous quality improvement are key features of advanced intelligent industry. In this scenario, customers and producers will be able to participate in the ongoing production life cycle through real-time interaction. To achieve this vision, transparency, predictability, and adaptability are key features that give industrial systems the capability to adapt to customer demands, modifying the manufacturing process through an autonomous response and acting preventively to avoid errors. The industrial system incorporates a diverse number of components that, in advanced industry, are expected to be decentralized, communicating end to end, and capable of making their own decisions through feedback. The evolution toward advanced intelligent industry defines a set of stages that endow components with intelligence and enhance efficiency in order to reach the decision-making stage. The integrated system follows an industrial cyber-physical system (CPS) architecture whose real-time integration, based on a set of enabling technologies, links the physical and virtual worlds, generating the digital twin (DT). This instance allows incorporating sensor data from the real into the virtual world and provides the transparency required for real-time monitoring and control, contributing to important features of advanced intelligent industry while simultaneously improving sustainability. Assuming the industrial CPS to be the core technology of the latest advanced intelligent industry stage, this paper reviews and highlights the correlations and contributions of the enabling technologies for the operationalization of each stage on the path toward advanced intelligent industry. From this research, a real-time integration architecture for a cyber-physical system with applications to collaborative robotics is proposed. The functionalities and issues required to endow the industrial system with adaptability are identified.

Keywords: cyber-physical systems, digital twin, sensor data, system integration, virtual model

Procedia PDF Downloads 115
1311 Physical Education Effect on Sports Science Analysis Technology

Authors: Peter Adly Hamdy Fahmy

Abstract:

The aim of the study was to examine the effects of a physical education program on student learning by combining the teaching of personal and social responsibility (TPSR) with a physical education model and TPSR with a traditional teaching model; the learning outcomes involved sports self-efficacy, athletic performance, enthusiasm for sport, group cohesion, sense of responsibility, and game performance. The participants were 3 secondary school physical education teachers and 6 physical education classes, 133 participants in total, with 75 students in the experimental group and 58 students in the control group; each teacher taught both the experimental group and the control group for 16 weeks. The research methods used surveys, interviews, and focus group meetings. Research instruments included the Personal and Social Responsibility Questionnaire, Sports Enthusiasm Scale, Group Cohesion Scale, Sports Self-Efficacy Scale, and Game Performance Assessment Tool. Multivariate analyses of covariance and repeated measures ANOVA were used to examine differences in student learning outcomes between combining TPSR with a physical education model and TPSR with a traditional teaching model. The research findings are as follows: 1) The TPSR sports education model can improve students' learning outcomes, including sports self-efficacy, game performance, sports enthusiasm, team cohesion, group awareness, and responsibility. 2) A traditional teaching model with TPSR could improve student learning outcomes, including sports self-efficacy, responsibility, and game performance. 3) The sports education model with TPSR could improve learning outcomes more than the traditional teaching model with TPSR, including sports self-efficacy, sports enthusiasm, responsibility, and game performance. 4) Based on qualitative data on teachers' and students' learning experience, the physical education model with TPSR significantly improves learning motivation, group interaction, and sense of play. The results suggest that physical education with TPSR could further improve learning outcomes in the physical education program. On the other hand, the hybrid model curriculum projects TPSR-Physical Education and TPSR-Traditional Education are good curriculum projects for moral character education that can be used in school physical education.

Keywords: approach competencies, physical education, teachers employment, graduate, physical education and sport sciences, SWOT analysis, character education, sport season, game performance, sport competence

Procedia PDF Downloads 43
1310 Toxic Masculinity as Dictatorship: Gender and Power Struggles in Tomás Eloy Martínez's Novels

Authors: Mariya Dzhyoyeva

Abstract:

In the present paper, I examine manifestations of toxic masculinity in the novels of Tomás Eloy Martínez, a post-Boom author, journalist, literary critic, and one of the representatives of the Argentine writing diaspora. I focus on the analysis of Martínez's characters that display hypermasculine traits in order to define the relationship between toxic masculinity and power, including the power of authorship, and violence as they are represented in his novels. The analysis reveals a complex network in which gender, power, and violence are intertwined and influence and modify each other. As the author exposes toxic masculine behaviors that generate violence, he seeks to undermine them. Departing from M. Kimmel's idea of masculinity as homophobia, I examine how Martínez "outs" his characters by incorporating into the narrative secret, privileged sources that provide alternative accounts of their otherwise hypermasculine lives. These background stories expose their "weaknesses," both physical and mental, and thereby feminize them in their own eyes. In a similar way, the toxic masculinity of the fictional male author who wields his power by abusing the written word, as he abuses the female character in the story, is exposed as a complex of insecurities accumulated by the character due to his childhood trauma. The artistic technique that Martínez uses to condemn authoritarian male behavior is accessing his subjectivity and subverting it through a multiplicity of identities. Martínez takes over the character's "I" and turns it into a host of pronouns with a constantly shifting point of reference that distorts not only the notions of gender but also the very notion of identity. In doing so, he takes the character's affirmation of masculinity to the limit, where the very idea of it becomes unsustainable. Viewed in the context of Martínez's own exilic story, the condemnation of toxic masculine power turns into the condemnation of dictatorship and authoritarianism.

Keywords: gender, masculinity, toxic masculinity, authoritarian, Argentine literature, Martínez

Procedia PDF Downloads 66
1309 Comparison of Two Neural Networks to Model Margarine Age and Predict Shelf-Life Using MATLAB

Authors: Phakamani Xaba, Robert Huberts, Bilainu Oboirien

Abstract:

The present study was aimed at developing and comparing two neural-network-based predictive models to predict the shelf-life/product age of South African margarine, using free fatty acid (FFA), water droplet size (D3.3), water droplet distribution (e-sigma), moisture content, peroxide value (PV), anisidine value (AnV), and total oxidation (totox) value as input variables to the model. Brick margarine products with varying ages, ranging from fresh (week 0) to week 47, were sourced. The brick margarine products had been stored at 10 and 25 °C and were characterized. JMP and MATLAB models to predict shelf-life/margarine age were developed, and their performances were compared. The key performance indicators used to evaluate the model performances were the correlation coefficient (CC), root mean square error (RMSE), and mean absolute percentage error (MAPE) relative to the actual data. The MATLAB-developed model showed a better performance on all three performance indicators. The correlation coefficient of the MATLAB model was 99.86% versus 99.74% for the JMP model, the RMSE was 0.720 compared to 1.005, and the MAPE was 7.4% compared to 8.571%. The MATLAB model was selected as the most accurate, and then the number of hidden neurons/nodes was optimized to develop a single predictive model. The optimized MATLAB model with 10 neurons showed a better performance compared to the models with 1 and 5 hidden neurons. The developed models can be used by margarine manufacturers, food research institutions, researchers, etc., to predict shelf-life/margarine product age, optimize the addition of antioxidants, extend the shelf-life of products, and proactively troubleshoot problems related to changes which have an impact on the shelf-life of margarine without conducting expensive trials.
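
As a hedged illustration of the kind of model compared above (a small feed-forward network mapping quality indicators to product age, evaluated with CC, RMSE, and MAPE), a minimal Python sketch follows; the synthetic data, feature layout, and network size are assumptions, not the study's MATLAB or JMP implementations.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Placeholder dataset: columns stand in for FFA, D3.3, e-sigma, moisture, PV, AnV, totox.
rng = np.random.default_rng(0)
X = rng.random((200, 7))
y = rng.uniform(1, 47, size=200)  # product age in weeks (kept above zero so MAPE is defined)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)  # 10 hidden neurons
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

cc = np.corrcoef(y_te, pred)[0, 1]                    # correlation coefficient
rmse = np.sqrt(np.mean((y_te - pred) ** 2))           # root mean square error
mape = np.mean(np.abs((y_te - pred) / y_te)) * 100    # mean absolute percentage error
print(f"CC={cc:.3f}  RMSE={rmse:.3f}  MAPE={mape:.1f}%")
```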

Keywords: margarine shelf-life, predictive modelling, neural networks, oil oxidation

Procedia PDF Downloads 190
1308 Efficient Compact Micro Dielectric Barrier Discharge (DBD) Plasma Reactor for Ozone Generation for Industrial Application in Liquid and Gas Phase Systems

Authors: D. Kuvshinov, A. Siswanto, J. Lozano-Parada, W. Zimmerman

Abstract:

Ozone is well known as a powerful oxidant with a fast reaction rate. Ozone-based processes leave no by-products, as unreacted ozone reverts to the original oxygen molecule. Therefore, the application of ozone is widely accepted as one of the main directions for the development of sustainable and clean technologies. A number of technologies require ozone to be delivered to specific points of a production network or reactor construction. Due to space constraints, the high reactivity, and the short lifetime of ozone, the use of ozone generators, even of bench-top scale, is practically limited. This requires the development of a mini/micro-scale ozone generator which can be directly incorporated into production units. Our report presents a feasibility study of a new micro-scale reactor for ozone generation (MROG). Data on MROG calibration and indigo decomposition at different operating conditions are presented. At the selected operating conditions, with a residence time of 0.25 s, the process of ozone generation is not limited by reaction rate, and the amount of ozone produced is a function of the power applied. It was shown that the MROG is capable of producing ozone at voltage levels starting from 3.5 kV, with an ozone concentration of 5.28E-6 mol/L at 5 kV. This is in line with data presented in a numerical investigation of the MROG. It was shown that, in comparison to a conventional ozone generator, the MROG has lower power consumption at low voltages and atmospheric pressure. The MROG construction makes it applicable to submerged and dry systems. With a robust, compact design, the MROG can be used as an incorporated unit in production lines of high complexity.

Keywords: dielectric barrier discharge (DBD), micro reactor, ozone, plasma

Procedia PDF Downloads 331
1307 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification

Authors: Oumaima Khlifati, Khadija Baba

Abstract:

Pavement distress is the main factor responsible for the deterioration of road structure durability, damage to vehicles, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The auscultation of pavement distress has traditionally been based on manual surveys, which are extremely time consuming, labor intensive, and require domain expertise. Therefore, automatic distress detection is needed to reduce the cost of manual inspection and avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification using a Deep Convolutional Neural Network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to get the highest accuracy for our model, we adjust the structural optimization hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, which include batch size and learning rate. The optimization of the model is executed by checking all feasible combinations and selecting the best-performing one. After the model is optimized, performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.
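
As a hedged illustration of the exhaustive hyperparameter search described above, a minimal Keras sketch follows (simplified here to a single-label classification over five distress classes); the search space, image size, placeholder data, and network depth are assumptions, not the paper's configuration.

```python
import itertools
import numpy as np
import tensorflow as tf

# Placeholder data standing in for the asphalt pavement image dataset (5 classes).
rng = np.random.default_rng(0)
x_train, y_train = rng.random((64, 64, 64, 3), dtype=np.float32), rng.integers(0, 5, 64)
x_val, y_val = rng.random((16, 64, 64, 3), dtype=np.float32), rng.integers(0, 5, 16)

GRID = {  # illustrative search space
    "filters": [16, 32],
    "kernel_size": [3, 5],
    "learning_rate": [1e-3, 1e-4],
    "batch_size": [16, 32],
}

def build_model(filters, kernel_size, learning_rate, num_classes=5):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(filters, kernel_size, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(filters * 2, kernel_size, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Check every feasible combination and keep the configuration with the best validation accuracy.
best = (None, 0.0)
for filters, ks, lr, bs in itertools.product(*GRID.values()):
    model = build_model(filters, ks, lr)
    hist = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                     epochs=3, batch_size=bs, verbose=0)
    val_acc = max(hist.history["val_accuracy"])
    if val_acc > best[1]:
        best = ({"filters": filters, "kernel_size": ks, "lr": lr, "batch": bs}, val_acc)

print("best configuration:", best)
```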

Keywords: pavement distress, hyperparameters, automatic classification, deep learning

Procedia PDF Downloads 85
1306 A Critical Study on Unprecedented Employment Discrimination and Growth of Contractual Labour Engaged by Rail Industry in India

Authors: Munmunlisa Mohanty, K. D. Raju

Abstract:

The rail industry, one of the model employers in India, has separate national legislation (the Railways Act, 1989) to regulate its vast employment structure functioning across the country. Indian Railways is not only the premier transport industry of the country; it is Asia's most extensive rail network organisation and the world's second-largest industry functioning under one management. With the growth of the globalization of industrial products, the scope of anti-employment discrimination is no longer confined to the gender aspect only; instead, it extends to the unregularized classification of the labour force in various industrial establishments in India. The Indian rail industry has inadvertently enhanced such discriminatory employment trends by engaging contractual labour in an unprecedented manner. The engagement of contractual labour by the rail industry erodes the core 'employer-employee' relationship between rail management and contractual labour employed through contractors. This employment trend reduces the cost of production and supervision, discourages contractual labour from forming unions, and reduces its collective bargaining capacity. The primary intention of this paper is therefore to highlight the increasing scope of discriminatory employment for contractual labour engaged by Indian Railways. This paper critically analyses the diminishing employment opportunities afforded by Indian Railways to contractual labour and demands urgent attention to the probable scope of employment discrimination against contractual labour engaged by Indian Railways. The researchers used a doctrinal methodology in which primary materials (the Railways Act, the Contract Labour Act, and the Occupational Safety, Health and Working Conditions Code, 2020) and secondary data (the CAG Report 2018, Railways Employment Regulation Rules, ILO reports, etc.) were used.

Keywords: anti-employment, CAG Report, contractual labour, discrimination, Indian Railway, principal employer

Procedia PDF Downloads 166
1305 Functional Impairment in South African Children with ADHD: Design, Implementation and Evaluation of a Targeted Intervention

Authors: Mareli Fischer, Kevin G. F. Thomas

Abstract:

Although Attention-Deficit/Hyperactivity Disorder (ADHD) is one of the most prevalent childhood neurobehavioural disorders, little empirical research has been published on its clinical presentation in Africa, and, globally, few studies evaluate ADHD intervention programs that emphasize parent training. Hence, Stage 1 of this research programme aimed to describe the functional impairment of South African children with ADHD, and also sought to investigate the influence of sociodemographic variables (e.g., sex, age, socioeconomic status, family environment) and clinical variables (e.g., ADHD subtype and comorbidity) on the degree of that impairment. We used the Mini International Neuropsychiatric Interview for Children and Adolescents as a diagnostic tool, and the Child Behavior Checklist, the Strengths and Difficulties Questionnaire, and the Impairment Rating Scale as measures of functional impairment. Results from this stage of the research indicated that South African children and adolescents who meet diagnostic criteria for ADHD experience most functional impairment in the school domain, as well as in the area of social functioning. None of the measured sociodemographic variables had a significant detrimental or protective effect on how ADHD symptoms impacted on functioning. In terms of comorbidity, the presence of Major Depressive Disorder, Conduct Disorder, and Oppositional Defiant Disorder were all associated with significantly impaired overall functioning. Stage 2 of the research programme aimed to design, implement, and evaluate a child-specific intervention that targeted the primary areas of impairment identified in Stage 1. Existing literature suggests that a positive parent-training programme, in the group format, is one of the best options for cost-effective and successful ADHD intervention. Hence, the intervention took that form. Parents were taught basic behaviour analysis concepts within a supportive group context. Evaluation of the intervention’s efficacy used many of the same measures as in Stage 1, but also featured semi-structured interviews with participants and naturalistic observation of parent-child interaction. We will discuss preliminary results of that evaluation. Studying functional impairment and designing intervention plans in this way will pave the way for evidence-based treatment plans for children and adolescents diagnosed with ADHD.

Keywords: attention deficit/hyperactivity disorder, children, intervention, parenting groups

Procedia PDF Downloads 430
1304 Patient Experience in a Healthcare Setting: How Patients' Encounters Make for Better Value Co-creation

Authors: Kingsley Agyapong

Abstract:

Research conducted in recent years has delved into the concept of patient-perceived value within the context of co-creation, particularly in the realm of doctor-patient interactions within healthcare settings. However, existing scholarly discourse lacks exploration regarding the emergence of patient-derived value in the co-creation process, specifically within encounters involving patients and stakeholders such as doctors, nurses, pharmacists, and other healthcare professionals. This study aims to fill this gap by elucidating the perspectives of patients regarding the value they derive from their interactions with multiple stakeholders in the delivery of healthcare services. The fieldwork was conducted at a university clinic located in Ghana. Data collection procedures involved conducting 20 individual interviews with key informants on distinct value accrued from co-creation practices and interactions with stakeholders. The key informants consisted of patients receiving care at the university clinic during the Malaria Treatment Process. Three themes emerged from both the existing literature and the empirical data collected. The first theme, labeled as "patient value needs in co-creation," encapsulates elements such as communication effectiveness, interpersonal interaction quality, treatment efficacy, and enhancements to the overall quality of life experienced by patients during their interactions with healthcare professionals. The second theme, designated as "services that enhance patients' experience in value co-creation," pertains to patients' perceptions of services that contribute favourably to co-creation experiences, including initiatives related to health promotion and the provision of various in-house services that patients deem pertinent for augmenting their overall experiences. The third theme, titled "Challenges in the co-creation of patients' value," delineates obstacles encountered within the co-creation process, including health professionals' challenges in effectively following up with patients scheduled for review and prolonged waiting times for healthcare delivery. This study contributes to the patients' perceptions of value within the co-creation process during their interactions with service providers, particularly healthcare professionals. By gaining a deeper insight into this process, healthcare providers can enhance the delivery of patient-centered care, thereby leading to improved healthcare outcomes. The study further offers managerial implications derived from its findings, providing actionable insights for healthcare managers and policymakers aiming to optimize patient value creation in healthcare services. Furthermore, it suggests avenues for future research endeavors within healthcare settings.

Keywords: patient, healthcare, co-creation, malaria

Procedia PDF Downloads 43
1303 Entropy in a Field of Emergence in an Aspect of Linguo-Culture

Authors: Nurvadi Albekov

Abstract:

The communicative situation is a basis which designates potential models of 'constructed forms', a motivated basis of a text, for a text can be assumed to be a product of the communicative situation. It is within the field of emergence that the models of a text which can potentially be prognosticated in a certain communicative situation are designated. Every text can be assumed to be a conceptual system structured on the basis of a certain communicative situation. However, in the process of 'structuring' a certain model of a 'conceptual system', the consciousness of a recipient is able to act only within the borders of the field of emergence, for going beyond this border indicates misunderstanding of the communicative situation. On the basis of the communicative situation, we can witness the increment of meaning, where the synergizing of the informative model of communication, formed by using the invariant units of a language system, is a result of the verbalization of the communicative situation. The potential of the models of a text prognosticated within the field of emergence also depends on the communicative situation. The conception of 'the field of emergence' is interpreted as a unit of the language system having a poly-directed universal structure, implying the presence of the core, the center, and the periphery and including different levels of means of a functioning system of language, both in terms of linguistic resources and in terms of extra-linguistic factors, the interaction of which results in the increment of a text. The conception of the 'field of emergence' is considered the most promising in the analysis of texts: oral, written, printed, and electronic. As a unit of the language system, the field of emergence has several properties that predict its use in the study of a text at different levels. This work is an attempt at an analysis of entropy in a text in the aspect of the linguo-cultural code, prognosticated within the model of the field of emergence. The article describes the problem of entropy in the field of emergence caused by the influence of extra-linguistic factors. The increase of entropy is caused not only by the intrusion of language resources but also by the influence of the alien culture as a whole and by the appearance of symbols non-typical for this very culture in the field of emergence. The borrowing of alien linguo-cultural symbols into the linguo-culture of the author is a reason for increasing entropy when constructing a text, both at the level of meaning and at the level of structure. It amounts to an artificial formatting of lexical units that violates the stylistic unity of a phrase. It is noted that one of the important characteristics lowering the entropy in the field of emergence is a typical similarity of the lexical and semantic resources of the different linguo-cultures in terms of extra-linguistic factors.

Keywords: communicative situation, field of emergence, lingua-culture, entropy

Procedia PDF Downloads 358
1302 Didactic Games for the Development of Reading and Writing: Proeduca Program

Authors: Andreia Osti

Abstract:

The context experienced in the face of the COVID-19 pandemic substantially changed the way children communicate and the way literacy teaching was carried out. Officially, according to the Brazilian Institute of Geography and Statistics, children who should be literate were seriously impacted by the pandemic, and it was found that the number of illiterate children increased from 1.4 million in 2019 to 2.4 million in 2021. In this context, this work presents partial results of an intervention project in which classroom monitoring of students in the literacy phase was carried out. Methodologically, pedagogical games were developed that work on specific reading and writing content: 1) games with direct regularities and 2) games with contextual regularities. The project involves the elaboration and production of games and their application by the classroom teacher. All work focused on literacy and improving students' understanding of grapheme-phoneme relationships, aiming to improve reading and writing comprehension levels. The project, still under development, is carried out in two schools and supports 60 students. The teachers participate in the research, as they apply the games produced at the university and monitor the children's learning process. The project is developed with financial research support from FAPESP, in the public education improvement program PROEDUCA. The initial results show that children are more involved in playful activities, that games provide better moments of interaction in the classroom, and that they result in effective learning, since they constitute a different way of approaching the content to be taught. It is noteworthy that the pedagogical games produced directly involve the teaching and learning processes of curricular components - in this case, reading and writing, which are basic components of elementary education - and constitute teaching methodologies, as specific and guided activities are planned in literacy methods. In this presentation, some of the materials developed will be shown, as well as the results of the assessments carried out with the students. In relation to the Sustainable Development Goals (SDGs) linked to this project, we have 4 - Quality Education and 10 - Reduced Inequalities. The research seeks to improve public education and promote the articulation between theory and practice in the educational context, with a view to consolidating the tripod of teaching, research, and university extension and promoting a humanized education.

Keywords: didactic, teaching, games, learning, literacy

Procedia PDF Downloads 16
1301 Nondestructive Acoustic Microcharacterisation of Gamma Irradiation Effects on Sodium Oxide Borate Glass X2Na2O-X2B2O3 by Acoustic Signature

Authors: Ibrahim Al-Suraihy, Abdellaziz Doghmane, Zahia Hadjoub

Abstract:

In this work, we discuss the elastic properties obtained by using acoustic microscopes to measure Rayleigh and longitudinal wave velocities in non-irradiated and irradiated sodium borate glasses X2Na2O-X2B2O3 with 0 ≤ x ≤ 27 (mol %) at microscopic resolution. The acoustic material signatures were first measured, from which the characteristic surface velocities were determined. Longitudinal and shear ultrasonic velocities were measured in sodium borate glass samples of different compositions before and after irradiation with γ-rays. The results showed that the effect of increasing sodium oxide content on the ultrasonic velocity appeared more clearly than the effect of γ-radiation. It was found that, as the Na2O content increases, longitudinal velocities vary from 3832 to 5636 m/s in the irradiated sample and from 4010 to 5836 m/s in the sample irradiated with the high dose of 10, whereas shear velocities vary from 2223 to 3269 m/s in the irradiated sample and from 2326 m/s at low radiation to 3385 m/s in the sample irradiated with the high dose of 10. The effect of increasing sodium oxide content on ultrasonic velocity was very clear. The increase in velocity was attributed to the gradual increase in the rigidity of the glass and hence the strengthening of the network due to the gradual change of boron atoms from three-fold to four-fold coordination with oxygen atoms. The ultrasonic velocity data of the glass samples have been used to find the elastic moduli. It was found that ultrasonic velocity, elastic modulus, and microhardness increase with increasing sodium oxide content and increasing γ-radiation dose.
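
For reference, the elastic moduli mentioned above follow from the measured wave velocities and the glass density ρ through the standard isotropic relations below (generic relations, not specific to this study):

```latex
\begin{align}
G &= \rho V_S^{2}, \qquad L = \rho V_L^{2}, \qquad K = \rho\left(V_L^{2} - \tfrac{4}{3} V_S^{2}\right),\\
E &= \frac{\rho V_S^{2}\left(3 V_L^{2} - 4 V_S^{2}\right)}{V_L^{2} - V_S^{2}}, \qquad
\nu = \frac{V_L^{2} - 2 V_S^{2}}{2\left(V_L^{2} - V_S^{2}\right)},
\end{align}
```

where V_L and V_S are the longitudinal and shear velocities, G, L, K, and E are the shear, longitudinal, bulk, and Young's moduli, and ν is Poisson's ratio.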

Keywords: mechanical properties, X2Na2O-X2B2O3, acoustic signature, SAW velocities, additives, gamma-radiation dose

Procedia PDF Downloads 395
1300 Transparency Obligations under the AI Act Proposal: A Critical Legal Analysis

Authors: Michael Lognoul

Abstract:

In April 2021, the European Commission released its AI Act Proposal, the first policy proposal at the European Union level to target AI systems comprehensively, in a horizontal manner. This Proposal notably aims to achieve an ecosystem of trust in the European Union, based on respect for fundamental rights, regarding AI. Among many other requirements, the AI Act Proposal aims to impose several generic transparency obligations on all AI systems to the benefit of natural persons facing those systems (e.g., information on the AI nature of systems in case of an interaction with a human). The Proposal also provides for more stringent transparency obligations, specific to AI systems that qualify as high-risk, to the benefit of their users, notably on the characteristics, capabilities, and limitations of the AI systems they use. Against that background, this research firstly presents all such transparency requirements in turn, as well as related obligations, such as the proposed obligations on record keeping. Secondly, it focuses on a legal analysis of their scope of application, the content of the obligations, and their practical implications. On the scope of the transparency obligations tailored to high-risk AI systems, the research notes that it seems relatively narrow, given the proposed legal definition of the notion of users of AI systems. Hence, where end-users do not qualify as users, they may only receive very limited information, which might raise concern regarding the objective of the Proposal. On the content of the transparency obligations, the research highlights that the information that should benefit users of high-risk AI systems is both very broad and, from a technical perspective, very specific. The information required under those obligations therefore seems to create, prima facie, an adequate framework to ensure trust for users of high-risk AI systems. However, regarding the practical implications of these transparency obligations, concern arises from the potential illiteracy of high-risk AI system users: they might not have sufficient technical expertise to fully understand the information provided to them, despite the wording of the Proposal, which requires that information be comprehensible to its recipients (i.e., users). On this matter, the research points out that there could be, more broadly, an important divergence between the level of detail of the information required by the Proposal and the level of expertise of users of high-risk AI systems. As a conclusion, the research provides policy recommendations to tackle (part of) the issues highlighted. It notably recommends broadening the scope of the transparency requirements for high-risk AI systems to encompass end-users. It also suggests that principles of explanation, as put forward in the Guidelines for Trustworthy AI of the High-Level Expert Group, should be included in the Proposal in addition to the transparency obligations.

Keywords: AI Act Proposal, explainability of AI, high-risk AI systems, transparency requirements

Procedia PDF Downloads 304
1299 Incorporation of Growth Factors onto Hydrogels via Peptide Mediated Binding for Development of Vascular Networks

Authors: Katie Kilgour, Brendan Turner, Carly Catella, Michael Daniele, Stefano Menegatti

Abstract:

In vivo, the extracellular matrix (ECM) provides biochemical and mechanical properties that instruct resident cells to form complex tissues capable of developing and supporting vascular networks. In vitro, the development of vascular networks can be guided by biochemical patterning of substrates via the spatial distribution and display of peptides and growth factors that prompt cell adhesion, differentiation, and proliferation. We have developed a technique utilizing peptide ligands that specifically bind vascular endothelial growth factor (VEGF), erythropoietin (EPO), or angiopoietin-1 (ANG1) to spatiotemporally distribute growth factors to cells. This allows for the controlled release of each growth factor, ultimately enhancing the formation of a vascular network. Our engineered tissue constructs (ETCs) are fabricated from gelatin methacryloyl (GelMA), an ideal substrate for tailored stiffness and bio-functionality, and covalently patterned with growth factor-specific peptides. These peptides mimic growth factor receptors, facilitating the non-covalent binding of the growth factors to the ETC and allowing facile uptake by the cells. In the absence of cells, we have demonstrated the binding affinity of VEGF, EPO, and ANG1 for their respective peptides and the ability of each to be patterned onto a GelMA substrate. The ability to organize growth factors on an ETC provides different functionality to develop organized vascular networks. Our results demonstrate a method to incorporate biochemical cues into ETCs that enables spatial and temporal control of growth factors. Future efforts will investigate the cellular response by evaluating gene expression, quantifying angiogenic activity, and measuring the speed of growth factor consumption.
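
As an illustration of the kind of binding behavior characterized here, the sketch below estimates the fractional occupancy of a surface-displayed peptide by its growth factor using a simple one-site Langmuir isotherm. The dissociation constants and concentration range are hypothetical placeholders, not values reported in this work.

```python
# Sketch: one-site Langmuir binding isotherm for growth factor capture by a
# surface-displayed peptide. Kd values and concentrations below are
# hypothetical placeholders used only for illustration.

def fraction_bound(conc_nM: float, kd_nM: float) -> float:
    """Fractional occupancy of the peptide at a given free growth factor concentration."""
    return conc_nM / (kd_nM + conc_nM)

# Hypothetical dissociation constants (nM) for the three peptide-growth factor pairs.
kd = {"VEGF": 50.0, "EPO": 120.0, "ANG1": 80.0}

for gf, kd_nM in kd.items():
    occupancies = [fraction_bound(c, kd_nM) for c in (10, 100, 1000)]  # nM
    print(gf, [f"{o:.2f}" for o in occupancies])
```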

Keywords: growth factor, hydrogel, peptide, angiogenesis, vascular, patterning

Procedia PDF Downloads 158
1298 Formulation and Evaluation of Glimepiride (GMP)-Solid Nanodispersion and Nanodispersed Tablets

Authors: Ahmed Abdel Bary, Omneya Khowessah, Mojahed Al-Jamrah

Abstract:

Introduction: A major challenge in the design of oral dosage forms lies in their poor bioavailability, most frequently caused by poor solubility and low permeability. The aim of this study was to develop a solid nanodispersed tablet formulation of glimepiride to enhance its solubility and bioavailability. Methodology: Solid nanodispersions of glimepiride (GMP) were prepared using two different ratios of two different carriers, namely PEG6000 and Pluronic F127, and by adopting two different techniques, namely the solvent evaporation technique and the fusion technique. A 2³ full factorial design was adopted to investigate the influence of the formulation variables on the properties of the prepared nanodispersions. The best formula of the nanodispersed powder was formulated into tablets by direct compression. Differential scanning calorimetry (DSC) and Fourier transform infrared (FTIR) analyses were conducted to characterize the thermal behavior and surface structure, respectively. The zeta potential and particle size of the prepared glimepiride nanodispersions were determined. The prepared solid nanodispersions and solid nanodispersed tablets of GMP were evaluated in terms of pre-compression and post-compression parameters, respectively. Results: The DSC and FTIR studies revealed no interaction between GMP and any of the excipients used. Based on the values of the different pre-compression parameters, the prepared solid nanodispersion powder blends showed poor to excellent flow properties. The values of the other evaluated pre-compression parameters of the prepared solid nanodispersions were within pharmacopoeial limits. The drug content of the prepared nanodispersions ranged from 89.6 ± 0.3% to 99.9 ± 0.5%, with particle sizes ranging from 111.5 nm to 492.3 nm, and the zeta potential (ζ) values of the prepared GMP solid nanodispersion formulae (F1-F8) ranged from -8.28 ± 3.62 mV to -78 ± 11.4 mV. The in-vitro dissolution studies of the prepared solid nanodispersed tablets of GMP showed that the GMP-Pluronic F127 combination (F8) exhibited the best extent of drug release compared to the other formulations and to the marketed product. One-way ANOVA of the percent of drug released after 20 and 60 minutes showed significant differences between the GMP nanodispersed tablet formulae (F1-F8) (P<0.05). Conclusion: Preparation of glimepiride as nanodispersed particles proved to be a promising tool for enhancing its poor solubility.
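
To make the experimental design and statistical comparison concrete, the sketch below enumerates a 2³ full factorial layout and runs a one-way ANOVA on percent drug released, as described in the abstract. The factor names and all numeric values are hypothetical illustrations; the raw dissolution data are not reported here.

```python
# Sketch: 2^3 full factorial layout (eight runs, F1-F8) and a one-way ANOVA on
# percent drug released. Factor labels and replicate values are hypothetical.
from itertools import product
from scipy import stats

factors = {
    "carrier": ["PEG6000", "Pluronic F127"],
    "drug_carrier_ratio": ["1:1", "1:2"],          # assumed levels
    "method": ["solvent evaporation", "fusion"],
}

# Enumerate the eight factorial runs.
for i, run in enumerate(product(*factors.values()), start=1):
    print(f"F{i}: " + ", ".join(run))

# Hypothetical % released after 20 min (three replicates per formulation).
f1 = [42.1, 44.0, 43.2]
f4 = [58.3, 57.9, 60.1]
f8 = [81.5, 83.0, 82.2]

f_stat, p_value = stats.f_oneway(f1, f4, f8)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```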

Keywords: glimepiride, solid nanodispersion, nanodispersed tablets, poorly water-soluble drugs

Procedia PDF Downloads 484
1297 Re-Integrating Historic Lakes into the City Fabric in the Case of Vandiyur Lake, Madurai

Authors: Soumya Pugal

Abstract:

The traditional lake system of an ancient town is a network of water-holding blue spaces, constructed more than 2000 years ago by the rulers of ancient cities and maintained for centuries by the original communities. These blue spaces form a micro-watershed in which each individual tank has its own catchment, tank bed area, and command area. The lakes are connected by a common sluice from the upstream tank, which feeds the downstream tank. The lakes were of socio-economic significance in those times, but the rapid growth of the city, as well as changes in the systems of ownership of the lakes, has turned them into the backyard of urban development. Madurai is one such historic city facing the challenge of balancing the social, ecological, and economic requirements of the people with respect to the traditional lake system. To address the problems caused by the neglect of these vital ecological systems, the theory of transformative placemaking through water-sensitive urban design has been explored. This approach re-invents the relationship between the people and the urban lakes to suit modern aspirations while respecting the environment. The thesis aims to develop strategies to guide development along the major urban lake of Vandiyur, equipping the lake to meet the growing recreational requirements of the megacity and renewing the connection between people and water. The intent of the design is to understand the ecological and social structures of the lake and to find ways of using the lake to produce social cohesion within the community and to balance the city's economic and ecological requirements through transformative placemaking and water-sensitive urban design.

Keywords: urban lakes, urban blue spaces, placemaking, revitalisation of lakes, urban cohesion

Procedia PDF Downloads 70
1296 Synthesis and Characterization of pH-Responsive Nanocarriers Based on POEOMA-b-PDPA Block Copolymers for RNA Delivery

Authors: Bruno Baptista, Andreia S. R. Oliveira, Patricia V. Mendonca, Jorge F. J. Coelho, Fani Sousa

Abstract:

Drug delivery systems are designed to allow adequate protection and controlled delivery of drugs to specific locations. These systems aim to reduce side effects and control the biodistribution profile of drugs, thus improving therapeutic efficacy. This study involved the synthesis of polymeric nanoparticles based on amphiphilic diblock copolymers comprising a biocompatible hydrophilic segment, poly(oligo(ethylene oxide) methyl ether methacrylate) (POEOMA), and a pH-sensitive block, poly(2-(diisopropylamino)ethyl methacrylate) (PDPA). The objective of this work was the development of polymeric pH-responsive nanoparticles to encapsulate and carry small RNAs as a model, in order to further develop non-coding RNA delivery systems with therapeutic value. The pH responsiveness of PDPA allows the electrostatic interaction of these copolymers with nucleic acids at acidic pH, as a result of the protonation of the tertiary amine groups of this polymer at pH values below its pKa (around 6.2). Initially, the molecular weight parameters and chemical structure of the block copolymers were determined by size exclusion chromatography (SEC) and nuclear magnetic resonance (1H-NMR) spectroscopy, respectively. Complexation with small RNAs was then verified, generating polyplexes with sizes ranging from 300 to 600 nm and encapsulation efficiencies around 80%, depending on the molecular weight of the polymers, their composition, and the concentration used. The effect of pH on the morphology of the nanoparticles was evaluated by scanning electron microscopy (SEM); it was verified that at higher pH values the particles tend to lose their spherical shape. Since this work aims to develop systems for the delivery of non-coding RNAs, studies on RNA protection (contact with RNase, FBS, and trypsin) and cell viability were also carried out. It was found that the nanoparticles provide some protection against constituents of the cellular environment and show no cellular toxicity. In summary, this research work contributes to the development of pH-sensitive polymers capable of protecting and encapsulating RNA in a relatively simple and efficient manner, to be further applied in drug delivery to specific sites where pH may play a critical role, as occurs in several cancer environments.
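
To illustrate the pH-switching behavior that drives complexation, the sketch below estimates the protonated fraction of the PDPA tertiary amines as a function of pH using the Henderson-Hasselbalch relation with the pKa (~6.2) quoted in the abstract. Treating the polymer as having a single effective pKa is a simplifying assumption; real polyelectrolytes show broader, charge-dependent transitions.

```python
# Sketch: fraction of PDPA tertiary amines in the protonated (charged) state as
# a function of pH, from the Henderson-Hasselbalch relation. pKa ~6.2 is taken
# from the abstract; a single effective pKa is an assumption.

def protonated_fraction(pH: float, pKa: float = 6.2) -> float:
    """Fraction of basic groups protonated at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pH in (5.0, 6.2, 7.4):
    print(f"pH {pH:.1f}: {protonated_fraction(pH):.0%} protonated")
# Below the pKa the block is largely charged (favoring RNA complexation);
# at physiological pH 7.4 it is mostly neutral and hydrophobic.
```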

Keywords: drug delivery systems, pH-responsive polymers, POEOMA-b-PDPA, small RNAs

Procedia PDF Downloads 256
1295 The Image of Saddam Hussein and Collective Memory: The Semiotics of Ba'ath Regime's Mural in Iraq (1980-2003)

Authors: Maryam Pirdehghan

Abstract:

During the Ba'ath Party's rule in Iraq, propaganda was utilized to justify and promote Saddam Hussein's image in the collective memory as the greatest Arab leader. Consequently, urban walls were routinely covered with images of Saddam. Relying on these images, the regime aimed to evoke meanings in public opinion that would supposedly strengthen Saddam's power and reconstruct facts to legitimize his political ideology. Nonetheless, Saddam was not always portrayed with common and explicit elements; in certain periods of his rule, the paintings depicted him in an unusual context, where various historical and contemporary elements were combined against a narrative background. Therefore, an understanding of the implied socio-political references of these elements is required to fully elucidate the impact of these images on the formation of the memory and collective unconscious of the Iraqi people. To obtain such an understanding, the following questions need to be addressed: a) How was Saddam Hussein portrayed in murals during his rule? b) What elements and mythical-historical narratives are found in the paintings? c) Which of Saddam's political views were conveyed to the collective memory through the murals? Employing visual semiotics, this study reveals that during Saddam Hussein's regime the paintings were initially simple portraits but gradually transformed into narrative images characterized by a complex network of historical, mythical, and religious elements. These elements demonstrate the transformation of a secular-nationalist politician into a Muslim ruler who tried to instill three major policies in domestic and international relations: the Arabization of Iraq together with the propagation of pan-Arabist ideology (first period), the implementation of an anti-Israel policy (second period), and the implementation of an anti-American-British policy (last period).

Keywords: Ba'ath Party, Saddam Hussein, mural, Iraq, propaganda, collective memory

Procedia PDF Downloads 322