Search results for: principal objects
247 Investigation of FOXM1 Gene Expression in Breast Cancer and Its Relationship with miR-216b-5p Expression Level
Authors: Ramin Mehdiabadi, Neda Menbari, Mohammad Nazir Menbari
Abstract:
As a pressing public health concern, breast cancer stands as the predominant oncological diagnosis and principal cause of cancer-related mortality among women globally, accounting for 11.7% of new cancer incidences and 6.9% of cancer-related deaths. The annual figures indicate that approximately 230,480 women are diagnosed with breast cancer in the United States alone, with 39,520 succumbing to the disease. While developed economies have reported a deceleration in both incidence and mortality rates across various forms of cancer, including breast cancer, emerging and low-income economies manifest a contrary escalation, largely attributable to lifestyle-mediated risk factors such as tobacco usage, physical inactivity, and high caloric intake. Breast cancer is distinctly characterized by molecular heterogeneity, manifesting in specific subtypes delineated by biomarkers—Estrogen Receptors (ER), Progesterone Receptors (PR), and Human Epidermal Growth Factor Receptor 2 (HER2). These subtypes, comprising Luminal A, Luminal B, HER2-enriched, triple-negative/basal-like, and normal-like, necessitate nuanced, subtype-specific therapeutic regimens, thereby challenging the applicability of generalized treatment protocols. Within this molecular complexity, the transcription factor Forkhead Box M1 (FoxM1) has garnered attention as a significant driver of cellular proliferation, tumorigenesis, metastatic progression, and treatment resistance in a spectrum of human malignancies, including breast cancer. Concurrently, microRNAs (miRs), specifically miR-216b-5p, have been identified as post-transcriptional gene expression regulators and potential tumor suppressors. The overarching objective of this academic investigation is to explicate the multifaceted interrelationship between FoxM1 and miR-216b-5p across the disparate molecular subtypes of breast cancer. Employing a methodologically rigorous, interdisciplinary research design that incorporates cutting-edge molecular biology techniques, sophisticated bioinformatics analytics, and exhaustive meta-analyses of extant clinical data, this scholarly endeavor aims to unveil novel biomarker-specific therapeutic pathways. By doing so, this research is positioned to make a seminal contribution to the advancement of personalized, efficacious, and minimally toxic treatment paradigms, thus profoundly impacting the global efforts to ameliorate the burden of breast cancer.
Keywords: breast cancer, FoxM1, microRNAs, miR-216b-5p, gene expression
Procedia PDF Downloads 74
246 Implementing Peer Mediated Interventions with Visual Supports for Social Skills Development in a School-Based Work Setting with Secondary Students with Autism
Authors: Karen Eastman
Abstract:
More youths and young adults with autism spectrum disorder (ASD) have been entering the workforce in recent years. Historically, students with ASD struggle after leaving high school and experience lower rates of employment, with social skills continuing to be the most problematic area of concern. Special education teachers may find it challenging to identify effective combinations of evidence-based practices (EBPs) and supports to best guide these students. One EBP, Peer Mediated Instruction and Intervention (PMII), has been well documented in the literature as being effective for younger students with autism but has not been researched as much with older students and adults, particularly in work settings. A need to combine PMII with other EBPs has been identified as a way to achieve a greater positive impact than any practice alone. A multiple baseline across skills design was used in this research project with two participants in different settings. PMII was combined with Visual Supports, with typical peers being trained in both practices. PMII is an evidence-based practice used to address social concerns by training peers without disabilities in how they can provide feedback to and support the student with ASD in social interactions in structured settings. The peers without disabilities were the instructors, while the adults facilitated the social situations and provided support to both the peers and the students with ASD when needed. Because many individuals with ASD learn best with visual input, rather than using only the spoken word (verbal directions and feedback), Visual Supports were used in conjunction with PMII. Visual Supports can include written words, pictures, symbols, videos, or objects. In this project, the Visual Supports used were written social scripts, videos, Stop and Think signs, written reminder cards, a school map, and a pictorial task analysis of work tasks. Variables that may affect intervention outcomes in this project included attendance at school and school-based work settings for both the students with ASD and the peers without disabilities, as well as behaviors and responses from others in the settings. Qualitative data were also collected from observations and surveys with peers about the process and their role. Data indicated that the students with ASD responded more positively to redirection and support from their peers than to teachers and staff, and showed an increase in positive interactions with others. Those surveyed indicated a positive attitude toward and response to the use of peer interventions with visual supports.
Keywords: autism, social skills, vocational training, peer interventions
Procedia PDF Downloads 42
245 Relativity in Toddlers' Understanding of the Physical World as Key to Misconceptions in the Science Classroom
Authors: Michael Hast
Abstract:
Within their first year, infants can differentiate between objects based on their weight. By at least 5 years of age, children hold consistent weight-related misconceptions about the physical world, such as that heavy things fall faster than lighter ones because of their weight. Such misconceptions are seen as a challenge for science education since they are often highly resistant to change through instruction. Understanding the time point of emergence of such ideas could, therefore, be crucial for early science pedagogy. The paper thus discusses two studies that jointly address the issue by examining young children's search behaviour in hidden displacement tasks under consideration of relative object weight. In both studies, children were tested with a heavy or a light ball, and they either had information about one of the balls only or about both. In Study 1, 88 toddlers aged 2 to 3½ years watched a ball being dropped into a curved tube and were then allowed to search for the ball in three locations – one straight beneath the tube entrance, one where the curved tube led to, and one that corresponded to neither of the previous outcomes. Success and failure at the task were not impacted by the weight of the balls alone in any particular way. However, from around 3 years onwards, relative lightness, gained through having tactile experience of both balls beforehand, enhanced search success. Conversely, relative heaviness increased search errors such that children increasingly searched in the location immediately beneath the tube entry – known as the gravity bias. In Study 2, 60 toddlers aged 2, 2½ and 3 years watched a ball roll down a ramp and behind a screen with four doors, with a barrier placed along the ramp after one of the four doors. Toddlers were allowed to open the doors to find the ball. While search accuracy generally increased with age, relative weight did not play a role in 2-year-olds' search behaviour. Relative lightness improved 2½-year-olds' searches. At 3 years, both relative lightness and relative heaviness had a significant impact, with the former improving search accuracy and the latter reducing it. Taken together, both studies suggest that between 2 and 3 years of age, relative object weight is increasingly taken into consideration in navigating naïve physical concepts. In particular, it appears to contribute to the early emergence of misconceptions relating to object weight. This insight from developmental psychology research may have consequences for early science education and related pedagogy towards early conceptual change.
Keywords: conceptual development, early science education, intuitive physics, misconceptions, object weight
Procedia PDF Downloads 190
244 The Growth Role of Natural Gas Consumption for Developing Countries
Authors: Tae Young Jin, Jin Soo Kim
Abstract:
Carbon emissions have emerged as a global concern. The Intergovernmental Panel on Climate Change (IPCC) publishes reports about greenhouse gas (GHG) emissions regularly. The United Nations Framework Convention on Climate Change (UNFCCC) has held a conference yearly since 1995. In particular, COP21, held in December 2015, produced the Paris Agreement, which has stronger binding force than the outcomes of former COPs. The Paris Agreement was ratified as of 4 November 2016, so it finally has legal force. Participating countries set up their own Intended Nationally Determined Contributions (INDC) and will try to achieve them. Thus, carbon emissions must be reduced. The energy sector is one of the sectors most responsible for carbon emissions, and fossil fuels particularly so. Thus, this paper attempted to examine the relationship between natural gas consumption and economic growth. To achieve this, we adopted a Cobb-Douglas production function that consists of natural gas consumption, economic growth, capital, and labor, using dependent panel analysis. Data were preprocessed with Principal Component Analysis (PCA) to remove cross-sectional dependency, which can disturb the panel results. After confirming the existence of a time-trended component in each variable, we moved to a cointegration test considering cross-sectional dependency and structural breaks to describe more realistic behavior of volatile international indicators. The cointegration test result indicates that there is a long-run equilibrium relationship between the selected variables. Long-run cointegrating vector and Granger causality test results show that while natural gas consumption can contribute to economic growth in the short run, it affects growth adversely in the long run. From these results, we draw the following policy implications. First, since natural gas has a positive economic effect only in the short run, policy makers in developing countries must consider a gradual switch of the major energy source from natural gas to sustainable energy sources. Second, the technology transfer and financing business suggested by the COP must be accelerated. Acknowledgement—This work was supported by the Energy Efficiency & Resources Core Technology Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) granted financial resource from the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20152510101880) and by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-205S1A3A2046684).
Keywords: developing countries, economic growth, natural gas consumption, panel data analysis
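To make the modelling step above concrete, the following is a minimal sketch of a log-linear Cobb-Douglas regression of the kind the abstract describes, assuming simulated panel data and a plain pooled OLS estimator; the variable names and data are illustrative and do not reproduce the authors' PCA preprocessing, cointegration tests, or Granger causality analysis.

```python
# Minimal sketch: Cobb-Douglas production function in logs,
# ln(GDP) = a + b1*ln(gas) + b2*ln(capital) + b3*ln(labor) + error.
# The panel data generated here are purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_obs = 10 * 25  # e.g. 10 countries observed over 25 years
panel = pd.DataFrame({
    "gdp":     rng.lognormal(10, 0.5, n_obs),
    "gas":     rng.lognormal(5, 0.4, n_obs),
    "capital": rng.lognormal(8, 0.6, n_obs),
    "labor":   rng.lognormal(7, 0.3, n_obs),
})

# Log-transform and estimate the elasticities by pooled OLS.
X = sm.add_constant(np.log(panel[["gas", "capital", "labor"]]))
y = np.log(panel["gdp"])
model = sm.OLS(y, X).fit()
print(model.summary())
```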
Procedia PDF Downloads 234
243 Application of Micro-Tunneling Technique to Rectify Tilted Structures Constructed on Cohesive Soil
Authors: Yasser R. Tawfic, Mohamed A. Eid
Abstract:
Foundation differential settlement and supported structure tilting is an occasionally occurring engineering problem. This may be caused by overloading, changes in ground soil properties or unsupported nearby excavations. Engineering thinking points directly toward the logical solution for such a problem: uplifting the settled side. This can be achieved with deep foundation elements such as micro-piles and macro-piles™, jacked piers and helical piers, jet grouted soil-crete columns, compaction grout columns, cement grouting or chemical grouting, or traditional pit underpinning with concrete and mortar. Although some of these techniques offer economical, fast and low-noise solutions, many of them are quite the contrary. For tilted structures with limited inclination, it may be much easier to cause a balancing settlement on the less-settled side, which must be done carefully at a proper rate. This principle was applied in the stabilization of the Leaning Tower of Pisa through soil extraction from the ground surface. In this research, the authors attempt to introduce a new solution with a different point of view, presenting the micro-tunneling technique as an intended cause of ground deformation. In general, micro-tunneling is expected to induce limited ground deformations. Thus, the researchers propose to apply the technique to form small, unsupported holes in the ground to produce the target deformations. This shall be done in four phases: (1) application of one or more micro-tunnels, depending on the existing differential settlement value, under the raised side of the tilted structure; (2) for each individual tunnel, the lining shall be pulled out from both sides (from the jacking and receiving shafts) at a slow rate; (3) if required, according to calculations and site records, an additional surface load can be applied on the raised foundation side; (4) finally, strengthening soil grouting shall be applied for stabilization after adjustment. A finite element based numerical model is presented to simulate the proposed construction phases for different tunneling positions and tunnel groups. For each case, the surface settlements are calculated and induced plasticity points are checked. These results show the impact of the suggested procedure on the tilted structure and its feasibility. Comparing results also shows the importance of position selection and the gradual effect of tunnel groups. Thus, a new engineering solution is presented to one of the structural and geotechnical engineering challenges.
Keywords: differential settlement, micro-tunneling, soil-structure interaction, tilted structures
Procedia PDF Downloads 208
242 Transformation of Periodic Fuzzy Membership Function to Discrete Polygon on Circular Polar Coordinates
Authors: Takashi Mitsuishi
Abstract:
Fuzzy logic has gained acceptance in recent years in the fields of social sciences and humanities, such as psychology and linguistics, because it can manage the fuzziness of words and human subjectivity in a logical manner. However, the major field of application of fuzzy logic is control engineering, as it is a part of set theory and mathematical logic. The Mamdani method, which is the most popular technique for approximate reasoning in the field of fuzzy control, is one of the ways to numerically represent the control afforded by human language and sensitivity, and it has been applied in various practical control plants. Fuzzy logic has been gradually developing as an artificial intelligence in different applications such as neural networks, expert systems, and operations research. The objects of inference vary for different application fields. Some of these include time, angle, color, symptom and medical condition, whose fuzzy membership function is a periodic function. In the defuzzification stage, the domain of the membership function should be unique to obtain uniqueness of its defuzzified value. However, if the domain of the periodic membership function is determined as unique, an unintuitive defuzzified value may be obtained as the inference result using the center of gravity method. Therefore, the authors propose a method of circular-polar-coordinates transformation and defuzzification of periodic membership functions in this study. The transformation to circular polar coordinates simplifies the domain of the periodic membership function. The defuzzified value in circular polar coordinates is an argument (angle). Furthermore, the argument must be calculated from a closed plane figure, which is the periodic membership function represented on the circular polar coordinates. If the closed plane figure is continuous, following the continuity of the membership function, a significant amount of computation is required. Therefore, to simplify the practical example and significantly reduce the computational complexity, we have discretized the continuous interval and the membership function in this study. The following three methods are proposed to decide the argument from the discrete polygon into which the continuous plane figure is transformed. The first method provides the argument of a straight line passing through the origin and through the coordinate of the arithmetic mean of the coordinates of the polygon vertices (physical center of gravity). The second provides the argument of a straight line passing through the origin and the coordinate of the geometric center of gravity of the polygon. The third provides the argument of a straight line passing through the origin and bisecting the perimeter of the polygon (or of the closed continuous plane figure).
Keywords: defuzzification, fuzzy membership function, periodic function, polar coordinates transformation
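The first two argument-extraction methods described above can be sketched as follows; this is a minimal illustration assuming a simple sampled membership function and the mapping radius = membership value, angle = domain value, not the authors' implementation.

```python
# Sketch of two ways to obtain the defuzzified argument (angle) from a
# discrete polygon built from a periodic membership function on polar
# coordinates. The example membership function is an illustrative assumption.
import math

def vertex_mean_argument(vertices):
    """Argument of the line through the origin and the mean of the vertices
    (the 'physical center of gravity' of the vertex set)."""
    mx = sum(x for x, _ in vertices) / len(vertices)
    my = sum(y for _, y in vertices) / len(vertices)
    return math.atan2(my, mx)

def area_centroid_argument(vertices):
    """Argument of the line through the origin and the polygon's area centroid
    (the 'geometric center of gravity'), via the shoelace formula."""
    a = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    cx /= 6.0 * a
    cy /= 6.0 * a
    return math.atan2(cy, cx)

# Periodic membership function mu(theta) sampled on [0, 2*pi), mapped onto
# polar coordinates and treated as a discrete polygon.
thetas = [2 * math.pi * k / 36 for k in range(36)]
mu = [0.5 + 0.5 * math.cos(t - 1.0) for t in thetas]   # assumed example
polygon = [(m * math.cos(t), m * math.sin(t)) for m, t in zip(mu, thetas)]

print(vertex_mean_argument(polygon), area_centroid_argument(polygon))
```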
Procedia PDF Downloads 363
241 The Quantum Theory of Music and Human Languages
Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer
Abstract:
The main hypotheses proposed around the definition of the syllable and of music, and of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies their interest: the debate raises questions that are at the heart of theories on language. It is an inventive, original, and innovative research thesis. It is a contribution to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When you apply this theory to any text of a folksong in a world tone language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance, but also the exact speech of this language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. As experimentation confirming the theorization, I designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). The translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody of blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
Keywords: language, music, sciences, quantum entanglement
Procedia PDF Downloads 77
240 Comparison of Extracellular miRNA from Different Lymphocyte Cell Lines and Isolation Methods
Authors: Christelle E. Chua, Alicia L. Ho
Abstract:
The development of a panel of differential gene expression signatures has been of interest in the field of biomarker discovery for radiation exposure. In the absence of exposed human subjects, lymphocyte cell lines have often been used as a surrogate for human whole blood when performing ex vivo irradiation studies. The extent of variation between different lymphocyte cell lines is currently unclear, especially with regard to the expression of extracellular miRNA. This study compares the expression profile of extracellular miRNA isolated from different lymphocyte cell lines. It also compares the profile of miRNA obtained when different exosome isolation kits are used. Lymphocyte cell lines were created using lymphocytes isolated from healthy adult males of similar racial descent (Chinese American and Chinese Singaporean) and immortalised with Epstein-Barr virus. The cell lines were cultured in exosome-free cell culture media for 72 h, and the cell culture supernatant was removed for exosome isolation. Two exosome isolation kits were used. Total exosome isolation reagent (TEIR, ThermoFisher) is a polyethylene glycol (PEG)-based exosome precipitation kit, while ExoSpin (ES, Cell Guidance Systems) is a PEG-based exosome precipitation kit that includes an additional size exclusion chromatography step. miRNA from the isolated exosomes was extracted using the miRNeasy Mini Kit (Qiagen) and analysed using the nCounter miRNA assay (NanoString). Principal component analysis (PCA) results suggested that the overall extracellular miRNA expression profile differed between the lymphocyte cell line originating from the Chinese American donor and the cell line originating from the Chinese Singaporean donor. As the gender, age and racial origins of both donors are similar, this may suggest that there are other genetic or epigenetic differences that account for the variation in extracellular miRNA gene expression in lymphocyte cell lines. However, statistical analysis showed that only 3 miRNA genes had a fold difference > 2 at p < 0.05, suggesting that the differences may not be great enough to impact overall conclusions drawn from different cell lines. Subsequent analysis using cell lines from other donors will give further insight into the reproducibility of results when different cell lines are used. PCA results also suggested that the method of exosome isolation impacted the expression profile; 107 miRNA had a fold difference > 2 at p < 0.05. This suggests that the inclusion of an additional size exclusion chromatography step altered the subset of the extracellular vesicles that were isolated. In conclusion, these results suggest that extracellular miRNA can be isolated and analysed from exosomes derived from lymphocyte cell lines. However, care must be taken in the choice of cell line and method of exosome isolation used.
Keywords: biomarker, extracellular miRNA, isolation methods, lymphocyte cell line
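As a simple illustration of the fold-difference screen mentioned above (fold difference > 2 at p < 0.05), the following sketch applies such a filter to simulated expression matrices; the replicate counts, distributions, and use of a plain t-test are assumptions for demonstration and do not reproduce the nCounter analysis pipeline.

```python
# Sketch of a differential-expression filter: flag miRNAs with fold
# difference > 2 and p < 0.05 between two groups of replicate profiles.
# The simulated counts below are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.lognormal(5, 0.3, size=(4, 100))   # 4 replicates x 100 miRNAs
group_b = rng.lognormal(5, 0.3, size=(4, 100))

fold = group_b.mean(axis=0) / group_a.mean(axis=0)
_, pvals = stats.ttest_ind(group_a, group_b, axis=0)

# Symmetric fold-difference criterion (up- or down-regulated) plus p-value cut.
significant = (np.maximum(fold, 1 / fold) > 2) & (pvals < 0.05)
print("miRNAs passing the filter:", np.flatnonzero(significant))
```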
Procedia PDF Downloads 199
239 Electroforming of 3D Digital Light Processing Printed Sculptures Used as a Low Cost Option for Microcasting
Authors: Cecile Meier, Drago Diaz Aleman, Itahisa Perez Conesa, Jose Luis Saorin Perez, Jorge De La Torre Cantero
Abstract:
In this work, two ways of creating small-sized metal sculptures are proposed: the first by means of microcasting and the second by electroforming, from models printed in 3D using an FDM (Fused Deposition Modeling) printer or a DLP (Digital Light Processing) printer. It is viable to replace the wax in the processes of the artistic foundry with 3D printed objects. In this technique, the digital models are manufactured with a low-cost FDM 3D printer in polylactic acid (PLA). This material is used because its properties make it a viable substitute for wax within the processes of artistic casting with the lost wax technique through Ceramic Shell casting. This technique consists of covering a sculpture of wax, or in this case PLA, with several layers of thermoresistant material. This material is heated to melt the PLA, obtaining an empty mold that is later filled with the molten metal. It is verified that the PLA models reduce the cost and time compared with hand modeling of the wax. In addition, one can manufacture parts with 3D printing that are not possible to create with manual techniques. However, the sculptures created with this technique have a size limit. The problem is that when pieces printed with PLA are very small, they lose detail, and the laminar texture hides the shape of the piece. A DLP-type printer allows obtaining more detailed and smaller pieces than the FDM. Such small models are quite difficult and complex to melt using the lost wax technique of Ceramic Shell casting. But, as an alternative, there are microcasting and electroforming, which are specialized in creating small metal pieces such as jewelry. Microcasting is a variant of the lost wax technique that consists of introducing the model into a cylinder in which the refractory material is also poured. The molds are heated in an oven to melt the model and bake the molds. Finally, the metal is poured into the still hot cylinders, which rotate in a machine at high speed to properly distribute all the metal. Because microcasting requires expensive material and machinery to melt a piece of metal, electroforming is an alternative to this process. Electroforming uses models in different materials; for this study, micro-sculptures printed in 3D are used. These are subjected to an electroforming bath that covers the pieces with a very thin layer of metal. This work investigates the recommended size for using 3D printers, both with PLA and resin, and first tests are being done to validate the electroforming process for micro-sculptures printed in resin using a DLP printer.
Keywords: sculptures, DLP 3D printer, microcasting, electroforming, fused deposition modeling
Procedia PDF Downloads 135
238 Foundations for Global Interactions: The Theoretical Underpinnings of Understanding Others
Authors: Randall E. Osborne
Abstract:
In a course on International Psychology, 8 theoretical perspectives (Critical Psychology, Liberation Psychology, Post-Modernism, Social Constructivism, Social Identity Theory, Social Reduction Theory, Symbolic Interactionism, and Vygotsky's Sociocultural Theory) are used as a framework for getting students to understand the concept of and need for Globalization. One of critical psychology's main criticisms of conventional psychology is that it fails to consider, or deliberately ignores, the way power differences between social classes and groups can impact the mental and physical well-being of individuals or groups of people. Liberation psychology, also known as liberation social psychology or psicología social de la liberación, is an approach to psychological science that aims to understand the psychology of oppressed and impoverished communities by addressing the oppressive sociopolitical structure in which they exist. Postmodernism is largely a reaction to the assumed certainty of scientific, or objective, efforts to explain reality. It stems from a recognition that reality is not simply mirrored in human understanding of it, but rather is constructed as the mind tries to understand its own particular and personal reality. Lev Vygotsky argued that all cognitive functions originate in, and must therefore be explained as products of, social interactions, and that learning is not simply the assimilation and accommodation of new knowledge by learners. Social Identity Theory discusses the implications of social identity for human interactions with and assumptions about other people. It suggests that people: (1) categorize—people find it helpful (humans might be perceived as having a need) to place people and objects into categories, (2) identify—people align themselves with groups and gain identity and self-esteem from it, and (3) compare—people compare themselves to others. Social reductionism argues that all behavior and experiences can be explained simply by the effect of groups on the individual. Symbolic interaction theory focuses attention on the way that people interact through symbols: words, gestures, rules, and roles. Meaning evolves from their interactions with their environment and with other people. Vygotsky's sociocultural theory of human learning describes learning as a social process and the origination of human intelligence in society or culture. The major theme of Vygotsky's theoretical framework is that social interaction plays a fundamental role in the development of cognition. This presentation will discuss how these theoretical perspectives are incorporated into a course on International Psychology, a course on the Politics of Hate, and a course on the Psychology of Prejudice, Discrimination and Hate to promote student thinking in a more 'global' manner.
Keywords: globalization, international psychology, society and culture, teaching interculturally
Procedia PDF Downloads 252
237 Quantification of Lawsone and Adulterants in Commercial Henna Products
Authors: Ruchi B. Semwal, Deepak K. Semwal, Thobile A. N. Nkosi, Alvaro M. Viljoen
Abstract:
Lawsonia inermis L. (Lythraceae), commonly known as henna, has many medicinal benefits and is used as a remedy for the treatment of diarrhoea, cancer, inflammation, headache, jaundice and skin diseases in folk medicine. Although henna has long been used for hair dyeing and temporary tattooing, henna body art has become popular over the last 15 years and has changed from being a traditional bridal and festival adornment to an exotic fashion accessory. The naphthoquinone lawsone is one of the main constituents of the plant and is responsible for its dyeing property. Henna leaves typically contain 1.8–1.9% lawsone, which is used as a marker compound for the quality control of henna products. Adulteration of henna with various toxic chemicals such as p-phenylenediamine, p-methylaminophenol, p-aminobenzene and p-toluenediamine, to produce a variety of colours, is very common and has resulted in serious health problems, including allergic reactions. This study aims to assess the quality of henna products collected from different parts of the world by determining the lawsone content, as well as the concentrations of any adulterants present. Ultra high performance liquid chromatography-mass spectrometry (UPLC-MS) was used to determine the lawsone concentrations in 172 henna products. Separation of the chemical constituents was achieved on an Acquity UPLC BEH C18 column using gradient elution (0.1% formic acid and acetonitrile). The results from UPLC-MS revealed that of the 172 henna products, 11 contained 1.0–1.8% lawsone, 110 contained 0.1–0.9% lawsone, whereas 51 samples did not contain detectable levels of lawsone. High performance thin layer chromatography was investigated as a cheaper, more rapid technique for the quality control of henna in relation to the lawsone content. The samples were applied using an automatic TLC Sampler 4 (CAMAG) to pre-coated silica plates, which were subsequently developed with acetic acid, acetone and toluene (0.5:1.0:8.5 v/v). A Reprostar 3 digital system allowed the images to be captured. The results obtained corresponded to those from the UPLC-MS analysis. Vibrational spectroscopy analysis (MIR or NIR) of the powdered henna, followed by chemometric modelling of the data, indicates that this technique shows promise as an alternative quality control method. Principal component analysis (PCA) was used to investigate the data by observing clustering and identifying outliers. Partial least squares (PLS) multivariate calibration models were constructed for the quantification of lawsone. In conclusion, only a few of the samples analysed contain lawsone in high concentrations, indicating that they are of poor quality. Currently, the presence of adulterants that may have been added to enhance the dyeing properties of the products is being investigated.
Keywords: Lawsonia inermis, paraphenylenediamine, temporary tattooing, lawsone
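The PLS calibration step mentioned above can be sketched as follows; the simulated spectra, the number of latent variables, and the train/test split are assumptions for demonstration, not the authors' NIR/MIR data or their validated model.

```python
# Sketch of a PLS calibration predicting lawsone content from spectra.
# The spectra and concentrations are simulated for illustration only.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_samples, n_wavelengths = 120, 200
lawsone = rng.uniform(0.0, 1.8, n_samples)                 # % lawsone
spectra = np.outer(lawsone, rng.normal(1.0, 0.1, n_wavelengths))
spectra += rng.normal(0.0, 0.05, spectra.shape)            # instrument noise

X_train, X_test, y_train, y_test = train_test_split(spectra, lawsone, random_state=0)
pls = PLSRegression(n_components=5).fit(X_train, y_train)
print("R^2 on held-out samples:", pls.score(X_test, y_test))
```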
Procedia PDF Downloads 459
236 A Methodology to Virtualize Technical Engineering Laboratories: MastrLAB-VR
Authors: Ivana Scidà, Francesco Alotto, Anna Osello
Abstract:
Due to the importance given today to innovation, the education sector is evolving thanks to digital technologies. Virtual Reality (VR) can be a potential teaching tool offering many advantages in the field of training and education, as it allows theoretical knowledge and practical skills to be acquired through an immersive experience in less time than the traditional educational process. These assumptions lay the foundations for a new educational environment, involving and stimulating for students. Starting from the objective of strengthening the innovative teaching offer and the learning processes, the case study of the research concerns the digitalization of MastrLAB, a High Quality Laboratory (HQL) belonging to the Department of Structural, Building and Geotechnical Engineering (DISEG) of the Polytechnic of Turin, a center specialized in experimental mechanical tests on traditional and innovative building materials and on the structures made with them. MastrLAB-VR has been developed, an innovative training tool designed with the aim of educating the class, in total safety, on the techniques of use of machinery, thus reducing the dangers arising from the performance of potentially dangerous activities. The virtual laboratory, dedicated to the students of the Building and Civil Engineering Courses of the Polytechnic of Turin, has been designed to simulate in an absolutely realistic way the experimental approach to the structural tests foreseen in their courses of study: from tensile tests to relaxation tests, from steel qualification tests to resilience tests on elements at environmental conditions or at characterizing temperatures. The research work proposes a methodology for the virtualization of technical laboratories through the application of Building Information Modelling (BIM), starting from the creation of a digital model. The process includes the creation of an independent application, which, with Oculus Rift technology, will allow the user to explore the environment and interact with objects through the use of joypads. The application has been tested as a prototype on volunteers, obtaining results related to the acquisition of the educational notions presented in the experience through a virtual multiple-choice quiz and an overall evaluation report. The results have shown that MastrLAB-VR is suitable for both beginners and experts and will be adopted experimentally for other laboratories of the University departments.
Keywords: building information modelling, digital learning, education, virtual laboratory, virtual reality
Procedia PDF Downloads 131
235 Imaging of Underground Targets with an Improved Back-Projection Algorithm
Authors: Alireza Akbari, Gelareh Babaee Khou
Abstract:
Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow, small subsurface targets such as landmines and unexploded ordnance, and also for imaging behind walls in security applications. For the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of the buried objects is essential in most GPR applications. Therefore, the hyperbolic curve behavior in the space-time GPR image is usually transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display the information of the spatial location and the reflectivity of an underground object. Therefore, the main challenge of GPR imaging is to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, first, the standard back-projection (BP) algorithm adapted to GPR imaging applications was used for image reconstruction. The standard BP algorithm is limited against strong noise and produces many artifacts, which have adverse effects on subsequent tasks such as target detection. Thus, an improved BP based on cross-correlation between the received signals is proposed for decreasing noise and suppressing artifacts. To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared to the standard BP algorithm, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed that it has superior artifact suppression and produces images with high quality and resolution. In order to quantitatively describe the imaging results with respect to artifact suppression, a focusing parameter was evaluated.
Keywords: algorithm, back-projection, GPR, remote sensing
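For readers unfamiliar with back-projection, the following sketch shows a plain delay-and-sum BP for monostatic B-scan data, which is the baseline the abstract improves upon; the grid, propagation velocity, and synthetic point scatterer are illustrative assumptions, and the cross-correlation weighting of the improved algorithm is not reproduced here.

```python
# Minimal delay-and-sum back-projection for a monostatic GPR B-scan.
import numpy as np

def back_projection(bscan, antenna_x, t, xs, zs, v=1.0e8):
    """bscan: (n_traces, n_samples); antenna_x: trace positions [m];
    t: fast-time axis [s]; xs, zs: image grid [m]; v: assumed wave speed [m/s]."""
    image = np.zeros((len(zs), len(xs)))
    for i, ax in enumerate(antenna_x):
        for ix, x in enumerate(xs):
            for iz, z in enumerate(zs):
                delay = 2.0 * np.hypot(x - ax, z) / v        # two-way travel time
                k = np.searchsorted(t, delay)
                if k < bscan.shape[1]:
                    image[iz, ix] += bscan[i, k]             # coherent summation
    return image

# Tiny synthetic example: one point scatterer at x = 1.0 m, depth 0.5 m.
antenna_x = np.linspace(0, 2, 21)
t = np.linspace(0, 4e-8, 256)
bscan = np.zeros((len(antenna_x), len(t)))
for i, ax in enumerate(antenna_x):
    delay = 2.0 * np.hypot(1.0 - ax, 0.5) / 1.0e8
    bscan[i, np.searchsorted(t, delay)] = 1.0

img = back_projection(bscan, antenna_x, t, np.linspace(0, 2, 41), np.linspace(0.1, 1.0, 19))
print("focus at grid index:", np.unravel_index(img.argmax(), img.shape))
```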
Procedia PDF Downloads 452
234 Influence of Non-Formal Physical Education Curriculum, Based on Olympic Pedagogy, for 11-13 Years Old Children Physical Development
Authors: Asta Sarkauskiene
Abstract:
The pedagogy of Olympic education is based upon the main idea of P. de Coubertin that physical education can and has to support the education of the perfect person, the ideal aspired to in archaic Greece, when the human being was viewed as one whole composed of three interconnected functions: physical, psychical and spiritual. The following research question was formulated in the present study: What curriculum of non-formal physical education in school can positively influence the physical development of 11-13 years old children? The aim of this study was to formulate and implement a curriculum of non-formal physical education, based on Olympic pedagogy, and assess its effectiveness for the physical development of 11-13 years old children. The research was conducted in two stages. In the first stage, 51 fifth grade children (Mage = 11.3 years) participated in a quasi-experiment for two years. Children were organized into 2 groups: E and C. Both groups shared the duration (1 hour) and frequency (twice a week) but differed in their education curriculum. The experimental group (E) worked under the program developed by us. Priorities of the E group were: training of physical powers in unity with psychical and spiritual powers; integral growth of physical development, physical activity, physical health, and physical fitness; integration of children with lower health and physical fitness levels; content that corresponds to children's needs, abilities, and physical and functional powers. The control group (C) worked according to NFPE programs prepared by teachers and approved by the school principal and the school methodical group. Priorities of the C group were: teaching and development of motion actions; training of physical qualities; training of the most physically capable children. In the second stage (after four years), 72 sixth graders (Mage = 13.00) from the same comprehensive schools participated in the research. Children were organized into first and second groups. The curriculum of the first group was modified, and that of the second was the same as that of group C. In both groups, anthropometric (height, weight, BMI) and physiometric (VC, right and left handgrip strength) measurements were conducted. A dependent t test indicated that over two years, E and C group girls' and boys' height, weight, and right and left handgrip strength indices increased significantly, p < 0.05. E group girls' and boys' BMI indices did not change significantly, p > 0.05, i.e. the height and weight ratio of girls who participated in NFPE in school became more proportional. C group girls' VC indices did not differ significantly, p > 0.05. An independent t test indicated that in the first and second research stages, differences in anthropometric and physiometric measurements between the groups were not significant, p > 0.05. The formulated and implemented curriculum of non-formal physical education in school, based on Olympic pedagogy, had the biggest positive influence on decreasing 11-13 years old children's BMI and increasing their VC.
Keywords: non-formal physical education, olympic pedagogy, physical development, health sciences
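As a small illustration of the two tests named above, the following sketch runs a dependent (paired) t test for within-group change and an independent t test for a between-group comparison; the sample sizes and height values are illustrative assumptions, not the study data.

```python
# Paired t test for within-group change over two years and independent t test
# for the between-group comparison, using simulated height values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
e_before = rng.normal(148, 6, 25)                 # E group at baseline [cm]
e_after = e_before + rng.normal(8, 2, 25)         # E group after two years
c_after = rng.normal(156, 6, 26)                  # C group after two years

t_dep, p_dep = stats.ttest_rel(e_after, e_before)   # dependent (paired) test
t_ind, p_ind = stats.ttest_ind(e_after, c_after)    # independent test, E vs C
print(f"paired: t={t_dep:.2f}, p={p_dep:.4f}; independent: t={t_ind:.2f}, p={p_ind:.4f}")
```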
Procedia PDF Downloads 563
233 Governance Models of Higher Education Institutions
Authors: Zoran Barac, Maja Martinovic
Abstract:
Higher Education Institutions (HEIs) are a special kind of organization, with a unique purpose and combination of actors. From the societal point of view, they are central institutions in society that are involved in the activities of education, research, and innovation. At the same time, their societal function gives rise to complex relationships between the involved actors, ranging from students, faculty and administration, the business community and corporate partners, and government agencies, to the general public. HEIs are also particularly interesting as objects of governance research because of their unique public purpose and combination of stakeholders. Furthermore, they are a special type of institution from an organizational viewpoint. HEIs are often described as "loosely coupled systems" or "organized anarchies", which implies the challenging nature of their governance models. Governance models of HEIs describe the roles, constellations, and modes of interaction of the involved actors in the process of strategic direction and holistic control of institutions, taking into account each particular context. Many governance models of HEIs are primarily based on the balance of power among the involved actors. Besides the actors' power and influence, leadership style and environmental contingency can impact the governance model of an HEI. Analyzed through the frameworks of institutional and contingency theories, HEI governance models originate as outcomes of institutional and contingency adaptation. HEIs tend to fit the institutional context, which comprises formal and informal institutional rules. By fitting the institutional context, HEIs converge toward each other in terms of their structures, policies, and practices. On the other hand, the contingency framework implies that there is no governance model that is suitable for all situations. Consequently, the contingency approach begins with identifying contingency variables that might impact a particular governance model. In order to be effective, the governance model should fit the contingency variables. While the institutional context creates converging forces on HEI governance actors and approaches, contingency variables are the causes of divergence of actors' behavior and governance models. Finally, an HEI governance model is a balanced adaptation of the HEI to the institutional context and contingency variables. It also encompasses the roles, constellations, and modes of interaction of the involved actors influenced by institutional and contingency pressures. The actors' adaptation to the institutional context brings the benefits of legitimacy and resources. On the other hand, the actors' adaptation to the contingency variables brings high performance and effectiveness. The HEI governance models outlined and analyzed in this paper are the collegial, bureaucratic, entrepreneurial, network, professional, political, anarchical, cybernetic, trustee, stakeholder, and amalgam models.
Keywords: governance, governance models, higher education institutions, institutional context, situational context
Procedia PDF Downloads 336
232 Factors Influencing Family Resilience and Quality of Life in Pediatric Cancer Patients and Their Caregivers: A Cluster Analysis
Authors: Li Wang, Dan Shu, Shiguang Pang, Lixiu Wang, Bing Xiang Yang, Qian Liu
Abstract:
Background: Cancer is one of the most severe diseases of childhood; long-term treatment and its side effects significantly impact the patient's physical, psychological and social functioning and quality of life, while also placing substantial physical and psychological burdens on caregivers and families. Family resilience is crucial for children with cancer, helping them cope better with the disease and supporting the family in facing challenges together. As a family-level variable, family resilience requires information from multiple family members. However, to the best of our knowledge, there is currently no research investigating family resilience from both the perspective of pediatric cancer patients and that of their caregivers. Therefore, this study aims to investigate the family resilience and quality of life of pediatric cancer patients from a patient–caregiver dyadic perspective. Methods: A total of 149 dyads of pediatric cancer patients and their principal caregivers were recruited from the oncology departments of 4 tertiary hospitals in Wuhan and Taiyuan, China. All participants completed questionnaires that identified their demographic and clinical characteristics and assessed family resilience and quality of life for both the patients and their caregivers. K-means cluster analysis was used to identify different clusters of family resilience based on the reports from patients and caregivers. Multivariate logistic regression and linear regression were used to analyze the factors influencing family resilience and quality of life, as well as the relationship between the two. Results: Three clusters of family resilience were identified: a cluster of high family resilience (HR), a cluster of low family resilience (LR), and a cluster of discrepant family resilience (DR). Most (67.1%) families fell into the cluster with low resilience. Characteristics such as the type of caregiver and the patient's perceived social support differed among the three clusters. Compared to the LR group, families where the mother is the caregiver and where the patient has high social support were more likely to be assigned to the HR cluster. The quality of life for caregivers was consistently highest in the HR cluster and lowest in the LR cluster. The patient's quality of life was not related to family resilience. In the linear regression analysis of the patient's quality of life, patients who are the first-born had higher quality of life, while those living with their parents had lower quality of life. The participants' characteristics were not associated with the quality of life for caregivers. Conclusions: In most families, family resilience was low. Families with maternal caregivers and patients receiving high levels of social support were more likely to show higher levels of family resilience. Family resilience was linked to the quality of life of caregivers of pediatric cancer patients. The clinical implication of these findings is that healthcare and social support organizations should prioritize and support the participation of mothers in caregiving responsibilities. Furthermore, they should assist families in accessing social support to enhance family resilience. This study also emphasizes the importance of promoting family resilience for enhancing family health and happiness, as well as improving the quality of life for caregivers.
Keywords: pediatric cancer, cluster analysis, family resilience, quality of life
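The k-means step described in the Methods can be sketched as follows, clustering dyads on patient-reported and caregiver-reported family resilience scores into three clusters; the score scale and simulated values are illustrative assumptions, not the study's actual clustering variables or data.

```python
# K-means clustering of 149 dyads on two resilience scores (simulated).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
patient_scores = rng.normal(60, 12, 149)
caregiver_scores = patient_scores + rng.normal(0, 10, 149)   # loosely coupled reports
X = np.column_stack([patient_scores, caregiver_scores])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))
print("cluster centers (patient, caregiver):\n", km.cluster_centers_)
```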
Procedia PDF Downloads 37
231 Approach to Honey Volatiles' Profiling by Gas Chromatography and Mass Spectrometry
Authors: Igor Jerkovic
Abstract:
The biodiversity of flora provides many different nectar sources for bees. Unifloral honeys possess distinctive flavours, mainly derived from their nectar sources (characteristic volatile organic components (VOCs)). Specific or nonspecific VOCs (chemical markers) could be used for unifloral honey characterisation as an addition to melissopalynological analysis. The main honey volatiles belong, in general, to three principal categories: terpenes, norisoprenoids, and benzene derivatives. Some of these substances have been described as characteristic of the floral source, while other compounds, like several alcohols, branched aldehydes, and furan derivatives, may be related to the microbial purity of the honey and its processing and storage conditions. Selection of the extraction method for honey volatiles profiling should consider that heating of the honey produces artefacts, and therefore conventional methods of VOC isolation (such as hydrodistillation) cannot be applied to honey. A two-way approach for the isolation of honey VOCs was applied, using headspace solid-phase microextraction (HS-SPME) and ultrasonic solvent extraction (USE). The extracts were analysed by gas chromatography and mass spectrometry (GC-MS). HS-SPME (with fibers of different polarity, such as polydimethylsiloxane/divinylbenzene (PDMS/DVB) or divinylbenzene/carboxene/polydimethylsiloxane (DVB/CAR/PDMS)) enabled isolation of highly volatile headspace VOCs of the honey samples. Among them, some characteristic or specific compounds can be found, such as 3,4-dihydro-3-oxoedulan (in Centaurea cyanus L. honey) or 1H-indole, methyl anthranilate, and cis-jasmone (in Citrus unshiu Marc. honey). USE with different solvents (mainly dichloromethane or the mixture pentane : diethyl ether 1:2 v/v) enabled isolation of less volatile and semi-volatile VOCs of the honey samples. Characteristic compounds from C. unshiu honey extracts were caffeine, 1H-indole, 1,3-dihydro-2H-indol-2-one, methyl anthranilate, and phenylacetonitrile. Sometimes, the selection of a solvent sequence was useful for more complete profiling, such as sequence I: pentane → diethyl ether, or sequence II: pentane → pentane/diethyl ether (1:2, v/v) → dichloromethane. The extracts with diethyl ether contained hydroquinone and 4-hydroxybenzoic acid as the major compounds, while (E)-4-(r-1',t-2',c-4'-trihydroxy-2',6',6'-trimethylcyclohexyl)but-3-en-2-one predominated in dichloromethane extracts of Allium ursinum L. honey. With this two-way approach, it was possible to obtain a more detailed insight into the honey volatile and semi-volatile compounds and to minimize the risk of compound discrimination due to partial extraction, which is of significant importance for complete honey profiling and for the identification of the chemical biomarkers that can complement pollen analysis.
Keywords: honey chemical biomarkers, honey volatile compounds profiling, headspace solid-phase microextraction (HS-SPME), ultrasonic solvent extraction (USE)
Procedia PDF Downloads 202
230 A World Map of Seabed Sediment Based on 50 Years of Knowledge
Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès
Abstract:
Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches of aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This original approach had already been initiated one century ago, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments of the French coasts and then sediment maps of the continental shelves of Europe and North America. The current map of ocean sediments presented here was initiated with UNESCO's general map of the deep ocean floor. This map was adapted using a unique sediment classification to present all types of sediments: from beaches to the deep seabed and from glacial deposits to tropical sediments. In order to allow good visualization and to be adaptable to different applications, only the granularity of sediments is represented. Published seabed maps are studied; if they present an interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large deep-ocean hydrographic surveys. These allow very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are produced using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations made in the case of over-precise data. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is permanent and permits a digital version every two years, with the integration of some new maps. This article describes the choices made in terms of sediment classification, the scale of source data, and the zonation of the variability of the quality. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress in knowledge made in the field of seabed characterization during the last decades. Thus, the arrival of new classification systems for the seafloor has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. However, there is still a lot of work to do to enhance some regions, which are still based on data acquired more than half a century ago.
Keywords: marine sedimentology, seabed map, sediment classification, world ocean
Procedia PDF Downloads 232
229 Exploring the Potential of Bio-Inspired Lattice Structures for Dynamic Applications in Design
Authors: Axel Thallemer, Aleksandar Kostadinov, Abel Fam, Alex Teo
Abstract:
For centuries, the forming processes found in nature have served as a source of inspiration for both architects and designers. It seems as if most human artifacts are based on ideas which stem from the observation of the biological world and its principles of growth. In fact, in the cultural history of Homo faber, materials have mostly been used in their solid state: from hand axe to computer mouse, the principle of employing matter has not changed since the first creation. Only recently in the scope of history, with the help of additive-generative fabrication processes and Computer Aided Design (CAD), have designers been enabled to deconstruct solid artifacts into an outer skin and an internal lattice structure. The intention behind this approach is to create a new topology which reduces resources and integrates functions into an additively manufactured component. However, looking at the currently employed lattice structures, it is very clear that those lattice geometries have not been thoroughly designed, but rather taken from the basic-geometry libraries which are usually provided by CAD software. In the study presented here, a group of 20 industrial design students created new and unique lattice structures using natural paragons as their models. The selected natural models comprise both the animate and inanimate world, with examples ranging from the spiraling of narwhal tusks, the off-shooting of mangrove roots, and the minimal surfaces of soap bubbles, up to the rhythmical arrangement of molecular geometry, as in the case of SiOC (carbon-rich silicon oxycarbide). This ideation process led to the design of a geometric cell, which served as a basic module for the lattice structure, whereby the cell was created in visual analogy to its respective natural model. The spatial lattices were fabricated additively, mostly in [X]3 by [Y]3 by [Z]3 unit volumes, using selective powder bed melting in polyamide with (z-axis) 50 mm and 100 µm resolution, and were subjected to mechanical testing of their elastic zone in a biomedical laboratory. The results demonstrate that additively manufactured lattice structures can acquire different properties when they are designed in analogy to natural models. Several of the lattices displayed the ability to store and return kinetic energy, while others revealed a structural failure which can be exploited for purposes where a controlled collapse of a structure is required. This discovery allows for various new applications of functional lattice structures within industrially created objects.
Keywords: bio-inspired, biomimetic, lattice structures, additive manufacturing
Procedia PDF Downloads 148228 The Connection Between the Semiotic Theatrical System and the Aesthetic Perception
Authors: Păcurar Diana Istina
Abstract:
The indissoluble link between aesthetics and semiotics, and the harmonization and semiotic understanding of the interactions between the viewer and the object being looked at, form the basis of the practical demonstration of the importance of aesthetic perception within the theater performance. The design of a theater performance includes several structures: some are considered art forms from the outset (i.e., the text), while others are represented by simple, common objects (e.g., scenographic elements) which, when brought together, can trigger a certain aesthetic perception. The team involved in the performance delivers to the audience a series of auditory and visual signs with which they interact. It is necessary to explain some notions about the physiological basis of the transformation of different types of stimuli at the level of the cerebral hemispheres. The cortex, considered the superior integration center of extrinsic and intrinsic stimuli, permanently processes the information received; even if that information is delivered at a constant rate, the generated response is individualized and conditioned by a number of factors. Each changing situation represents a new opportunity for the viewer to cope with, generating feelings of varying intensity that influence the construction of meanings and, therefore, the management of interactions. In this sense, aesthetic perception depends on the detection of the “correctness” of signs, the forms of which are associated with an aesthetic property. Correctness and aesthetic properties can have positive or negative values. Evaluating the emotions that generate judgment and, implicitly, aesthetic perception, whether we refer to visual or auditory emotions, involves the integration of three areas of interest: valence, arousal, and context control. In this context, superior human cognitive processes, memory, interpretation, learning, attribution of meanings, etc., help trigger the mechanism of anticipation and, no less important, the identification of error. This ability to locate a short circuit produced in a series of successive events is fundamental in the process of forming an aesthetic perception. Our main purpose in this research is to investigate the possible conditions under which aesthetic perception and its minimum content are generated by all these structures and, in particular, by interactions with forms that are not commonly considered aesthetic forms. In order to demonstrate the quantitative and qualitative importance of the categories of signs used to construct a code for reading a certain message, and also to emphasize the importance of the order in which these indices are used, we have structured a mathematical analysis centered on the percentage of each category of sign used in a theater performance.Keywords: semiology, aesthetics, theatre semiotics, theatre performance, structure, aesthetic perception
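The abstract describes a mathematical analysis whose core is the percentage of signs used in a theater performance. As an illustration only, the short Python sketch below tabulates such percentages from a stream of category-tagged signs; the category names are hypothetical and are not the author's coding scheme.

```python
from collections import Counter

# Hypothetical tagged sign stream from one performance; the category names
# are illustrative, not the author's coding scheme.
signs = ["verbal", "gesture", "light", "verbal", "sound", "prop",
         "gesture", "verbal", "light", "verbal"]

counts = Counter(signs)
total = sum(counts.values())
for category, count in counts.most_common():
    print(f"{category:>8}: {count:2d} signs ({100 * count / total:.1f}%)")
```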
Procedia PDF Downloads 89227 Ways Management of Foods Not Served to Consumers in Food Service Sector
Authors: Marzena Tomaszewska, Beata Bilska, Danuta Kolozyn-Krajewska
Abstract:
Food loss and food waste are a global problem of the modern economy. The research undertaken aimed to analyze how food is handled in catering establishments with respect to food waste and to identify the main ways of managing foods/dishes not served to consumers. A survey study was conducted from January to June 2019. The selection of catering establishments participating in the study was deliberate, and the study included only establishments located in Mazowieckie Voivodeship (Poland). 42 completed questionnaires were collected. In some questions, answers were given on a 5-point scale of 1 to 5 (from 'always'/'every day' to 'never'). The survey also included closed questions with a predefined set of answer options. The respondents stated that in their workplaces, dishes served cold and hot ready meals are discarded every day or almost every day (23.7% and 20.5% of answers, respectively). The procedure most frequently used for dealing with dishes not served to consumers on a given day is storage at a cool temperature until the following day. In the research, 1/5 of respondents admitted that consumers 'always' or 'usually' leave uneaten meals on their plates, and over 41% 'sometimes' do so. It was additionally found that food not used in the food service sector is most often thrown into a public rubbish container. Most often thrown into the public container (with communal trash) were expired products (80.0%), plate waste (80.0%), and inedible products such as fruit and vegetable peels and egg shells (77.5%). Used deep-frying oil was most frequently discarded into the container dedicated only to food waste (62.5%). 10% of respondents indicated that inedible products in their workplaces are allocated for animal feed. Food waste in the food service sector remains an insufficiently studied issue, as owners of these establishments are often unwilling to disclose data pertaining to the subject. Incorrect ways of managing foods not served to consumers were observed, and there is a need to develop educational activities for employees and management in the context of food waste management in the food service sector. This publication has been developed under contract with the National Center for Research and Development No Gospostrateg1/385753/1/NCBR/2018 for carrying out and funding a project implemented as part of the 'The social and economic development of Poland in the conditions of globalizing markets - GOSPOSTRATEG' program, entitled 'Developing a system for monitoring wasted food and an effective program to rationalize losses and reduce food wastage' (acronym PROM).Keywords: food waste, inedible products, plate waste, used deep-frying oil
Procedia PDF Downloads 119226 The Validation and Reliability of the Arabic Effort-Reward Imbalance Model Questionnaire: A Cross-Sectional Study among University Students in Jordan
Authors: Mahmoud M. AbuAlSamen, Tamam El-Elimat
Abstract:
Amid the economic crisis in Jordan, the Jordanian government has opted for a knowledge economy in which education is promoted as a means of economic development. University education, however, often brings study-related stress that may adversely impact the health of students. Since stress is a latent variable that is difficult to measure, a valid tool should be used to do so. The effort-reward imbalance (ERI) model is used as a measurement tool for occupational stress. The model was built on the notion of reciprocity, which relates ‘effort’ to ‘reward’ through the mediating ‘over-commitment’. Reciprocity assumes equilibrium between effort and reward, where ‘high’ effort is adequately compensated with ‘high’ reward. When this equilibrium is violated (i.e., high effort with low reward), negative emotions and stress may be elicited, which have been correlated with adverse health conditions. The ERI theory has been tested in many different parts of the world, and associations with chronic diseases and the health of workers have been explored at length. While most effort-reward imbalance research has been conducted in occupational settings, there has been growing interest in the validity of the ERI model when applied to other social settings such as schools and universities. An Arabic ERI questionnaire was recently developed to measure ERI among high school teachers; however, little information is available on its validity in university students. A cross-sectional study was conducted on 833 students in Jordan to measure the validity and reliability of the Arabic ERI questionnaire among university students. Reliability, as measured by Cronbach’s alpha of the effort, reward, and overcommitment scales, was 0.73, 0.76, and 0.69, respectively, suggesting satisfactory reliability. The factorial structure was explored using principal axis factoring. The results fitted a five-factor solution in which both effort and overcommitment were uni-dimensional, while the reward scale was three-dimensional, with its factors being ‘support’, ‘esteem’, and ‘security’. The solution explained 56% of the variance in the data. The established ERI theory was thus replicated with excellent validity in this study. The effort-reward ratio in university students was 1.19, which suggests a slight degree of failed reciprocity. The study also investigated the association of effort, reward, overcommitment, and ERI with participants’ demographic factors and self-reported health. ERI was found to be significantly associated with absenteeism (p < 0.0001), a history of failed courses (p = 0.03), and poor academic performance (p < 0.001). Moreover, ERI was associated with poor self-reported health among university students (p = 0.01). In conclusion, the Arabic ERI questionnaire is reliable and valid for measuring effort-reward imbalance in university students in Jordan. The results of this research are important in informing higher education policy in Jordan.Keywords: effort-reward imbalance, factor analysis, validity, self-reported health
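The abstract reports Cronbach's alpha for each scale and an overall effort-reward ratio of 1.19. As an illustration, the Python sketch below shows both computations; the item counts and the correction factor c follow the conventional ERI scoring (e / (r × c), with c adjusting for unequal numbers of effort and reward items), which is an assumption here since the abstract does not state the exact scoring used.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def effort_reward_ratio(effort_sum, reward_sum, n_effort_items, n_reward_items):
    """ERI ratio e / (r * c), with c correcting for unequal item numbers
    (conventional ERI scoring; assumed here, not stated in the abstract)."""
    c = n_effort_items / n_reward_items
    return effort_sum / (reward_sum * c)

# Toy data: 5 respondents answering a 6-item effort scale on a 1-4 Likert scale.
rng = np.random.default_rng(0)
effort_items = rng.integers(1, 5, size=(5, 6))
print(f"alpha (toy data): {cronbach_alpha(effort_items):.2f}")
print(f"ER ratio example: {effort_reward_ratio(18, 33, 6, 11):.2f}")
```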
Procedia PDF Downloads 116225 Analyzing Political Cartoons in Arabic-Language Media after Trump's Jerusalem Move: A Multimodal Discourse Perspective
Authors: Inas Hussein
Abstract:
Communication in the modern world is increasingly multimodal, as globalization and the digital space we live in have remarkably affected how people communicate. Accordingly, Multimodal Discourse Analysis (MDA) is an emerging paradigm in discourse studies, with the underlying assumption that semiotic resources other than language, such as images, colours, scientific symbolism, gestures, actions, music, and sound, combine with language in order to communicate meaning. One effective multimodal medium that combines verbal and non-verbal elements to create meaning is the political cartoon. Furthermore, since political and social issues are mirrored in political cartoons, these are regarded as potential objects of discourse analysis: they not only reflect the thoughts of the public but also have the power to influence them. The aim of this paper is to analyze selected cartoons on the recognition of Jerusalem as Israel's capital by the American President, Donald Trump, adopting a multimodal approach. More specifically, the present research examines how the various semiotic tools and resources utilized by the cartoonists function in projecting the intended meaning. Ten political cartoons were purposively selected for semiotic analysis from among a surge of editorial cartoons highlighted by the Anti-Defamation League (ADL) - an international Jewish non-governmental organization based in the United States - as publications in Arabic-language newspapers in Egypt, Saudi Arabia, UAE, Oman, Iran and the UK. These editorial cartoons, all published during 6th–18th December 2017, invariably suggest one theme: Jewish and Israeli domination of the United States. The data were analyzed using the framework of Visual Social Semiotics. In accordance with this methodological framework, the selected visual compositions were analyzed in terms of three aspects of meaning: representational, interactive, and compositional. In analyzing the selected cartoons, an interpretative approach was adopted; this approach prioritizes depth over breadth and enables insightful analyses of the chosen cartoons. The findings of the study reveal that semiotic resources are key elements of political cartoons due to the inherent political communication they convey. It is shown that adequate interpretation of the three aspects of meaning is a prerequisite for understanding the intended meaning of political cartoons. It is recommended that further research be conducted to provide more insightful analyses of political cartoons from a multimodal perspective.Keywords: Multimodal Discourse Analysis (MDA), multimodal text, political cartoons, visual modality
Procedia PDF Downloads 240224 The Application of Raman Spectroscopy in Olive Oil Analysis
Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli
Abstract:
Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidative function, a key mechanism of protection against cancer, cardiovascular diseases, and macular degeneration in humans. EVOO composition in terms of such constituents is generally the result of a complex combination of genetic, agronomical, and environmental factors. To selectively improve the quality of EVOOs, the role of each factor in their biochemical composition needs to be investigated. By selecting fruits from four different cultivars similarly grown and harvested, it was demonstrated that Raman spectroscopy, combined with chemometric analysis, is able to discriminate the different cultivars, also as a function of the harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, a correct classification of up to 94.4% of samples, according to cultivar and maturation stage, was obtained. Moreover, using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed models to be built, based on partial least squares regression, that were able to predict the relative amounts of the main fatty acids and the main carotenoids in EVOO with high coefficients of determination. Besides genetic factors, climatic parameters such as light exposure, distance from the sea, temperature, and amount of precipitation could have a strong influence on EVOO composition in terms of both major and minor compounds. This suggests that Raman spectra could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed. In particular, a dual approach was used, combining Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis. A correct classification of 82% of EVOO samples based on their regional geographical origin was obtained. Raman spectra were obtained with a Super Labram spectrometer equipped with an argon laser (514.5 nm wavelength). Analyses of stable isotope ratios were performed using an isotope ratio mass spectrometer connected to an elemental analyzer and to a pyrolysis system. These studies demonstrate that RR spectroscopy is a valuable and useful technique for the analysis of EVOO. In combination with statistical analysis, it makes it possible to assess the content of specific samples and allows oils to be classified according to their geographical and varietal origin.Keywords: authentication, chemometrics, olive oil, raman spectroscopy
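The abstract combines chemometric classification (principal component and linear discriminant analysis) with partial least squares regression on Raman spectra. As an illustration, the Python sketch below outlines such a pipeline with scikit-learn on placeholder arrays; the array shapes, labels, and reference values are assumptions, not the study's data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
spectra = rng.normal(size=(60, 500))      # placeholder: 60 spectra x 500 Raman shifts
origin = rng.integers(0, 4, size=60)      # placeholder: 4 cultivars or regions
oleic_acid = rng.normal(70, 5, size=60)   # placeholder reference values (%)

# Classification: compress the spectra with PCA, then apply LDA to the scores.
clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    LinearDiscriminantAnalysis())
acc = cross_val_score(clf, spectra, origin, cv=5).mean()
print(f"cross-validated classification accuracy: {acc:.2f}")

# Regression: PLS to predict a fatty acid content from the spectra.
pls = PLSRegression(n_components=5)
r2 = cross_val_score(pls, spectra, oleic_acid, cv=5, scoring="r2").mean()
print(f"cross-validated R^2 for oleic acid: {r2:.2f}")
```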
Procedia PDF Downloads 332223 Challenging Weak Central Coherence: An Exploration of Neurological Evidence from Visual Processing and Linguistic Studies in Autism Spectrum Disorder
Authors: Jessica Scher Lisa, Eric Shyman
Abstract:
Autism spectrum disorder (ASD) is a neuro-developmental disorder characterized by persistent deficits in social communication and social interaction (i.e., deficits in social-emotional reciprocity, nonverbal communicative behaviors, and establishing/maintaining social relationships), as well as by the presence of repetitive behaviors and perseverative areas of interest (i.e., stereotyped or repetitive motor movements, use of objects, or speech; rigidity; restricted interests; and hypo- or hyperreactivity to sensory input or unusual interest in sensory aspects of the environment). Additionally, diagnoses of ASD require the presentation of symptoms in the early developmental period, marked impairments in adaptive functioning, and a lack of explanation by general intellectual impairment or global developmental delay (although these conditions may be co-occurring). Over the past several decades, many theories have been developed in an effort to explain the root cause of ASD in terms of atypical central cognitive processes. The field of neuroscience is increasingly finding structural and functional differences between autistic and neurotypical individuals using neuro-imaging technology. One main area this research has focused on is visuospatial processing, with specific attention to the notion of ‘weak central coherence’ (WCC). This paper offers an analysis of findings from selected studies in order to explore research that challenges the ‘deficit’ characterization of weak central coherence theory, as opposed to a ‘superiority’ characterization of strong local coherence. The weak central coherence theory has long been both supported and refuted in the ASD literature and has most recently been increasingly challenged by advances in neuroscience. The selected studies lend evidence to the notion of amplified localized perception rather than deficient global perception. In other words, WCC may represent superiority in ‘local processing’ rather than a deficit in global processing. Additionally, the right hemisphere, and the extrastriate area in particular, appear to be key in both visual and lexicosemantic processing. Overactivity in the striate region seems to suggest inaccuracy in semantic language, which supports the link between the striate region and the atypical organization of the lexicosemantic system in ASD.Keywords: autism spectrum disorder, neurology, visual processing, weak coherence
Procedia PDF Downloads 127222 Disaggregating Communities and the Making of Factional States: Evidence from Joint Forest Management in Sundarban, India
Authors: Amrita Sen
Abstract:
In the face of a growing insurgent movement and the perceived failure of the state and the market to manage resources sustainably, a range of decentralized forest management policies was formulated over the last two decades, which recognized the need for community representation within the statutory methods of forest management. This recognition was premised on the virtues of ecological sustainability and traditional environmental knowledge, of which forest-dependent communities were considered to be the principal repositories. The present study, in the light of empirical insights, reflects on the contemporary disjunctions between the preconceived communitarian ethic in environmentalism and the lived reality of forest-based life-worlds. Many of the popular as well as dominant ideologies that have historically shaped the conceptual and theoretical understanding of sociology need further perusal in the context of the emerging contours of empirical knowledge, which lend opportunities for substantive reworking and analysis. The image of the community appears to be one of those concepts: an identity which has long defined perspectives and processes associated with people living together harmoniously in small physical spaces. Through an ethnographic account of the implementation of Joint Forest Management (JFM) in a forest-fringe village in Sundarban, the study explores the ways in which the idea of ‘community’ is transformed through the process of state-making, necessitating its departure from the standard, conventional definition of homogeneity and internal equity. The study calls for attention to the anthropology of micro-politics, disaggregating an essentially constructivist anthropology of ‘collective identities’, which can make political mobilizations visible within the seemingly culturalist production of communities. The two critical questions that the paper seeks to ask in this context are: how is the ‘local’ constituted within community-based conservation practices? And, within the efforts of collaborative forest management, how accurately does the depiction of ‘indigenous environmental knowledge’ correspond to its role in sustainable conservation practices? Reflecting on the execution of JFM in Sundarban, the study critically explores the ways in which the state ceases to be ‘trans-national’ and interacts with rural life-worlds through its local factions. Simultaneously, the study attempts to articulate the scope for constructing a competing representation of community, shaped by increasing political negotiation and bureaucratic alignment, which strains against the usual preoccupations with tradition, primordiality, and non-material culture, as well as the amorphous construction of indigeneity.Keywords: community, environmentalism, JFM, state-making, identities, indigenous
Procedia PDF Downloads 198221 Exploring the Vocabulary and Grammar Advantage of US American over British English Speakers at Age 2;0
Authors: Janine Just, Kerstin Meints
Abstract:
The research aims to compare vocabulary size and grammatical development between US American English-speaking and British English-speaking children at age 2;0. As there is evidence that precocious children with large vocabularies develop grammar skills earlier than their typically developing peers, it was investigated whether this also holds true across varieties of English. Thus, if US American children start to produce words earlier than their British counterparts, this could mean that US children are also at an advantage in the early developmental stages of acquiring grammar. This research employs a British English adaptation of the MacArthur-Bates CDI Words and Sentences (Lincoln Toddler CDI) to compare vocabulary and grammar scores with the updated US Toddler CDI norms. First, the Lincoln TCDI was assessed for its concurrent validity with the Preschool Language Scale (PLS-5 UK), which showed high correlations between the tests for the vocabulary and grammar subscales. In addition, the frequency of the Toddler CDI’s words was compared using American and British English corpora of adult spoken and written language. A paired-samples t-test found a significant difference in word frequency between the British and the American CDI, demonstrating that the TCDI’s words were indeed of higher frequency in British English. We then compared language and grammar scores between US (N = 135) and British children (N = 96). A two-way between-groups ANOVA examined whether the two samples differed in terms of SES (i.e., maternal education) by investigating the impact of SES and country on vocabulary and sentence complexity. The two samples did not differ in terms of maternal education, as the interaction effects between SES and country were not significant. In most cases, scores were not significantly different between US and British children, for example, for overall word production and most grammatical subscales (i.e., use of words, over-regularizations, complex sentences, word combinations). However, in-depth analysis showed that US children were significantly better than British children at using some noun categories (i.e., people, objects, places) and several categories marking early grammatical development (i.e., pronouns, prepositions, quantifiers, helping words), although the effect sizes were small. Significant differences for grammar were also found for irregular word forms and progressive tense suffixes: US children were more advanced in their use of these grammatical categories, but the effect sizes were again small. In sum, while differences in vocabulary and grammar ability exist, favouring US children, effect sizes were small. It can be concluded that most British children are ‘catching up’ with their US American peers at age 2;0. Implications of this research will be discussed.Keywords: first language acquisition, grammar, parent report instrument, vocabulary
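The abstract reports a paired-samples t-test on word frequencies and a two-way between-groups ANOVA for SES and country. As an illustration, the Python sketch below runs both analyses with scipy and statsmodels on simulated placeholder data; the column names and values are hypothetical, not the study's dataset.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)

# Paired t-test: frequency of the same CDI words in British vs. American corpora
# (values are simulated placeholders).
freq_uk = rng.lognormal(mean=3.0, sigma=1.0, size=400)
freq_us = freq_uk * rng.lognormal(mean=-0.1, sigma=0.3, size=400)
t_stat, p_value = ttest_rel(freq_uk, freq_us)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")

# Two-way between-groups ANOVA: vocabulary score by country and maternal education.
children = pd.DataFrame({
    "vocab": rng.normal(300, 80, size=231),
    "country": rng.choice(["US", "UK"], size=231),
    "ses": rng.choice(["secondary", "degree", "postgrad"], size=231),
})
model = smf.ols("vocab ~ C(country) * C(ses)", data=children).fit()
print(anova_lm(model, typ=2))
```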
Procedia PDF Downloads 283220 Industrial Hemp Agronomy and Fibre Value Chain in Pakistan: Current Progress, Challenges, and Prospects
Authors: Saddam Hussain, Ghadeer Mohsen Albadrani
Abstract:
Pakistan is one of the countries most vulnerable to climate change. In a country where 23% of GDP relies on agriculture, this is a serious cause for concern. Introducing industrial hemp in Pakistan can help build climate resilience in the country's agricultural sector, as hemp has recently emerged globally as a sustainable, eco-friendly, resource-efficient, and climate-resilient crop. Hemp has the potential to absorb huge amounts of CO₂, nourish the soil, and be used to create various biodegradable and eco-friendly products. Hemp is twice as effective as trees at absorbing and locking up carbon, with 1 hectare (2.5 acres) of hemp reckoned to absorb 8 to 22 tonnes of CO₂ a year, more than any woodland. Along with its high carbon-sequestration ability, it produces high biomass and can be successfully grown as a cover crop. Hemp can grow in almost all soil conditions and does not require pesticides. It is fast-growing and needs only 120 days to be ready for harvest. Compared with cotton, hemp requires 50% less water to grow and can produce a three times higher fiber yield with a lower ecological footprint. Recently, the Government of Pakistan has allowed the cultivation of industrial hemp for industrial and medicinal purposes, making it possible for hemp to be reinserted into the country's economy. Pakistan's agro-climatic and edaphic conditions are well suited to producing industrial hemp, and its cultivation can bring economic benefits to the country. Pakistan can enter global markets as a new exporter of hemp products. Hemp production in Pakistan could be particularly attractive to the workforce, especially farmers participating in hemp markets. The low production cost of hemp makes it affordable to smallholder farmers, especially those who need their cropping systems to be as sustainable as possible. Dr. Saddam Hussain is leading the first pilot project on industrial hemp in Pakistan. In the past three years, he has secured high-impact research grants on industrial hemp as Principal Investigator. He has already screened non-toxic hemp genotypes, tested the adaptability of exotic material under various agroecological conditions, formulated the production agronomy, and successfully developed the complete value chain. He has developed prototypes (fabric, denim, knitwear) using hemp fibre in collaboration with industrial partners and has optimized indigenous fibre processing techniques. In this lecture, Dr. Hussain will talk about hemp agronomy and the complete hemp fibre value chain. He will discuss the current progress and highlight the major challenges and future research directions in hemp research.Keywords: industrial hemp, agricultural sustainability, agronomic evaluation, hemp value chain
Procedia PDF Downloads 81219 Geographic Origin Determination of Greek Rice (Oryza Sativa L.) Using Stable Isotopic Ratio Analysis
Authors: Anna-Akrivi Thomatou, Anastasios Zotos, Eleni C. Mazarakioti, Efthimios Kokkotos, Achilleas Kontogeorgos, Athanasios Ladavos, Angelos Patakas
Abstract:
It is well known that accurate determination of geographic origin, to confront mislabeling and adulteration of foods, is considered a critical issue worldwide, not only for consumers but also for producers and industries. Among agricultural products, rice (Oryza sativa L.) is the world's third largest crop, providing food for more than half of the world's population. Consequently, the quality and safety of rice products play an important role in people's lives and health. Although rice is predominantly produced in Asian countries, rice cultivation in Greece is of significant importance, contributing to national agricultural sector income. More than 25,000 acres are cultivated in Greece, while rice exports account for 0.5% of the global rice trade. Although several techniques are available to provide information about the geographical origin of rice, little data exist regarding the ability of these methodologies to discriminate rice produced in Greece. Thus, the aim of this study is the comparative evaluation of stable isotope ratio methodology with regard to its ability to discriminate the geographical origin of rice samples produced in Greece from those of three Asian countries, namely Korea, China, and the Philippines. In total, eighty (80) samples were collected from selected fields in Central Macedonia (Greece) during October 2021. The light element (C, N, S) isotope ratios were measured using Isotope Ratio Mass Spectrometry (IRMS), and the results obtained were analyzed using chemometric techniques, including principal components analysis (PCA). Results indicated that the δ15N and δ34S values of rice produced in Greece were more markedly influenced by geographical origin than the δ13C values. In particular, δ34S values in rice originating from Greece were -1.98 ± 1.71, compared to 2.10 ± 1.87, 4.41 ± 0.88, and 9.02 ± 0.75 for Korea, China, and the Philippines, respectively. Among the stable isotope ratios studied, δ34S seems to be the most appropriate isotope marker for discriminating rice geographic origin between the studied areas. These results imply a significant capability of stable isotope ratio methodology for effective geographical origin discrimination of rice, providing valuable insight into the control of improper or fraudulent labeling. Acknowledgement: This research has been financed by the Public Investment Programme/General Secretariat for Research and Innovation, under the call “YPOERGO 3”, code 2018SE01300000, project title: ‘Elaboration and implementation of methodology for authenticity and geographical origin assessment of agricultural products’.Keywords: geographical origin, authenticity, rice, isotope ratio mass spectrometry
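The abstract measures δ13C, δ15N, and δ34S by IRMS and explores origin separation with principal components analysis. As an illustration, the Python sketch below applies PCA to simulated isotope data; only the per-origin δ34S means are taken from the abstract, while all other values are placeholders, not the study's dataset.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
origins = ["Greece", "Korea", "China", "Philippines"]
d34s_means = [-1.98, 2.10, 4.41, 9.02]   # per-origin d34S means quoted in the abstract

# Simulated samples: columns are d13C, d15N, d34S (placeholder values).
rows, labels = [], []
for origin, s_mean in zip(origins, d34s_means):
    for _ in range(20):
        rows.append([rng.normal(-27, 1), rng.normal(4, 1.5), rng.normal(s_mean, 1.5)])
        labels.append(origin)
X = np.array(rows)

# Standardize the three isotope ratios, then project onto two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for origin in origins:
    mask = np.array(labels) == origin
    print(f"{origin:>11}: mean PC1 score = {scores[mask, 0].mean():+.2f}")
```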
Procedia PDF Downloads 89218 Envisioning The Future of Language Learning: Virtual Reality, Mobile Learning and Computer-Assisted Language Learning
Authors: Jasmin Cowin, Amany Alkhayat
Abstract:
This paper will concentrate on a comparative analysis of the advantages and limitations of using digital learning resources (DLRs) in language education. The DLRs covered will be Virtual Reality (VR), Mobile Learning (M-learning), and Computer-Assisted Language Learning (CALL), together with their subset, Mobile-Assisted Language Learning (MALL). In addition, best practices for language teaching and the application of established language teaching methodologies, such as Communicative Language Teaching (CLT), the audio-lingual method, or community language learning, will be explored. Education has changed dramatically since the outbreak of the pandemic. Traditional face-to-face education was disrupted on a global scale. The rise of distance learning brought new digital tools to the forefront, especially web conferencing tools, digital storytelling apps, test authoring tools, and VR platforms. Language educators raced to vet, learn, and implement multiple technology resources suited to language acquisition. Yet questions remain on how to harness new technologies, digital tools, and their ubiquitous availability while using established methods and methodologies in language learning paired with best teaching practices. In M-learning, language learners employ portable computing devices such as smartphones or tablets. CALL is a language teaching approach that uses computers and other technologies to present, reinforce, and assess language materials to be learned, or to create environments where teachers and learners can interact meaningfully. In VR, a computer-generated simulation enables learner interaction with a 3D environment via a screen, smartphone, or head-mounted display. Research supports that VR for language learning is effective in terms of exploration, communication, engagement, and motivation. Students are able to engage through role-play activities, interact with 3D objects, and take part in activities such as field trips. VR lends itself to group language exercises in the classroom, with target-language practice in an immersive, virtual environment. Students, teachers, schools, language institutes, and institutions benefit from specialized support to help them acquire second language proficiency and content knowledge that builds on their cultural and linguistic assets. Through the purposeful application of different language methodologies and teaching approaches, language learners can not only make cultural and linguistic connections in DLRs but also practice grammar drills, play memory games, or flourish in authentic settings.Keywords: language teaching methodologies, computer-assisted language learning, mobile learning, virtual reality
Procedia PDF Downloads 238