Challenge in Teaching Physics during the Pandemic: Another Way of Teaching and Learning
Authors: Edson Pierre, Gustavo de Jesus Lopez Nunez
Abstract:
The objective of this work is to analyze how physics can be taught remotely through platforms and software that attract the attention of 2nd-year high school students at Colégio Cívico Militar Professor Carmelita Souza Dias, and to point out how remote teaching can serve as a teaching-learning strategy during a period of social distancing. Teaching physics has long been a challenge for teachers and students alike, as the subject is commonly perceived as difficult to teach and to learn. The challenge grew in 2020 and 2021 with the impact of the new coronavirus (SARS-CoV-2) pandemic and its variants, which affected the entire world. With these changes, a new teaching modality emerged: remote teaching. It brought new challenges, one of which was promoting distance research experiences, especially in physics teaching, since learning difficulties persist and students often cannot relate the theory observed in class to the reality that surrounds them. Physics teaching in schools faces difficulties that make the profession increasingly unattractive to young people. The study of physics is nonetheless very important, as it confronts students with concrete, real situations that physical principles can explain, helping them understand nature and nurturing a taste for science.
New platforms and software can help. PhET Interactive Simulations, from the University of Colorado Boulder, is a virtual laboratory offering numerous simulations of scientific experiments that improve understanding of the content in a practical way, facilitating student learning and the absorption of content. As a simple, practical and free simulation tool, it attracts students' attention and helps them acquire greater knowledge of the subject studied; quizzes can add a healthy competitiveness, generating knowledge and interest in the topics covered. The present study takes the Theory of Social Representations as its theoretical reference, examining the content and construction process of the representations of teachers, the subjects of our investigation, regarding the evaluation of teaching and learning processes, through a qualitative methodology. The results show that remote teaching was indeed an important strategy for teaching and learning physics in the 2nd year of high school, providing greater interaction between teacher and student. The teacher also plays a fundamental role, since technology is increasingly present in the educational environment and the teacher is the main protagonist of this process.
Keywords: physics teaching, technologies, remote learning, pandemic
Procedia PDF Downloads 69

Formulation and Evaluation of Glimepiride (GMP)-Solid Nanodispersion and Nanodispersed Tablets
Authors: Ahmed. Abdel Bary, Omneya. Khowessah, Mojahed. al-jamrah
Abstract:
Introduction: The major challenge in the design of oral dosage forms lies in their poor bioavailability. The most frequent causes of low oral bioavailability are poor solubility and low permeability. The aim of this study was to develop a solid nanodispersed tablet formulation of glimepiride to enhance its solubility and bioavailability. Methodology: Solid nanodispersions of glimepiride (GMP) were prepared using two different ratios of two carriers, PEG 6000 and Pluronic F127, and two different techniques, solvent evaporation and fusion. A full 2³ factorial design was adopted to investigate the influence of formulation variables on the properties of the prepared nanodispersions. The best formula of nanodispersed powder was formulated into tablets by direct compression. Differential scanning calorimetry (DSC) and Fourier-transform infrared (FTIR) analyses were conducted for thermal behavior and surface structure characterization, respectively. The zeta potential and particle size of the prepared glimepiride nanodispersions were determined. The prepared solid nanodispersions and solid nanodispersed tablets of GMP were evaluated in terms of pre-compression and post-compression parameters, respectively. Results: The DSC and FTIR studies revealed no interaction between GMP and the excipients used. Based on the pre-compression parameters, the prepared solid nanodispersion powder blends showed poor to excellent flow properties. The values of the other evaluated pre-compression parameters were within pharmacopoeial limits.
The drug content of the prepared nanodispersions ranged from 89.6 ± 0.3% to 99.9 ± 0.5%, with particle sizes from 111.5 nm to 492.3 nm, and the zeta potential (ζ) values of the prepared GMP solid nanodispersion formulae (F1-F8) ranged from -8.28 ± 3.62 mV to -78 ± 11.4 mV. The in-vitro dissolution studies of the prepared solid nanodispersed tablets of GMP showed that the GMP-Pluronic F127 combination (F8) exhibited the best extent of drug release compared to the other formulations and to the marketed product. One-way ANOVA on the percent of drug released from the prepared GMP nanodispersion formulae (F1-F8) after 20 and 60 minutes showed significant differences between the different GMP nanodispersed tablet formulae (P<0.05). Conclusion: Preparation of glimepiride as nanodispersed particles proved to be a promising tool for enhancing its poor solubility.
Keywords: glimepiride, solid nanodispersion, nanodispersed tablets, poorly water-soluble drugs
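The one-way ANOVA reported above can be sketched in a few lines. The following is a minimal illustration, not the authors' analysis: the F statistic is computed from scratch for hypothetical dissolution data (the values in f1, f4 and f8 are invented for demonstration).

```python
def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic for a list of sample groups."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares (each group weighted by its size)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares (spread around each group's own mean)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# hypothetical % drug released at 20 min for three formulae
f1 = [42.1, 40.8, 43.0]
f4 = [55.2, 54.1, 56.3]
f8 = [78.9, 80.2, 79.5]
print(round(one_way_anova_f([f1, f4, f8]), 1))  # large F: group means differ
```

A large F relative to the F distribution's critical value at the chosen significance level (here, P<0.05) is what justifies the "significant differences" claim.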
Recycling of Sintered NdFeB Magnet Waste via Oxidative Roasting and Selective Leaching
Authors: W. Kritsarikan, T. Patcharawit, T. Yingnakorn, S. Khumkoa
Abstract:
Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in applications such as electrical and medical devices and account for 13.5% of the permanent-magnet market. Their typical composition of 29-32% Nd, 64.2-68.5% Fe and 1-1.2% B contains a significant amount of rare earth metals, which will be subject to shortages in the future. Domestic NdFeB magnet waste recycling should therefore be developed in order to reduce social and environmental impacts and move toward a circular economy. Most research works focus on recycling magnet wastes, both from the manufacturing process and at end of life. Each type of waste has different characteristics and compositions, which directly affect recycling efficiency as well as the types and purity of the recyclable products. This research therefore focused on recycling manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production, containing 23.6% Nd, 60.3% Fe and 0.261% B, in order to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process combining oxidative roasting and selective leaching. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550-800 °C, enabling selective leaching of neodymium in the subsequent leaching step using 2.5 M H₂SO₄ over 24 h. The leachate was then dried and roasted at 700-800 °C prior to precipitation with oxalic acid and calcination to obtain neodymium oxide as the recycled product. According to XRD analyses, increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main phase, with a smaller amount of magnetite (Fe₃O₄). Peaks of neodymium oxide (Nd₂O₃) were also observed in lesser amounts. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks were more pronounced at higher oxidative roasting temperatures.
After acid leaching and drying, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, iron sulfate was converted to hematite as the main compound, while neodymium sulfate remained unconverted; however, a small amount of magnetite was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating a more effective roasting temperature. Iron oxides were subsequently removed by water leaching and filtration, leaving a solution containing mainly neodymium sulfate. Therefore, oxidative roasting at a temperature not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the subsequent precipitation and calcination steps to finally achieve neodymium oxide.
Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching
The Design of an Afghan Refugee Camp in Kerman City through Ecotech Architecture
Authors: Kourosh Ghaffari, Baghaei Azhang
Abstract:
This study addresses two main questions: whether a camp designed for refugees will affect their quality of life, and how to effectively incorporate ecotech architecture into the architectural design of a refugee camp. The study planned to ensure that the final design reflects the principles of ecotech architecture as applied to refugee camps. The design process took into account various factors, including flexibility, diversity in the camp space according to the ecotech approach, expandability of the buildings, spatial hierarchy in the design of camp spaces, and the assignment of territories and space sanctuaries to refugees. It should be noted that this study is not a research-oriented study and is limited to collecting information and formulating hypotheses and questions related to the plan. The researchers provide a general summary of similar domestic and foreign examples and examine them under similar conditions using ecotech architecture. The research method was qualitative. The climate of the target area was then studied, citing the criteria and points extracted from the theoretical framework, reaching the desired conclusion and examining similar examples. Additionally, placement on the site, compliance with relevant standards and regulations, attention to the content and physical program, and the idea and its evolution in all details of the plan were presented. Data collection included observation and library studies, and the design method was to define and analyze the subject and examine similar examples. In conclusion, the principles of the theoretical foundations, the design protocols of ecotech architecture and the scope of the study are discussed, and the site analysis, the design process and the final plan are presented.
Keywords: ecotech architecture, livable city, shelter, refugee camp
Investigating Elements That Influence Higher Education Institutions’ Digital Maturity
Authors: Zarah M. Bello, Nathan Baddoo, Mariana Lilley, Paul Wernick
Abstract:
In this paper, we present findings from a multi-part study to evaluate candidate elements reflecting the level of digital capability maturity (DCM) in higher education and the relationship between these elements. We will use these findings to propose a model of DCM for educational institutions. We suggest that the success of learning in higher education is dependent in part on the level of maturity of digital capabilities of institutions as well as the abilities of learners and those who support the learning process. It is therefore important to have a good understanding of the elements that underpin this maturity as well as their impact and interactions in order to better exploit the benefits that technology presents to the modern learning environment and support its continued improvement. Having identified ten candidate elements of digital capability that we believe support the level of a University’s maturity in this area as well as a number of relevant stakeholder roles, we conducted two studies utilizing both quantitative and qualitative research methods. In the first of these studies, 85 electronic questionnaires were completed by various stakeholders in a UK university, with a 100% response rate. We also undertook five in-depth interviews with management stakeholders in the same university. We then utilized statistical analysis to process the survey data and conducted a textual analysis of the interview transcripts. Our findings support our initial identification of candidate elements and support our contention that these elements interact in a multidimensional manner. This multidimensional dynamic suggests that any proposal for improvement in digital capability must reflect the interdependency and cross-sectional relationship of the elements that contribute to DCM. Our results also indicate that the notion of DCM is strongly data-centric and that any proposed maturity model must reflect the role of data in driving maturity and improvement. 
We present these findings as a key step towards the design of an operationalisable DCM maturity model for universities.
Keywords: digital capability, elements, maturity, maturity framework, university
Importance-Implementation of Disability Management Practices in Hotels: The Moderating Effect of Team Orientation
Authors: Zakaria Elkhwesky, Islam E. Salem, Mona Barakat
Abstract:
The purpose of this study is to analyze the importance of disability management practices (DMPs) and their level of implementation from the viewpoints of food and beverage (F&B) managers, F&B entry-level employees working in F&B departments, and human resources (HR) managers in five-star hotels in Egypt. It also examined the moderating effect of team orientation (TO) on the relationship between importance and implementation. Data were collected from 400 participants. The correlation between the importance and the implementation of DMPs proved to be significant, moderate, and positive. Moreover, the findings revealed that this relationship is significantly more positive under the condition of high encouragement of TO.
Keywords: disability management practices, diversity management, team orientation, HR management, hospitality and tourism operations
Amelioration of Lipopolysaccharide-Induced Murine Colitis by Cell Wall Contents of Probiotic Lactobacillus casei: Targeting Immuno-Inflammation and Oxidative Stress
Authors: Vishvas N. Patel, Mehul Chorawala
Abstract:
Currently, to the authors' best knowledge, there are few effective therapeutic agents to limit the intestinal mucosal damage associated with inflammatory bowel disease (IBD). Clinical studies have shown beneficial effects of several probiotics in patients with IBD. Probiotics are live organisms that confer a health benefit on the host by modulating immuno-inflammation and oxidative stress. Although probiotics improve disease severity in mice and humans, very little is known about the specific contribution of the cell wall contents of probiotics in IBD. Herein, we investigated the ameliorative potential of cell wall contents of Lactobacillus casei (LC) in lipopolysaccharide (LPS)-induced murine colitis. Methods: Colitis was induced in LPS-sensitized rats by intracolonic instillation of LPS (50 µg/rat) for 14 consecutive days. Concurrently, cell wall contents isolated from 10³, 10⁶ and 10⁹ CFU of LC were given subcutaneously to each rat for 21 days, with sulfasalazine (100 mg/kg, p.o.) as the standard. The severity of colitis was assessed by body weight loss, food intake, stool consistency, rectal bleeding, colon weight/length, spleen weight and histological analysis. Colonic inflammatory markers (myeloperoxidase (MPO) activity, C-reactive protein and proinflammatory cytokines) and oxidative stress markers (malondialdehyde, reduced glutathione and nitric oxide) were also assayed. Results: Cell wall contents isolated from 10⁶ and 10⁹ CFU of LC significantly reduced the severity of colitis, reducing body weight loss and the incidence of diarrhea and bleeding, and improving food intake, colon weight/length, spleen weight and microscopic damage to the colonic mucosa. The treatment also reduced levels of inflammatory and oxidative stress markers and boosted antioxidant molecules. However, cell wall contents isolated from 10³ CFU were ineffective.
Conclusion: Cell wall contents of LC attenuate LPS-induced colitis by modulating immuno-inflammation and oxidative stress.
Keywords: probiotics, Lactobacillus casei, immuno-inflammation, oxidative stress, lipopolysaccharide, colitis
The Influence of Characteristics of Waste Water on Properties of Sewage Sludge
Authors: Catalina Iticescu, Lucian P. Georgescu, Mihaela Timofti, Gabriel Murariu, Catalina Topa
Abstract:
In the field of environmental protection, the EU, including Romania, imposes strict and clear rules that must be respected, among them mandatory municipal wastewater treatment. Our study involved the Municipal Wastewater Treatment Plant (MWWTP) of Galati. The MWWTP began operating at the end of 2011, and its technology is among the most modern used in the EU; moreover, to our knowledge, it is the first technology of this kind used in the region. Until commissioning, municipal wastewater was discharged directly into the Danube without any treatment. Besides the benefits of depollution, a new problem has arisen: the accumulation of increasingly large quantities of sewage sludge. It is therefore extremely important to find economically feasible and environmentally friendly solutions. One of the most feasible methods of disposing of sewage sludge is its use on agricultural land. Sewage sludge can be used in agriculture if monitored in terms of physico-chemical properties (pH, nutrients, heavy metals, etc.), so that it does not contribute to soil pollution or disturb chemical and biological balances, which are relatively fragile. In this paper, 16 physico-chemical parameters were monitored. Experimental tests were performed on wastewater samples, the resulting sewage sludge and treated water samples. Testing was conducted with electrochemical methods (pH, conductivity, TDS); the parameters N-total (mg/L), P-total (mg/L), N-NH4 (mg/L), N-NO2 (mg/L), N-NO3 (mg/L), Fe-total (mg/L), Cr-total (mg/L), Cu (mg/L), Zn (mg/L), Cd (mg/L), Pb (mg/L) and Ni (mg/L) were determined by spectrophotometric methods using a NOVA 60 spectrophotometer and specific kits. Analyzing the results, we concluded that the sewage sludge, although containing heavy metals, contains them in quantities small enough not to affect the land on which it will be deposited. The amounts of nutrients contained are also appreciable.
These features indicate that the sludge can be safely used in agriculture, with the advantage that it represents a cheap fertilizer. Acknowledgement: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation - UEFISCDI, PNCDI III project, 79BG/2017, Efficiency of the technological process for obtaining of sewage sludge usable in agriculture, Efficient.
Keywords: municipal wastewater, physico-chemical properties, sewage sludge, technology
The Analysis of New Town Hillside Development Pattern Guided by Low-Intensity Damage
Authors: Shan Zhou, Wenju Li, Kehui Chai
Abstract:
Along with economic globalization, marketization and regional development, strengthening the planning and construction of new towns has always been a main way to optimize the structure and function of metropolitan spatial configuration. However, new towns are often developed at high intensity, bringing a series of natural, ecological and environmental problems that make sustainable development difficult to achieve. This paper takes the administrative center of Jiangping in Dongxing as an example and analyzes it from three aspects: the vertical design of road traffic, the spatial layout of mountain buildings, and the design of the landscape. The purpose is to elaborate hillside design methods guided by low-intensity damage and to explore their guiding significance for the sustainable development of hillside construction in the future.
Keywords: low-intensity damage, new town construction, hillside, sustainable development, natural, ecology
Virucidal, Bactericidal and Fungicidal Efficiency of Dry Microfine Steam on Inanimate Surfaces
Authors: C. Recchia, M. Bourel, B. Recchia
Abstract:
Microorganisms (viruses, bacteria, fungi) are responsible for most communicable diseases threatening human health. For domestic use, chemical agents are often criticized because of their potential dangerousness, and natural solutions are needed. The "dry microfine steam" (DMS) technology was tested on a selection of common pathogens (SARS-CoV-2, enterovirus EV-71, human coronavirus 229E, E. coli, S. aureus, C. albicans) on different inanimate surfaces for 5 to 10 seconds. The remaining pathogens were quantified, and the reduction rates ranged from 99.8% (S. aureus on plastic) to over 99.999%. DMS showed high efficacy in eliminating common microorganisms and could be seen as a natural alternative to chemical agents for improving domestic hygiene.
Keywords: steam, SARS-CoV-2, bactericidal, virucidal, fungicidal, sterilization
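Reduction rates like those quoted above are commonly restated as log10 reduction factors in disinfection testing (99.9% is a 3-log reduction, 99.999% a 5-log reduction). A short sketch with invented counts; the CFU figures below are hypothetical, not from the study:

```python
import math

def log_reduction(initial_count, surviving_count):
    """log10 reduction factor between initial and surviving CFU/PFU counts."""
    return math.log10(initial_count / surviving_count)

def percent_reduction(initial_count, surviving_count):
    """Same information expressed as a percentage reduction."""
    return 100.0 * (1 - surviving_count / initial_count)

# hypothetical counts: 1e6 CFU before treatment, 2e3 CFU after
print(round(log_reduction(1e6, 2e3), 2))   # ~2.70 log reduction
print(percent_reduction(1e6, 2e3))         # ~99.8 % reduction
```

This is why "99.8% to over 99.999%" spans roughly 2.7 to more than 5 orders of magnitude of pathogen kill.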
Effect of Retained Posterior Horn of Medial Meniscus on Functional Outcome of ACL Reconstructed Knees
Authors: Kevin Syam, Devendra K. Chauhan, Mandeep Singh Dhillon
Abstract:
Background: The posterior horn of the medial meniscus (PHMM) is a secondary stabilizer against anterior translation of the tibia. Cadaveric studies have revealed increased strain on the ACL graft and greater instrumented laxity in posterior horn-deficient knees. Clinical studies have shown a higher prevalence of radiological OA after ACL reconstruction combined with meniscectomy. However, functional outcomes of ACL-reconstructed knees in the absence of the posterior horn are less discussed, and the specific role of the posterior horn is ill-documented. This study evaluated functional and radiological outcomes in posterior horn-preserved and posterior horn-sacrificed ACL-reconstructed knees. Materials: Of the 457 patients who had ACL reconstruction over a 6-year period, 77 cases with a minimum follow-up of 18 months were included after strict exclusion criteria (associated lateral meniscus injury, other ligamentous injuries, significant cartilage degeneration, repeat injury and contralateral knee injuries were excluded). 41 patients with intact menisci were compared with 36 patients with an absent posterior horn of the medial meniscus. Radiological and clinical tests for instability were conducted, and knees were evaluated using the subjective International Knee Documentation Committee (IKDC) score and the Orthopädische Arbeitsgruppe Knie (OAK) score. Results: We found a trend towards a significantly better overall outcome (OAK) in cases with an intact PHMM at an average follow-up of 43.03 months (p value 0.082). Cases with an intact PHMM had significantly better objective stability (p value 0.004). No significant differences were noted in the subjective IKDC score (p value 0.526) or the functional OAK outcome (category D) (p value 0.363). More cases with an absent posterior horn had evidence of radiological OA (p value 0.022) even at mid-term follow-up.
Conclusion: Even though the overall OAK and subjective IKDC scores did not differ significantly between the two subsets, the poorer outcomes in terms of objective stability and radiological OA noted in the absence of the PHMM indicate the importance of preserving this part of the meniscus.
Keywords: ACL, functional outcome, knee, posterior horn of medial meniscus
The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption, and GDP for Turkey: Time Series Analysis, 1980-2010
Authors: Jinhoa Lee
Abstract:
The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of CO2 emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To this end, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), carbon dioxide (CO2) emissions and gross domestic product (GDP) for Turkey, using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Phillips-Perron (PP) test for stationarity, the Johansen maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. All the variables in this study show very strong significant effects on GDP in the long term. The long-run equilibrium in the VECM suggests negative long-run causalities from the consumption of petroleum products and the direct combustion of crude oil, coal and natural gas to GDP. Conversely, positive impacts of CO2 emissions and electricity consumption on GDP are found to be significant in Turkey during the period. There exists a short-run bidirectional relationship between electricity consumption and natural gas consumption: a positive unidirectional causality runs from electricity consumption to natural gas consumption, while a negative unidirectional causality runs from natural gas consumption to electricity consumption. Moreover, GDP has a negative effect on electricity consumption in Turkey in the short run.
Overall, the results support the argument that there are relationships among environmental quality, energy use and economic output, but the associations differ by energy source in the case of Turkey over the period 1980-2010.
Keywords: CO2 emissions, energy consumption, GDP, Turkey, time series analysis
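The econometric pipeline described above (stationarity testing, cointegration, VECM) can be illustrated in miniature. The sketch below is a deliberate simplification, not the authors' procedure: it fits a two-variable Engle-Granger-style error-correction model to synthetic cointegrated series, where the adjustment coefficient alpha plays the role the VECM error-correction terms play in the paper. All data and variable names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic cointegrated pair: x is a random walk, y tracks 2*x plus noise
n = 500
x = np.cumsum(rng.normal(size=n))
y = 2.0 * x + rng.normal(size=n)

# step 1: long-run relation y = beta * x (cointegrating regression)
beta = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
ect = y - beta * x                     # error-correction term (disequilibrium)

# step 2: short-run dynamics, Delta y_t = alpha * ect_{t-1} + error
dy = np.diff(y)
alpha = np.linalg.lstsq(ect[:-1][:, None], dy, rcond=None)[0][0]

print(round(beta, 2))    # close to the true long-run coefficient 2.0
print(alpha < 0)         # negative alpha: adjustment back toward equilibrium
```

A negative, significant alpha is the standard evidence of long-run causality in an error-correction framework; the paper's multivariate VECM generalizes this to several variables at once.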
Discrete-Event Modeling and Simulation Methodologies: Past, Present and Future
Authors: Gabriel Wainer
Abstract:
Modeling and simulation (M&S) methods have long been used to analyze the behavior of complex physical systems, and it is now common to use simulation as part of the scientific and technological discovery process. M&S advanced thanks to improvements in computer technology, which, in many cases, resulted in simulation software developed with ad hoc techniques. Formal M&S appeared in order to improve the development of very complex simulation systems. Some of these techniques proved successful in providing a sound basis for the development of discrete-event simulation models, improving the ease of model definition, enhancing application development, reducing costs and favoring reuse. The DEVS formalism is one such technique, which proved successful in providing means for modeling while reducing development complexity and costs. DEVS model development is based on a sound theoretical framework, and the independence of M&S tasks makes it possible to run DEVS models on different environments (personal computers, parallel computers, real-time equipment, and distributed simulators) and middleware. We will present a historical perspective of discrete-event M&S methodologies, showing different modeling techniques. We will introduce the origins and general ideas of DEVS and compare it with some of these techniques. We will then show the current status of DEVS M&S and discuss a technological perspective for solving current M&S problems (including real-time simulation, interoperability, and model-centered development techniques). We will show examples of the current use of DEVS, including applications in different fields.
We will finally present current open topics in the area, including advanced methods for centralized, parallel or distributed simulation, the need for real-time modeling techniques, and our view of these fields.
Keywords: modeling and simulation, discrete-event simulation, hybrid systems modeling, parallel and distributed simulation
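The core DEVS idea (an atomic model defined by a time-advance function, an output function and transition functions) can be sketched in a few lines. This toy generator illustrates the structure of the formalism only; it is not the API of any actual DEVS tool, and the class and function names are hypothetical:

```python
class Generator:
    """Minimal DEVS-style atomic model: emits 'ping' every `period` time units."""

    def __init__(self, period):
        self.period = period
        self.sigma = period          # time until the next internal event

    def time_advance(self):          # ta: how long the model stays in this state
        return self.sigma

    def output(self):                # lambda: fired just before delta_int
        return "ping"

    def delta_int(self):             # internal transition: reschedule next event
        self.sigma = self.period


def simulate(model, until):
    """Drive one atomic model, collecting (time, output) events up to `until`."""
    t, events = 0.0, []
    while t + model.time_advance() <= until:
        t += model.time_advance()
        events.append((t, model.output()))
        model.delta_int()
    return events

print(simulate(Generator(2.0), 7.0))  # [(2.0, 'ping'), (4.0, 'ping'), (6.0, 'ping')]
```

A full DEVS simulator adds external transitions (reacting to inputs) and coupled models that wire atomic models together; the event-scheduling loop above is the skeleton those features hang on.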
The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution
Authors: Dayane de Almeida
Abstract:
This work presents a study demonstrating the usability of categories of analysis from Discourse Semiotics (also known as Greimassian Semiotics) in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the 'grammar' of the content plane) can distinguish authors. Thus, a study with 4 sets of texts from a corpus of 'not on demand' written samples (texts that differ in degree of formality, purpose, addressees, themes, etc.) was performed. Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author are semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows: -The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the 'surface' of texts. If language is both expression and content, content must also be considered for more accurate results; style is present in both planes. -Semiotics postulates that the content plane is structured in a 'grammar' that underlies expression and presents different levels of abstraction. This 'grammar' would be a style marker. -Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. How, then, can one determine whether someone is the author of several texts of distinct natures (as is the case in most forensic sets), when intra-speaker variation is known to depend on so many factors? -The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there is a greater chance of the author choosing the same option. If two authors recurrently choose the same options, differently from one another, each one's choices have discriminatory power. -Size is another issue for various attribution methods.
Since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail; the analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent. -The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Similarities and differences were then quantitatively measured through the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results confirmed the hypothesis; hence, the grammatical categories of the content plane may successfully be used in questioned-authorship scenarios.
Keywords: authorship attribution, content plane, forensic linguistics, Greimassian semiotics, intra-speaker variation, style
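The Jaccard coefficient used in the study is straightforward to compute over sets of tags. A minimal sketch, where the tag names below are hypothetical stand-ins for the semiotic categories produced by the annotation:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0                     # two empty sets are identical by convention
    return len(a & b) / len(a | b)

# hypothetical tag sets produced by annotating two texts
text1_tags = {"euphoria", "conjunction", "pragmatic_dimension", "actant_subject"}
text2_tags = {"euphoria", "disjunction", "pragmatic_dimension"}
print(jaccard(text1_tags, text2_tags))  # 2 shared / 5 total = 0.4
```

In the study's design, intra-author pairs of texts would be expected to score higher on this measure than inter-author pairs.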
Health-Related Quality of Life of Caregivers of Institution-Reared Children in Metro Manila: Effects of Role Overload and Role Distress
Authors: Ian Christopher Rocha
Abstract:
This study aimed to determine the association between the quality of life (QOL) of caregivers of children in need of special protection (CNSP) in child-caring institutions in Metro Manila and the levels of their role overload (RO) and role distress (RD). The CNSP in this study covered orphaned, abandoned, abused, neglected, exploited, and mentally challenged children. The domains of QOL included physical health (PH), psychological health, social health (SH), and living conditions (LC). The study also intended to ascertain the association of the caregivers' personal and work-related characteristics with their RO and RD levels. The respondents were 130 CNSP caregivers in 17 residential child-caring institutions in Metro Manila, selected by purposive non-probability sampling. Using a quantitative methodological approach, the survey method was employed to gather data with a self-administered structured questionnaire. Data were analyzed using both descriptive and inferential statistics. Results revealed that the level of RO, the level of RD, and the QOL of the CNSP caregivers were all moderate. The data also suggested significant positive relationships between the RO level and the caregivers' age, number of trainings, and years of service in the institution, as well as significant positive relationships between the RD level and the caregivers' age and hours of care rendered to their care recipients. In addition, all domains of QOL obtained significant relationships with the RO level: PH and LC obtained moderate negative correlations with RO, while the rest of the domains obtained weak negative correlations.
For the correlations of their level of RD and the QOL domains, all domains, except SH, obtained strong negative correlations with the level of RD. The SH was revealed to have a moderate negative correlation with the RD level. In conclusion, caregivers who are older experience higher levels of RO and RD; caregivers who have more training and years of service experience higher levels of RO; and caregivers who render longer hours of care experience higher levels of RD. In addition, the study affirmed that if the levels of RO and RD are high, the QOL is low, and vice versa. Therefore, the RO and RD levels are reliable predictors of the caregivers' QOL. In relation, the caregiving situation in the Philippines was revealed to be unique and distinct from that of other countries, because the levels of RO and RD and the QOL of Filipino CNSP caregivers were all moderate, in contrast with their foreign counterparts, who experience high caregiving RO and RD leading to low QOL.
Keywords: quality of life, caregivers, children in need of special protection, physical health, psychological health, social health, living conditions, role overload, role distress
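The correlation strengths discussed above can in principle be reproduced with a standard Pearson coefficient. The sketch below is illustrative only, using made-up RD and QOL scores rather than the study's survey data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Illustrative scores only: as role distress rises, QOL falls,
# giving a strong negative correlation.
rd = [2, 3, 4, 5, 6]
qol = [8, 7, 5, 4, 2]
r = pearson_r(rd, qol)
```

A value of r near -1 would correspond to the "strong negative correlation" wording used in the abstract.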
Procedia PDF Downloads 214
11611 Image Steganography Using Predictive Coding for Secure Transmission
Authors: Baljit Singh Khehra, Jagreeti Kaur
Abstract:
In this paper, a steganographic strategy is used to hide a text file inside an image. To increase the storage limit, predictive coding is utilized to embed the information. In the proposed scheme, one can exchange secure information by means of a predictive coding methodology. The predictive coding produces a high-quality stego-image, whose pixels are utilized to embed the secret information. The proposed information-hiding scheme is powerful compared with existing methodologies and helps users conceal information efficiently. Entropy, standard deviation, mean square error, and peak signal-to-noise ratio are the parameters used to evaluate the proposed methodology. The results of the proposed approach are quite promising.
Keywords: cryptography, steganography, reversible image, predictive coding
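Two of the evaluation parameters named above, mean square error and peak signal-to-noise ratio, are computed directly from the cover and stego images. The sketch below uses tiny hypothetical 8-bit images, not the paper's test set:

```python
import numpy as np

def mse(cover, stego):
    """Mean square error between cover and stego images."""
    cover = np.asarray(cover, dtype=np.float64)
    stego = np.asarray(stego, dtype=np.float64)
    return np.mean((cover - stego) ** 2)

def psnr(cover, stego, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    err = mse(cover, stego)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)

# Toy 8-bit images: flipping one pixel's LSB changes its value by 1.
cover = np.array([[100, 101], [102, 103]], dtype=np.uint8)
stego = np.array([[101, 101], [102, 103]], dtype=np.uint8)
err = mse(cover, stego)     # 0.25
quality = psnr(cover, stego)  # about 54 dB, i.e. near-invisible change
```

A PSNR above roughly 40 dB is conventionally taken to mean the embedding is imperceptible.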
Procedia PDF Downloads 419
11610 Embedding Employability in the Curriculum: Experiences from New Zealand
Authors: Narissa Lewis, Susan Geertshuis
Abstract:
The global and national employability agenda is changing the higher education landscape as academic staff are faced with the responsibility of developing employability capabilities and attributes in addition to delivering discipline specific content and skills. They realise that the shift towards teaching sustainable capabilities means a shift in the way they teach. But what that shift should be or how they should bring it about is unclear. As part of a national funded project, representatives from several New Zealand (NZ) higher education institutions and the NZ Association of Graduate Employers partnered to discover, trial and disseminate means of embedding employability in the curriculum. Findings from four focus groups (n=~75) and individual interviews (n=20) with staff from several NZ higher education institutions identified factors that enable or hinder embedded employability development within their respective institutions. Participants believed that higher education institutions have a key role in developing graduates for successful lives and careers however this requires a significant shift in culture within their respective institutions. Participants cited three main barriers: lack of strategic direction, support and guidance; lack of understanding and awareness of employability; and lack of resourcing and staff capability. Without adequate understanding and awareness of employability, participants believed it is difficult to understand what employability is let alone how it can be embedded in the curriculum. This presentation will describe some of the impacts that the employability agenda has on staff as they try to move from traditional to contemporary forms of teaching to develop employability attributes of students. Changes at the institutional level are required to support contemporary forms of teaching, however this is often beyond the sphere of influence at the teaching staff level. 
The study identified that small changes to teaching practices were necessary, and a simple model to facilitate change from traditional to contemporary forms of teaching was developed. The model provides a framework to identify small but impactful teaching practices, and exemplar teaching practices were identified. These practices were evaluated for transferability into other contexts to encourage small but impactful changes to embed employability in the curriculum.
Keywords: curriculum design, change management, employability, teaching exemplars
Procedia PDF Downloads 331
11609 Biological Hazards and Laboratory-Inflicted Infections in Sub-Saharan Africa
Authors: Godfrey Muiya Mukala
Abstract:
This research looks at an array of fields in Sub-Saharan Africa, comprising agriculture, food enterprises, medicine, genetically modified organisms, microbiology, and nanotechnology, that can benefit from biotechnological research and development. Research into dangerous organisms, mainly bacterial germs, rickettsia, fungi, parasites, or genetically engineered organisms, has raised immense questions about the biological danger they pose to human beings and the environment because of their uncertainties. In addition, the recurrence of previously managed diseases or the emergence of new diseases is connected to biosafety challenges, especially in rural set-ups in low- and middle-income countries. Notably, biotechnology laboratories are required to adopt biosafety measures to protect their workforce, community, environment, and ecosystem from unforeseen materials and organisms. Sensitization and the inclusion of educational frameworks for laboratory workers are essential to acquiring solid knowledge of harmful biological agents, in addition to the human pathogenicity, susceptibility, and epidemiology relevant to the biological data used in research and development. This article reviews and analyzes research intending to identify the proper implementation of universally accepted practices in laboratory safety and biological hazards. This research identifies ideal microbiological methods, adequate containment equipment, sufficient resources, safety barriers, and specific training and education of the laboratory workforce to decrease and contain biological hazards. Subsequently, knowledge of standardized microbiological techniques and processes, in addition to the employment of containment facilities, protective barriers, and equipment, is far-reaching in preventing occupational infections.
Similarly, reduction of risks and prevention may be attained by training, education, and research on biohazards, pathogenicity, and the epidemiology of the relevant microorganisms. In this way, medical professionals in rural set-ups may adopt the knowledge acquired from the past to project possible concerns in the future.
Keywords: Sub-Saharan Africa, biotechnology, laboratory, infections, health
Procedia PDF Downloads 80
11608 Study of Mobile Game Addiction Using Electroencephalography Data Analysis
Authors: Arsalan Ansari, Muhammad Dawood Idrees, Maria Hafeez
Abstract:
Use of mobile phones has been increasing considerably over the past decade. Currently, they are one of the main sources of communication and information. Initially, mobile phones were limited to calls and messages, but with the advent of new technology, smartphones came to be used for many other purposes, including video games. Despite positive outcomes, addiction to video games on mobile phones has become a leading cause of psychological and physiological problems among many people. Several researchers have examined different aspects of behavioral addiction with the use of different scales. The objective of this study is to examine any distinction between mobile-game-addicted and non-addicted players with the use of electroencephalography (EEG), based upon psycho-physiological indicators. The mobile players were asked to play a mobile game, and EEG signals were recorded by BIOPAC equipment with AcqKnowledge as the data acquisition software. Electrodes were placed following the 10-20 system. EEG was recorded at a sampling rate of 200 samples/s (12,000 samples/min). EEG recordings were obtained from the frontal (Fp1, Fp2), parietal (P3, P4), and occipital (O1, O2) lobes of the brain. The frontal lobe is associated with behavioral control, personality, and emotions. The parietal lobe is involved in perception, understanding logic, and arithmetic. The occipital lobe plays a role in visual tasks. For this study, a 60-second time window was chosen for analysis. Preliminary analysis of the signals was carried out with the AcqKnowledge software of BIOPAC Systems. From the survey based on the CGS manual study (2010), it was concluded that five participants out of fifteen were in the addictive category. This was used as prior information to group the addicted and non-addicted players by physiological analysis.
Statistical analysis showed that, by applying a clustering analysis technique, the authors were able to categorize the addicted and non-addicted players, specifically in the theta frequency range of the occipital area.
Keywords: mobile game, addiction, psycho-physiology, EEG analysis
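Band-limited power of the kind used to separate the two groups can be estimated from an FFT periodogram. The sketch below assumes the study's 200 samples/s rate and 60 s window, with synthetic sinusoids standing in for real occipital recordings:

```python
import numpy as np

FS = 200  # sampling rate used in the study (samples/s)

def theta_power(signal, fs=FS):
    """Mean power in the theta band (4-8 Hz) from an FFT periodogram."""
    sig = np.asarray(signal, dtype=np.float64)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(sig)) ** 2 / len(sig)
    band = (freqs >= 4.0) & (freqs <= 8.0)
    return float(spectrum[band].mean())

# Two synthetic 60 s "occipital" traces: one with a strong 6 Hz
# theta component, one with a weak one.
t = np.arange(0, 60, 1.0 / FS)
strong = 3.0 * np.sin(2 * np.pi * 6 * t)
weak = 0.5 * np.sin(2 * np.pi * 6 * t)
```

A 1-D clustering step (e.g. k-means on these per-participant theta-power features) would then split the sample into the two categories.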
Procedia PDF Downloads 169
11607 Controlled Drug Delivery System for Delivery of Poor Water Soluble Drugs
Authors: Raj Kumar, Prem Felix Siril
Abstract:
The poor aqueous solubility of many pharmaceutical drugs and potential drug candidates is a big challenge in drug development. Nanoformulation of such candidates is one of the major solutions for the delivery of such drugs. We initially developed the evaporation-assisted solvent-antisolvent interaction (EASAI) method, which is useful for preparing nanoparticles of poorly water-soluble drugs with spherical morphology and particle sizes below 100 nm. However, to further improve the formulation, reduce the number of doses, and limit side effects, it is important to control the delivery of drugs. Among the many nano-drug carrier systems available, solid lipid nanoparticles (SLNs) have many advantages, such as high biocompatibility, stability, non-toxicity, and the ability to achieve controlled release of drugs and drug targeting. SLNs can be administered through all existing routes due to the high biocompatibility of lipids. SLNs are usually composed of lipid, surfactant, and drug, with the drug encapsulated in the lipid matrix. A number of non-steroidal anti-inflammatory drugs (NSAIDs) have poor bioavailability resulting from their poor aqueous solubility. In the present work, SLNs loaded with NSAIDs such as nabumetone (NBT), ketoprofen (KP), and ibuprofen (IBP) were successfully prepared using different lipids and surfactants. We studied and optimized experimental parameters using a number of lipids, surfactants, and NSAIDs. The effect of different experimental parameters, such as the lipid-to-surfactant ratio, volume of water, temperature, drug concentration, and sonication time, on the particle size of SLNs during preparation using hot-melt sonication was studied. It was found that particle size was directly proportional to drug concentration and inversely proportional to surfactant concentration, volume of water added, and temperature of the water.
SLNs prepared under optimized conditions were characterized thoroughly using different techniques, such as dynamic light scattering (DLS), field emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM), atomic force microscopy (AFM), X-ray diffraction (XRD), differential scanning calorimetry (DSC), and Fourier transform infrared spectroscopy (FTIR). We successfully prepared SLNs below 220 nm using different lipid and surfactant combinations. The drugs KP, NBT, and IBP showed 74%, 69%, and 53% entrapment efficiency, with drug loadings of 2%, 7%, and 6%, respectively, in SLNs of Campul GMS 50K and Gelucire 50/13. The in-vitro drug release profile of the drug-loaded SLNs showed that nearly 100% of the drug was released in 6 h.
Keywords: nanoparticles, delivery, solid lipid nanoparticles, hot-melt sonication, poor water soluble drugs, solubility, bioavailability
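The entrapment efficiency and drug loading percentages quoted above follow from the standard definitions (entrapped drug over total drug added, and entrapped drug over total formulation weight). The batch masses below are hypothetical, chosen only so the arithmetic reproduces the 74% / 2% KP figures:

```python
def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """EE% = (total drug added - unentrapped drug) / total drug added, x100."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

def drug_loading(entrapped_drug_mg, total_formulation_mg):
    """DL% = entrapped drug / total weight of the SLN formulation, x100."""
    return entrapped_drug_mg / total_formulation_mg * 100.0

# Hypothetical KP batch: 10 mg drug added, 2.6 mg left unentrapped,
# in a 370 mg total formulation.
ee = entrapment_efficiency(10.0, 2.6)  # 74%
dl = drug_loading(7.4, 370.0)          # 2%
```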
Procedia PDF Downloads 315
11606 Innovative Fabric Integrated Thermal Storage Systems and Applications
Authors: Ahmed Elsayed, Andrew Shea, Nicolas Kelly, John Allison
Abstract:
In northern European climates, domestic space heating and hot water represent a significant proportion of total primary energy use, and meeting these demands from a national electricity grid network supplied by renewable energy sources provides an opportunity for a significant reduction in EU CO2 emissions. However, in order to adapt to the intermittent nature of renewable energy generation and to avoid co-incident peak electricity usage from consumers that may exceed current capacity, the demand for heat must be decoupled from its generation. Storage of heat within the fabric of dwellings, for use some hours or days later, provides a route to complete decoupling of demand from supply and facilitates the greatly increased use of renewable energy generation in a local or national electricity network. The integration of thermal energy storage into the building fabric for retrieval at a later time requires evaluation of many competing thermal, physical, and practical considerations, such as the profile and magnitude of heat demand, the duration of storage, charging and discharging rates, storage media, space allocation, etc. In this paper, the authors report investigations of thermal storage in building fabric using concrete and present an evaluation of several factors that impact performance, including heating pipe layout, heating fluid flow velocity, storage geometry, and thermo-physical material properties, and also present an investigation of alternative storage materials and alternative heat transfer fluids. Reducing the heating pipe spacing from 200 mm to 100 mm enhances the stored energy by 25%, and high-performance vacuum insulation results in a heat loss flux of less than 3 W/m2, compared to 22 W/m2 for the more conventional EPS insulation. Dense concrete achieved the greatest storage capacity, relative to medium and light-weight alternatives, although a material thickness of 100 mm required more than 5 hours to charge fully.
Layers of 25 mm and 50 mm thickness can be charged in 2 hours or less, facilitating a fast response that, aggregated across multiple dwellings, could provide a significant and valuable reduction in demand for grid-generated electricity in expected periods of high demand, and potentially eliminate the need for additional new generating capacity from conventional sources such as gas, coal, or nuclear.
Keywords: fabric integrated thermal storage, FITS, demand side management, energy storage, load shifting, renewable energy integration
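The storage-capacity comparisons above can be approximated with a simple sensible-heat estimate, Q = ρVcΔT. The concrete properties and temperature rise below are assumed illustrative values, not the paper's simulation inputs:

```python
def stored_energy_kj(thickness_m, area_m2, density_kg_m3,
                     specific_heat_j_kgk, delta_t_k):
    """Sensible heat stored in a concrete layer: Q = rho * V * c * dT, in kJ."""
    volume_m3 = thickness_m * area_m2
    return density_kg_m3 * volume_m3 * specific_heat_j_kgk * delta_t_k / 1000.0

# Assumed properties for dense concrete: 2400 kg/m3, 880 J/(kg.K),
# charged through a 20 K temperature rise over 1 m2 of floor area.
q_100mm = stored_energy_kj(0.100, 1.0, 2400, 880, 20)  # 4224 kJ
q_50mm = stored_energy_kj(0.050, 1.0, 2400, 880, 20)   # 2112 kJ
```

Halving the layer thickness halves the stored energy but, as the abstract notes, greatly shortens the charging time.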
Procedia PDF Downloads 168
11605 Iranian Processed Cheese under Effect of Emulsifier Salts and Cooking Time in Process
Authors: M. Dezyani, R. Ezzati bbelvirdi, M. Shakerian, H. Mirzaei
Abstract:
Sodium hexametaphosphate (SHMP) is commonly used as an emulsifying salt (ES) in process cheese, although rarely as the sole ES. It appears that no published studies exist on the effect of SHMP concentration on the properties of process cheese when pH is kept constant; pH is well known to affect process cheese functionality. The detailed interactions between the added phosphate, casein (CN), and indigenous Ca phosphate are poorly understood. We studied the effect of the concentration of SHMP (0.25-2.75%) and holding time (0-20 min) on the textural and rheological properties of pasteurized process Cheddar cheese using a central composite rotatable design. All cheeses were adjusted to pH 5.6. The meltability of the process cheese (as indicated by the decrease in the loss tangent parameter from small-amplitude oscillatory rheology, degree of flow, and melt area from the Schreiber test) decreased with an increase in the concentration of SHMP. Holding time also led to a slight reduction in meltability. Hardness of the process cheese increased as the concentration of SHMP increased. Acid-base titration curves indicated that the buffering peak at pH 4.8, which is attributable to residual colloidal Ca phosphate, was shifted to lower pH values with increasing concentration of SHMP. The insoluble Ca and total and insoluble P contents increased as the concentration of SHMP increased. The proportion of insoluble P as a percentage of total (indigenous and added) P decreased with an increase in ES concentration because some of the (added) SHMP formed soluble salts. The results of this study suggest that SHMP chelated the residual colloidal Ca phosphate content and dispersed CN; the newly formed Ca-phosphate complex remained trapped within the process cheese matrix, probably by cross-linking CN.
Increasing the concentration of SHMP helped to improve fat emulsification and CN dispersion during cooking, both of which probably helped to reinforce the structure of the process cheese.
Keywords: Iranian processed cheese, emulsifying salt, rheology, texture
Procedia PDF Downloads 434
11604 Wedding Organizer Strategy in the Era of the Covid-19 Pandemic in Surabaya, Indonesia
Authors: Rifky Cahya Putra
Abstract:
The coronavirus has made conditions difficult in a number of affected countries. As a result, many traders and companies find it difficult to operate in this pandemic era, so human activities in some fields must adopt a new lifestyle, known as the new normal. The transition from one activity to another certainly requires a high degree of adaptation, so almost all sectors have experienced the impact of this phase, one of which is the wedding organizer. This research aims to find out what strategies are used so that the company can keep running during this pandemic. Data were collected through interviews with the owner of the wedding organizer and his team. The qualitative descriptive data analysis used an interactive model consisting of three main steps, namely data reduction, data presentation, and conclusion drawing. From the results of the interviews, the conclusion is that there are three strategies: social media, sponsorship, and promotion.
Keywords: strategy, wedding organizer, pandemic, Indonesia
Procedia PDF Downloads 139
11603 Gravitational Water Vortex Power Plant: Experimental-Parametric Design of a Hydraulic Structure Capable of Inducing the Artificial Formation of a Gravitational Water Vortex Appropriate for Hydroelectric Generation
Authors: Henrry Vicente Rojas Asuero, Holger Manuel Benavides Muñoz
Abstract:
Approximately 80% of the energy consumed worldwide is generated from fossil sources, which are responsible for the emission of a large volume of greenhouse gases. For this reason, the global trend at present is the widespread use of energy produced from renewable sources. This seeks safety and diversification of energy supply, based on social cohesion, economic feasibility, and environmental protection. In this scenario, small hydropower systems (P ≤ 10 MW) stand out due to their high efficiency, economic competitiveness, and low environmental impact. Small hydropower systems, along with wind and solar energy, are expected to represent a significant percentage of the world's energy matrix in the near term. Among the various technologies present in the state of the art relating to small hydropower systems is the Gravitational Water Vortex Power Plant, a recent technology that excels because of its versatility of operation, since it can operate with heads in the range of 0.70 m-2.00 m and flow rates from 1 m3/s to 20 m3/s. Its operating system is based on the utilization of the rotational energy contained within an artificially induced large water vortex. This paper presents the study and experimental design of an optimal hydraulic structure with the capacity to induce the artificial formation of a gravitational water vortex through a system of easy application and high efficiency, able to operate in conditions of very low head and minimum flow. The proposed structure consists of a channel with a variable base, a vortex inductor, and a tangential flow generator, coupled to a circular tank with a conical-transition bottom hole. In the laboratory tests, the angular velocity of the water vortex was related to the geometric characteristics of the inductor channel, as well as to the influence of the conical-transition bottom hole on the physical characteristics of the water vortex.
The results show angular velocity values of greater magnitude as a function of depth; in addition, the presence of the conical transition in the bottom hole of the circular tank improves the water vortex formation conditions while increasing the angular velocity values. Thus, the proposed system is a sustainable solution for the energy supply of rural areas near watercourses.
Keywords: experimental model, gravitational water vortex power plant, renewable energy, small hydropower
Procedia PDF Downloads 294
11602 LTE Performance Analysis in the City of Bogota Northern Zone for Two Different Mobile Broadband Operators over Qualipoc
Authors: Víctor D. Rodríguez, Edith P. Estupiñán, Juan C. Martínez
Abstract:
The evolution of mobile broadband technologies has made it possible to increase users' download rates for current services. The evaluation of technical parameters at the link level is of vital importance to validate the quality and veracity of the connection, thus avoiding large losses of data, time, and productivity. Some of these failures may occur between the eNodeB (Evolved Node B) and the user equipment (UE), so the link between the end device and the base station must be observed. LTE (Long Term Evolution) is considered one of the IP-oriented mobile broadband technologies that work stably for data and VoIP (Voice over IP) on devices that support that feature. This research presents a technical analysis of the connection and channeling processes between the UE and the eNodeB with the TAC (Tracking Area Code) variables, and an analysis of performance variables (throughput, Signal to Interference and Noise Ratio (SINR)). Three measurement scenarios were proposed in the city of Bogotá using QualiPoc, where two operators were evaluated (Operator 1 and Operator 2). Once the data were obtained, an analysis of the variables was performed, determining that the data obtained in the transmission modes vary depending on the parameters BLER (Block Error Rate), throughput, and SNR (Signal-to-Noise Ratio). For both operators, differences in transmission modes were detected, and this is reflected in the quality of the signal. In addition, because the two operators work on different frequencies, it can be seen that Operator 1, despite having spectrum in Band 7 (2600 MHz) alongside Operator 2, is reassigning traffic to a lower band, AWS (1700 MHz). The difference in signal quality with respect to the data connections established by Operator 2, and the difference found in the transmission modes determined by the eNodeB for Operator 1, are remarkable.
Keywords: BLER, LTE, network, QualiPoc, SNR
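As a rough, illustrative check of how SINR relates to achievable rate (not part of the measurement campaign itself), the Shannon bound C = B·log2(1 + SINR) can be sketched; the 20 MHz carrier bandwidth and 10 dB SINR below are assumed example values:

```python
import math

def sinr_db_to_linear(sinr_db):
    """Convert an SINR figure from dB to a linear ratio."""
    return 10 ** (sinr_db / 10.0)

def shannon_throughput_mbps(bandwidth_hz, sinr_db):
    """Upper-bound throughput C = B * log2(1 + SINR), in Mbit/s."""
    sinr = sinr_db_to_linear(sinr_db)
    return bandwidth_hz * math.log2(1 + sinr) / 1e6

# Example: a 20 MHz LTE carrier at 10 dB SINR.
cap = shannon_throughput_mbps(20e6, 10.0)  # roughly 69 Mbit/s
```

Measured throughput sits well below this bound once coding overhead, scheduling, and the BLER-driven transmission-mode choices discussed above are accounted for.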
Procedia PDF Downloads 119
11601 Scientific Investigation for an Ancient Egyptian Polychrome Wooden Stele
Authors: Ahmed Abdrabou, Medhat Abdalla
Abstract:
The studied stele dates back to the Third Intermediate Period (1075-664 BC) of ancient Egypt. It is made of wood and covered with painted gesso layers. This study aims to use a combination of multispectral imaging {visible (VIS), infrared (IR), visible-induced infrared luminescence (VIL), ultraviolet-induced luminescence (UVL), and ultraviolet reflected (UVR) photography}, along with portable X-ray fluorescence, in order to map and identify the pigments as well as to provide a deeper understanding of the painting techniques. Moreover, the authors were significantly interested in the identification of the wood species. Multispectral images were acquired in three spectral bands: ultraviolet (360-400 nm), visible (400-780 nm), and infrared (780-1100 nm). False-color images were made by digitally editing the VIS image with the IR or UV images using Adobe Photoshop. Optical microscopy (OM), portable X-ray fluorescence spectroscopy (p-XRF), and Fourier transform infrared spectroscopy (FTIR) were also used in this study. The mapping and imaging techniques provided useful information about the spatial distribution of pigments; in particular, visible-induced infrared luminescence (VIL) allowed the spatial distribution of the Egyptian blue pigment to be mapped, and every region containing Egyptian blue, down to single crystals in some instances, is clearly visible as a bright white area. However, complete characterization of the pigments requires the use of p-XRF spectroscopy. Based on the elemental analysis found by p-XRF, we conclude that the artists used mixtures of the basic mineral pigments to achieve a wider palette of hues. Microscopic identification of the wood indicated that the wood used was sycamore fig (Ficus sycomorus L.)
which is recorded as being native to Egypt and was used to make wooden artifacts since at least the Fifth Dynasty.
Keywords: polychrome wooden stele, multispectral imaging, IR luminescence, wood identification, sycamore fig, p-XRF
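A false-color composite of the kind described above (editing the VIS image with the IR capture) is commonly built by shifting channels: IR into red, red into green, and green into blue. The sketch below assumes registered single-band arrays and is only illustrative of the idea, not the authors' Photoshop workflow:

```python
import numpy as np

def infrared_false_color(ir, red, green):
    """Infrared false-color composite: IR->R, red->G, green->B channels."""
    return np.stack([ir, red, green], axis=-1)

# Tiny 2x2 grayscale bands standing in for registered IR/VIS captures.
ir_band = np.full((2, 2), 200, dtype=np.uint8)
red_band = np.full((2, 2), 120, dtype=np.uint8)
green_band = np.full((2, 2), 60, dtype=np.uint8)
composite = infrared_false_color(ir_band, red_band, green_band)
```

Pigments that reflect similarly in visible light but differently in IR then separate into distinct false colors, which is what makes the technique useful for pigment mapping.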
Procedia PDF Downloads 270
11600 The Teacher’s Role in Generating and Maintaining the Motivation of Adult Learners of English: A Mixed Methods Study in Hungarian Corporate Contexts
Authors: Csaba Kalman
Abstract:
In spite of the existence of numerous second language (L2) motivation theories, the teacher’s role in motivating learners has remained an under-researched niche to this day. If we narrow our focus to the teacher’s role in motivating adult learners of English in an English as a Foreign Language (EFL) context in corporate environments, empirical research is practically non-existent. This study fills the above research niche by exploring the most motivating aspects of the teacher’s personality, behaviour, and teaching practices that affect adult learners’ L2 motivation in corporate contexts in Hungary. The study was conducted in a wide range of industries in 18 organisations that employ over 250 people in Hungary. In order to triangulate the research, 21 human resources managers, 18 language teachers, and 466 adult learners of English were involved in the investigation by participating in interview studies and quantitative questionnaire studies that measured ten scales related to the teacher’s role, as well as two criterion measure scales of intrinsic and extrinsic motivation. The qualitative data were analysed using a template organising style, while descriptive and inferential statistics, as well as multivariate statistical techniques such as correlation and regression analyses, were used for analysing the quantitative data. The results showed that certain aspects of the teacher’s personality (thoroughness, enthusiasm, credibility, and flexibility), as well as preparedness, incorporating English for Specific Purposes (ESP) in the syllabus, and focusing on the present, proved to be the most salient aspects of the teacher’s motivating influence.
The regression analyses conducted with the criterion measure scales revealed that 22% of the variance in learners’ intrinsic motivation could be explained by the teacher’s preparedness and appearance, and 23% of the variance in learners’ extrinsic motivation could be attributed to the teacher’s personal branding and incorporation of ESP in the syllabus. The findings confirm the pivotal role teachers play in motivating L2 learners, independent of the context they teach in, and, at the same time, call for further research so that we can better conceptualise the motivating influence of L2 teachers.
Keywords: adult learners, corporate contexts, motivation, teacher’s role
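Variance-explained figures like the 22% and 23% above are the coefficient of determination (R²) of a regression. A minimal sketch with made-up data (not the study's questionnaire scores):

```python
import numpy as np

def r_squared(y, y_pred):
    """Coefficient of determination: share of variance explained by the model."""
    y = np.asarray(y, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Least-squares fit of one predictor (e.g. a "preparedness" scale)
# against a motivation score, via numpy.polyfit.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
slope, intercept = np.polyfit(x, y, 1)
r2 = r_squared(y, slope * x + intercept)
```

Here the synthetic data are nearly linear, so R² is close to 1; in the study, multiple predictors jointly explained about a quarter of the variance.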
Procedia PDF Downloads 112
11599 An Assessment of Impact of Financial Statement Fraud on Profit Performance of Manufacturing Firms in Nigeria: A Study of Food and Beverage Firms in Nigeria
Authors: Wale Agbaje
Abstract:
The aim of this research study is to assess the impact of financial statement fraud on the profitability of selected Nigerian manufacturing firms covering 2002-2016. The specific objectives were to ascertain the effect of incorrect asset valuation on return on assets (ROA) and to ascertain the relationship between improper expense recognition and return on assets (ROA). To achieve these objectives, a descriptive research design was used, while secondary data were collected from the financial reports of the selected firms and the website of the Securities and Exchange Commission. The analysis of covariance (ANCOVA) and the STATA II econometric method were used in the analysis of the data. The Altman model and the operating expenses ratio were adopted in the analysis of the financial reports to create a dummy variable for the selected firms from 2002-2016, and validation of the parameters was ascertained using various statistical techniques such as the t-test, coefficient of determination (R2), F-statistics, and Wald chi-square. Two hypotheses were formulated and tested using the t-statistic at a 5% level of significance. The findings of the analysis revealed that there is a significant relationship between financial statement fraud and profitability in the Nigerian manufacturing industry. It was revealed that incorrect asset valuation has a significant positive relationship with return on assets (ROA), which serves as a proxy for profitability, and so does improper expense recognition. The implication of this is that distortion of asset valuation and expense recognition leads to decreasing profit in the long run in the manufacturing industry.
The study therefore recommended that pragmatic policy options be adopted in the manufacturing industry to effectively manage incorrect asset valuation and improper expense recognition in order to enhance manufacturing industry performance in the country, and that the stemming of financial statement fraud be adequately inculcated into the internal control systems of manufacturing firms for the effective running of the manufacturing industry in Nigeria.
Keywords: Altman's model, improper expense recognition, incorrect asset valuation, return on assets
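The Altman model referred to above is, in its classic 1968 form for publicly traded manufacturers, a weighted sum of five financial ratios; scores below 1.81 signal distress and scores above 2.99 signal safety, with a grey zone between. The ratios in the sketch below are hypothetical:

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman (1968) Z-score for publicly traded manufacturing firms.

    Inputs: working capital / total assets, retained earnings / TA,
    EBIT / TA, market value of equity / total liabilities, sales / TA.
    """
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

# Hypothetical ratios for one firm-year.
z = altman_z(wc_ta=0.25, re_ta=0.30, ebit_ta=0.15, mve_tl=1.2, sales_ta=1.5)
zone = "distress" if z < 1.81 else "grey" if z <= 2.99 else "safe"
```

A dummy variable of the kind the study describes could then flag firm-years falling in the distress zone.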
Procedia PDF Downloads 163
11598 Investigating the Editing's Effect of Advertising Photos on the Virtual Purchase Decision Based on the Quantitative Electroencephalogram (EEG) Parameters
Authors: Parya Tabei, Maryam Habibifar
Abstract:
Decision-making is an important cognitive function that can be defined as the process of choosing an option among available options to achieve a specific goal. Consumer ‘need’ is the main reason for purchasing decisions. Human decision-making while buying products online is subject to various factors, one of which is the quality and effect of advertising photos. Advertising photo editing can have a significant impact on people's virtual purchase decisions. This technique helps improve the quality and overall appearance of photos by adjusting various aspects such as brightness, contrast, colors, cropping, resizing, and adding filters. This study, by examining the effect of editing advertising photos on the virtual purchase decision using EEG data, tries to investigate the effect of edited images on the decision-making of customers. A group of 30 participants were asked to react to 24 edited and unedited images while their EEG was recorded. Analysis of the EEG data revealed increased alpha wave activity in the occipital regions (O1, O2) for both edited and unedited images, which is related to visual processing and attention. Additionally, there was an increase in beta wave activity in the frontal regions (FP1, FP2, F4, F8) when participants viewed edited images, suggesting involvement in cognitive processes such as decision-making and evaluating advertising content. Gamma wave activity also increased in various regions, especially the frontal and parietal regions, which are associated with higher cognitive functions, such as attention, memory, and perception, when viewing the edited images. While the visual processing reflected by alpha waves remained consistent across different visual conditions, editing advertising photos appeared to boost neural activity in frontal and parietal regions associated with decision-making processes. 
These findings suggest that photo editing could influence consumer perceptions during virtual shopping experiences by modulating brain activity related to product assessment and purchase decisions.
Keywords: virtual purchase decision, advertising photo, EEG parameters, decision making
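The band-comparison analysis the abstract describes (alpha, beta, and gamma power per channel) can be sketched with a standard Welch power-spectral-density estimate. This is a minimal illustration, not the authors' pipeline: the sampling rate, band edges, and synthetic one-channel signal are assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, low, high):
    """Integrate the Welch PSD of `signal` over the [low, high] Hz band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= low) & (freqs <= high)
    return np.trapz(psd[mask], freqs[mask])

fs = 250  # assumed sampling rate in Hz
np.random.seed(0)
t = np.arange(0, 10, 1 / fs)
# Synthetic single channel: strong 10 Hz (alpha) rhythm, weaker 20 Hz (beta)
# rhythm, plus broadband noise standing in for background activity.
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 20 * t)
       + 0.1 * np.random.randn(t.size))

bands = {"alpha": (8, 12), "beta": (13, 30), "gamma": (30, 45)}
powers = {name: band_power(eeg, fs, lo, hi) for name, (lo, hi) in bands.items()}
```

In a real study, the same computation would be repeated per electrode (e.g. O1, O2, FP1, FP2) and per condition (edited vs. unedited images) before statistical comparison.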
Procedia PDF Downloads 62
11597 Evaluation of Classification Algorithms for Diagnosis of Asthma in Iranian Patients
Authors: Taha SamadSoltani, Peyman Rezaei Hachesu, Marjan GhaziSaeedi, Maryam Zolnoori
Abstract:
Introduction: Data mining is defined as the process of finding patterns and relationships in data in order to build predictive models. Its applications have spread across many sectors, including healthcare services. Medical data mining aims to solve real-world problems in the diagnosis and treatment of diseases, applying various techniques and algorithms that differ in accuracy and precision. The purpose of this study was to apply knowledge discovery and data mining techniques to the diagnosis of asthma based on patient symptoms and history. Method: Data mining comprises several steps and decisions to be made by the user. It begins with building an understanding of the scope of the application and of prior knowledge in the area, and with identifying the knowledge discovery (KD) process from the stakeholders' point of view; it ends with acting on the discovered knowledge by deploying it, integrating it with other systems, and documenting and reporting it. In this study, a stepwise methodology was followed to achieve a logical outcome. Results: The sensitivity, specificity, and accuracy of the KNN, SVM, Naïve Bayes, neural network, classification tree, and CN2 algorithms were evaluated alongside related studies, and ROC curves were plotted to show the performance of the system. Conclusion: The results show that asthma can be diagnosed with approximately ninety percent accuracy based on demographic and clinical data. The study also showed that methods based on pattern discovery and data mining have higher sensitivity than expert and knowledge-based systems. On the other hand, medical guidelines and evidence-based medicine should remain the basis of diagnostic methods; it is therefore recommended that machine learning algorithms be used in combination with knowledge-based algorithms.
Keywords: asthma, data mining, classification, machine learning
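The evaluation the abstract describes (fitting several classifiers and comparing sensitivity, specificity, accuracy, and ROC AUC) can be sketched with scikit-learn. This is a generic illustration under stated assumptions: the synthetic features stand in for the study's demographic and clinical data, and the model list covers only the algorithms available in scikit-learn (CN2 is omitted).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

# Synthetic stand-in for patient symptom/history features (not the study's data).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True, random_state=0),
    "Naive Bayes": GaussianNB(),
    "Classification tree": DecisionTreeClassifier(random_state=0),
}

results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    results[name] = {
        # Sensitivity = recall on the positive class; specificity = recall
        # on the negative class.
        "sensitivity": recall_score(y_te, pred),
        "specificity": recall_score(y_te, pred, pos_label=0),
        "accuracy": accuracy_score(y_te, pred),
        "auc": roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]),
    }
```

The per-model AUC values computed this way are what an ROC-curve plot would summarize; in the study's setting, the same loop would run over the real patient features and also include a neural network and CN2 rule inducer.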
Procedia PDF Downloads 451