Search results for: power system analysis
24413 A Systematic Review on Challenges in Big Data Environment
Authors: Rimmy Yadav, Anmol Preet Kaur
Abstract:
Big Data has demonstrated vast potential for streamlining operations, supporting decision-making, and spotting business trends in different fields, for example, manufacturing, finance, and Information Technology. This paper gives a multi-disciplinary overview of the research issues in Big Data and of its procedures, instruments, and systems related to privacy, data storage management, network and energy utilization, fault tolerance, and data representation. Besides this, the challenges and opportunities available in the Big Data platform are outlined.
Keywords: big data, privacy, data management, network and energy consumption
Procedia PDF Downloads 316
24412 Gradient Length Anomaly Analysis for Landslide Vulnerability Analysis of Upper Alaknanda River Basin, Uttarakhand Himalayas, India
Authors: Hasmithaa Neha, Atul Kumar Patidar, Girish Ch Kothyari
Abstract:
The northward convergence of the Indian plate has a dominating influence over the structural and geomorphic development of the Himalayan region. The highly deformed and complex stratigraphy in the area arises from a confluence of exogenic and endogenetic geological processes. This region frequently experiences natural hazards such as debris flows, flash floods, avalanches, landslides, and earthquakes due to its harsh and steep topography and fragile rock formations. Therefore, remote sensing technique-based examination and real-time monitoring of tectonically sensitive regions may provide crucial early warnings and invaluable data for effective hazard mitigation strategies. In order to identify unusual changes in the river gradients, the current study demonstrates a spatial quantitative geomorphic analysis of the upper Alaknanda River basin, Uttarakhand Himalaya, India, using gradient length anomaly analysis (GLAA). This basin is highly vulnerable to ground creeping and landslides due to the presence of active faults/thrusts, toe-cutting of slopes for road widening, development of heavy engineering projects on the highly sheared bedrock, and periodic earthquakes. The intersecting joint sets developed in the bedrocks have formed wedges that have facilitated the recurrence of several landslides. The main objective of the current research is to identify abnormal gradient lengths, indicating potential landslide-prone zones. High-resolution digital elevation data and geospatial techniques are used to perform this analysis. The results of GLAA are corroborated with the historical landslide events and ultimately used for the generation of landslide susceptibility maps of the current study area. The preliminary results indicate that approximately 3.97% of the basin is stable, while about 8.54% is classified as moderately stable and suitable for human habitation. However, roughly 19.89% fall within the zone of moderate vulnerability, 38.06% are classified as vulnerable, and 29% fall within the highly vulnerable zones, posing risks for geohazards, including landslides, glacial avalanches, and earthquakes. This research provides valuable insights into the spatial distribution of landslide-prone areas. It offers a basis for implementing proactive measures for landslide risk reduction, including land-use planning, early warning systems, and infrastructure development techniques.
Keywords: landslide vulnerability, geohazard, GLA, upper Alaknanda Basin, Uttarakhand Himalaya
Procedia PDF Downloads 78
24411 Optimizing Energy Efficiency: Leveraging Big Data Analytics and AWS Services for Buildings and Industries
Authors: Gaurav Kumar Sinha
Abstract:
In an era marked by increasing concerns about energy sustainability, this research endeavors to address the pressing challenge of energy consumption in buildings and industries. This study delves into the transformative potential of AWS services in optimizing energy efficiency. The research is founded on the recognition that effective management of energy consumption is imperative for both environmental conservation and economic viability. Buildings and industries account for a substantial portion of global energy use, making it crucial to develop advanced techniques for analysis and reduction. This study sets out to explore the integration of AWS services with big data analytics to provide innovative solutions for energy consumption analysis. Leveraging AWS's cloud computing capabilities, scalable infrastructure, and data analytics tools, the research aims to develop efficient methods for collecting, processing, and analyzing energy data from diverse sources. The core focus is on creating predictive models and real-time monitoring systems that enable proactive energy management. By harnessing AWS's machine learning and data analytics capabilities, the research seeks to identify patterns, anomalies, and optimization opportunities within energy consumption data. Furthermore, this study aims to propose actionable recommendations for reducing energy consumption in buildings and industries. By combining AWS services with metrics-driven insights, the research strives to facilitate the implementation of energy-efficient practices, ultimately leading to reduced carbon emissions and cost savings. The integration of AWS services not only enhances the analytical capabilities but also offers scalable solutions that can be customized for different building and industrial contexts. The research also recognizes the potential for AWS-powered solutions to promote sustainable practices and support environmental stewardship.
Keywords: energy consumption analysis, big data analytics, AWS services, energy efficiency
Procedia PDF Downloads 65
24410 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security
Authors: D. Pugazhenthi, B. Sree Vidya
Abstract:
Cloud computing is one of the emerging technologies that enables end users to use the services of the cloud on a ‘pay per usage’ strategy. This technology grows at a fast pace and so does its security threat. One among the various services provided by the cloud is storage. In this service, security plays a vital role both in authenticating legitimate users and in protecting information. This paper brings in efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor and multi-dimensional authentication system with multi-level security. Unique identification and low intrusiveness give user-behaviour-based biometrics greater reliability than conventional means of password authentication. With biometric systems, the accounts are accessed only by a legitimate user and not by an impostor. The biometric templates employed here do not include a single trait but multiple traits, viz., iris and fingerprints. The matching stage of the authentication system functions on an Ensemble Support Vector Machine (SVM), with the weights of the base SVMs assembled and optimized for the SVM ensemble after each individual SVM of the ensemble is trained by the Artificial Fish Swarm Algorithm (AFSA). This helps in generating a user-specific secure cryptographic key from the multimodal biometric template through a fusion process. The data security problem is averted, and an enhanced security architecture is proposed using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect the records from hackers by preventing the cipher text from being broken back into the original text. The proposed double cryptographic key scheme is thus capable of providing better user authentication and better security, distinguishing between genuine and fake users. There are three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. The extraction of the feature and texture properties from the respective fingerprint and iris images is done initially. Finally, with the help of a fuzzy neural network and a symmetric cryptography algorithm, a double-key encryption technique has been developed. As the proposed approach is based on neural networks, it has the advantage that the data cannot be decrypted by a hacker even if they have already been intercepted. The results prove that the authentication process is optimal and the stored information is secured.
Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification
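To give a flavour of the ensemble-SVM matching stage described above, the following is a heavily simplified, hypothetical sketch: the AFSA weight optimization is replaced by fixed illustrative weights, the fused iris/fingerprint feature vectors are random stand-ins, and none of the names below come from the paper itself.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Hypothetical fused iris + fingerprint feature vectors (rows = samples); 1 = genuine, 0 = impostor
X_train = rng.normal(size=(200, 16))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_test = rng.normal(size=(5, 16))

base_svms = [SVC(kernel=k, probability=True).fit(X_train, y_train) for k in ("linear", "rbf", "poly")]
weights = np.array([0.2, 0.5, 0.3])             # in the paper these weights would come from AFSA optimization

# Weighted soft-voting ensemble decision
probs = np.stack([svm.predict_proba(X_test)[:, 1] for svm in base_svms])
decision = (weights @ probs > 0.5).astype(int)
print(decision)

In the approach described above, the weight vector would instead be searched by AFSA so that the weighted ensemble best separates genuine users from impostors.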
Procedia PDF Downloads 266
24409 EnumTree: An Enumerative Biclustering Algorithm for DNA Microarray Data
Authors: Haifa Ben Saber, Mourad Elloumi
Abstract:
In a number of domains, like in DNA microarray data analysis, we need to cluster simultaneously rows (genes) and columns (conditions) of a data matrix to identify groups of constant rows within a group of columns. This kind of clustering is called biclustering. Biclustering algorithms are extensively used in DNA microarray data analysis, and more effective biclustering algorithms are highly desirable and needed. We introduce a new algorithm, called Enumerative Tree (EnumTree), for biclustering of binary microarray data. It is an algorithm adopting the approach of enumerating biclusters, and it extracts all biclusters of consistently good quality. The main idea of EnumTree is the construction of a new tree structure to represent adequately the different biclusters discovered during the process of enumeration. This algorithm adopts the strategy of discovering all biclusters at a time. The performance of the proposed algorithm is assessed using both synthetic and real DNA microarray data; our algorithm outperforms other biclustering algorithms for binary microarray data and finds biclusters with different numbers of rows. Moreover, we test the biological significance using a gene annotation web tool to show that our proposed method is able to produce biologically relevant biclusters.
Keywords: DNA microarray, biclustering, gene expression data, tree, data mining
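As a generic illustration of what "enumerating biclusters" in a binary matrix means (a naive, exponential-time sketch on invented data; it does not reproduce EnumTree's tree structure or pruning rules):

from itertools import combinations
import numpy as np

def naive_binary_biclusters(M, min_rows=2, min_cols=2):
    """Enumerate all-ones biclusters of a binary matrix M (brute force over column subsets)."""
    n_rows, n_cols = M.shape
    found = set()
    for k in range(min_cols, n_cols + 1):
        for cols in combinations(range(n_cols), k):
            rows = tuple(r for r in range(n_rows) if M[r, list(cols)].all())
            if len(rows) >= min_rows:
                found.add((rows, cols))
    return found

M = np.array([[1, 1, 0, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [1, 1, 0, 1]])
for rows, cols in sorted(naive_binary_biclusters(M)):
    print("rows", rows, "cols", cols)

EnumTree's contribution, as described above, is precisely to organize such an enumeration in a tree so that biclusters are discovered systematically rather than by brute force over all column subsets.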
Procedia PDF Downloads 374
24408 Comparison of Stereotactic Body Radiation Therapy Virtual Treatment Plans Obtained With Different Collimators in the Cyberknife System in Partial Breast Irradiation: A Retrospective Study
Authors: Öznur Saribaş, Si̇bel Kahraman Çeti̇ntaş
Abstract:
This study aims to compare target volume and critical organ doses by using CyberKnife (CK) in accelerated partial breast irradiation (APBI) in patients with early-stage breast cancer. Three different virtual plans were made, for the Iris, fixed, and multi-leaf collimator (MLC), for 5 patients who received radiotherapy in the CyberKnife system. CyberKnife virtual plans were created with 6 Gy per day, totaling 30 Gy. Dosimetric parameters for the three collimators were analyzed according to the restrictions in the NSABP-39/RTOG 0413 protocol. The plans ensured that critical organs were protected and that the GTV received 95% of the prescribed dose. The prescribed dose was defined by an isodose curve of a minimum of 80%. The homogeneity index (HI), conformity index (CI), treatment time (min), monitor units (MU), and doses received by critical organs were compared. As a result of the comparison of the plans, a significant difference was found for treatment time and MU; however, no significant difference was found for HI and CI. The V30 and V15 values of the ipsilateral breast were lowest with the MLC. There was no significant difference between Dmax values for the lung and heart. However, the mean MU and treatment time were lowest with the MLC. As a result, the target volume received the desired dose with each collimator. The contralateral breast and contralateral lung doses were lowest with the Iris. The fixed collimator was found to be more suitable for cardiac doses, but these values did not show a significant difference. The use of fixed collimators may cause difficulties in clinical applications due to the long treatment time. The choice of collimator in breast SBRT applications with CyberKnife may vary depending on tumor size, proximity to critical organs, and tumor localization.
Keywords: APBI, CyberKnife, early stage breast cancer, radiotherapy
Procedia PDF Downloads 122
24407 Non-Thyroidal Illness Syndrome and Its Prognostic Significance in Pediatric Septic Shock: A Cross-Sectional Analysis
Authors: Ankita Sharma, Satish Kumar Meena, Neha Kawatra Madan
Abstract:
Background and Aims: Pediatric septic shock, a life-threatening condition, is associated with significant morbidity and mortality. Dysregulation of thyroid function, presenting as Non-Thyroidal Illness Syndrome (NTIS), is a common observation in critically ill patients and may impact clinical outcomes. This study investigates the thyroid hormone profile in pediatric septic shock and its correlation with disease outcomes. Methods: A cross-sectional study was conducted in the Pediatric Department of VMMC and Safdarjung Hospital, New Delhi. Ninety-one children, aged 1 month to 12 years, diagnosed with septic shock were included. Thyroid function tests (Total T3, Total T4, Free T3, Free T4, and TSH) were measured upon admission. Outcomes were categorized as favorable (shock reversal within 24 hours, ICU stay <7 days) or unfavorable (prolonged shock, ICU stay >7 days, multiorgan dysfunction syndrome [MODS], or death). Statistical analysis included logistic regression and receiver operating characteristic (ROC) curve evaluation. Results: Thyroid hormone abnormalities were prevalent, with low Total T3 (84.6%), low Total T4 (70.3%), and low Free T3 (76.9%) being the most common findings. Significant associations were observed between low levels of Total T3, Total T4, Free T3, and Free T4 with unfavorable outcomes (p<0.001 for all). ROC analysis identified Free T3 as the strongest predictor of unfavorable outcomes, with an AUROC of 0.842. Conclusions: Thyroid hormone levels, particularly Free T3, are critical prognostic markers in pediatric septic shock. Timely monitoring of thyroid function could aid in risk stratification and therapeutic decision-making. Future research should focus on the potential benefits of thyroid hormone replacement therapy in this population.
Keywords: pediatric septic shock, thyroid function, non-thyroidal illness syndrome, prognostic markers, free T3
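The ROC evaluation reported above can be illustrated with a generic sketch; the values below are invented and are not the study's patient data:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical data: 1 = unfavorable outcome, 0 = favorable; free_t3 in pg/mL
outcome = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
free_t3 = np.array([1.2, 3.1, 1.5, 1.0, 2.8, 3.4, 1.8, 2.9, 1.1, 3.0]).reshape(-1, 1)

model = LogisticRegression().fit(free_t3, outcome)       # logistic regression, as in the study design
prob_unfavorable = model.predict_proba(free_t3)[:, 1]    # predicted risk of unfavorable outcome
auroc = roc_auc_score(outcome, prob_unfavorable)         # area under the ROC curve
fpr, tpr, thresholds = roc_curve(outcome, prob_unfavorable)
print(f"AUROC = {auroc:.3f}")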
Procedia PDF Downloads 13
24406 The Influence of English Learning on Ethnic Kazakh Minority Students’ Identity (Re)Construction at Chinese Universities
Authors: Sharapat Sharapat
Abstract:
English language is perceived as cultural capital in many non-native English-speaking countries, and minority groups in these social contexts seem to invest in the language to be empowered and reposition themselves from the imbalanced power relation with the dominant group. This study is devoted to exploring how English learning influences minority Kazakh students' identity (re)construction at Chinese universities, from the scope of the 'imagined community, investment, and identity' theory of Norton (2013). To this end, three research questions were designed as follows: 1) Kazakh minority students' English learning experiences at Chinese universities; 2) Kazakh minority students' views about the benefits and opportunities of English learning; 3) the influence of English learning on Kazakh minority students' identity (re)construction. The study employs an interview-based qualitative research method, interviewing nine Kazakh minority students at universities in Xinjiang and other inland cities in China. The findings suggest that through English learning, some students have reconstructed multiple identities, such as multicultural and global identities, which created 'a third space' to break the limits of their ethnic and national identities and a confused identity as someone in-between. Meanwhile, most minority students were empowered by the English language to resist inferior or marginalized positions and reconstruct an imagined elite identity. However, English learning disempowered students who had little previous English education in school and placed them on an unequal footing with other students, which further escalated educational inequities.
Keywords: minority in China, identity construction, multilingual education, language empowerment
Procedia PDF Downloads 238
24405 Examining the Effects of College Education on Democratic Attitudes in China: A Regression Discontinuity Analysis
Authors: Gang Wang
Abstract:
Education is widely believed to be a prerequisite for democracy and civil society, but the causal link between education and outcome variables is usually hard to identify. This study applies a fuzzy regression discontinuity design to examine the effects of college education on democratic attitudes in the Chinese context. In the analysis, treatment assignment is determined by students' college entry years and thus naturally selected by subjects' ages. Using a sample of Chinese college students collected in Beijing in 2009, this study finds that college education actually reduces undergraduates' motivation for political development in China but promotes political loyalty to the authoritarian government. Further hypothesis tests explain these interesting findings from two perspectives. The first is related to the complexity of politics. As college students progress over time, they increasingly realize the complexity of political reform in China's authoritarian regime and rather stay away from politics. The second is related to students' career opportunities. As students get close to graduation, they are immersed in job hunting and have a reduced interest in political freedom.
Keywords: China, college education, democratic attitudes, regression discontinuity
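A fuzzy regression discontinuity of this kind is commonly estimated by two-stage least squares, instrumenting actual college attendance with an entry-cohort (cutoff) indicator. The sketch below uses simulated data and invented variable names, not the Beijing survey; the manual two-stage version shown here does not correct the second-stage standard errors, so a dedicated IV estimator would be used in practice.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(17, 25, n)                       # running variable (entry age / cohort)
cutoff = 20
above = (age >= cutoff).astype(float)              # instrument: eligibility side of the cutoff
p_college = 0.2 + 0.6 * above                      # fuzzy take-up of college education
college = (rng.random(n) < p_college).astype(float)
attitude = 2.0 - 0.5 * college + 0.05 * (age - cutoff) + rng.normal(0, 1, n)

# Stage 1: predict treatment from the cutoff indicator and the centered running variable
X1 = sm.add_constant(np.column_stack([above, age - cutoff]))
college_hat = sm.OLS(college, X1).fit().fittedvalues

# Stage 2: regress the outcome on predicted treatment (local average treatment effect)
X2 = sm.add_constant(np.column_stack([college_hat, age - cutoff]))
print(sm.OLS(attitude, X2).fit().params)           # coefficient on college_hat ~ RD estimate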
Procedia PDF Downloads 352
24404 Predictive Value of Primary Tumor Depth for Cervical Lymphadenopathy in Squamous Cell Carcinoma of Buccal Mucosa
Authors: Zohra Salim
Abstract:
Objective: To assess the relationship of primary tumor thickness with cervical lymphadenopathy in squamous cell carcinoma of the buccal mucosa. Methodology: A cross-sectional observational study was carried out on 80 patients with biopsy-proven oral squamous cell carcinoma of the buccal mucosa at Dow University of Health Sciences. All the study participants were treated with wide local excision of the primary tumor with elective neck dissection. Patients with prior head and neck malignancy or those with prior radiotherapy or chemotherapy were excluded from the study. Data were entered and analyzed in SPSS 21. A chi-squared test with a 95% confidence interval and 80% power was used to evaluate the relationship of tumor depth with cervical lymph nodes. Results: 50 participants were male, and 30 were female. 30 patients were in the age range of 20-40 years, 36 patients in the range of 40-60 years, while 14 patients were beyond age 60 years. Tumor size ranged from 0.3 cm to 5 cm with a mean of 2.03 cm. Tumor depth ranged from 0.2 cm to 5 cm. 20% of the participants presented with tumor depth greater than 2.5 cm, while 80% of patients presented with tumor depth less than 2.5 cm. Out of 80 patients, 27 presented with negative lymph nodes, while 53 presented with positive lymph nodes. Conclusion: Our study concludes that a relationship exists between the depth of the primary tumor and cervical lymphadenopathy in squamous cell carcinoma of the buccal mucosa.
Keywords: squamous cell carcinoma, tumor depth, cervical lymphadenopathy, buccal mucosa
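The chi-squared association described above can be illustrated on a 2x2 contingency table; the cell counts below are hypothetical (chosen only to match the reported totals of 27 node-negative and 53 node-positive patients), not the study's actual cross-tabulation:

import numpy as np
from scipy.stats import chi2_contingency

# Rows: tumor depth (< 2.5 cm, >= 2.5 cm); columns: cervical nodes (negative, positive)
table = np.array([[25, 39],
                  [2, 14]])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")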
Procedia PDF Downloads 238
24403 Flexible Coupling between Gearbox and Pump (High Speed Machine)
Authors: Naif Mohsen Alharbi
Abstract:
This paper presents a failure that occurred on a flexible coupling installed in an oil and gas operation, together with the maintenance ideas implemented on the coupling, which transmits high torque from a gearbox to a pump. The machine train includes a steam turbine that drives the pump, with a gearbox located in between for speed reduction. Objective: The main objectives of the investigation are to identify the root causes, resolve them, and develop the technology design of the bad actor, ultimately fulfilling operational productivity and ensuring better technology, quality, and design through the proposed solutions. This report provides the study intentionally for continuous operation optimization, to utilize the advanced opportunity and implement improvements. Method: The method used in this project was a very focused root cause analysis procedure that incorporated engineering analysis and measurements. The analysis method extensively covers measurement of the complete coupling dimensions, including membrane thickness, hubs, bore diameter, and total length, and dismantling of the flexible coupling to diagnose how deeply it has been affected. It also covers the definition of failure modes, so that the causes could be identified and verified, vibration analysis and metallurgy testing, and, lastly, the application of several solutions using advanced tools (described in detail). Results and observations: Design capacity: The coupling capacity is inadequate to fulfil 100% of the operating conditions. Therefore, a design modification with a service factor of at least 2.07 is crucial to address this issue and prevent recurrence of a similar scenario, especially for the new upgrading project. Discharge fluctuation: High torque was encountered at the flexible coupling during operation. Therefore, the discharge valve behaviour, tuning, set point, and general conditions were re-evaluated and modified; subsequently, this can be used as a baseline for the upcoming coupling design project. Metallurgy test: The material of the flexible coupling membranes (discs) was tested at the lab in a detailed metallurgical investigation, and a better material grade has been selected for our operating conditions.
Keywords: high speed machine, reliability, flexible coupling, rotating equipment
Procedia PDF Downloads 72
24402 Evolution of Bioactive Components of Prickly Pear Juice (Opuntia ficus indica) and Cocktails with Orange Juice
Authors: T. Hadj Sadok, R. Hattab Bey, K. Rebiha
Abstract:
The valorization of juice from the prickly pear of Opuntia ficus indica inermis as cocktails appears to be an attractive alternative because of its nutritional value and its functional compounds with anti-radical activity (polyphenols, vitamin C, carotenoids, betalains, fiber, and minerals). The juice from the fruit pulp is characterized by a high pH (5.85), which makes its conservation difficult; its preservation requires a thermal treatment at high temperatures (over 100 °C) that is harmful to bioactive constituents, in contrast to orange juice, which is more acidic and processed at temperatures below 100 °C. The valorization as fig-orange cocktails is particularly interesting thanks to the contribution of polyphenols, fiber, vitamin C, reducing sugar (sweetener), betalains, and minerals, while allowing lower-temperature processing owing to the decreased pH. The heat treatment of these juices, orange alone or in cocktails, showed that the antioxidant power decreases by 12% in the presence of 30% heat-treated juice and by 28% and 32% in the presence of 10% and 20% juice, which shows the effect of the Opuntia prickly pear juice. During storage for 4 weeks, the loss of vitamin C is 40% and 38% in the presence of 10% and 20% juice and 33% in the presence of 30% prickly pear juice. In parallel, a heat stabilization treatment relatively affects the polyphenol content, which decreases by 10.5% to 30% in the cocktail, and by 6.11% to 6.71% for cocktails at 10% and 20%. Vitamin C decreases by 12% to 24% after a heat treatment at 85 °C for 30 minutes for the orange juice and the prickly pear juice, respectively; this reduction is higher when the juice is in the form of cocktails composed of 10% to 30% prickly pear juice.
Keywords: prickly pear juice, orange cocktail, polyphenol, Opuntia ficus indica, vitamin
Procedia PDF Downloads 382
24401 Seismicity and Source Parameter of Some Events in Abu Dabbab Area, Red Sea Coast
Authors: Hamed Mohamed Haggag
Abstract:
Prior to 12 November 1955, no earthquakes had been reported from the Abu Dabbab area in the International Seismological Center catalogue (ISC). The largest earthquake in the Abu Dabbab area occurred on November 12, 1955, with magnitude Mb 6.0. The closest station to the epicenter was at Helwan (about 700 km to the north), so the depth of this event is not constrained, and no foreshocks or aftershocks were recorded. Two other earthquakes of magnitude Mb 4.5 and 5.2 took place in the same area on March 02, 1982 and July 02, 1984, respectively. Since the installation of the Aswan Seismic Network stations in 1982 (250-300 km to the south-west of the Abu Dabbab area), followed by the Egyptian National Seismic Network stations, it has been possible to record some activity from the Abu Dabbab area. The earthquakes recorded in the Abu Dabbab area from 1982 to 2014 show that the epicenters are distributed along the same direction as the main fault trends in the area, which is parallel to the Red Sea coast. Spectral analysis was performed for some earthquakes. The source parameters, seismic moment (Mo), source dimension (r), stress drop (Δσ), and apparent stress (σa), are determined for these events. The spectral analysis technique was completed using the MAG software program.
Keywords: Abu Dabbab, seismicity, seismic moment, source parameter
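For readers unfamiliar with these quantities, spectral source parameters are commonly derived from Brune-type relations such as the following (a general reference only; the abstract does not state which formulation or constants the MAG program applies):

M_0 = \frac{4\pi \rho v_s^3 R\, \Omega_0}{R_{\theta\varphi}}, \qquad r = \frac{2.34\, v_s}{2\pi f_c}, \qquad \Delta\sigma = \frac{7 M_0}{16 r^3}, \qquad \sigma_a = \frac{\mu E_s}{M_0}

where Omega_0 is the low-frequency spectral level, f_c the corner frequency, rho the density, v_s the shear-wave velocity, R the hypocentral distance, R_theta-phi the radiation-pattern coefficient, mu the shear modulus, and E_s the radiated seismic energy.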
Procedia PDF Downloads 464
24400 Exploring Disruptive Innovation Capacity Effects on Firm Performance: An Investigation in Industries 4.0
Authors: Selma R. Oliveira, E. W. Cazarini
Abstract:
Recently, studies have referenced innovation as a key factor affecting the performance of firms. Companies make use of their innovative capacities to achieve sustainable competitive advantage. From this perspective, the objective of this paper is to contribute to innovation planning policies in Industry 4.0. Thus, this paper examines the effect of disruptive innovation capacity on firm performance in Europe. The procedure was organized in the following phases: Phase 1: determination of the conceptual model; and Phase 2: verification of the conceptual model. The research was initially conducted based on the specialized literature, from which data regarding the constructs/structure and content were extracted in order to build the model. The research involved the intervention of experts knowledgeable about the object studied, selected by technical-scientific criteria. To reduce subjectivity in the results achieved, the following methods were used complementarily and in combination: multicriteria analysis, multivariate analysis, psychometric scaling, and neurofuzzy technology. The data were extracted using an assessment matrix, and the results were satisfactory, validating the modeling approach.
Keywords: disruptive innovation, capacity, performance, Industry 4.0
Procedia PDF Downloads 168
24399 Linguistic Insights Improve Semantic Technology in Medical Research and Patient Self-Management Contexts
Authors: William Michael Short
Abstract:
‘Semantic Web’ technologies such as the Unified Medical Language System Metathesaurus, SNOMED-CT, and MeSH have been touted as transformational for the way users access online medical and health information, enabling both the automated analysis of natural-language data and the integration of heterogeneous health-related resources distributed across the Internet through the use of standardized terminologies that capture concepts and relationships between concepts that are expressed differently across datasets. However, the approaches that have so far characterized ‘semantic bioinformatics’ have not yet fulfilled the promise of the Semantic Web for medical and health information retrieval applications. This paper argues within the perspective of cognitive linguistics and cognitive anthropology that four features of human meaning-making must be taken into account before the potential of semantic technologies can be realized for this domain. First, many semantic technologies operate exclusively at the level of the word. However, texts convey meanings in ways beyond lexical semantics. For example, transitivity patterns (distributions of active or passive voice) and modality patterns (configurations of modal constituents like may, might, could, would, should) convey experiential and epistemic meanings that are not captured by single words. Language users also naturally associate stretches of text with discrete meanings, so that whole sentences can be ascribed senses similar to the senses of words (so-called ‘discourse topics’). Second, natural language processing systems tend to operate according to the principle of ‘one token, one tag’. For instance, occurrences of the word sound must be disambiguated for part of speech: in context, is sound a noun or a verb or an adjective? In syntactic analysis, deterministic annotation methods may be acceptable. But because natural language utterances are typically characterized by polyvalency and ambiguities of all kinds (including intentional ambiguities), such methods leave the meanings of texts highly impoverished. Third, ontologies tend to be disconnected from everyday language use and so struggle in cases where single concepts are captured through complex lexicalizations that involve profile shifts or other embodied representations. More problematically, concept graphs tend to capture ‘expert’ technical models rather than ‘folk’ models of knowledge and so may not match users’ common-sense intuitions about the organization of concepts in prototypical structures rather than Aristotelian categories. Fourth, and finally, most ontologies do not recognize the pervasively figurative character of human language. However, since the time of Galen the widespread use of metaphor in the linguistic usage of both medical professionals and lay persons has been recognized. In particular, metaphor is a well-documented linguistic tool for communicating experiences of pain. Because semantic medical knowledge-bases are designed to help capture variations within technical vocabularies – rather than the kinds of conventionalized figurative semantics that practitioners as well as patients actually utilize in clinical description and diagnosis – they fail to capture this dimension of linguistic usage.
The failure of semantic technologies in these respects degrades the efficiency and efficacy not only of medical research, where information retrieval inefficiencies can lead to direct financial costs to organizations, but also of care provision, especially in contexts of patients’ self-management of complex medical conditions.
Keywords: ambiguity, bioinformatics, language, meaning, metaphor, ontology, semantic web, semantics
Procedia PDF Downloads 136
24398 Corporate Social Responsibility and Career Education: An International Case Study
Authors: Cristina Costa-Lobo, Ana Martins, Maria Das Dores Formosinho, Ana Campina, Filomena Ponte
Abstract:
This paper is a report on the findings of a study conducted at a leading international food group. Documentary analysis and discourse analysis techniques were used to examine how corporate social responsibility and career education are valued by this international group. The Survey on Corporate Social Responsibility and Career Education was used, with 18 open-ended questions, the first six related to corporate social responsibility and the last 12 related to career education. The Survey on the Social Emergency Fund was made up of 16 open-ended questions. The Social Welfare Survey was used to investigate the contribution of social workers in this area, as well as to understand their status. The sample of this investigation is composed of the Director of the development area and the Coordinator and two Social Assistants of the Social Emergency Fund. Their collaboration consisted of providing information in the form of an interview in which the two main axes of this study were explored: corporate social responsibility and career education. The data obtained from the interviews were analysed through content analysis according to Bardin's method (2004), by means of pre-analytical, exploratory, and qualitative treatment and interpretation of the responses. A critical review of documents was also used. The success and effectiveness of this international group are marked by ambition, the ability to resist difficulties, the sharing of values, and the spirit of unity and team sense shared across its different companies. Its leadership position is also due to its concern to reinforce and develop the values of work, discipline, rigor, and competence, and its management is geared towards responding to immediate challenges from the corporate social responsibility perspective that is characteristic of it, incorporating concerns about impacts in both the medium and the long term. In addition to internal training, it directs investments to external training by promoting actions such as participation in seminars and congresses worldwide and the creation of partnerships in various areas of management with prestigious teaching entities. Findings indicate the creation of a training school, with initiatives for internal and external training, in partnership with prestigious teaching entities. Of particular note is the Management Trainees Program, developed for more than 25 years, characterized by building a career through knowledge and skills acquired in a combination of on-the-job experience and a training program.
Keywords: career education, corporate social responsibility, training school, management trainees program
Procedia PDF Downloads 231
24397 Correlations in the Ising Kagome Lattice
Authors: Antonio Aguilar Aguilar, Eliezer Braun Guitler
Abstract:
Using a previously developed procedure and with the aid of algebraic software, for a two-dimensional generalized Ising model with a 4×2 unit cell (UC) we obtain a Kagome lattice with twelve different spin-spin interaction values, in order to determine the partition function per spin L(T). From the partition function we can study the magnetic behavior of the system. Because of the competition phenomenon between spins, a very complex behavior among them, in a variety of magnetic states, can be observed.
Keywords: correlations, Ising, Kagome, exact functions
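For context, the partition function per spin referred to above follows the standard textbook definition (nothing here is specific to this paper's 4×2 unit cell beyond the twelve couplings J_ij):

Z_N(T) = \sum_{\{s_i = \pm 1\}} \exp\!\Big(\frac{1}{k_B T}\sum_{\langle i,j\rangle} J_{ij}\, s_i s_j\Big), \qquad L(T) = \lim_{N\to\infty} Z_N(T)^{1/N}

with s_i = ±1 the spins, J_ij the interaction constants, and N the number of spins.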
Procedia PDF Downloads 373
24396 Firm Performance and Stock Price in Nigeria
Authors: Tijjani Bashir Musa
Abstract:
The recent global crisis, which suddenly resulted in the Nigerian stock market crash, revealed some peculiarities of Nigerian firms. Some firms in Nigeria are performing well, but their stock prices are not increasing, while some firms are on the brink of collapse, but their stock prices are increasing. Thus, this study examines the relationship between firm performance and stock price in Nigeria. The study covered the period of 2005 to 2009. This period spans the stock boom and also the stock market crash that resulted from the global financial meltdown. The study is a panel study. A total of 140 firms were sampled from the 216 firms listed on the Nigerian Stock Exchange (NSE). Data were collected from secondary sources. These data were divided into four strata comprising the best performing stocks, the least performing stocks, the best performing firms, and the least performing firms. Each stratum contains 35 firms with the characteristics of the best performing stocks, best performing firms, least performing stocks, and least performing firms. Multiple linear regression models were used to analyse the data, while the statistical/econometric package Stata 11.0 was used to run the data. The study found that a relationship exists between selected firm performance parameters (operating efficiency, firm profit, earnings per share, and working capital) and stock price. As such, firm performance gave sufficient information, or has predictive power, on stock price movements in Nigeria for all the years under study. The study recommends, among others, that managers of firms in Nigeria should formulate policies and exert effort geared towards improving firm performance that will enhance stock price movements.
Keywords: firm, Nigeria, performance, stock price
Procedia PDF Downloads 481
24395 An Analysis of a Canadian Personalized Learning Curriculum
Authors: Ruthanne Tobin
Abstract:
The shift to a personalized learning (PL) curriculum in Canada represents an innovative approach to teaching and learning that is also evident in various initiatives across the 32-nation OECD. The premise behind PL is that empowering individual learners to have more input into how they access and construct knowledge, and express their understanding of it, will result in more meaningful school experiences and academic success. In this paper presentation, the author reports on a document analysis of the new curriculum in the province of British Columbia. Three theoretical frameworks are used to analyze the new curriculum. Framework 1 focuses on five dominant aspects (FDA) of PL at the classroom level. Framework 2 focuses on conceptualizing and enacting personalized learning (CEPL) within three spheres of influence. Framework 3 focuses on the integration of three types of knowledge (content, technological, and pedagogical). Analysis is ongoing, but preliminary findings suggest that the new curriculum addresses framework 1 quite well, which identifies five areas of personalized learning: 1) assessment for learning; 2) effective teaching and learning; 3) curriculum entitlement (choice); 4) school organization; and 5) “beyond the classroom walls” (learning in the community). Framework 2 appears to be less well developed in the new curriculum. This framework speaks to the dynamics of PL within three spheres of interaction: 1) nested agency, comprised of overarching constraints [and enablers] from policy makers, school administrators and community; 2) relational agency, which refers to a capacity for professionals to develop a network of expertise to serve shared goals; and 3) students’ personalized learning experience, which integrates differentiation with self-regulation strategies. Framework 3 appears to be well executed in the new PL curriculum, as it employs the theoretical model of technological, pedagogical content knowledge (TPACK) in which there are three interdependent bodies of knowledge. Notable within this framework is the emphasis on the pairing of technologies with excellent pedagogies to significantly assist students and teachers. This work will be of high relevance to educators interested in innovative school reform.
Keywords: curriculum reform, K-12 school change, innovations in education, personalized learning
Procedia PDF Downloads 287
24394 Analysis and Identification of Different Factors Affecting Students’ Performance Using a Correlation-Based Network Approach
Authors: Jeff Chak-Fu Wong, Tony Chun Yin Yip
Abstract:
The transition from secondary school to university seems exciting for many first-year students but can be more challenging than expected. Enabling instructors to know students’ learning habits and styles enhances their understanding of the students’ learning backgrounds, allows teachers to provide better support for their students, and has therefore high potential to improve teaching quality and learning, especially in any mathematics-related courses. The aim of this research is to collect students’ data using online surveys, to analyze students’ factors using learning analytics and educational data mining and to discover the characteristics of the students at risk of falling behind in their studies based on students’ previous academic backgrounds and collected data. In this paper, we use correlation-based distance methods and mutual information for measuring student factor relationships. We then develop a factor network using the Minimum Spanning Tree method and consider further study for analyzing the topological properties of these networks using social network analysis tools. Under the framework of mutual information, two graph-based feature filtering methods, i.e., unsupervised and supervised infinite feature selection algorithms, are used to analyze the results for students’ data to rank and select the appropriate subsets of features and yield effective results in identifying the factors affecting students at risk of failing. This discovered knowledge may help students as well as instructors enhance educational quality by finding out possible under-performers at the beginning of the first semester and applying more special attention to them in order to help in their learning process and improve their learning outcomes.
Keywords: students' academic performance, correlation-based distance method, social network analysis, feature selection, graph-based feature filtering method
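A rough sketch of the correlation-based network construction described above (generic code with simulated data, not the authors' pipeline; d = sqrt(2(1 - r)) is one common way of turning correlations into distances):

import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
# Hypothetical student factors (rows = students, columns = factors such as study hours, attendance, ...)
data = rng.normal(size=(120, 6))
factor_names = [f"factor_{i}" for i in range(data.shape[1])]

corr = np.corrcoef(data, rowvar=False)          # factor-factor correlation matrix
dist = np.sqrt(2.0 * (1.0 - corr))              # correlation-based distance

G = nx.Graph()
for i in range(len(factor_names)):
    for j in range(i + 1, len(factor_names)):
        G.add_edge(factor_names[i], factor_names[j], weight=dist[i, j])

mst = nx.minimum_spanning_tree(G)               # factor network as a Minimum Spanning Tree
print(sorted(mst.edges(data="weight")))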
Procedia PDF Downloads 136
24393 Determinants of Green Strategy: Analysis Using Probit and Logit Models
Authors: Ayushi Modi, Eliot Bochet-Merand
Abstract:
This study investigates the structural determinants of green strategies among Small and Medium Enterprises (SMEs) in the European Union and select countries, utilizing data from the Flash Eurobarometer 498 - SMEs, Resource Efficiency, and Green Markets. By applying sequential logit analysis, we explore the drivers behind the adoption and scaling of green actions, such as resource efficiency, waste management, and product innovation, while also examining the provision of green products and services. A key contribution of this research is the novel distinction between the process stage (green actions) and the product stage (green outputs), allowing for a deeper analysis of how green initiatives translate into sustainable business outcomes. Our findings reveal that structural characteristics, such as firm size, sector, and turnover growth, significantly influence the likelihood of both providing green products and implementing comprehensive green actions. Smaller, younger firms in high-impact sectors like construction and industry are more likely to engage in sustainability efforts, particularly when they have a green strategy and a dedicated green workforce. Furthermore, companies serving B2B and B2C clients and experiencing turnover growth are more inclined to offer green products. The study underscores the economic implications of these insights, suggesting that financial flexibility, strategic commitment, and human capital investments are critical for scaling green initiatives. By refining variables and excluding heterogeneous countries, our data management ensures robust results. This research provides novel insights into the distinct roles of process and product stages in sustainability, offering valuable policy recommendations for promoting environmental performance in SMEs.
Keywords: green strategy, resource efficiency, SMEs, sustainability, product innovation, environmental performance
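A minimal sketch of the kind of logit specification used in such analyses, with simulated firm-level variables rather than the Flash Eurobarometer 498 microdata:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
small_firm = rng.integers(0, 2, n)              # illustrative structural characteristics
turnover_growth = rng.integers(0, 2, n)
green_strategy = rng.integers(0, 2, n)
latent = -1.0 + 0.6 * turnover_growth + 0.8 * green_strategy - 0.3 * small_firm
offers_green_product = (rng.random(n) < 1 / (1 + np.exp(-latent))).astype(int)

X = sm.add_constant(np.column_stack([small_firm, turnover_growth, green_strategy]))
logit_res = sm.Logit(offers_green_product, X).fit(disp=False)
print(logit_res.summary())
# A probit variant is obtained by replacing sm.Logit with sm.Probit.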
Procedia PDF Downloads 26
24392 Aligning Informatics Study Programs with Occupational and Qualifications Standards
Authors: Patrizia Poscic, Sanja Candrlic, Danijela Jaksic
Abstract:
The University of Rijeka, Department of Informatics participated in the Stand4Info project, co-financed by the European Union, with the main idea of aligning study programs with occupational and qualifications standards in the field of Informatics. A brief overview of our research methodology, goals and deliverables is given. Our main research and project objectives were: a) development of occupational standards, qualification standards and study programs based on the Croatian Qualifications Framework (CROQF), b) higher education quality improvement in the field of information and communication sciences, c) increasing the employability of students of information and communication technology (ICT) and science, and d) continuously improving the competencies of teachers in accordance with the principles of CROQF. CROQF is a reform instrument in the Republic of Croatia for regulating the system of qualifications at all levels through qualifications standards based on learning outcomes and following the needs of the labor market, individuals and society. The central elements of CROQF are learning outcomes - competences acquired by the individual through the learning process and proved afterward. The place of each acquired qualification is set by the level of the learning outcomes belonging to that qualification. The placement of qualifications at respective levels allows the comparison and linking of different qualifications, as well as linking of Croatian qualifications' levels to the levels of the European Qualifications Framework and the levels of the Qualifications Framework of the European Higher Education Area. This research has produced three proposals of occupational standards for the undergraduate study level (System Analyst, Developer, ICT Operations Manager), and two for the graduate (master) level (System Architect, Business Architect). For each occupational standard, employers have provided a list of key tasks and associated competencies necessary to perform them. A set of competencies required for each particular job in the workplace was defined, and each set of competencies was described in more detail by its individual competencies. Based on the sets of competencies from the occupational standards, sets of learning outcomes were defined, and the competencies from the occupational standards were linked with learning outcomes. For each learning outcome, as well as for each set of learning outcomes, it was necessary to specify the verification method and the material and human resources. The task of the project was to suggest revision and improvement of the existing study programs. It was necessary to analyze the existing programs and determine how they meet and fulfill the defined learning outcomes. This way, one could see: a) which learning outcomes from the qualifications standards are covered by existing courses, b) which learning outcomes have yet to be covered, c) whether they are covered by mandatory or elective courses, and d) whether some courses are unnecessary or redundant.
Overall, the main research results are: a) completed proposals of qualification and occupational standards in the field of ICT, b) revised curricula of undergraduate and master study programs in ICT, c) sustainable partnership and association stakeholders network, d) knowledge network - informing the public and stakeholders (teachers, students, and employers) about the importance of CROQF establishment, and e) teachers educated in innovative methods of teaching.
Keywords: study program, qualification standard, occupational standard, higher education, informatics and computer science
Procedia PDF Downloads 145
24391 Regional Advantages Analysis: An Interactive Approach of Comparative and Competitive Advantages
Authors: Abdolrasoul Ghasemi, Ali Arabmazar Yazdi, Yasaman Boroumand, Aliasghar Banouei
Abstract:
In regional studies, choosing an appropriate approach to analyze regional success or failure has always been a challenge. Hence, this study introduces an innovative approach to establish a link between regional success and failure in the past as well as the potential success of a region in the future. The former can be sought in the historical evaluation of comparative advantages, while the latter is portrayed as competitive advantage analysis with a forward-looking approach. Based on the interaction of comparative and competitive advantages, activities are classified into four groups, including activities with no advantage, hidden advantage, fragile advantage and synergistic advantage. In analyzing the comparative advantage of activities, the location quotient method is applied, and in analyzing their competitive advantage, Porter's diamond model using the survey method is applied. According to the results, the share of no advantage, fragile advantage, hidden advantage and synergic advantage activities are respectively 10%, 42%, 16%, and 32%. Also, to achieve economic development in regional activities, our model provides various levels of priority. First, the activities with synergistic advantage should be prioritized, then the ones with hidden advantage, and finally the activities with fragile advantage.
Keywords: regional advantage, comparative advantage, competitive advantage, Porter's diamond model
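For reference, the location quotient mentioned above is conventionally defined as (general definition, not tied to this study's regional data):

LQ_{ij} = \frac{e_{ij}/e_j}{E_i/E}

where e_ij is employment (or output) in activity i in region j, e_j is total employment in region j, E_i is national employment in activity i, and E is total national employment; a value above 1 is usually read as a revealed comparative advantage of the region in that activity.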
Procedia PDF Downloads 358
24390 A Review on Comparative Analysis of Path Planning and Collision Avoidance Algorithms
Authors: Divya Agarwal, Pushpendra S. Bharti
Abstract:
Autonomous mobile robots (AMRs) are expected to serve as smart tools for operations in every automation industry. Path planning and obstacle avoidance are the backbone of AMRs, as robots have to reach their goal location while avoiding obstacles and traversing an optimized path defined according to criteria such as distance, time, or energy. Path planning can be classified into global and local path planning, where environmental information is known and unknown/partially known, respectively. A number of sensors are used for data collection. A number of algorithms, such as artificial potential field (APF), rapidly exploring random trees (RRT), bidirectional RRT, fuzzy approaches, Pure Pursuit, the A* algorithm, vector field histogram (VFH), and modified local path planning algorithms, have been used in the last three decades for path planning and obstacle avoidance for AMRs. This paper makes an attempt to review some of the path planning and obstacle avoidance algorithms used in the field of AMRs. The review includes a comparative analysis of simulations and mathematical computations of path planning and obstacle avoidance algorithms using MATLAB 2018a. From the review, it could be concluded that different algorithms may complete the same task (i.e., with a different set of instructions) in less or more time, space, effort, etc.
Keywords: path planning, obstacle avoidance, autonomous mobile robots, algorithms
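As a concrete example of one of the planners listed above, here is a minimal textbook A* search on an occupancy grid (an illustrative sketch, not code taken from any of the reviewed papers):

import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid; grid[r][c] == 1 means obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in seen:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 3)))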
Procedia PDF Downloads 239
24389 Oxygen Transport in Blood Flows Pasts Staggered Fiber Arrays: A Computational Fluid Dynamics Study of an Oxygenator in Artificial Lung
Authors: Yu-Chen Hsu, Kuang C. Lin
Abstract:
The artificial lung called extracorporeal membrane oxygenation (ECMO) is an important medical machine that supports persons whose heart and lungs are dysfunctional. Previously, investigation of steady deoxygenated blood flows passing through hollow fibers for oxygen transport was carried out experimentally and computationally. The present study computationally analyzes the effect of biological pulsatile flow on the oxygen transport in blood. A 2-D model with a pulsatile flow condition is employed. The power law model is used to describe the non-Newtonian flow, and the Hill equation is utilized to simulate the oxygen saturation of hemoglobin. The dimensionless parameters for the physical model include Reynolds numbers (Re), Womersley parameters (α), pulsation amplitudes (A), Sherwood number (Sh), and Schmidt number (Sc). The present model with steady-state flow conditions is well validated against previous experiments and simulations. It is observed that pulsating flow amplitudes significantly influence the velocity profile, partial pressure of oxygen (PO2), saturation of oxygen (SO2), and the oxygen mass transfer rate (ṁO2). In comparison between steady-state and pulsating flows, our findings suggest that the consideration of pulsating flow in the computational model is needed when Re is raised from 2 to 10 in the typical range for flow in an artificial lung.
Keywords: artificial lung, oxygen transport, non-Newtonian flows, pulsating flows
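For readers unfamiliar with the two constitutive relations named above, their common textbook forms are (the specific coefficients used in the study are not given in the abstract):

\mu = K\,\dot{\gamma}^{\,n-1} \;\;\text{(power-law viscosity)}, \qquad S_{O_2} = \frac{P_{O_2}^{\,n_H}}{P_{50}^{\,n_H} + P_{O_2}^{\,n_H}} \;\;\text{(Hill equation)}, \qquad \alpha = R\sqrt{\omega\rho/\mu} \;\;\text{(Womersley parameter)}

where K and n are the power-law consistency and index, P_50 is the oxygen tension at 50% hemoglobin saturation, n_H the Hill coefficient, R a characteristic radius, omega the pulsation frequency, and rho the blood density.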
Procedia PDF Downloads 313
24388 A Protocol for Usability of Teaching to Students with Learning Difficulties at University: An Italian Research
Authors: Tamara Zappaterra
Abstract:
Learning Difficulties have an evolutionary nature. International research has focused its analysis on the characteristics of Learning Difficulties in childhood, but we are still far from a thorough understanding of the nature of such disorders in adolescence and adulthood. Such issues become even more urgent in the university context. Spelling, meaning, and appropriate use of the specific vocabulary of the various disciplines represent an additional challenge for the dyslexic student. This paper explores the characteristics of Learning Difficulties in adulthood and their impact on university teaching. It presents the results of an interdisciplinary project (educational, medical and engineering areas) at the University of Florence. The purpose of the project is to design a protocol for the usability of teaching and individual study at the university level. The project, after a first reconnaissance of user needs carried out with the participation of the very same protagonists, is at the stage of drafting guidelines for inclusion and education, to be used by teachers, students and administrative staff. The methodologies used are a purpose-built questionnaire and a series of focus groups with users. For collecting data during the focus groups, it was decided to use a method typical of Quality Function Deployment, a tool originally used for quality management, whose versatility makes it easy to use in a number of different contexts. The paper furthermore presents the findings of the project and the most significant elements of the guidelines for teaching, i.e. the section for teachers, whose aim is to implement Learning Difficulties-friendly teaching, even at the university level, in compliance with Italian Law 170/2010. The Guidelines for the didactics and inclusion of Learning Difficulties students of the University of Florence are articulated around a global and systemic plan of action, meant to accompany and protect the students during their study career, even before enrolling at the University, in its different declinations: the logistical, relational, educational, and didactic levels have been considered. These guidelines received in Italy the endorsement of the CNUDD. It is a systemic intervention plan for Learning Difficulties students, which has roused and keeps rousing the interest of the whole university system, prompting a radical reflection on academic teaching: while we try to provide the best Learning Difficulties-friendly didactics in compliance with the rules, no one can be exempted from a wider reflection on the nature and the quality of the university teaching offered to all students.
Keywords: didactic tools, learning difficulties, special and inclusive education, university teaching
Procedia PDF Downloads 284
24387 Fully Coupled Porous Media Model
Authors: Nia Mair Fry, Matthew Profit, Chenfeng Li
Abstract:
This work focuses on the development and implementation of a fully implicit-implicit, coupled mechanical deformation and porous flow, finite element software tool. The fully implicit software accurately predicts classical fundamental analytical solutions such as the Terzaghi consolidation problem. Furthermore, it can capture other analytical solutions less well known in the literature, such as Gibson's sedimentation rate problem and Coussy's problems investigating wellbore stability for poroelastic rocks. The mechanical volume strains are transferred to the porous flow governing equation in an implicit framework. This overcomes some of the many current industrial issues that arise when explicit solvers are used for the mechanical governing equations and implicit solvers only on the porous flow side, which can potentially lead to instability and non-convergence issues in the coupled system, as well as results carrying a degree of error that must be accounted for. The specification of a fully monolithic implicit-implicit coupled porous media code sees the solution of both the seepage and mechanical equations in one matrix system, under a unified time-stepping scheme, which makes the problem definition much easier. When using an explicit solver, additional input such as the damping coefficient and mass scaling factor is required, which is circumvented with a fully implicit solution. Further, improved accuracy is achieved as the solution is not dependent on predictor-corrector methods for the pore fluid pressure solution, but at the potential cost of reduced stability. In testing this fully monolithic porous media code, the fully implicit coupled scheme is compared against an existing staggered explicit-implicit coupled scheme solution across a range of geotechnical problems. These cases include 1) Biot coefficient calculation, 2) consolidation theory with the Terzaghi analytical solution, 3) sedimentation theory with the Gibson analytical solution, and 4) Coussy well-bore poroelastic analytical solutions.
Keywords: coupled, implicit, monolithic, porous media
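As an illustration of the classical benchmark mentioned above, the Terzaghi one-dimensional consolidation solution against which such codes are typically verified can be evaluated with a short script (a generic sketch of the textbook series solution with made-up parameter values, not the authors' finite element code):

import numpy as np

def terzaghi_pressure(z, t, H, cv, u0, n_terms=100):
    """Excess pore pressure u(z, t) for 1-D consolidation of a layer drained at z = 0.

    H : drainage path length, cv : consolidation coefficient, u0 : initial excess pressure.
    """
    Tv = cv * t / H**2                          # dimensionless time factor
    u = np.zeros_like(np.asarray(z, dtype=float))
    for m in range(n_terms):
        M = np.pi * (2 * m + 1) / 2
        u += (2 * u0 / M) * np.sin(M * z / H) * np.exp(-M**2 * Tv)
    return u

z = np.linspace(0.0, 1.0, 5)                    # depth below the drained surface (m)
print(terzaghi_pressure(z, t=3.15e6, H=1.0, cv=1e-7, u0=100.0))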
Procedia PDF Downloads 141
24386 Physicochemical Characterization of Asphalt Ridge Froth Bitumen
Authors: Nader Nciri, Suil Song, Namho Kim, Namjun Cho
Abstract:
Properties and compositions of bitumen and bitumen-derived liquids have significant influences on the selection of recovery, upgrading, and refining processes. Optimal process conditions can often be directly related to these properties. The end uses of bitumen and bitumen products are thus related to their compositions. Because it is not possible to conduct a complete analysis of the molecular structure of bitumen, characterization must be made in other terms. The present paper focuses on the physico-chemical analysis of two different types of bitumen. These bitumen samples were chosen based on the original crude source (oil sand and crude petroleum) and the mode of processing. The aim of this study is to determine both the effect of manufacturing on chemical species and the chemical organization as a function of the type of bitumen sample. In order to obtain information on bitumen chemistry, elemental analysis (C, H, N, S, and O), heavy metal (Ni, V) concentrations, IATROSCAN chromatography (thin layer chromatography-flame ionization detection), FTIR spectroscopy, and 1H NMR spectroscopy have all been used. The characterization includes information about the major compound types (saturates, aromatics, resins, and asphaltenes), which can be compared with similar data for other bitumens and, more importantly, correlated with data from petroleum samples for which refining characteristics are known. Examination of Asphalt Ridge froth bitumen showed that it differed significantly from representative petroleum pitches, principally in its nonhydrocarbon content, heavy metal content, and aromatic compounds. When possible, properties and composition were related to recovery and refining processes. This information is important because of the effects that composition has on recovery and processing reactions.
Keywords: froth bitumen, oil sand, asphalt ridge, petroleum pitch, thin layer chromatography-flame ionization detection, infrared spectroscopy, 1H nuclear magnetic resonance spectroscopy
Procedia PDF Downloads 432
24385 Experimental Stress Analysis on Pipeline in Condition of Frost Heave and Thaw Settlement
Authors: Zhiqiang Cheng, Qingliang He, Lu Li, Jie Ren
Abstract:
The safety of pipelines under conditions of frost heave or thaw settlement needs to be evaluated. A full-scale experimental pipe with the typical structural configuration of a station pipeline is constructed, the residual stress is tested with an X-ray residual stress device, and the residual stress field of the pipe is analyzed. The evolution of pipe strain with pressure within the scope of the maximum allowable operating pressure (MAOP) is investigated by both strain gauge and X-ray methods. The load caused by frost heave or thaw settlement is simulated in two ways using lifting jacks. The relation between the maximum pipe stress and the clearance between the support and the pipe is studied for the case of frost heave. The relation between the maximum pipe stress and the maximum deformation of the pipe at the ground surface is studied for the case of thaw settlement. The study methods and results are valuable for the safety assessment of station pipelines according to clearances or deformation under conditions of frost heave or thaw settlement.
Keywords: frost heave, pipeline, stress analysis, thaw settlement
Procedia PDF Downloads 192
24384 Sentiment Analysis on University Students’ Evaluation of Teaching and Their Emotional Engagement
Authors: Elisa Santana-Monagas, Juan L. Núñez, Jaime León, Samuel Falcón, Celia Fernández, Rocío P. Solís
Abstract:
Teaching practices have been widely studied in relation to students' outcomes, positioning themselves as one of their strongest catalysts and influencing students' emotional experiences. In the higher education context, teachers become even more crucial, as many students ground their decisions on which courses to enroll in on the opinions and ratings of teachers from other students. Unfortunately, universities sometimes do not provide the personal, social, and academic stimulation students demand to be actively engaged. To evaluate their teachers, universities often rely on students' evaluations of teaching (SET) collected via Likert-scale surveys. Despite its usefulness, such a method has been questioned in terms of validity and reliability. Alternatively, researchers can rely on qualitative answers to open-ended questions. However, the unstructured nature of the answers and the large amount of information obtained require an overwhelming amount of work. The present work presents an alternative approach to analyse such data: sentiment analysis. To the best of our knowledge, no research before has included results from SA in an explanatory model to test how students' sentiments affect their emotional engagement in class. The sample of the present study included a total of 225 university students (mean age = 26.16, SD = 7.4, 78.7% women) from the Educational Sciences faculty of a public university in Spain. Data collection took place during the academic year 2021-2022. Students accessed an online questionnaire using a QR code. They were asked to answer the following open-ended question: "If you had to explain to a peer who doesn't know your teacher how he or she communicates in class, what would you tell them?". Sentiment analysis was performed using Microsoft's pre-trained model. The reliability of the measure was estimated between the tool and one of the researchers, who coded all answers independently. Cohen's kappa and the average pairwise percent agreement were estimated with ReCal2. Cohen's kappa was .68, and the agreement reached was 90.8%, both considered satisfactory. To test the hypothesized relations between SA and students' emotional engagement, a structural equation model (SEM) was estimated. Results demonstrated a good fit to the data: RMSEA = .04, SRMR = .03, TLI = .99, CFI = .99. Specifically, the results showed that students' sentiment regarding their teachers' teaching positively predicted their emotional engagement (β = .16 [.02, .30]). In other words, when students' opinion of their instructors' teaching practices is positive, it is more likely for students to engage emotionally in the subject. Altogether, the results show a promising future for sentiment analysis techniques in the field of education. They suggest the usefulness of this tool when evaluating relations between teaching practices and student outcomes.
Keywords: sentiment analysis, students' evaluation of teaching, structural-equation modelling, emotional engagement
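To make the reliability check concrete, inter-coder agreement of the kind reported above (kappa = .68, 90.8% agreement) can be computed as in this generic sketch; the labels below are invented, not the study's coded answers:

from sklearn.metrics import cohen_kappa_score

# Sentiment labels assigned to the same answers by the tool and by a human coder
tool_labels  = ["pos", "pos", "neg", "neu", "pos", "neg", "pos", "neu", "pos", "neg"]
human_labels = ["pos", "pos", "neg", "pos", "pos", "neg", "pos", "neu", "neg", "neg"]

kappa = cohen_kappa_score(tool_labels, human_labels)      # chance-corrected agreement
agreement = sum(a == b for a, b in zip(tool_labels, human_labels)) / len(tool_labels)
print(f"Cohen's kappa = {kappa:.2f}, raw agreement = {agreement:.0%}")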
Procedia PDF Downloads 89