652 Patterns of Change in Specific Behaviors of Autism Symptoms for Boys and for Girls Across Childhood
Authors: Einat Waizbard, Emilio Ferrer, Meghan Miller, Brianna Heath, Derek S. Andrews, Sally J. Rogers, Christine Wu Nordahl, Marjorie Solomon, David G. Amaral
Abstract:
Background: Autism symptoms comprise social-communication deficits and restricted/repetitive behaviors (RRB). The severity of these symptoms can change during childhood, with differences between boys and girls. The literature indicates that young autistic girls show a stronger tendency to decrease, and a weaker tendency to increase, their overall autism symptom severity levels compared to young autistic boys. It is not clear, however, which symptoms are driving these sex differences across childhood. In the current study, we evaluated the trajectories of independent autism symptoms across childhood and compared the patterns of change in these symptoms between boys and girls. Method: The study included 183 children diagnosed with autism (55 girls) evaluated three times across childhood, at ages 3, 6 and 11. We analyzed 22 independent items from the Autism Diagnostic Observation Schedule-2 (ADOS-2), the gold-standard assessment tool for autism symptoms, each item representing a specific autism symptom. First, we used latent growth curve models to estimate the trajectories of the 22 ADOS-2 items for each child in the study. Second, we extracted the factor scores representing the individual slopes for each ADOS-2 item (i.e., the slope representing that child’s change in that specific item). Third, we used factor analysis to identify common patterns of change among the ADOS-2 items, separately for boys and girls, i.e., which autism symptoms tend to change together and which change independently across childhood. Results: The patterns that emerged for both boys and girls identified four common factors: three factors representing changes in social-communication symptoms and one factor describing changes in RRB. Boys and girls showed the same pattern of change in RRB, with four items (e.g., speech abnormalities) changing together across childhood and three items (e.g., mannerisms) changing independently of other items. For social-communication deficits in boys, three factors were identified: the first factor included six items representing initiating and engaging in social-communication (e.g., quality of social overtures, conversation), the second factor included five items describing responsive social-communication (e.g., response to name), and the third factor included three items related to different aspects of social-communication (e.g., level of language). Girls’ social-communication deficits also loaded onto three factors: the first factor included five items (e.g., unusual eye contact), the second factor included six items (e.g., quality of social response), and the third factor included four items (e.g., showing). Some items showed similar patterns of change for both sexes (e.g., responsive joint attention), while other items showed differences (e.g., shared enjoyment). Conclusions: Girls and boys had different patterns of change in autism symptom severity across childhood. For RRB, both sexes showed similar patterns. For social-communication symptoms, however, there were both similarities and differences between boys and girls in the way symptoms changed over time. The strongest patterns of change were identified for initiating and engaging in social communication for boys and for responsive social communication for girls.
Keywords: autism spectrum disorder, autism symptom severity, symptom trajectories, sex differences
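A minimal sketch of the three-step pipeline described above, using synthetic stand-ins for the ADOS-2 item scores (the study fit latent growth curve models; per-child ordinary least squares slopes are used here as a simplified proxy):

```python
# Minimal sketch of the pipeline: per-child item slopes, then factor analysis.
# Synthetic data stand in for ADOS-2 item scores; the study itself used latent
# growth curve models rather than the simple OLS slopes used here.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_children, n_items = 183, 22
ages = np.array([3.0, 6.0, 11.0])            # three assessment waves

# scores[child, item, wave]: synthetic ADOS-2 item severities
scores = rng.integers(0, 4, size=(n_children, n_items, 3)).astype(float)

# Steps 1-2: estimate each child's slope for each item across the three ages
X = np.vstack([np.ones_like(ages), ages]).T            # design matrix (3 x 2)
coef, *_ = np.linalg.lstsq(X, scores.reshape(-1, 3).T, rcond=None)
slopes = coef[1].reshape(n_children, n_items)           # change per year

# Step 3: common patterns of change among items (run separately per sex
# in the study; sexes are pooled here for brevity)
fa = FactorAnalysis(n_components=4, random_state=0)
fa.fit(slopes)
print("item loadings on 4 factors:", fa.components_.T.shape)  # (22, 4)
```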
651 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas
Authors: Michel Soto Chalhoub
Abstract:
Building code-related literature provides recommendations on normalizing approaches to the calculation of the dynamic properties of structures. Most building codes make a distinction among types of structural systems, construction material, and configuration through a numerical coefficient in the expression for the fundamental period. The period is then used in normalized response spectra to compute base shear. The typical parameter used in simplified code formulas for the fundamental period is overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings, which constitute the majority of built space in less developed countries, pose additional challenges compared to buildings made of a homogeneous material such as steel, or of concrete under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying number of bays, number of floors, overall building height, and individual story height. The fundamental period was calculated using eigenvalue matrix computation. The results were also used in a separate regression analysis where the computed period serves as the dependent variable, while five building properties serve as independent variables. The statistical analysis shed light on important parameters that simplified code formulas need to account for, including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings of special conditions due to the level of concrete damage, aging, or materials quality control during construction. Overall results of the present analysis show that simplified code formulas for fundamental period and base shear may be applied, but they require revisions to account for multiple parameters. The conclusion above is confirmed by the analytical model, where fundamental periods were computed using numerical techniques and eigenvalue solutions. This recommendation is particularly relevant to code upgrades in less developed countries, where it is customary to adopt, and mildly adapt, international codes. We also note the necessity of further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging. However, we excluded this study from the present paper and left it for future research, as it has its own peculiarities and requires a different type of analysis.
Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra
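The eigenvalue computation referred to above can be illustrated with a lumped-mass shear-building idealization. The sketch below solves the generalized eigenproblem K·φ = ω²·M·φ and compares the resulting fundamental period against a simplified code-type formula T = Ct·Hˣ; the masses, stiffnesses, and coefficients are illustrative assumptions, not values from any specific building or code:

```python
import numpy as np
from scipy.linalg import eigh

# Shear-building idealization: lumped story masses, story stiffnesses.
n = 5                                   # number of floors (illustrative)
m = np.full(n, 2.0e5)                   # story mass, kg (assumed)
k = np.full(n, 2.5e8)                   # story stiffness, N/m (assumed)

M = np.diag(m)
K = np.zeros((n, n))
for i in range(n):
    K[i, i] += k[i]
    if i + 1 < n:                       # coupling with the story above
        K[i, i] += k[i + 1]
        K[i, i + 1] = K[i + 1, i] = -k[i + 1]

# Generalized eigenproblem K*phi = omega^2 * M*phi
omega2, _ = eigh(K, M)
T_eigen = 2.0 * np.pi / np.sqrt(omega2[0])     # fundamental period, s

# Simplified code-type formula T = Ct * H^x (illustrative coefficients)
H = n * 3.0                                    # building height, m
T_code = 0.075 * H ** 0.75
print(f"eigenvalue T = {T_eigen:.3f} s, code formula T = {T_code:.3f} s")
```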
650 Bariatric Surgery Referral as an Alternative to Fundoplication in Obese Patients Presenting with GORD: A Retrospective Hospital-Based Cohort Study
Authors: T. Arkle, D. Pournaras, S. Lam, B. Kumar
Abstract:
Introduction: Fundoplication is widely recognised as the best surgical option for gastro-oesophageal reflux disease (GORD) in the general population. However, there is controversy surrounding the use of conventional fundoplication in obese patients. Whilst intra-operative failure of fundoplication, including wrap disruption, is reportedly higher in obese individuals, the more significant issue surrounds symptom recurrence post-surgery. Could a bariatric procedure be considered in obese patients for weight management, to treat the GORD, and also to reduce the risk of recurrence? Roux-en-Y gastric bypass, a widely performed bariatric procedure, has been shown to be highly successful both in controlling GORD symptoms and in weight management in obese patients. Furthermore, NICE has published clear guidelines on eligibility for bariatric surgery, with the main criteria being class 3 obesity, or class 2 obesity with the presence of significant co-morbidities that would improve with weight loss. This study aims to identify the proportion of patients who underwent conventional fundoplication for GORD and/or hiatus hernia who would have been eligible for bariatric surgery referral according to NICE guidelines. Methods: All patients who underwent fundoplication procedures for GORD and/or hiatus hernia repair at a single NHS foundation trust over a 10-year period were identified using the Trust’s health records database. Pre-operative patient records were used to find BMI and the presence of significant co-morbidities at the time of consideration for surgery. This information was compared to NICE guidelines to determine potential eligibility for bariatric surgical referral at the time of initial surgical intervention. Results: A total of 321 patients underwent fundoplication procedures between January 2011 and December 2020; 133 (41.4%) had available data for BMI or to allow BMI to be estimated. Of those 133, 40 patients (30%) had a BMI greater than 30 kg/m², and 7 (5.3%) had a BMI >35 kg/m². One patient (0.75%) had a BMI >40 and would therefore be automatically eligible according to NICE guidelines. Four further patients had significant co-morbidities, such as hypertension and osteoarthritis, which would likely be improved by weight management surgery, and therefore also indicated eligibility for referral. Overall, 3.75% (5/133) of patients undergoing conventional fundoplication procedures would have been eligible for bariatric surgical referral; these patients were all female, with an average age of 60.4 years. Conclusions: Based on this Trust’s experience, around 4% of obese patients undergoing fundoplication would have been eligible for bariatric surgical intervention. Based on current evidence, among class 2/3 obese patients, there is likely to have been a notable proportion with recurrent disease, potentially requiring further intervention. These patients may have benefited more from undergoing bariatric surgery, for example a Roux-en-Y gastric bypass, addressing both their obesity and GORD. Use of patients’ written notes to obtain BMI data for the 188 patients with missing BMI data, and further analysis to determine outcomes following fundoplication in all patients, assessing for incidence of recurrent disease, will be undertaken to strengthen conclusions.
Keywords: bariatric surgery, GORD, Nissen fundoplication, NICE guidelines
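A minimal sketch of the screening logic summarized in this abstract (BMI ≥ 40 kg/m² qualifies automatically; class 2 obesity with a significant co-morbidity also qualifies); this is an illustration of the cohort screen, not a restatement of the full NICE criteria:

```python
def eligible_for_bariatric_referral(bmi, has_significant_comorbidity):
    """Screening rule as summarized in the abstract (illustrative only):
    class 3 obesity (BMI >= 40) qualifies automatically; class 2 obesity
    (BMI 35-40) qualifies when a significant co-morbidity is present."""
    if bmi >= 40.0:
        return True
    if bmi >= 35.0 and has_significant_comorbidity:
        return True
    return False

# The cohort screen reduces to counting patients meeting either criterion:
patients = [(41.2, False), (36.5, True), (32.0, True)]   # (BMI, co-morbidity)
print(sum(eligible_for_bariatric_referral(b, c) for b, c in patients))  # 2
```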
649 Status of Vocational Education and Training in India: Policies and Practices
Authors: Vineeta Sirohi
Abstract:
The development of critical skills and competencies is imperative for young people to cope with the unpredicted challenges of the time and to prepare for work and life. Recognizing that education has a critical role in reaching sustainability goals, as emphasized by the 2030 Agenda for Sustainable Development, educating youth in global competence, meta-cognitive competencies, and skills from the initial stages of formal education is vital. Further, educating for global competence would help in developing work readiness and boost employability. Vocational education and training in India, as envisaged in various policy documents, remains marginalized in practice as compared to general education. The country is still far away from the national policy goal of tracking 25% of secondary students in grades eleven and twelve into the vocational stream. In recent years, the importance of skill development has been recognized in the present context of globalization and change in the demographic structure of the Indian population. As a result, it has become a national policy priority and has been taken up with renewed focus by the government, which has set the target of skilling 500 million people by 2022. This paper provides an overview of the policies, practices, and current status of vocational education and training in India, supported by statistics from the National Sample Survey, the official statistics of India. The national policy documents and annual reports of the organizations actively involved in vocational education and training have also been examined to capture relevant data and information. It has also highlighted major initiatives taken by the government to promote skill development. The data indicate that in the age group 15-59 years, only 2.2 percent reported having received formal vocational training and 8.6 percent non-formal vocational training, whereas 88.3 percent did not receive any vocational training. At present, the coverage of vocational education is abysmal, as less than 5 percent of students are covered by the vocational education programme. Besides launching various schemes to address the mismatch of skills supply and demand, the government, through its National Policy on Skill Development and Entrepreneurship 2015, proposes to bring about inclusivity by bridging the gender, social, and sectoral divide, ensuring that the skilling needs of socially disadvantaged and marginalized groups are appropriately addressed. It is fundamental that the curriculum be aligned with the demands of the labor market, incorporating more entrepreneurial skills. Creating non-farm employment opportunities for educated youth will be a challenge for the country in the near future. Hence, there is a need to formulate specific skill development programs for this sector, and also programs for upgrading skills to enhance employability. There is a need to promote female participation in work and in non-traditional courses. Moreover, rigorous research and the development of a robust information base for skills are required to inform policy decisions on vocational education and training.
Keywords: policy, skill, training, vocational education
648 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations
Authors: Muhammad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso
Abstract:
Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences begins with the loss of containment: oil and gas released through leakage or spillage from primary containment can result in pool fire, jet fire, and even explosion when it meets the various ignition sources present in operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action before a potential loss of containment. The value of a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research, detecting loss of containment accurately in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting the traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow, and temperature (PVT) points in the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, hence generally not a viable alternative from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results in predicting leakage in the pipeline. Mathematical modeling is used to generate simulation data, and this data is used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy. While a supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics for the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for the use of data analytics tools and mathematical modeling for the development of a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
Keywords: pipeline, leakage, detection, AI
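A minimal sketch of the proposed data flow, with synthetic stand-ins for the CFD-generated pressure/flow/temperature records; the abstract describes training leak detection models on simulation data, illustrated here with a random forest classifier (an assumed choice, not necessarily the authors' model):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Synthetic stand-in for CFD-simulated records: pressure drop, flow
# imbalance, and temperature deviation at a few monitoring points.
n = 2000
leak = rng.integers(0, 2, size=n)                       # 1 = simulated leak
X = rng.normal(0.0, 1.0, size=(n, 6))
X[leak == 1, 0] -= 1.5                                  # leaks depress pressure
X[leak == 1, 1] += 1.0                                  # and unbalance flow

X_tr, X_te, y_tr, y_te = train_test_split(X, leak, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```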
647 Computer Aided Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging
Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi
Abstract:
Introduction: Thyroid nodules have an incidence of 33-68% in the general population. More than 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and allow optimal treatment. Among medical imaging methods, ultrasound is the technique of choice for the assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management carries morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign and malignant thyroid nodules on ultrasound imaging, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages without the need for biopsy. Material and Methods: The thyroid ultrasound image database consisted of 70 patients (26 benign and 44 malignant) reported by a radiologist and proven by biopsy. Two slices per patient were loaded in Mazda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of the thyroid nodule ultrasound images. Gray levels within an ROI were normalized according to three normalization schemes: N1: default or original gray levels; N2: +/- 3 sigma, i.e., dynamic intensity limited to µ +/- 3σ; and N3: intensity limited to the 1%-99% range. Up to 270 multiscale texture feature parameters per ROI were computed for each normalization scheme from the well-known statistical methods implemented in Mazda software. From a statistical point of view, not all calculated texture feature parameters are useful for texture analysis. So, based on the maximum Fisher coefficient and the minimum probability of classification error and average correlation coefficient (POE+ACC), the features were reduced to the 10 best and most effective features per normalization scheme. We analyzed these features under two standardization states (standard (S) and non-standard (NS)) with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Non-Linear Discriminant Analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors. Confusion matrix and receiver operating characteristic (ROC) curve analyses were used to formulate more reliable criteria for the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and reduction methods on the effectiveness of the obtained features as descriptors of discrimination power and on classification results. The subset of features selected under 1%-99% normalization, POE+ACC reduction, and NDA texture analysis yielded a high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, a specificity of 100%, and an accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method and can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA
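A minimal sketch of the classification stage described above, using synthetic stand-ins for the 10 selected texture features; PCA and LDA reductions followed by the 1-NN classifier and ROC analysis are shown with scikit-learn (the nonlinear NDA step is not reproduced here):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Stand-in for the 10 selected texture features per ROI
# (140 ROIs: 2 slices x 70 patients; 26 benign, 44 malignant patients)
X = rng.normal(size=(140, 10))
y = np.array([0] * 52 + [1] * 88)           # 0 = benign, 1 = malignant
X[y == 1] += 0.8                             # synthetic class separation

for name, reducer in [("PCA", PCA(n_components=5)),
                      ("LDA", LinearDiscriminantAnalysis(n_components=1))]:
    pipe = make_pipeline(StandardScaler(), reducer,
                         KNeighborsClassifier(n_neighbors=1))
    preds = cross_val_predict(pipe, X, y, cv=5)
    print(name, "Az ~", round(roc_auc_score(y, preds), 3))
```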
646 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist language, etc.), and thus context is not needed for a decision, or are ambiguous, vague, or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious without context (i.e., covert cases) or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). Regarding the contextual detection models, we posit that their poor performance is due to limitations of both the data they are trained on (the problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
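A minimal sketch of a hierarchical architecture of the kind proposed above (an assumed design, not the authors' exact model): a word-level encoder summarizes each utterance, and an utterance-level encoder runs over the conversational thread before classifying the target utterance:

```python
# Word-level GRU encodes each utterance; an utterance-level GRU summarizes
# the conversational context; the final state classifies the target
# (last) utterance as toxic / non-toxic.
import torch
import torch.nn as nn

class HierarchicalToxicityClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, utt_dim=128, ctx_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.utterance_encoder = nn.GRU(emb_dim, utt_dim, batch_first=True)
        self.context_encoder = nn.GRU(utt_dim, ctx_dim, batch_first=True)
        self.classifier = nn.Linear(ctx_dim, 2)

    def forward(self, conversations):
        # conversations: (batch, n_utterances, n_tokens) of token ids
        b, u, t = conversations.shape
        emb = self.embedding(conversations.view(b * u, t))   # (b*u, t, emb)
        _, utt_h = self.utterance_encoder(emb)               # (1, b*u, utt_dim)
        utt_vecs = utt_h.squeeze(0).view(b, u, -1)           # (b, u, utt_dim)
        _, ctx_h = self.context_encoder(utt_vecs)            # context-aware state
        return self.classifier(ctx_h.squeeze(0))             # target-utterance logits

# Toy usage: 4 conversations, 3 utterances each, 10 tokens per utterance.
model = HierarchicalToxicityClassifier(vocab_size=1000)
tokens = torch.randint(1, 1000, (4, 3, 10))
logits = model(tokens)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0, 1]))
loss.backward()
```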
645 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory
Authors: Xiaochen Mu
Abstract:
Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional “one object, one right” theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling from raw data to data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This perfectly aligns with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and a two-fold virtual property rights system. Based on the “bundle of rights” theory, this paper establishes specific three-level data rights. This paper analyzes the cases Google v. Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN, and Imerman v Tchenquiz.
This paper concludes that recognizing property rights over personal data and protecting data under the framework of intellectual property would be beneficial for establishing the tort of misuse of personal information.
Keywords: data protection, property rights, intellectual property, big data
644 Socio-Cultural Economic and Demographic Profile of Return Migration: A Case Study of Mahaboobnagar District in ‘Andhra Pradesh’
Authors: Ramanamurthi Botlagunta
Abstract:
Return migration is a process; it is not a new phenomenon. People have been migrating since civilization started. In the case of the Indian diaspora, people migrated before the independence of India, and even after independence. There are various reasons for migration. Depending on the characteristics of the migrants and on geographical, political, and economic factors, many changes occur in the mode of migration. Currently, almost 25 million people from India are outside the country. But not all of them are able to obtain immigrant status in their respective host societies, due to individual perceptions and the immigration policies of the host countries. Those who come back to the homeland after spending days, months, or years abroad are known as return migrants. Returning migrants are 'persons returning to their country of citizenship after having been international migrants, whether short term or long-term'. Increasingly, migration is seen very differently from what was once believed to be a one-way phenomenon. The renewed interest in return migration can be seen through two aspects: one is the growing importance of temporary migration programmes in other countries, and the other is the potential role of migrants in developing their home countries. Return migration has been conceptualized in several ways: occasional return, seasonal return, temporary return, permanent return, and circular return. The reasons for return migration include retirement, failure to assimilate in the host country, problems with acculturation in the destination country, being unsuccessful in the emigration country, acquiring the desired wealth, and the wish to innovate and serve as a change agent in the birth country. With the advent of globalization and the rapid development of transportation systems and communication technologies, migration has become a process by which immigrants forge and sustain simultaneous multi-stranded social relations that link together their societies of origin and settlement. Current theories of transnational migration are greatly focused on the economic impacts on the home countries, while social, cultural, and political impacts have only recently started gaining momentum. This, however, has been changing, as globalization is radically transforming the way people move around the world. One of the reasons for return migration is that the lack of proportionate representation of Asian immigrants in positions of authority and decision-making can be a result of challenges confronted in cultural and structural assimilation. The present study mainly focuses on the socio-economic and demographic profile of return migration of Indians from other countries in general, and particularly on the people of Andhra Pradesh who are returning from other countries.
Keywords: migration, return migration, globalization, development, socio-economic, Asian immigrants, UN, Andhra Pradesh
643 The Lived Experience of Pregnant Saudi Women Carrying a Fetus with Structural Abnormalities
Authors: Nasreen Abdulmannan
Abstract:
Fetal abnormalities are categorized as a structural abnormality, a non-structural abnormality, or a combination of both. Fetal structural abnormalities (FSA) include, but are not limited to, Down syndrome, congenital diaphragmatic hernia, and cleft lip and palate. These abnormalities can be detected in the first weeks of pregnancy, at around 9-20 weeks of gestation. Etiological factors for FSA are unknown; however, transmitted genetic risk can be one of these factors. Consanguineous marriage, often referred to as inbreeding, represents a significant risk factor for FSA due to the increased likelihood of deleterious genetic traits shared by both biological parents. In a country such as the Kingdom of Saudi Arabia (KSA), the rate of consanguineous marriage is high, which creates a significant risk of children being born with congenital abnormalities. Historically, the practice of consanguinity occurred commonly among European royalty. For example, Great Britain’s Queen Victoria married her German first cousin, Prince Albert of Coburg. Although a distant blood relationship, the United Kingdom’s Queen Elizabeth II married her cousin, Prince Philip of Greece and Denmark, both of them direct descendants of Queen Victoria. In Middle Eastern countries, a high incidence of consanguineous unions still exists, including in the KSA. Previous studies indicated that a significant gap exists in understanding the lived experiences of Saudi women dealing with an FSA-complicated pregnancy. Eleven participants were interviewed using a semi-structured interview format for this qualitative phenomenological study investigating the lived experiences of pregnant Saudi women carrying a child with FSA. This study explored the gaps in the current literature regarding the lived experiences of pregnant Saudi women whose pregnancies were complicated by FSA. In addition, the researcher acquired knowledge about the available support and resources as well as the Saudi cultural perspective on FSA. The research explored these lived experiences utilizing Giorgi’s (2009) approach to data collection and data management. Findings of this study cover five major themes: (1) initial maternal reaction to the FSA diagnosis at ultrasound screening; (2) strengthening of the maternal relationship with God; (3) maternal concern for their child’s future; (4) feeling supported by their loved ones; and (5) lack of healthcare provider support and guidance. Future research in the KSA is needed to explore the support networks for these mothers. This study recommended further clinical nursing research, nursing education, clinical practice, and healthcare policy/procedures to provide opportunities for improvement in nursing care and to increase awareness in KSA society.
Keywords: fetal structural abnormalities, psychological distress, health provider, health care
642 Development of a Novel Ankle-Foot Orthotic Using a User Centered Approach for Improved Satisfaction
Authors: Ahlad Neti, Elisa Arch, Martha Hall
Abstract:
Studies have shown that individuals who use Ankle-Foot Orthoses (AFOs) have a high level of dissatisfaction with their current AFOs. Studies point to the focus on technical design, with little attention given to the user perspective, as a source of AFO designs that leave users dissatisfied. To design a new AFO that satisfies users and thereby improves their quality of life, the reasons for their dissatisfaction and their wants and needs for an improved AFO design must be identified. There has been little research into the user perspective on AFO use and desired improvements, so the relationship between AFO design and satisfaction in daily use must be assessed to develop appropriate metrics and constraints prior to designing a novel AFO. To assess the user perspective on AFO design, structured interviews were conducted with 7 individuals (average age of 64.29±8.81 years) who use AFOs. All interviews were transcribed and coded to identify common themes using the Grounded Theory Method in NVivo 12. Qualitative analysis of the results identified sources of user dissatisfaction, such as heaviness, bulk, and uncomfortable material, and overall needs and wants for an AFO. Beyond the user perspective, certain objective factors must be considered in the construction of metrics and constraints to ensure that the AFO fulfills its medical purpose. These more objective metrics are rooted in common medical device market and technical standards. Given the large body of research concerning these standards, the objective metrics and constraints were derived through a literature review. Through these two methods, a comprehensive list of metrics and constraints accounting for both the user perspective on AFO design and the AFO’s medical purpose was compiled. These metrics and constraints will establish the framework for designing a new AFO that carries out its medical purpose while also improving the user experience. The metrics can be categorized into several overarching areas for AFO improvement. Categories of user-perspective-related metrics include comfort, discreteness, aesthetics, ease of use, and compatibility with clothing. Categories of medical-purpose-related metrics include biomechanical functionality, durability, and affordability. These metrics were used to guide an iterative prototyping process. Six concepts were ideated and compared using system-level analysis. From these six concepts, two concepts – the piano wire model and the segmented model – were selected to move forward into prototyping. Evaluation of non-functional prototypes of the piano wire and segmented models determined that the piano wire model better fulfilled the metrics by offering increased stability, longer durability, fewer points of failure, and a core component strong enough to allow a sock to cover the AFO while maintaining the overall structure. As such, the piano wire AFO has moved forward into the functional prototyping phase, and healthy-subject testing for design validation and verification is being designed, with participants being recruited.
Keywords: ankle-foot orthotic, assistive technology, human centered design, medical devices
641 An Investigation of Wind Loading Effects on the Design of Elevated Steel Tanks with Lattice Tower Supporting Structures
Authors: J. van Vuuren, D. J. van Vuuren, R. Muigai
Abstract:
In recent times, South Africa has experienced extensive droughts that have created the need for reliable small water reservoirs. These reservoirs have comparatively quick fabrication and installation times compared to market alternatives. An elevated water tank has inherent potential energy, meaning that no additional water pumps are required to sustain water pressure at the outlet point – thus ensuring that, without electricity, a water source is available. The initial construction formwork and the complex geometric shape of concrete towers that require casting can be time-consuming, rendering steel towers preferable. Reinforced concrete foundations, cast in advance, are required to be of sufficient strength. Thereafter, the prefabricated steel supporting structure and tank, which consist of steel panels, can be assembled and erected on site within a couple of days. Due to the time effectiveness of this system, it has become a popular solution to aid drought-stricken areas. These sites are normally in rural areas, schools, or farmland. As these tanks can contain up to 2000 kL (approximately 19.62 MN) of water, combined with supporting lattice steel structures ranging between 5 m and 30 m in height, failure of one of the supporting members will result in system failure. Thus, there is a need to gain a comprehensive understanding of the operating conditions arising from wind loading on both the tank and the supporting structure. The aim of the research is to investigate the relationship between the theoretical wind loading on a lattice steel tower in combination with an elevated sectional steel tank, and the current wind loading codes as applicable to South Africa. The research compares the respective design parameters (both theoretical and from wind loading codes), whereby FEA analyses are conducted on the various design solutions. The currently available wind loading codes are not sufficient for designing slender cantilevered lattice steel towers that support elevated water storage tanks. Numerous factors in the design codes are not comprehensively considered when designing the system, as these codes depend on various assumptions. Factors that require investigation in this study are: the wind loading angle to the face of the structure that results in maximum load; the internal structural effects on models with different bracing patterns; the influence of the aspect ratio of the tank on loading; and the clearance height of the tank on the structural members. Wind loads, as the variable that results in the highest failure rate of cantilevered lattice steel tower structures, require greater understanding. This study aims to contribute towards the design process of elevated steel tanks with lattice tower supporting structures.
Keywords: aspect ratio, bracing patterns, clearance height, elevated steel tanks, lattice steel tower, wind loads
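A minimal sketch of a code-type equivalent static wind force calculation on a lattice member, using the generic relations q = 0.5·ρ·v² and F = cf·q·A; the density, wind speed, force coefficient, and area below are illustrative assumptions, not values from any specific code:

```python
def peak_wind_pressure(v, rho=1.2):
    """Free-stream velocity pressure q = 0.5 * rho * v^2 (Pa)."""
    return 0.5 * rho * v ** 2

def member_wind_force(v, cf, area):
    """Equivalent static force F = cf * q * A on a member (N).
    cf bundles shape/solidity effects; values here are illustrative."""
    return cf * peak_wind_pressure(v) * area

# Illustrative check: a 40 m/s design wind on a lattice leg member
# (cf = 1.6 assumed, 0.35 m^2 projected area).
q = peak_wind_pressure(40.0)                  # 960 Pa
F = member_wind_force(40.0, cf=1.6, area=0.35)
print(f"q = {q:.0f} Pa, F = {F:.0f} N")       # q = 960 Pa, F = 538 N
```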
640 Devulcanization of Waste Rubber Using Thermomechanical Method Combined with Supercritical CO₂
Authors: L. Asaro, M. Gratton, S. Seghar, N. Poirot, N. Ait Hocine
Abstract:
Rubber waste disposal is an environmental problem. In particular, much research is centered on the management of discarded tires. Among the different ways of handling used tires, the most common is to deposit them in a landfill, creating a stock of tires. These stocks can pose a fire danger and provide a habitat for rodents, mosquitoes, and other pests, causing health hazards and environmental problems. Because of the three-dimensional structure of rubbers and their specific composition, which includes several additives, their recycling is a current technological challenge. The technique which can break down the crosslink bonds in the rubber is called devulcanization. Strictly, devulcanization can be defined as a process where poly-, di-, and mono-sulfidic bonds, formed during vulcanization, are totally or partially broken. In recent years, supercritical carbon dioxide (scCO₂) has been proposed as a green devulcanization atmosphere. This is because it is chemically inactive, nontoxic, nonflammable, and inexpensive. Its critical point can be easily reached (31.1 °C and 7.38 MPa), and residual scCO₂ in the devulcanized rubber can be easily and rapidly removed by releasing the pressure. In this study, thermomechanical devulcanization of ground tire rubber (GTR) was performed in a twin screw extruder under diverse operating conditions. Supercritical CO₂ was added in different quantities to promote devulcanization. Temperature, screw speed, and quantity of CO₂ were the parameters varied during the process. The devulcanized rubber was characterized by its devulcanization percent and by its crosslink density, determined by swelling in toluene. Infrared spectroscopy (FTIR) and gel permeation chromatography (GPC) were also performed, and the results were related to the Mooney viscosity. The results showed that the crosslink density decreases as the extruder temperature and speed increase, and, as expected, the soluble fraction increases with both parameters. The Mooney viscosity of the devulcanized rubber decreases as the extruder temperature increases. The values reached were in good correlation (R = 0.96) with the soluble fraction. In order to analyze whether the devulcanization was caused by main-chain or crosslink scission, Horikx's theory was used. The results showed that all tests fall on the curve that corresponds to sulfur bond scission, which indicates that devulcanization happened successfully, without degradation of the rubber. In the spectra obtained by FTIR, it was observed that none of the characteristic peaks of the GTR were modified by the different devulcanization conditions. This was expected because, due to the low sulfur content (~1.4 phr) and the multiphasic composition of the GTR, it is very difficult to evaluate devulcanization by this technique. The lowest crosslink density was reached with 1 cm³/min of CO₂, and the power consumed in that process was also near the minimum. These results encourage us to conduct further analyses to better understand the effect of the different conditions on the devulcanization process. The analysis is currently being extended to monophasic rubbers such as ethylene propylene diene monomer (EPDM) rubber and natural rubber (NR).
Keywords: devulcanization, recycling, rubber, waste
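The crosslink density determination by swelling in toluene mentioned above is conventionally evaluated with the Flory-Rehner relation; a minimal sketch is shown below, with the interaction parameter and solvent molar volume as assumed illustrative values:

```python
import math

def crosslink_density(v_r, chi=0.39, v_solvent=106.3):
    """Flory-Rehner estimate of crosslink density (mol/cm^3).

    v_r       : volume fraction of rubber in the swollen gel
    chi       : rubber-solvent interaction parameter (assumed value)
    v_solvent : molar volume of toluene, cm^3/mol
    """
    numerator = -(math.log(1.0 - v_r) + v_r + chi * v_r ** 2)
    denominator = v_solvent * (v_r ** (1.0 / 3.0) - v_r / 2.0)
    return numerator / denominator

# A devulcanized sample swells more (lower v_r) than the untreated one,
# indicating a reduced crosslink density (v_r values are illustrative):
print(crosslink_density(0.25))   # devulcanized  ~2.5e-4 mol/cm^3
print(crosslink_density(0.35))   # untreated     ~5.9e-4 mol/cm^3
```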
639 Evaluation of Coupled CFD-FEA Simulation for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham
Abstract:
Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product, by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on the surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system, primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies applied to a replication of the physical experimental standards test LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated changes in thermal data, due to the fire's behavior, into the FEA solver throughout the simulation. Likewise, the mechanical changes are also passed back to the CFD solver to include geometric changes within the solution. For the CFD calculations, a solver called Fire Dynamics Simulator (FDS) has been chosen due to its numerical scheme, adapted to focus solely on fire problems. Validation of FDS applicability has been achieved in past benchmark cases. In addition, an FEA solver called ABAQUS has been chosen to model the structural response to the fire, due to its crushable foam plasticity model, which can accurately model the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers together, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables. The comparison data include gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids
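A minimal sketch of the one-way coupling data flow (a generic illustration, not the FDS-2-ABAQUS code itself): FDS device output, a *_devc.csv file of temperatures over time, is converted into an ABAQUS *Amplitude table that can drive a thermal boundary condition; the file and device names are assumptions:

```python
import csv

def fds_devc_to_abaqus_amplitude(devc_csv, device_column, amp_name="FIRE_TEMP"):
    """Read an FDS device CSV (units row, then names row, then data) and
    emit an ABAQUS *Amplitude table of (time, temperature) pairs."""
    times, temps = [], []
    with open(devc_csv) as f:
        rows = csv.reader(f)
        next(rows)                      # skip the units row written by FDS
        header = next(rows)             # device names, first column is Time
        col = header.index(device_column)
        for row in rows:
            times.append(float(row[0]))
            temps.append(float(row[col]))
    lines = [f"*Amplitude, name={amp_name}"]
    for t, temp in zip(times, temps):
        lines.append(f"{t:.2f}, {temp:.2f}")
    return "\n".join(lines)

# Usage (assumed file and device names):
# print(fds_devc_to_abaqus_amplitude("panel_test_devc.csv", "TC_PANEL_FACE"))
```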
638 Ascidian Styela rustica Proteins’ Structural Domains Predicted to Participate in the Tunic Formation
Authors: M. I. Tyletc, O. I. Podgornya, T. G. Shaposhnikova, S. V. Shabelnikov, A. G. Mittenberg, M. A. Daugavet
Abstract:
Ascidiacea is the most numerous class of the subphylum Tunicata. A distinctive feature of these chordates' anatomical structure is a tunic consisting of cellulose fibrils, protein molecules, and single cells. The mechanisms of tunic formation are not known in detail; tunic formation could be used as a model system for studying the interaction of cells with the extracellular matrix. Our model species is the ascidian Styela rustica, which is prevalent in benthic communities of the White Sea. As previously shown, tunic formation involves morula blood cells, which contain the major 48 kDa protein p48. P48 participation in tunic formation was proven using antibodies against the protein. The nature of the protein and its function remain unknown. The current research aims to determine the amino acid sequence of p48, as well as to clarify its role in tunic formation. The peptides that make up the p48 amino acid sequence were determined by mass spectrometry. A search for the peptides in protein sequence databases identified sequences homologous to p48 in Styela clava, Styela plicata, and Styela canopus. Based on sequence alignment, their level of similarity was determined as 81-87%. The corresponding sequence of the ascidian Styela canopus was used for further analysis. The Styela rustica p48 sequence begins with a signal peptide, which could indicate that the protein is secretory. This is consistent with experimentally obtained data: the contents of morula cells are secreted into the tunic matrix. The isoelectric point of p48 is 9.77, which is consistent with the experimental results of acid electrophoresis of morula cell proteins. However, the molecular weight of the amino acid sequence of ascidian Styela canopus is 103 kDa, so p48 of Styela rustica is a shorter homolog. The search for conserved functional domains revealed the presence of two Ca-binding EGF-like domains, as well as thrombospondin (TSP1) and tyrosinase domains. The p48 peptides determined by mass spectrometry fall into the region of the sequence corresponding to the last two domains and have amino acid substitutions compared to the Styela canopus homolog. The tyrosinase domain (pfam00264) is known to be part of the phenoloxidase enzyme, which participates in melanization processes and the immune response. The thrombospondin domain (smart00209) interacts with a wide range of proteins and is involved in several biological processes, including coagulation, cell adhesion, modulation of intercellular and cell-matrix interactions, angiogenesis, wound healing, and tissue remodeling. It can be assumed that the tyrosinase domain in p48 plays the role of the phenoloxidase enzyme, and TSP1 provides a link between the extracellular matrix and cell surface receptors, and may also be responsible for the repair of the tunic. The results obtained are consistent with experimental data on p48. The domain organization of the protein suggests that p48 is an enzyme involved in tunic tanning and is an important regulator of the organization of the extracellular matrix.
Keywords: ascidian, p48, thrombospondin, tyrosinase, tunic, tanning
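Sequence-level properties of the kind reported above (isoelectric point, molecular weight) can be computed from a candidate sequence with standard tools; a minimal sketch using Biopython's ProtParam module is shown below with a placeholder sequence, since the full p48 sequence is not reproduced in the abstract:

```python
# Minimal sketch using Biopython's ProtParam; the sequence below is a
# placeholder, not the actual p48 sequence.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

candidate = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQ"
analysis = ProteinAnalysis(candidate)

print(f"isoelectric point: {analysis.isoelectric_point():.2f}")
print(f"molecular weight:  {analysis.molecular_weight() / 1000:.1f} kDa")
# A basic pI (the paper reports 9.77 for p48) would be consistent with the
# acid electrophoresis behavior of morula cell proteins described above.
```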
637 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence
Authors: Nasser Salah Eldin Mohammed Salih Shebka
Abstract:
Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which are in turn reflected across the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding the human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering, and knowledge generation. And although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, thus causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation conceptions. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are affected greatly by the very materialistic nature of the cognitive sciences. This nature causes what we define as methodological deficiencies in the nature of theoretical aspects of knowledge representation concepts in AI. These methodological deficiencies are not confined to applications of knowledge representation theories throughout AI fields, but also extend to the scientific nature of the cognitive sciences. The methodological deficiencies we investigated in our work are: (1) the segregation between cognitive abilities in knowledge-driven models; (2) the insufficiency of the two-valued logic used to represent knowledge, particularly at the machine language level, in relation to the problematic issues of semantics and meaning theories; and (3) deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires that we present a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge. This does not imply that it is easy to apply in structures of knowledge representation systems, but outlining a deficiency caused by the absence of such essential parameters can be considered an attempt to redefine knowledge representation conceptual approaches, or, if proven impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning alters the role of the existence and time factors to the Framework Environment of the knowledge structure, and therefore knowledge representation conceptual theories. Findings of our work indicate the necessity of differentiating between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as evaluation criteria to determine AI's capability to achieve its ultimate objectives.
Ultimately, we discuss some of the implications of our findings, which suggest that, although scientific progress may not have reached its peak, or human scientific evolution may have reached a point where it is not yet possible to discover evolutionary facts about the human brain and detailed descriptions of how it represents knowledge, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.
Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic
636 Topographic and Thermal Analysis of Plasma Polymer Coated Hybrid Fibers for Composite Applications
Authors: Hande Yavuz, Grégory Girard, Jinbo Bai
Abstract:
Manufacturing of hybrid composites requires particular attention to overcome various critical weaknesses that originate from poor interfacial compatibility. A large number of parameters have to be considered to optimize the interfacial bond strength, either to avoid flaw sensitivity or to avoid the delamination that occurs in composites. For this reason, surface characterization of the reinforcement phase is needed in order to provide the data necessary to assess fiber-matrix interfacial compatibility prior to the fabrication of composite structures. Compared to conventional plasma polymerization processes, such as radiofrequency and microwave, dielectric barrier discharge assisted plasma polymerization is a promising process that can be utilized to modify the surface properties of carbon fibers in a continuous manner. Finding the most suitable conditions (e.g., plasma power, plasma duration, precursor proportion) for plasma polymerization of pyrrole in the post-discharge region, either in the presence or in the absence of p-toluene sulfonic acid monohydrate, as well as the characterization of plasma polypyrrole coated fibers, are the important aspects of this work. Throughout the current investigation, atomic force microscopy (AFM) and thermogravimetric analysis (TGA) are used to characterize plasma treated hybrid fibers (CNT-grafted Toray T700-12K carbon fibers, referred to as T700/CNT). TGA results show the trend in the decomposition of the deposited polymer on the fibers as a function of temperature up to 900 °C. Within the same period of time, all plasma pyrrole treated samples began to lose weight at a relatively fast rate up to 400 °C, which suggests the loss of polymeric structures. The weight loss between 300 and 600 °C is attributed to the evolution of CO₂ due to decomposition of functional groups (e.g., carboxyl compounds). Keeping the surface chemical structure in mind, the higher the amount of carbonyl, alcohol, and ether compounds, the lower the stability of the deposited polymer. Thus, the highest weight loss is observed in the 1400 W 45 s pyrrole+pTSA.H2O plasma treated sample, probably because its polymer is less stable than that of the other plasma treated samples. Comparison of the AFM images for untreated and plasma treated samples shows that the surface topography may change on a microscopic scale. The AFM image of the 1800 W 45 s treated T700/CNT fiber shows the most significant increase in roughness compared to the untreated T700/CNT fiber; namely, the fiber surface became rougher by ~3.6 fold relative to the untreated fiber. The increase observed in surface roughness may provide more contact points between fiber and matrix due to the increased surface area, which is believed to be beneficial for their application as reinforcement in composites.
Keywords: hybrid fibers, surface characterization, surface roughness, thermal stability
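A minimal sketch of how the roughness comparison can be quantified from AFM height maps; the arrays below are synthetic stand-ins for the measured topographies, and Ra/Rq are the standard average and root-mean-square roughness definitions:

```python
import numpy as np

def roughness(height_map):
    """Average (Ra) and RMS (Rq) roughness of an AFM height map (nm)."""
    dev = height_map - height_map.mean()
    return np.abs(dev).mean(), np.sqrt((dev ** 2).mean())

rng = np.random.default_rng(7)
untreated = rng.normal(0.0, 20.0, size=(256, 256))   # synthetic height data, nm
treated = rng.normal(0.0, 72.0, size=(256, 256))     # ~3.6-fold rougher surface

ra_u, rq_u = roughness(untreated)
ra_t, rq_t = roughness(treated)
print(f"Rq ratio treated/untreated: {rq_t / rq_u:.1f}")   # ~3.6
```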
Procedia PDF Downloads 233635 The Role of Non-Governmental Organizations in Promoting Humanitarian Development: A Case Study in Saudi Arabia
Authors: Muamar Salameh, Rania Sinno
Abstract:
Non-governmental organizations in Saudi Arabia play a vital role in promoting humanitarian development. Though this paper emphasizes this role and provides a specific case study on the role of the Prince Mohammad Bin Fahd Foundation for Humanitarian Development, many organizations do not provide transparent information on their accomplishments. This study answers the main research question regarding the role that NGOs play in promoting humanitarian development. The most recent law regulating associations and foundations in Saudi Arabia was issued in December 2015 and went into effect in March 2016; any new association or foundation must follow these regulations. Though the registration, implementation, and workflow of the organizations still need major improvement and development, the currently registered organizations have several notable achievements. Most of these organizations adopt a centralized administration approach, which in many cases hinders progress and may be an obstacle to reaching a larger population of beneficiaries. A large portion of the existing organizations are charities, some of which have some form of government affiliation. The laws and regulations limit the registration of new organizations: violations of Islamic Sharia, contradictions of public order, breaches of national unity, and foreign affiliation all prohibit an organization from registering. The lack of transparency in the operations and inner workings of NGOs in Saudi Arabia is apparent to the public, even though the regulations require full transparency with the governing ministry. This transparency should be available to the public, and in particular to the target populations eligible to benefit from the NGOs’ services. In this study, we provide an extensive review of all laws, regulations, policies, and procedures related to NGOs in the Eastern Province of Saudi Arabia. This review includes examples of current NGOs, their services, and their target populations. The study determines the main accomplishments of reputable NGOs that have positively impacted Saudi communities. The results highlight and concentrate on actions, services, and accomplishments that achieve sustainable assistance in promoting humanitarian development and advancing the living conditions of target populations of the Saudi community. In particular, we concentrate on a case study related to PMFHD, one of the largest foundations in the Eastern Province of Saudi Arabia. The authors have access to the data related to this foundation and to the foundation administration, allowing them to gather, analyze, and draw conclusions from the findings of this group. The study also analyzes whether the practices, budgets, services, and annual accomplishments of the foundation have fulfilled its humanitarian role while meeting governmental requirements, with the analysis conducted in light of the new laws. The findings of the study show that great accomplishments in advancing and promoting humanitarian development in the Saudi community and in international communities have been achieved. Several examples are included from several NGOs, with specific examples from PMFHD. Keywords: development, foundation, humanitarian, non-governmental organization, Saudi Arabia
Procedia PDF Downloads 296634 A Case Study on the Development and Application of Media Literacy Education Program Based on Circular Learning
Authors: Kim Hyekyoung, Au Yunkyung
Abstract:
As media plays an increasingly important role in our lives, the age at which media use begins is falling worldwide. In particular, young children are exposed to media at an early age, making early childhood media literacy education an essential task. However, most existing early childhood media literacy education programs focus solely on teaching children how to use media, and practical implementation and application remain challenging. Therefore, this study aims to develop a play-based early childhood media literacy education program utilizing topic-based media content and to explore the program’s potential application and impact on young children’s media literacy learning. Based on a theoretical and literature review of media literacy education, an analysis of existing educational programs, and a survey of the current status and teacher perceptions of media literacy education for preschool children, this study developed a media literacy education program for preschool children that considers the components of media literacy (understanding media characteristics, self-regulation, self-expression, critical understanding, ethical norms, and social communication). To verify the effectiveness of the program, 20 five-year-old preschool children from C City M Kindergarten were chosen as participants, and the program was implemented from March 28th to July 4th, 2022, once a week for a total of 7 sessions. The program was developed based on Gallenstain’s (2003) iterative learning model (participation-exploration-explanation-extension-evaluation). To explore the quantitative changes before and after the program, a repeated measures analysis of variance was conducted, and qualitative analysis was employed to examine the observed process changes. It was found that after the application of the education program, media literacy levels such as understanding media characteristics, self-regulation, self-expression, critical understanding, ethical norms, and social communication significantly improved. The recursive learning-based early childhood media literacy education program developed in this study can therefore be applied effectively to young children’s media literacy education and can help enhance their media literacy levels. In terms of observed process changes, it was confirmed that children learned about various topics, expressed their thoughts, and improved their ability to communicate with others using media content. These findings emphasize the importance of developing and implementing media literacy education programs: the positive changes go beyond teaching children how to use media and can help foster their ability to use media safely and effectively in their media environment. Additionally, to enhance young children’s media literacy levels and create a safe media environment, diverse content and methodologies are needed, and continuous development and evaluation of education programs should be conducted. Keywords: young children, media literacy, recursive learning, education program
Procedia PDF Downloads 77633 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker
Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, encoding methods such as one-hot encoding and k-mers have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine whether they provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes: 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies. Keywords: DNA encoding, machine learning, Fourier transform, wavelet transform
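As a purely illustrative sketch (not the authors’ code), the following Python fragment shows how two of the encodings compared above, k-mer counts and a Fourier-based feature vector, plus one-hot encoding, can be built and fed to a random forest; the short sequences and genus labels are invented stand-ins for the normalized 16S rRNA data.

import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestClassifier

BASES = "ACGT"

def one_hot(seq):
    # Each base becomes a 4-dimensional indicator vector, flattened per sequence.
    lookup = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        if b in lookup:
            mat[i, lookup[b]] = 1.0
    return mat.ravel()

def kmer_counts(seq, k=3):
    # Normalized frequency of every possible k-mer, in a fixed order.
    kmers = ["".join(p) for p in product(BASES, repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    vec = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in index:
            vec[index[km]] += 1
    return vec / max(len(seq) - k + 1, 1)

def fourier_features(seq, n_coeff=64):
    # Map bases to numbers, take |FFT|, keep the first n_coeff magnitudes.
    mapping = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}
    signal = np.array([mapping.get(b, 0.0) for b in seq])
    spectrum = np.abs(np.fft.rfft(signal))
    out = np.zeros(n_coeff)
    out[:min(n_coeff, spectrum.size)] = spectrum[:n_coeff]
    return out

# Hypothetical sequences with genus labels (stand-ins for real data).
seqs = ["ACGTACGGTACCGTAACGT", "TTGACGTACGTAGCTAGCT", "ACGGTACGTTAGCATCGAT"]
labels = ["Prevotella", "Streptococcus", "Neisseria"]

X = np.array([kmer_counts(s) for s in seqs])  # swap in one_hot or fourier_features
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X))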
Procedia PDF Downloads 23632 Pioneering Conservation of Aquatic Ecosystems under Australian Law
Authors: Gina M. Newton
Abstract:
Australia’s Environment Protection and Biodiversity Conservation Act (EPBC Act) is the premier national law under which species and 'ecological communities' (i.e., like ecosystems) can be formally recognised and 'listed' as threatened across all jurisdictions. The listing process involves assessment against a range of criteria (similar to the IUCN process) to demonstrate conservation status (i.e., vulnerable, endangered, critically endangered, etc.) based on the best available science. Over the past decade in Australia, there has been a transition from almost solely terrestrial listings to the first aquatic threatened ecological community (TEC or ecosystem) listings (e.g., River Murray, Macquarie Marshes, Coastal Saltmarsh, Salt-wedge Estuaries). All constitute large areas, with some spanning multiple state jurisdictions. Development of these conservation and listing advices has enabled, for the first time, a more forensic analysis of three key factors across a range of aquatic and coastal ecosystems: the contribution of invasive species to conservation status; how to demonstrate and attribute decline in 'ecological integrity' to conservation status; and the identification of related priority conservation actions for management. There is increasing global recognition of the disproportionate degree of biodiversity loss within aquatic ecosystems. In Australia, legislative protection at Commonwealth or State level remains one of the strongest conservation measures. Such laws have associated compliance mechanisms for breaches of the protected status. They also trigger the need for environmental impact statements during applications for major developments (which may be denied). However, not all jurisdictions have such laws in place. There remains much opposition to the listing of freshwater systems: for example, the River Murray (Australia's largest river) and the Macquarie Marshes (an internationally significant wetland) were both disallowed by parliament four months after formal listing, mainly due to a change of government, dissent from a major industry sector, and a 'loophole' in the law. In Australia, at least over immediate to medium-term time frames, invasive species (aliens, native pests, pathogens, etc.) appear to be the number one biotic threat to the biodiversity and the ecological function and integrity of our aquatic ecosystems. Consequently, this should be considered a current priority for research, conservation, and management actions. Another key outcome of this analysis was the recognition that drawing together multiple lines of evidence to form a 'conservation narrative' is a more useful approach to assigning conservation status. This also helps to address a glaring gap in long-term ecological data sets in Australia, which often precludes a more empirical, data-driven approach. An important lesson also emerged: while conservation must be underpinned by the best available scientific evidence, it remains a 'social and policy' goal rather than a 'scientific' goal. Communication, engagement, and 'politics' necessarily play a significant role in achieving conservation goals and need to be managed and resourced accordingly. Keywords: aquatic ecosystem conservation, conservation law, ecological integrity, invasive species
Procedia PDF Downloads 132631 Steel Concrete Composite Bridge: Modelling Approach and Analysis
Authors: Kaviyarasan D., Satish Kumar S. R.
Abstract:
India, being vast in area and population with great scope for international business, is expected to see major growth in its roadway and railway network connections. Numerous rail-cum-road bridges have been constructed across many major rivers in India, and a few are getting very old, so there is a strong likelihood of repairing existing bridges or constructing new ones. Analysis and design of such bridges are practiced through conventional procedures and end up with heavy and uneconomical sections. Such heavy-class steel bridges, when subjected to strong seismic shaking, have a greater chance of failing through instability, because the members are too rigid and stocky rather than flexible enough to dissipate the energy. This work is a collective study of the research done on truss bridges and steel-concrete composite truss bridges, presenting the methods of analysis and the tools for numerical and analytical modeling that evaluate their seismic behaviour and collapse mechanisms. To ascertain the inelastic and nonlinear behaviour of the structure, static pushover analysis is generally adopted at the research level. Though static pushover analysis is now used extensively for framed steel and concrete buildings to study their lateral behaviour, findings from pushover analyses of buildings cannot be used directly for bridges, because bridges have completely different performance requirements, behaviour, and typology compared to buildings. Long-span steel bridges are mostly truss bridges. Since truss bridges are formed by many members and connections, failure of the system does not happen suddenly with a single event or the failure of one member. Failure usually initiates in one member and progresses gradually to the next member and so on under further loading. This kind of progressive collapse of the truss bridge structure depends on many factors, of which the live load distribution and span-to-length ratio are the most significant. The ultimate collapse, in any case, occurs through buckling of the compression members. For regular bridges, single-step pushover analysis gives results close to those of nonlinear dynamic analysis. But for a complicated bridge, such as a heavy-class steel bridge, a skewed bridge, or a bridge with complicated dynamic behaviour, a nonlinear analysis capturing the progressive yielding and collapse pattern is mandatory. With knowledge of the post-elastic behaviour of bridges and advancements in computational facilities, the current level of analysis and design of bridges has moved to ascertaining the performance levels of bridges based on the damage caused by seismic shaking. This is because building performance levels deal mostly with life safety and collapse prevention, whereas bridges mostly deal with the extent of damage and how quickly it can be repaired, with or without disturbing the traffic, after a strong earthquake event. This paper compiles the wide spectrum of approaches, from modeling to analysis, for steel-concrete composite truss bridges in general. Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge
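To make the pushover idea concrete, here is a deliberately minimal Python sketch (not the bridge models surveyed above): a displacement-controlled pushover of a single elastic-perfectly-plastic spring, tracing the load-displacement (capacity) curve past yield. Stiffness and yield force are arbitrary assumed values.

# Toy pushover of an elastic-perfectly-plastic single-degree-of-freedom spring.
k = 2.0e7        # elastic lateral stiffness [N/m] (assumed)
f_yield = 5.0e5  # lateral force at which yielding starts [N] (assumed)

def pushover_curve(max_disp, steps=50):
    curve = []
    for i in range(steps + 1):
        d = max_disp * i / steps   # imposed lateral displacement [m]
        f = min(k * d, f_yield)    # resisting force capped once yielding starts
        curve.append((d, f))
    return curve

for d, f in pushover_curve(0.05, steps=10):
    print(f"displacement {d:.3f} m -> base shear {f / 1e3:8.1f} kN")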
Procedia PDF Downloads 185630 2,7-diazaindole as a Potential Photophysical Probe for Excited State Deactivation Processes
Authors: Simran Baweja, Bhavika Kalal, Surajit Maity
Abstract:
Photoinduced tautomerization reactions have been the centre of attention in the scientific community over the past several decades because of their significance in various biological systems. 7-azaindole (7AI) is considered a model system for DNA base pairing and for understanding the role of such tautomerization reactions in mutations. To the best of our knowledge, extensive studies have been carried out on 7-azaindole and its solvent clusters exhibiting proton/hydrogen transfer in both the solution and gas phases. Derivatives of the above molecule, such as 2,7- and 2,6-diazaindole, are proposed to have even better photophysical properties due to the presence of the aza group at the 2-position. There are a few studies in the solution phase that suggest the relevance of these molecules, but no experimental studies have been reported in the gas phase yet. In our current investigation, we present the first gas phase spectroscopic data for 2,7-diazaindole (2,7-DAI) and its solvent cluster (2,7-DAI-H2O). We employed state-of-the-art laser spectroscopic methods such as fluorescence excitation (LIF), dispersed fluorescence (DF), resonant two-photon ionization time-of-flight mass spectrometry (2C-R2PI), photoionization efficiency spectroscopy (PIE), and IR-UV double resonance spectroscopy, i.e., fluorescence-dip infrared spectroscopy (FDIR) and resonant ion-dip infrared spectroscopy (IDIR), to understand the electronic structure of the molecule. The origin band corresponding to the S1 ← S0 transition of bare 2,7-DAI is positioned at 33910 cm-1, whereas the origin band of the 2,7-DAI-H2O cluster is positioned at 33074 cm-1. The red-shifted transition in the solvent cluster suggests the enhanced feasibility of excited state hydrogen/proton transfer. The ionization potential of the 2,7-DAI molecule is found to be 8.92 eV, significantly higher than that of the previously reported 7AI (8.11 eV), making it a comparatively complex molecule to study. The ionization potential is reduced by 0.14 eV in the 2,7-DAI-H2O cluster (8.78 eV) compared to that of 2,7-DAI. Moreover, compared with the available literature values for 7AI, we found the origin bands of 2,7-DAI and 2,7-DAI-H2O to be red shifted by -729 and -280 cm-1, respectively. The ground and excited state N-H stretching frequencies of the 2,7-DAI molecule were determined using fluorescence-dip infrared spectra (FDIR) and resonant ion-dip infrared spectroscopy (IDIR), obtained at 3523 and 3467 cm-1, respectively. The lower value of ν(NH) in the electronic excited state implies the higher acidity of the group compared to the ground state. Moreover, we have carried out an extensive computational analysis, which suggests that the energy barrier in the excited state reduces significantly as the number of catalytic solvent molecules (S = H2O, NH3) and the polarity of the solvent molecules increase. We found that the ammonia molecule is a better candidate for hydrogen transfer than water because of its higher gas-phase basicity. Further studies are underway to understand the excited state dynamics and photochemistry of such N-rich chromophores. Keywords: photoinduced tautomerization reactions, gas phase spectroscopy, IR-UV double resonance spectroscopy, resonant two-photon ionization time-of-flight mass spectrometry (2C-R2PI)
Procedia PDF Downloads 86629 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping
Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello
Abstract:
Batch processes are widely used in the food industry and play an important role in the production of high added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure, and are usually monitored using control charts based on multiway principal components analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; proper determination of the reference set is therefore key to correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses. In such a context, the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process. As a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables’ trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classifying batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts’ evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when inputted into the KNN classification algorithm. The method of Kassidas, MacGregor, and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity, and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy in the testing portion using the KNN classification technique. Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration
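For illustration only, the Python sketch below implements textbook DTW (not the specific KMT variant the study found best), followed by KNN classification on the DTW distance; the trajectories and labels are invented stand-ins for the monitored conching variables.

import numpy as np

def dtw_distance(a, b):
    # Classic dynamic programming DTW between two 1-D trajectories of unequal length.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_classify(query, references, k=3):
    # references: list of (trajectory, label); majority vote among k nearest batches.
    dists = sorted((dtw_distance(query, t), lab) for t, lab in references)
    top = [lab for _, lab in dists[:k]]
    return max(set(top), key=top.count)

# Invented temperature-like batch trajectories of different durations.
refs = [
    (np.linspace(40, 70, 50) + np.random.default_rng(0).normal(0, 0.5, 50), "conforming"),
    (np.linspace(40, 70, 65) + np.random.default_rng(1).normal(0, 0.5, 65), "conforming"),
    (np.linspace(40, 60, 55) + np.random.default_rng(2).normal(0, 0.5, 55), "non-conforming"),
]
new_batch = np.linspace(40, 69, 58)
print(knn_classify(new_batch, refs, k=3))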
Procedia PDF Downloads 167628 Modification of a Commercial Ultrafiltration Membrane by Electrospray Deposition for Performance Adjustment
Authors: Elizaveta Korzhova, Sebastien Deon, Patrick Fievet, Dmitry Lopatin, Oleg Baranov
Abstract:
Filtration with nanoporous ultrafiltration membranes is an attractive option for removing ionic pollutants from contaminated effluents. Unfortunately, commercial membranes are not necessarily suitable for specific applications, and their modification by polymer deposition is a fruitful way to adapt their performance accordingly. Many methods are commonly used for surface modification, but a novel technique based on electrospray is proposed here. Various quantities of polymers were deposited on a commercial membrane, and the impact of the deposit on filtration performance is investigated and discussed in terms of charge and hydrophobicity. Electrospray deposition is a technique that has not been used for membrane modification up to now. It consists of spraying small drops of polymer solution under a high voltage applied between the needle containing the solution and the metallic support on which the membrane is stuck. The advantage of this process lies in the small quantities of polymer that can be coated on the membrane surface compared with the immersion technique. In this study, various quantities (from 2 to 40 μL/cm²) of solutions containing two charged polymers (13 mmol/L of monomer unit), namely polyethyleneimine (PEI) and polystyrene sulfonate (PSS), were sprayed on a negatively charged polyethersulfone membrane (PLEIADE, Orelis Environment). The efficacy of the polymer deposition was then investigated by estimating ion rejection, permeation flux, zeta-potential, and contact angle before and after the deposition. Firstly, contact angle (θ) measurements show that the surface hydrophilicity is notably improved by coating either PEI or PSS. Moreover, the contact angle decreases monotonously with the amount of sprayed solution. Additionally, the hydrophilicity enhancement proved to be better with PSS (from 62 to 35°) than with PEI (from 62 to 53°). Values of the zeta-potential (ζ) were estimated by measuring the streaming current generated by a pressure difference on both sides of a channel made by clamping two membranes. The ζ-values demonstrate that deposits of PSS (negative at pH = 5.5) increase the negative membrane charge, whereas deposits of PEI (positive) lead to a positive surface charge. Zeta-potential measurements also emphasize that the sprayed quantity has little impact on the membrane charge, except for very low quantities (2 μL/cm²). The cross-flow filtration of salt solutions containing mono- and divalent ions demonstrates that polymer deposition allows a strong enhancement of ion rejection. For instance, the rejection of a salt containing a divalent cation can be increased from 1 to 20% and even to 35% by depositing 2 and 4 μL/cm² of PEI solution, respectively. This observation is consistent with the reversal of the membrane charge induced by PEI deposition. Similarly, the increase of negative charge induced by PSS deposition leads to an increase of NaCl rejection from 5 to 45% due to electrostatic repulsion of the Cl- ion by the negative surface charge. Finally, a notable fall in the permeation flux due to the polymer layer coated on the surface was observed, and the best polymer concentration in the sprayed solution remains to be determined to optimize performance. Keywords: ultrafiltration, electrospray deposition, ion rejection, permeation flux, zeta-potential, hydrophobicity
Procedia PDF Downloads 187627 Understanding the Experiences of School Teachers and Administrators Involved in a Multi-Sectoral Approach to the Creation of a Physical Literacy Enriched Community
Authors: M. Louise Humbert, Karen E. Chad, Natalie E. Houser, Marta E. Erlandson
Abstract:
Physical literacy is the motivation, confidence, physical competence, knowledge, and understanding to value and take responsibility for engagement in physical activities for life. In recent years, physical literacy has emerged as a determinant of health, promoting a positive lifelong physical activity trajectory. Physical literacy’s holistic approach and emphasis on the intrinsic valuation of movement provide an encouraging avenue for intervention among children to develop competent and confident movers. Although there is research on physical literacy interventions, no evidence exists on the outcomes of multi-sectoral interventions involving a combination of home, school, and community contexts. Since children interact with and in a wide range of contexts (home, school, community) daily, interventions designed to address a combination of these contexts are critical to the development of physical literacy. Working with school administrators and teachers, sport and recreation leaders, and community members, our team of university and community researchers conducted and evaluated one of the first multi-contextual and multi-sectoral physical literacy interventions in Canada. Schools played a critical role in this multi-sector intervention; in this project, teachers and administrators focused their actions on developing physical literacy in students 10 to 14 years of age through the instruction of physical literacy-focused physical education lessons. Little is known about the experiences of educators when they work alongside an array of community representatives to develop physical literacy in school-aged children. Given the uniqueness of this intervention, we sought to answer the question, ‘What were the experiences of school-based educators involved in a multi-sectoral partnership focused on creating a physical literacy enriched community intervention?’ A thematic analysis approach was used to analyze data collected from interviews with educators and administrators, informal conversations, documents, and observations at workshops and meetings. Results indicated that schools and educators played the largest role in this multi-sector intervention. Educators initially reported a limited understanding of physical literacy and expressed a need for resources linked to the physical education curriculum. Some anxiety was expressed by the teachers as their students were measured, and educators noted they wanted to increase their understanding and become more involved in the assessment of physical literacy. Teachers reported that the intervention’s focus on physical literacy positively impacted the scheduling and their instruction of physical education. Administrators shared their desire for school- and division-level actions targeting physical literacy development, similar to the current focus on numeracy and literacy, treaty education, and safe schools. As this was one of the first multi-contextual and multi-sectoral physical literacy interventions, it was important to document the creation and delivery experiences to encourage future growth in the area and to develop suggested best practices. Keywords: physical literacy, multi sector intervention, physical education, teachers
Procedia PDF Downloads 102626 Ammonia Bunkering Spill Scenarios: Modelling Plume’s Behaviour and Potential to Trigger Harmful Algal Blooms in the Singapore Straits
Authors: Bryan Low
Abstract:
In the coming decades, the global maritime industry will face a most formidable environmental challenge: achieving net zero carbon emissions by 2050. To meet this target, the Maritime Port Authority of Singapore (MPA) has worked to establish green shipping and digital corridors with the ports of several other countries around the world, where ships will use low-carbon alternative fuels such as ammonia for power generation. While this paradigm shift to the bunkering of greener fuels is encouraging, fuels like ammonia will also introduce a new and unique type of environmental risk in the unlikely scenario of a spill. While numerous modelling studies have been conducted for oil spills and their associated environmental impact on coastal and marine ecosystems, ammonia spills are comparatively less well understood. For example, there is a knowledge gap regarding how the complex hydrodynamic conditions of the Singapore Straits may influence the dispersion of a hypothetical ammonia plume, which has different physical and chemical properties compared to an oil slick. Chemically, ammonia can be absorbed by phytoplankton, thus altering the balance of the marine nitrogen cycle. Biologically, ammonia generally serves as a nutrient in coastal ecosystems at lower concentrations; at higher concentrations, however, it has been found to be toxic to many local species. It may also have the potential to trigger eutrophication and harmful algal blooms (HABs) in coastal waters, depending on local hydrodynamic conditions. Thus, the key objective of this research paper is to support the development of a model-based forecasting system that can predict ammonia plume behaviour in coastal waters, given prevailing hydrodynamic conditions, together with its environmental impact. This will be essential as ammonia bunkering becomes more commonplace in Singapore’s ports and around the world. Specifically, this system must be able to assess the HAB-triggering potential of an ammonia plume, as well as its lethal and sub-lethal toxic effects on local species. This will allow the relevant authorities to better plan risk mitigation measures or choose a time window with the ideal hydrodynamic conditions to conduct ammonia bunkering operations with minimal risk. In this paper, we present the first part of such a forecasting system: a jointly coupled hydrodynamic-water quality model that captures how advection-diffusion processes driven by ocean currents influence plume behaviour and how the plume interacts with the marine nitrogen cycle. The model is then applied to various ammonia spill scenarios, and the results are discussed in the context of current ammonia toxicity guidelines, the impact on local ecosystems, and mitigation measures for future bunkering operations conducted in the Singapore Straits. Keywords: ammonia bunkering, forecasting, harmful algal blooms, hydrodynamics, marine nitrogen cycle, oceanography, water quality modeling
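As a heavily simplified, hypothetical illustration of the transport such a coupled model resolves in higher dimensions, the Python sketch below advects and diffuses a one-dimensional ammonia pulse with a first-order loss term standing in for uptake/nitrification; none of the coefficients are calibrated values for the Singapore Straits.

import numpy as np

L, nx = 10_000.0, 200          # domain length [m], grid points (assumed)
dx = L / (nx - 1)
u = 0.5                        # depth-averaged current speed [m/s] (assumed)
D = 10.0                       # horizontal diffusivity [m^2/s] (assumed)
k_loss = 1e-5                  # first-order loss rate [1/s] (assumed)
dt = 0.4 * min(dx / u, dx**2 / (2 * D))   # stable explicit time step

c = np.zeros(nx)
c[nx // 10] = 100.0            # initial spill concentration [mg/L] in one cell

def step(c):
    # Upwind advection + central diffusion + linear decay (explicit Euler).
    adv = -u * (c - np.roll(c, 1)) / dx
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    cn = c + dt * (adv + dif - k_loss * c)
    cn[0] = cn[-1] = 0.0       # open boundaries: plume leaves the domain
    return cn

for _ in range(int(3600 / dt)):   # simulate one hour
    c = step(c)
print(f"peak concentration after 1 h: {c.max():.2f} mg/L")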
Procedia PDF Downloads 83625 Application of the State of the Art of Hydraulic Models to Manage Coastal Problems, Case Study: The Egyptian Mediterranean Coast Model
Authors: Al. I. Diwedar, Moheb Iskander, Mohamed Yossef, Ahmed ElKut, Noha Fouad, Radwa Fathy, Mustafa M. Almaghraby, Amira Samir, Ahmed Romya, Nourhan Hassan, Asmaa Abo Zed, Bas Reijmerink, Julien Groenenboom
Abstract:
Coastal problems stress the coastal environment due to its complexity. The dynamic interaction between the sea and the land, together with human interventions and activities, results in serious problems that threaten coastal areas worldwide. This makes the coastal environment highly vulnerable to natural processes like flooding and erosion, and to the impact of human activities such as pollution. Protecting and preserving this vulnerable coastal zone, with its valuable ecosystems, calls for addressing these coastal problems; this, in the end, will support the sustainability of coastal communities and serve current and future generations. Consequently, applying suitable management strategies and sustainable development that consider the unique characteristics of the coastal system is a must. The coastal management philosophy aims to resolve the conflicts of interest between human development activities and this dynamic nature. Modeling emerges as a successful tool that provides support to decision-makers, engineers, and researchers for better management practices. Modeling tools have proven accurate and reliable in prediction. With their capability to integrate data from various sources, such as bathymetric surveys, satellite images, and meteorological data, they offer engineers and scientists the possibility to understand this complex dynamic system and study in depth the interaction between natural and human-induced factors. This enables decision-makers to make informed choices and develop effective strategies for sustainable development and risk mitigation of the coastal zone. The application of modeling tools supports the evaluation of various scenarios by affording the possibility to simulate and forecast different coastal processes, from hydrodynamic and wave actions to the resulting flooding and erosion. The state-of-the-art application of modeling tools in coastal management allows for better understanding and prediction of coastal processes, optimizing infrastructure planning and design, supporting ecosystem-based approaches, assessing climate change impacts, managing hazards, and finally facilitating stakeholder engagement. This paper emphasizes the role of hydraulic models in enhancing the management of coastal problems by discussing the diverse applications of modeling in coastal management. It highlights the role of modelling in understanding complex coastal processes and predicting outcomes, and the importance of informing decision-makers with modeling results, which give technical and scientific support for achieving sustainable coastal development and protection. Keywords: coastal problems, coastal management, hydraulic model, numerical model, physical model
Procedia PDF Downloads 29624 The Effect of Degraded Shock Absorbers on the Safety-Critical Tipping and Rolling Behaviour of Passenger Cars
Authors: Tobias Schramm, Günther Prokop
Abstract:
In Germany, the number of road fatalities has been falling since 2010, but at a more moderate rate than before. At the same time, the average age of all registered passenger cars in Germany is rising continuously. Studies show that there is a correlation between the age and mileage of passenger cars and the degradation of their chassis components. Various studies show that degraded shock absorbers increase the braking distance of passenger cars and have a negative impact on driving stability. The exact effect of degraded vehicle shock absorbers on road safety is still the subject of research. A shock absorber examination as part of the periodic technical inspection is mandatory in only very few countries; in Germany, there is as yet no requirement for such an examination. More comprehensive findings on the effect of degraded shock absorbers on the safety-critical driving dynamics of passenger cars can provide further arguments for the introduction of mandatory shock absorber testing as part of the periodic technical inspection. The specific effect chains of untripped rollover accidents are also still the subject of research. However, current research results show that the high proportion of sport utility vehicles in the vehicle fleet significantly increases the probability of untripped rollover accidents. The aim of this work is to estimate the effect of degraded twin-tube shock absorbers on the safety-critical tipping and rolling behaviour of passenger cars, which can lead to untripped rollover accidents. A characteristic-curve-based five-mass full vehicle model and a semi-physical phenomenological shock absorber model were set up, parameterized, and validated. The shock absorber model is able to reproduce the damping characteristics of vehicle twin-tube shock absorbers with oil and gas loss for various excitations. The full vehicle model was validated with steering wheel angle sine sweep driving maneuvers. The model was then used to simulate steering wheel angle sine and fishhook maneuvers, which investigate the safety-critical tipping and rolling behavior of passenger cars. The simulations were carried out in a realistic parameter space in order to demonstrate the influence of various vehicle characteristics on the effect of degraded shock absorbers. As a result, it was shown that degraded shock absorbers have a negative effect on the tipping and rolling behavior of all passenger cars. Shock absorber degradation leads to a significant increase in the observed roll angles, particularly in the range of the roll natural frequency. This amplified roll response has a negative effect on the wheel load distribution during the driving maneuvers investigated. In particular, the height of the vehicle’s center of gravity and the stabilizer stiffness have a major influence on the effect of degraded shock absorbers on the tipping and rolling behaviour of passenger cars. Keywords: numerical simulation, safety-critical driving dynamics, suspension degradation, tipping and rolling behavior of passenger cars, vehicle shock absorber
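The roll amplification near the roll natural frequency can be illustrated with a single-degree-of-freedom Python sketch (not the validated five-mass model): the steady-state roll gain of I*phi'' + c*phi' + k*phi = M(t) is compared for a healthy damper and one with an assumed 50% loss of damping; all parameter values are illustrative.

import numpy as np

I = 600.0       # roll moment of inertia [kg m^2] (assumed)
k = 80_000.0    # effective roll stiffness [N m/rad] (assumed)
c_ok = 4_000.0  # healthy roll damping [N m s/rad] (assumed)

def roll_gain(omega, c):
    # |phi / M| for a harmonic roll moment at angular frequency omega.
    return 1.0 / np.sqrt((k - I * omega**2) ** 2 + (c * omega) ** 2)

w_n = np.sqrt(k / I)              # undamped roll natural frequency [rad/s]
for degradation in (0.0, 0.5):    # 0% and 50% damping loss
    c = c_ok * (1.0 - degradation)
    print(f"{degradation:.0%} loss: gain at w_n = {roll_gain(w_n, c):.2e} rad/(N m)")

# At resonance the gain reduces to 1/(c*w_n), so halving the damping roughly
# doubles the steady-state roll angle for the same excitation.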
Procedia PDF Downloads 12623 A Lexicographic Approach to Obstacles Identified in the Ontological Representation of the Tree of Life
Authors: Sandra Young
Abstract:
The biodiversity literature is vast and heterogeneous. In today’s data age, a number of data integration and standardisation initiatives aim to facilitate simultaneous access to all the literature across biodiversity domains for research and forecasting purposes. Ontologies are being used increasingly to organise this information, but the rationalisation intrinsic to ontologies can hit obstacles when faced with the intrinsic fluidity and inconsistency found in the domains comprising biodiversity. Essentially, the problem is a conceptual one: biological taxonomies are formed on the basis of specific physical specimens, yet nomenclatural rules are used to provide the labels that describe these physical objects, and these labels are ambiguous representations of the physical specimen. An example is the name Melpomene, the scientific nomenclatural representation of a genus of ferns but also of a genus of spiders. The physical specimens for each of these are vastly different, but they have been assigned the same nomenclatural reference. While there is much research into the conceptual stability of the taxonomic concept versus the nomenclature used, to the best of our knowledge no research has yet looked empirically at the literature to see the conceptual plurality or singularity of the use of these species’ names, the linguistic representation of a physical entity. Language itself uses words as symbols to represent real-world concepts, whether physical entities or otherwise, and as such lexicography has a well-founded history in the conceptual mapping of words in context for dictionary making. This makes it an ideal candidate to explore this problem. The lexicographic approach uses corpus-based analysis to look at word use in context, with a specific focus on collocated word frequencies (the frequencies of words used in specific grammatical and collocational contexts). It allows for inconsistencies and contradictions in the source data and in fact includes these in the word characterisation, so that 100% of the available evidence is counted. Corpus analysis is indeed suggested as one of the ways to identify concepts for ontology building, because of its ability to look empirically at data and show patterns in language usage, which can indicate conceptual ideas that go beyond the words themselves. In this sense, it could potentially be used to identify whether the hierarchical structures present within the empirical body of literature match those identified in the ontologies created to represent them. The first stages of this research, which are the focus of this paper, have revealed a hierarchical structure that becomes apparent in the biodiversity literature when annotating scientific species’ names, common names, and more general names as classes. The next step in the research focuses on a larger corpus in which specific words can be analysed and then compared with existing ontological structures covering the same material, to evaluate the methods by means of an alternative perspective. This research aims to provide evidence as to the validity of current methods in knowledge representation for biological entities, and also to shed light on the way scientific nomenclature is used within the literature. Keywords: ontology, biodiversity, lexicography, knowledge representation, corpus linguistics
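A minimal, hypothetical Python sketch of the collocation counting behind this approach: for a target name such as Melpomene, count the words that co-occur within a fixed window across a corpus, so that divergent collocate profiles can expose conceptual plurality behind a single label; the two-sentence corpus below is invented.

from collections import Counter

def collocates(corpus, target, window=3):
    # Count tokens appearing within `window` positions of the target name.
    counts = Counter()
    for doc in corpus:
        tokens = doc.lower().split()
        for i, tok in enumerate(tokens):
            if tok == target.lower():
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                counts.update(t for j, t in enumerate(tokens[lo:hi], start=lo) if j != i)
    return counts

corpus = [
    "the fern genus Melpomene grows as an epiphyte in montane forest",
    "spiders of the genus Melpomene build funnel webs in arid habitats",
]
print(collocates(corpus, "Melpomene").most_common(5))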
Procedia PDF Downloads 137