Search results for: finite element formulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4831

451 Innovation in "Low-Tech" Industries: Portuguese Footwear Industry

Authors: Antonio Marques, Graça Guedes

Abstract:

The Portuguese footwear industry has shown remarkable performance over the last five years in export values, trade balance and other economic indicators. After a long period of difficulties, with a strong reduction in companies and employees from 1994 to 2009, the Portuguese footwear industry changed its strategy and is now a success case among the international players in footwear. Only the Italian industry sells footwear at a higher value than the Portuguese, and the distance between them is decreasing year by year. This paper analyses how Portuguese footwear companies innovate, according to the classification proposed by the Oslo Manual. It also analyses the strategy followed in the innovation process, as suggested by Freeman and Soete, and shows the linkage between the type of innovation and the innovation strategy. The research methodology was qualitative, and the strategy for data collection was the case study. The qualitative data were analysed with the MAXQDA software. The economic results of the footwear companies studied differ from company to company, and these differences are related to the innovation strategy adopted. The companies focused on product and marketing innovation, oriented to their target market, have higher "turnover per worker" ratios than the companies focused on process innovation. However, all the footwear companies in this "low-tech" industry create value and contributed to a positive foreign trade balance of 1,310 million euros in 2013. The growth strategies implemented involved the participation of the sectoral organizations in several innovative projects, and cooperation among all of them is clearly a critical element in the performance achieved by the companies and the innovation observed.
The authors conclude that the Portuguese footwear sector has performed excellently in recent years (economic results, export values, trade balance, brands and international image) and that this performance is strongly related to the innovation strategy followed, the type of innovation and the networks in the cluster. A simplified model, called the "Ace of Diamonds", is proposed by the authors; it explains how this performance was reached by the seven companies that participated in the study (two of them are the leaders in the sector), and whether this model can be used in other traditional and "low-tech" industries.

Keywords: footwear, innovation, “low-tech” industry, Oslo manual

Procedia PDF Downloads 373
450 Label Survey in Romania: A Study on How Consumers Use Food Labeling

Authors: Gabriela Iordachescu, Mariana Cretu Stuparu, Mirela Praisler, Camelia Busila, Doina Voinescu, Camelia Vizireanu

Abstract:

The aim of the study was to evaluate consumers' degree of confidence in food labeling and how they use and understand the label and its elements. The label is a bridge between producers, suppliers, and consumers. It has to offer enough information in terms of public health and food safety: statement of ingredients, nutritional information, warnings and advisory statements, production date and shelf life, and instructions for storage and preparation (if required). The survey was conducted on a group of 500 consumers in Romania, aged 15+, male and female, from urban and rural areas and with different education levels. The questionnaire was distributed face to face and online. It had single- or multiple-choice questions and label images for the efficiency and best understanding of each question. Regulation 1169/2011, applicable to food products from 13 December 2016, improved and adapted the requirements for labeling in a clearer manner. The questions were divided into the following topics: interest in and general trust of labeling, use and understanding of label elements, understanding of the ingredient list and safety information, nutrition information, advisory statements, serving sizes, the meanings of "best before"/"use by", intelligent labeling, and demographic data. Three choice-selection exercises were also included; in these, consumers had to choose between two similar products and evaluate which label element was most important in the product choice. The data were analysed using MINITAB 17 and principal component analysis (PCA). Most of the respondents trust the food label, taking some elements into account especially when they buy a product for the first time. They usually check the sugar content and type of sugar and the saturated fat, and they use the mandatory label elements and the nutrition information panel. Consumers also pay attention to advisory statements, especially if one of the items is relevant to them or their family. Intelligent labeling is a challenging option.
In addition, the paper underlines that consumers are becoming more careful and selective in their food consumption, and that the label is their main aid in this.

Keywords: consumers, food safety information, labeling, labeling nutritional information

Procedia PDF Downloads 208
449 A Corpus Study of English Verbs in Chinese EFL Learners’ Academic Writing Abstracts

Authors: Shuaili Ji

Abstract:

The correct use of verbs is an important element of high-quality research articles, so it is important for Chinese EFL learners to master the characteristics of English verbs and to use them precisely. However, some studies have shown that there are differences between learners and native speakers in verb use and that learners have difficulty using English verbs. This corpus-based quantitative research can enhance learners' knowledge of English verbs and improve the quality of research article abstracts, and of academic writing as a whole. The aim of this study is to find the differences between learners' and native speakers' use of verbs and to study the factors that contribute to those differences. To this end, the research question is as follows: what are the differences between the verbs most frequently used by learners and those used by native speakers? The research question is answered through a study that uses a corpus-based, data-driven approach to analyse the verbs used by learners in their abstract writing in terms of collocation, colligation and semantic prosody. The results show that: (1) EFL learners clearly overused 'be, can, find, make' and underused 'investigate, examine, may'; as to modal verbs, learners overused 'can' while underusing 'may'. (2) Learners clearly overused 'we find + object clause' while underusing 'noun (results, findings, data) + suggest/indicate/reveal + object clause' when expressing research results. (3) Learners tended to transfer the collocation, colligation and semantic prosody of shǐ and zuò to 'make'. (4) Learners clearly overused 'BE + V-ed' and used BE as the main verb; they also overused the base forms of BE such as be, is, are, while underusing its inflections (was, were). These results reveal learners' lack of accuracy and idiomaticity in verb usage. Due to the influence of conceptual transfer from Chinese, the verbs in learners' abstracts show clear transfer from the mother tongue.
In addition, learners have not fully mastered the use of verbs and avoid complex colligations to prevent errors. Based on these findings, the present study has implications for English teaching and for English academic abstract writing in China. Further research could examine verb use in whole dissertations to find out whether the characteristics of the verbs in abstracts apply to the dissertation as a whole.
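The overuse/underuse findings reported above are typically backed by a keyness statistic comparing word frequencies across two corpora. As an illustration only (the corpora sizes and counts below are invented, and this is not necessarily the statistic the author used), here is a minimal sketch of the log-likelihood keyness measure commonly applied in such corpus comparisons:

```python
import math

def log_likelihood(freq_a, size_a, freq_b, size_b):
    """Log-likelihood keyness statistic for a word observed freq_a times
    in corpus A (size_a tokens) and freq_b times in corpus B (size_b tokens).
    Values above 3.84 indicate a significant difference at p < 0.05."""
    total = freq_a + freq_b
    expected_a = size_a * total / (size_a + size_b)
    expected_b = size_b * total / (size_a + size_b)
    ll = 0.0
    if freq_a > 0:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b > 0:
        ll += freq_b * math.log(freq_b / expected_b)
    return 2 * ll

# Invented counts: 'make' in a 1M-token learner corpus vs a 1M-token native corpus
print(round(log_likelihood(1500, 1_000_000, 900, 1_000_000), 1))  # 151.6
```

With these toy counts the statistic far exceeds the 3.84 threshold, which is the kind of evidence used to call a verb "obviously overused".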

Keywords: academic writing abstracts, Chinese EFL learners, corpus-based, data-driven, verbs

Procedia PDF Downloads 325
448 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method

Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang

Abstract:

Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location in which it operates. Each product has its own sell-in and sell-out time series, which are forecasted on weekly and monthly scales for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution that uses machine learning models for forecasting. Similar products are combined so that there is one model per product category. In this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is designed to be flexible enough to add any new product to, or eliminate any existing product from, a product category as required. We show how the machine learning development environment on Amazon Web Services (AWS) can be used to explore a set of forecasting models and create business intelligence dashboards that work with the existing demand planning tools at Nestlé. We explored recent deep neural networks (DNNs), which show promising results for a variety of time series forecasting problems. Specifically, we used DeepAR, an autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and incorporate domain-specific knowledge, we designed an ensemble approach combining DeepAR with an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%.
The machine learning (ML) pipeline implemented in the cloud is currently being extended to other product categories and is being adopted by other geomarkets.
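One simple way to ensemble two forecasters, sketched below purely for illustration, is to blend their predictions with a weight tuned on a validation window. This is not the production pipeline described in the abstract (which combines DeepAR and XGBoost on AWS); the function, grid search, and toy numbers are all assumptions:

```python
def blend_weight(val_actual, pred_a, pred_b):
    """Pick the weight w in [0, 1] that minimizes the mean absolute error
    of the blended forecast w*pred_a + (1-w)*pred_b on a validation window,
    using a simple grid search over 101 candidate weights."""
    best_w, best_mae = 0.0, float("inf")
    for step in range(101):
        w = step / 100
        mae = sum(abs(w * a + (1 - w) * b - y)
                  for a, b, y in zip(pred_a, pred_b, val_actual)) / len(val_actual)
        if mae < best_mae:
            best_w, best_mae = w, mae
    return best_w

# Toy validation window: model A consistently overshoots, model B undershoots
actual = [100, 110, 120, 130]
pred_a = [110, 120, 130, 140]   # stands in for a DeepAR-style forecast
pred_b = [90, 100, 110, 120]    # stands in for an XGBoost-style forecast
print(blend_weight(actual, pred_a, pred_b))  # 0.5: the 50/50 blend recovers the actuals
```

In practice the blend could be learned per product category, and richer stackings (e.g. feeding one model's forecast to the other as a feature) are common; the grid search here just makes the core idea concrete.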

Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series

Procedia PDF Downloads 249
447 Numerical Study of Piled Raft Foundation Under Vertical Static and Seismic Loads

Authors: Hamid Oumer Seid

Abstract:

A piled raft foundation (PRF) is a union of piles and raft working together through soil-pile, pile-raft, soil-raft and pile-pile interaction to provide adequate bearing capacity and controlled settlement. Uniform pile positioning is commonly used in PRFs; however, there is ample room for optimization through a parametric study under vertical load, yielding a safer and more economical foundation. Addis Ababa lies in seismic zone 3, with a peak ground acceleration (PGA) above the threshold of damage, which makes it vital to investigate the performance of PRFs under seismic load considering dynamic kinematic soil-structure interaction (SSI). The study area is located in Addis Ababa around Mexico (Commercial Bank) and Kirkos (Nib, Zemen and United Bank), from which the input parameters (pile length, pile diameter, pile spacing, raft area, raft thickness and load) are taken. A finite difference-based numerical software, FLAC3D V6, was used for the analysis. The Kobe (1995) and Northridge (1994) earthquakes were selected, and deconvolution analysis was performed. A close load sharing between pile and raft was achieved at a spacing of 7D for various pile lengths and diameters. The maximum settlement reduction achieved was 9% for a pile of 2 m diameter when the length was increased from 10 m to 20 m, which shows that pile length is not effective in reducing settlement. The installation of piles results in an increase in the negative bending moment of the raft compared with an unpiled raft. Hence, the optimized design depends on pile spacing and the raft edge length, while pile length and diameter are not significant parameters. An optimized piled raft configuration (A_G/A_R = 0.25 at the center, with piles provided around the edge) reduced the pile number by 40% and the differential settlement by 95%. The dynamic analysis shows that the acceleration plot at the top of the piled raft has a PGA of 0.25 m/s² and 0.63 m/s² for the Northridge (1994) and Kobe (1995) earthquakes, respectively, due to attenuation of the seismic waves.
Pile head displacement (a maximum of 2 mm, within the allowable limit) is affected by the PGA rather than by the duration of the earthquake. End-bearing and friction PRFs performed similarly under the two earthquakes, except for their vertical settlement considering SSI. Hence, the PRF has shown adequate resistance to seismic loads.

Keywords: FLAC3D V6, earthquake, optimized piled raft foundation, pile head displacement

Procedia PDF Downloads 17
446 The Effects of a Nursing Dignity Care Program on Patients’ Dignity in Care

Authors: Yea-Pyng Lin

Abstract:

Dignity is a core element of nursing care. Maintaining patients' dignity is an important issue because patients' health and recovery can be adversely affected by a lack of dignity in their care. The aim of this study was to explore the effects of a nursing dignity care program on patients' dignity in care. A quasi-experimental research design was implemented. Nurses were recruited by purposive sampling, and their patients were recruited by simple random sampling. Nurses in the experimental group received the nursing educational program on dignity care, while nurses in the control group received in-service education as usual. Data were collected via two instruments developed by the researcher: the dignity-in-care scale for nurses and the dignity-in-care scale for patients. Both questionnaires consisted of three domains: agreement, importance, and frequency of providing dignity care. A total of 178 nurses in the experimental group and 193 nurses in the control group completed the pretest and the follow-up evaluations at the first, third, and sixth months. The number of patients cared for by the nurses in the experimental group was 94 at the pretest; the numbers of patients in the post-tests at the first, third, and sixth months were 91, 85, and 77, respectively. In the control group, 88 patients completed the pretest, and 80 filled out the post-test at the first month, 77 at the third, and 74 at the sixth month. The major findings revealed that the scores in the agreement domain among nurses in the experimental group differed significantly from those in the control group at each time point. The scores in the importance domain also displayed significant differences between the two groups at the pretest and at the first month of the post-test. Moreover, the differences in the frequency of providing dignity care to patients were significant at the pretest and at the third and sixth months of the post-test.
However, the experimental group differed significantly from the control group in the frequency of receiving dignity care only in the items of 'privacy care,' 'communication care,' and 'emotional care' for the patients. The results show that the nursing program on dignity care could increase nurses' dignity care for patients in the three domains of agreement, importance, and frequency of providing dignity care. For patients, only the frequency of receiving dignity care increased significantly. Therefore, the nursing program on dignity care could be applied in nurses' in-service education and practice to enhance nurses' ability to care for patients' dignity.

Keywords: nurses, patients, dignity care, quasi-experimental, nursing education

Procedia PDF Downloads 461
445 The Marriage of a Sui Juris Girl: Permission of Wali (Guardian) or Consent of Ward in the Context of Personal Law in Pakistan

Authors: Muhammad Farooq

Abstract:

The present article explores the woman's consent as a paramount element in contracting a Muslim marriage, and asks whether the permission of the wali (guardian) is, in itself, a condition for a valid nikah (marriage deed) in the eyes of the law and of Sharia. The researcher approaches the question through related issues, inter alia: the marriage guardian; the woman's legal capacity to give consent, whether she is a virgin or a non-virgin; and how that consent is to be given or may be understood. Do her laughter, tears or silence need a legal interpretation, as do other female expressions of emotion explained by the Muslim jurists? The silence of the Muslim Family Law Ordinance 1961 (hereafter MFLO 1961) in this regard, and the likely reasons behind that silence, are also briefly examined. Germane to the theme, the various cases in which the true notion of the woman's consent has been interpreted by courts in Pakistan are also examined. In order to address the issue at hand, a brief overview is provided of the opinions of a few contemporary writers in which the real place of the woman's consent in Muslim marriage is highlighted. Key to the idea of a young Muslim woman's marriage, the doctrine of kafa'a (equality or suitability) between the man and the woman is argued here to be grounded in patriarchal and social norms. It is therefore concluded that this concept was the result of analogical reasoning and has less importance in the present time; as such, it is not a valid factor in current scenarios for validating or invalidating marital bonds. A standard qualitative approach is used for this research. Primary and secondary sources, for example the Qur'an, the Sunnah, books, scholarly articles, texts of law and case law, are used to support the researcher's view. In summation, the article concludes with the bold statement that a young woman, being a party to the contract, is absolutely entitled to give 'full and free' consent to the Muslim marriage contract.
It is the woman, an indispensable party, and her consent (not the guardian's permission) that validates or invalidates the said agreement in the eyes of contemporary personal law and of Sharia.

Keywords: consent of woman, ejab (declaration), Nikah (marriage agreement), qabol (acceptance), sui juris (of age; independent), wali (guardian), wilayah (guardianship)

Procedia PDF Downloads 134
444 Fijian Women’s Role in Disaster Risk Management: Climate Change

Authors: Priyatma Singh, Manpreet Kaur

Abstract:

Climate change is progressively being recognized as a global crisis, and this has immediate repercussions for the Fiji Islands, whose geographical location makes them prone to natural disasters. In the Pacific, it is common to find significant differences between men and women in terms of their roles and responsibilities. In the pursuit of prudent preparedness before disasters, Fijian women's engagement is constrained by the socially constructed roles and expectations of women in Fiji. This vulnerability is aggravated by viewing women as victims rather than as key people who hold vital information about their society, economy, and environment, as well as useful skills which, when recognized and used, can be effective in disaster risk reduction. The focus of this study on disaster management is to outline ways in which Fijian women can be actively engaged in disaster risk management and participate in decision-making, negating the perceived ideology of women's constricted roles in Fiji and unveiling the social constraints that limit women's access to practical disaster management strategy. This paper outlines the importance of gender mainstreaming in disaster risk reduction and the ways of mainstreaming gender, based on a literature review. It presents a theoretical analysis of academic literature as well as papers and reports produced by various national and international institutions, and explores ways to better inform and engage women in climate change and, in particular, disaster management in Fiji. The empowerment of women is believed to be a critical element in building disaster resilience, as women are often considered the designers of community resilience at the local level. Gender mainstreaming, as a way of bringing a gender perspective into climate-related disasters, can be applied to distinguish the varying needs and capacities of women and to integrate them into climate change adaptation strategies.
This study advocates women's participation in disaster risk management, giving equal standing to women in Fiji; it also identifies the gaps and informs national and local disaster risk management authorities so that they can implement processes that enhance gender equality and women's empowerment towards a more equitable and effective disaster practice.

Keywords: disaster risk management, climate change, gender mainstreaming, women empowerment

Procedia PDF Downloads 381
443 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City

Authors: Jothilakshmy Nagammal

Abstract:

This research paper examines different case studies in which Form-Based Codes have been adopted, and discusses the particular implementation methods used, in order to develop a method for formulating a new planning model. The organizing principle of the Form-Based Codes, the transect, is used to zone the city into various context-specific transects. An approach is adopted to develop the new planning model, the City-Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case study comparison of thirty-two different cities is made in terms of the planning tools used, the code process adopted and the various control regulations implemented. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code/smart code, and required form-based standards or design guidelines. The case studies describe the positive and negative results of form-based zoning where it is implemented. From the different case studies on the method of the FBC, it is understood that the scale for formulating the Form-Based Code varies from parts of the city to the whole city. The regulating plan is prepared with the transect as the organizing principle in most of the cases. The implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts like Transit-Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans, and street-based approaches. The implementation methods vary from mandatory to integrated and floating. To attain sustainability, the research takes the approach of developing a regulating plan using the transect as the organizing principle for the entire area of the city in general, while formulating the Form-Based Codes for the selected special districts in the study area in particular, on a street basis.
Planning is most powerful when it is embedded in the broader context of systemic change and improvement. 'Systemic' is best thought of as holistic, contextualized and stakeholder-owned, while 'systematic' can be thought of as more linear, generalizable, and typically top-down or expert-driven. The systemic approach is a process based on system theory and system design principles, which are too often ill understood by the general population and policy makers. System theory embraces the importance of a global perspective, multiple components, and interdependencies and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of system theory. The proposed regulating plan, taking the transect as the organizing principle and using Form-Based Codes to achieve the sustainability of the city, has to be a hybrid code that is integrated within the existing system: a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to the near-term pressure of physical change in "sensitive" areas of the community. With this approach and method, the creation of the new Context-Specific Planning Model for achieving sustainability is explained in detail in this research paper.

Keywords: context based planning model, form based code, transect, systemic approach

Procedia PDF Downloads 328
442 Valuing Academic Excellence in Higher Education: The Case of Establishing a Human Development Unit in a European Start-up University

Authors: Eleftheria Atta, Yianna Vovides, Marios Katsioloudes

Abstract:

In the fusion of neoliberalism and globalization, Higher Education (HE) is becoming increasingly complex. The changing patterns of the economy worldwide have driven the development of a high value-added economy, and HE has been viewed as a social investment, significant for the development of knowledge-based societies and economies. In order to contribute to economic competitiveness, universities are required to produce local and employable workers who fit into the neoliberal economic environment. The emergence of neoliberal performativity, which measures outcomes, is a key aspect of the neoliberal era. It facilitates the redesign of institutions, making organizations and individuals think about themselves in relation to their performance. Performativity and performance management systems lead academics to become more effective, advance professionally, improve, become better than others and therefore act competitively. Besides the aforementioned complexities, universities also face the challenge of maintaining a set of values to guide an institution's actions, values which have always been highly respected in developing an HE institution. The formulation of a clear set of values also determines the institutional culture that will be maintained. It is evident that values create a significant framework for the workplace and may determine positive institutional results. Universities are required to engage in capacity-building activities that improve their students' competence and offer opportunities for administrative and academic staff to develop professionally in light of neoliberal performativity. Additionally, the university is now considered an innovation ecosystem, playing a significant role in providing education, research and innovation to help create solutions to social, environmental and economic challenges. Thus, universities become central in orchestrating multi-actor innovation networks.
This presentation will discuss the establishment of an institutional unit entitled the 'Human Development Unit' (HDU) in a European start-up university. The activities of the HDU are envisioned as drivers for innovation that would enable the university as a whole to maintain its position in a fast-changing world and be ready to face adaptive challenges. In addition, the HDU provides its students, staff, and faculty with opportunities to advance their academic and professional development through engagement in programs that align with institutional values. It also serves as a connector with the broader community. The presentation will highlight the functions of the three centers which the unit will coordinate, namely the Student Development Center (SDC), the Faculty & Staff Development Center (FSDC) and the Continuing Education Center (CEC). The presentation aligns with the aim of the conference, which welcomes contributions discussing innovations and challenges encountered in HE. In particular, this presentation discusses the establishment of an innovative unit at a start-up university which will contribute to creating an institutional culture shaped by the value of academic excellence for students as well as for staff, a value that shapes and defines the functions and activities of the unit. The establishment of the proposed unit is crucial in a start-up university, both to differentiate it from its competitors and to sustain its presence given the pressures of the neoliberal HE context.

Keywords: academic excellence, globalization, human development unit, neoliberalism

Procedia PDF Downloads 134
441 Preparation and Characterization of Anti-Acne Dermal Products Based on Erythromycin β-Cyclodextrin Lactide Complex

Authors: Lacramioara Ochiuz, Manuela Hortolomei, Aurelia Vasile, Iulian Stoleriu, Marcel Popa, Cristian Peptu

Abstract:

Local antibiotic therapy is one of the most effective acne therapies. Erythromycin (ER) is a macrolide antibiotic that has been administered topically for over 30 years in the form of gels, ointments or hydroalcoholic solutions for acne therapy. The use of ER as a base for topical dosage forms raises some technological challenges due to the physicochemical properties of this substance. The main disadvantage of ER is its poor water solubility (2 mg/mL), which limits both formulation with hydrophilic bases and skin permeability. Cyclodextrins (CDs) are biocompatible cyclic oligomers of glucose with a hydrophobic core and a hydrophilic exterior. CDs are used to improve the bioavailability of drugs by increasing their solubility and/or dissolution rate after poorly water-soluble substances (such as ER) are included in the hydrophobic cavity of the CDs. Adding CDs increases the solubility and stability of the drug substance, increases the permeability of substances of low water solubility, decreases toxicity and can even reduce the active dose as a result of increased bioavailability. CDs also increase skin tolerability by reducing the irritant effect of certain substances. We complexed ER with lactide-modified β-cyclodextrin in order to improve the therapeutic effect of topically administered ER. The aims of the present study were to synthesise and characterize a new prolonged-release complex of ER with lactide-modified β-cyclodextrin (CD-LA_E), to investigate the CD-LA_E complex by scanning electron microscopy (SEM) and Fourier transform infrared spectroscopy (FTIR), and to analyse the effect of the semisolid base on the in vitro and ex vivo release characteristics of ER from the CD-LA_E complex by assessing the permeability coefficient and by fitting the release kinetics to mathematical models. SEM showed that, upon complexation, ER changes its crystal structure and enters the amorphous phase.
FTIR analysis showed that certain bands specific to groups in the ER structure shift during the encapsulation process. The CD-LA_E complex has a molar ratio of 2.12 to 1 between the lactide-modified β-cyclodextrin and ER. The three semisolid bases (2% Carbopol, 13% Lutrol 127, and an organogel based on Lutrol and isopropyl myristate) show a good capacity for incorporating the CD-LA_E complex, with an active ingredient content ranging from 98.3% to 101.5% of the declared value of 2% ER. The results of the in vitro dissolution test showed that ER solubility was significantly increased by CD encapsulation. The amount of ER released from the CD-LA_E gels was in the range of 76.23% to 89.01%, whereas gels based on plain ER released a maximum of 26.01% ER. The ex vivo dissolution test confirms the increased ER solubility achieved by complexation and supports the assumption that this process might increase ER permeability. The highest permeability coefficients were obtained for ER released from the gel based on 2% Carbopol: 33.33 μg/cm²/h in vitro and 26.82 μg/cm²/h ex vivo. The release of complexed ER proceeds by Fickian diffusion, according to the results obtained by fitting the data to the Korsmeyer-Peppas model.
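For reference, the Korsmeyer-Peppas model Mt/M∞ = k·tⁿ is usually fitted as a straight line in log-log space, with an exponent n near 0.5 indicating Fickian diffusion. The sketch below is illustrative only, not the authors' actual fitting procedure, and the synthetic data points are invented:

```python
import math

def korsmeyer_peppas_fit(times, fractions):
    """Fit log(Mt/Minf) = log(k) + n*log(t) by ordinary least squares and
    return (k, n). By convention, only data up to ~60% release are used."""
    xs = [math.log(t) for t in times]
    ys = [math.log(f) for f in fractions]
    m = len(xs)
    mean_x = sum(xs) / m
    mean_y = sum(ys) / m
    n = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    k = math.exp(mean_y - n * mean_x)
    return k, n

# Synthetic release data generated from Mt/Minf = 0.2 * t**0.5 (Fickian, n = 0.5)
times = [1, 2, 4, 8]
fracs = [0.2 * t ** 0.5 for t in times]
k, n = korsmeyer_peppas_fit(times, fracs)
print(round(k, 3), round(n, 3))  # 0.2 0.5
```

Because the synthetic data follow the power law exactly, the fit recovers k = 0.2 and n = 0.5; with real dissolution data the regression would return the best-fitting exponent, whose value is then read against the usual Fickian/non-Fickian thresholds.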

Keywords: erythromycin, acne, lactide, cyclodextrin

Procedia PDF Downloads 259
440 Deliberative Democracy: As an Approach for Analyzing Gezi Movement Public Forums

Authors: Çisem Gündüz Arabacı

Abstract:

Deliberation has been seen one of the most important components of democratic ideals especially since liberal democratic attributions have been under fire. Deliberative democracy advocates that people should participate in collective decision-making processes by other mechanisms rather than conventional ones in order to reach legitimate decisions. Deliberative democratic theory makes emphasis on deliberative communication between people and encourages them not to merely express their political opinions (through surveys and referendum) but to form those opinions through public debates. This paper focuses on deliberative democratic visions of Gezi Park Public Forums by taking deliberative democracy as theoretical basis and examining Gezi Park Public Forums in the light of core elements of deliberative democracy. Gezi Movement started on 28 May 2013 in İstanbul as a reaction to local government's revision plans for Taksim Gezi Park, spread throughout the country and created new zones in public sphere which are called Public Park Forums. During the summer of 2013, especially in İstanbul but also in other cities, people gathered in public parks, discussed and took collective decisions concerning actions which they will take. It is worth to mention that since 3 and half years some Public Park Forums are still continuing their meetings regularly in city of İzmir. This paper analyzes four 'Public Park Forums' in İzmir which are called Bornova Public Forum; Karşıyaka Public Forum, Foça Public Forum and Güzelyalı Public Forum. These Forums are under investigation in terms of their understanding of democracy and the values that support that understanding. Participant observation and in-depth interview methods are being used as research methods. 
The core elements of deliberative democracy are grouped under three main categories: common interest versus private interest, membership, and rational argument. These values are examined within each Forum in order to draw an overall picture and to make comparisons between them. Discourse analysis is used to examine the empirical data, and the paper aims to reveal how participants of public forums perceive deliberative democratic values and whether they give weight to these values.

Keywords: deliberative democracy, Gezi Park movement, public forums, social movement

Procedia PDF Downloads 312
439 Numerical Evaluation of Lateral Bearing Capacity of Piles in Cement-Treated Soils

Authors: Reza Ziaie Moayed, Saeideh Mohammadi

Abstract:

Soft soil is encountered in many civil engineering projects, such as coastal, marine and road projects. Because of the low shear strength and stiffness of soft soils, large settlement and low bearing capacity will occur under superstructure loads. This makes civil engineering activities more difficult and costly. In the case of soft soils, ground improvement is a suitable method to increase the shear strength and stiffness for engineering purposes. In recent years, the artificial cementation of soil by cement and lime has been extensively used for soft soil improvement. Cement stabilization is a well-established technique for improving soft soils: artificial cementation increases the shear strength and stiffness of natural soils. On the other hand, in soft soils, piles are commonly used to transfer loads to deeper ground. By using cement-treated soil around the piles, high bearing capacity and low settlement of piles can be achieved. In the present study, the lateral bearing capacity of short piles in cemented soils is investigated by a numerical approach. For this purpose, the three-dimensional (3D) finite difference software FLAC 3D is used. Cement-treated soil exhibits strain hardening-softening behavior because of the breaking of bonds between the cementing agent and the soil particles. To simulate such behavior, a strain hardening-softening constitutive model is used for the cement-treated soft soil. Additionally, the conventional elastic-plastic Mohr-Coulomb constitutive model and a linear elastic model are used for the stress-strain behavior of the natural soil and the pile, respectively. To determine the parameters of the constitutive models, and to verify the numerical model, the results of available triaxial laboratory tests and in-situ loading tests of piles in cement-treated soft soil are used. Different parameters are considered in a parametric study to determine which most affect the bearing capacity of piles in cement-treated soils.
In the present paper, the effects of various lengths and heights of the artificially cemented area, different diameters and lengths of the pile, and the properties of the materials are studied. The effect of the choice of constitutive model for cement-treated soils on the computed bearing capacity of the pile is also investigated.
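The Mohr-Coulomb model used above for the natural soil ties shear strength to cohesion and friction angle, τ_f = c + σ_n·tan(φ), which is also why cementation raises capacity: it increases c and φ. A minimal sketch of the criterion; the c and φ values below are illustrative placeholders, not the parameters calibrated from the triaxial tests:

```python
import math

def mohr_coulomb_tau_f(sigma_n_kpa, c_kpa, phi_deg):
    # Mohr-Coulomb shear strength: tau_f = c + sigma_n * tan(phi)
    return c_kpa + sigma_n_kpa * math.tan(math.radians(phi_deg))

# Same normal stress, illustrative parameters for soft vs. cement-treated soil
soft = mohr_coulomb_tau_f(100.0, c_kpa=5.0, phi_deg=22.0)
treated = mohr_coulomb_tau_f(100.0, c_kpa=80.0, phi_deg=35.0)
```

The treated soil's higher cohesion and friction angle give a much larger shear strength at the same confinement, which is the mechanism behind the improved lateral pile capacity; the full strain hardening-softening law additionally degrades these parameters with plastic strain as bonds break.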

Keywords: bearing capacity, cement-treated soils, FLAC 3D, pile

Procedia PDF Downloads 120
438 Identifying Necessary Words for Understanding Academic Articles in English as a Second or a Foreign Language

Authors: Stephen Wagman

Abstract:

This paper identifies three common structures in English sentences that are important for understanding academic texts, regardless of the characteristics or background of the readers or whether they are reading English as a second or a foreign language. Adapting a model from the Humanities, the explication of texts used in literary studies, the paper analyses sample sentences to reveal structures that enable the reader not only to decide which words are necessary for understanding the main ideas but to make the decision without knowing the meaning of the words. By their very syntax noun structures point to the key word for understanding them. As a rule, the key noun is followed by easily identifiable prepositions, relative pronouns, or verbs and preceded by single adjectives. With few exceptions, the modifiers are unnecessary for understanding the idea of the sentence. In addition, sentences are often structured by lists in which the items frequently consist of parallel groups of words. The principle of a list is that all the items are similar in meaning and it is not necessary to understand all of the items to understand the point of the list. This principle is especially important when the items are long or there is more than one list in the same sentence. The similarity in meaning of these items enables readers to reduce sentences that are hard to grasp to an understandable core without excessive use of a dictionary. Finally, the idea of subordination and the identification of the subordinate parts of sentences through connecting words makes it possible for readers to focus on main ideas without having to sift through the less important and more numerous secondary structures. Sometimes a main idea requires a subordinate one to complete its meaning, but usually, subordinate ideas are unnecessary for understanding the main point of the sentence and its part in the development of the argument from sentence to sentence. 
Moreover, the connecting words themselves indicate the functions of the subordinate structures. These most frequently show similarity and difference or reasons and results. Recognition of all of these structures can enable students not only to read more efficiently but also to focus their attention on the development of the argument; it is this, rather than a multitude of unknown vocabulary items, the repetition in lists, or the subordination in sentences, that is the one element necessary for comprehension of academic articles.

Keywords: development of the argument, lists, noun structures, subordination

Procedia PDF Downloads 242
437 Customer Relationship Management: An Essential Tool for Librarians

Authors: Pushkar Lal Sharma, Sanjana Singh, Umesh Kumar Sahu

Abstract:

This paper helps to understand the need for Customer Relationship Management (CRM) in libraries and why librarians should implement this marketing concept in their libraries. Like any industry, libraries face growing challenges to continuously meet customer expectations and to attract and retain users in the face of intense competition. The ability to understand customers, build relationships and market diverse services is essential when considering ways to expand service offerings and improve return on investment. Since a library is a service-oriented enterprise, the customer/user/reader/patron is the most important element of the library and information system, to whom and for whom the library offers various services. How to provide better and more efficient services to its users is the main concern of every library and information centre in the present era. The basic difference between a business enterprise and a library information system is that in a business system efficiency is measured in terms of profit or monetary gains, whereas in a library and information system efficiency is measured in terms of services; accordingly, the goals set in a business enterprise are profit-oriented, whereas the goals set in a library and information centre are service-oriented. With the explosion of information and the advancement of technology, readers have many ways to obtain information other than visiting a library. Everything is available at the click of a mouse, and library customers have become more knowledgeable and demanding in an era marked by an abundance of information resources and services. With this explosion of information in every field of knowledge and choice in the selection of services, satisfying the user has become a challenge for libraries. Accordingly, libraries have to build good relationships with their users by adopting Customer Relationship Management.
CRM refers to the methods and tools which help an organization manage its relationships with its customers in an organized way. CRM combines business strategy and technology to identify, acquire and retain good customer relationships. The goal of CRM is to optimize the management of customer information needs and interests and to increase customer satisfaction and loyalty. Implementing CRM in libraries can improve customer data and process management, customer loyalty, retention and satisfaction.

Keywords: customer relationship management, CRM, CRM tools, customer satisfaction

Procedia PDF Downloads 61
436 The Gender Digital Divide in Education: The Case of Students from Rural Area from Republic of Moldova

Authors: Bărbuță Alina

Abstract:

The inter-causal relationship between social inequalities and the digital divide raises the issue of the relation between gender and information and communication technologies (ICT), a key element in achieving sustainable development. In preparing generations as future digital citizens and for active socio-economic participation, ICT plays a key role in respecting gender equality. Although several studies over the years have shown that gender plays an important role in digital exclusion, in recent years many studies focused on economically developed or developing countries have identified an improvement in these aspects and a narrowing of the gap. By measuring students' level of digital competence, this paper aims to identify and analyse existing gender digital inequalities among students. Our analyses are based on a sample of 1526 middle school students residing in rural areas of the Republic of Moldova (54.2% girls, mean age 14.00, SD = 1.02). During the online survey they filled in a questionnaire adapted from the Youth Digital Skills Indicator (yDSI). The instrument measures the level of five digital competence areas indicated in the European Digital Competence Framework (DigComp 2.3). Our results, based on t-tests, indicate no statistically significant gender differences in the levels of digital skills in three areas: information navigation and processing; communication and interaction; problem solving. However, significant differences were identified in the level of digital skills in the areas of digital content creation [t(1425) = 4.20, p = .000] and safety [t(1421) = 2.49, p = .000], with higher scores recorded by girls. Our results contradict the general stereotype of a low level of digital competence among girls: in our sample, girls' scores are on par with boys' and even higher in knowledge related to digital content creation and online safety skills.
Additional investigation of boys' competence in digital safety is needed, as their lower scores on this dimension may indicate greater exposure to digital threats.
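The reported statistics such as t(1425) = 4.20 come from independent-samples t-tests comparing girls' and boys' mean scores. A minimal sketch of the Welch t statistic behind such a comparison; the scores below are invented toy data, not the survey responses:

```python
import math
import statistics

def welch_t(a, b):
    # Welch's independent-samples t statistic (unequal variances allowed):
    # t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(va / len(a) + vb / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

# Toy digital-content-creation scores on a 1-5 scale (illustrative only)
girls = [4.1, 3.9, 4.4, 4.0, 4.2, 4.3]
boys = [3.6, 3.8, 3.5, 3.9, 3.7, 3.4]
t_stat = welch_t(girls, boys)  # positive t => higher mean for girls
```

With the study's sample sizes, a t statistic of this magnitude is compared against the t distribution with the reported degrees of freedom to obtain the p-value.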

Keywords: digital divide, education, gender digital divide, digital literacy, remote learning

Procedia PDF Downloads 93
435 Cultural Intelligence for the Managers of Tomorrow: A Data-Based Analysis of the Antecedents and Training Needs of Today’s Business School Students

Authors: Justin Byrne, Jose Ramon Cobo

Abstract:

The growing importance of cross- or intercultural competencies (used here interchangeably) for business and management professionals is now a commonplace in both the academic and professional literature. This reflects two parallel developments. On the one hand, it is a consequence of the increased attention paid to a whole range of 'soft skills', now seen as fundamental to both individual and corporate success. On the other hand, and more specifically, the increasing demand for interculturally competent professionals is a corollary of ongoing processes of globalization, which multiply and intensify encounters between individuals and companies from different cultural backgrounds. Business schools have, for some decades, responded to the needs of the job market and their own students by providing training in intercultural skills, as they are encouraged to do by the major accreditation agencies on both sides of the Atlantic. Adapting Earley and Ang's (2003) formulation of Cultural Intelligence (CQ), this paper aims to help fill the lacunae in the current literature on intercultural training in three main ways. First, it offers an in-depth analysis of the CQ of a little-studied group: contemporary Millennial and 'Generation Z' business school students. The analysis distinguishes between the four dimensions of CQ (cognition, metacognition, motivation and behaviour) and thereby provides a detailed picture of the strengths and weaknesses in CQ of the group as a whole, as well as of different sub-groups and profiles of students.
Secondly, by crossing these individual-level findings with respondents' socio-cultural and educational data, the paper proposes and tests hypotheses regarding the relative impact and importance of four possible antecedents of intercultural skills identified in the literature: prior international experience, intercultural training, foreign language proficiency, and experience of cultural diversity in the habitual country of residence. Third, we use this analysis to suggest data-based intercultural training priorities for today's management students. These conclusions are based on the statistical analysis of the individual responses of some 300 Bachelor or Masters students at a major European business school to two online surveys: Ang, Van Dyne, et al.'s (2007) standard 20-question self-report CQ Scale, and an original questionnaire designed by the authors to collect information on each respondent's socio-demographic and educational profile relevant to our four hypotheses and explanatory variables. The data from both instruments were crossed in both descriptive statistical analysis and regression analysis. This research shows that none of the four antecedents analyzed, taken alone, has a statistically significant positive relationship with overall CQ level. The exception in this respect is the statistically significant correlation between international experience and the cognitive dimension of CQ. In contrast, the results show that the combination of international experience and foreign language skills, acting together, does have a strong overall impact on CQ levels. These results suggest that selecting and/or training students with strong foreign language skills and providing them with international experience (through multinational programmes, academic exchanges or international internships) constitutes one effective way of training the culturally intelligent managers of tomorrow.
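A finding of the form "the antecedents matter only in combination" is typically captured in a regression by an interaction term (experience × language skill) alongside the main effects. A minimal ordinary-least-squares sketch with such a term; the design, data and coefficients below are invented for illustration, not the study's survey data:

```python
def ols(X, y):
    # Solve the normal equations (X^T X) beta = X^T y by Gauss-Jordan elimination.
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        b[i] /= p
        for j in range(k):
            if j != i:
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
                b[j] -= f * b[i]
    return b

# Columns: intercept, international experience (0/1), language skill (0-2), interaction.
rows = [[1, e, l, e * l] for e in (0, 1) for l in (0, 1, 2)]
# Toy response: CQ rises mainly when both antecedents are present (interaction = 2.0).
y = [10 + 0.5 * r[1] + 0.3 * r[2] + 2.0 * r[3] for r in rows]
beta = ols(rows, y)  # recovers [10, 0.5, 0.3, 2.0]
```

In such a model, a large, significant interaction coefficient with small main effects is exactly the pattern described above: neither antecedent alone predicts CQ, but their combination does.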

Keywords: business school, cultural intelligence, millennial, training

Procedia PDF Downloads 153
434 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities

Authors: Anudeep Appe, Bhanu Poluparthi, Lakshmi Kasivajjula, Udai Mv, Sobha Bagadi, Punya Modi, Aditya Singh, Hemanth Gunupudi, Spenser Troiano, Jeff Paul, Justin Stovall, Justin Yamamoto

Abstract:

The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework which helps identify the factors impacting the market share of a healthcare provider facility or a hospital (from here on termed a facility) is of key importance. This pilot study aims at developing a data-driven machine learning regression framework which aids strategists in formulating key decisions to improve the facility's market share, which in turn improves the quality of healthcare services. The US (United States) healthcare business is chosen for the study, with data spanning 60 key facilities in Washington State and about 3 years of history. In the current analysis, market share is defined as the ratio of the facility's encounters to the total encounters among the group of potential competitor facilities. The study proposes a two-pronged approach: competitor identification, and a regression approach to evaluate and predict market share. A model-agnostic technique, SHAP, is leveraged to quantify the relative importance of the features impacting market share. Typical techniques in the literature quantify the degree of competitiveness among facilities by empirically calculating a competitive factor to interpret the severity of competition. The proposed method instead identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust since it is data-driven, which minimizes the bias of empirical techniques. The DAGs factor in partial correlations at various segregations and key demographics of facilities, along with a placeholder to factor in various business rules (e.g., quantifying patient exchanges, provider references, and sister facilities). Multiple groups of competitors among facilities are identified.
Leveraging the identified competitors, a Random Forest regression model was developed and fine-tuned to predict market share. To identify the key drivers of market share at an overall level, the permutation feature importance of the attributes was calculated. For relative quantification of features at the facility level, SHAP (SHapley Additive exPlanations), a model-agnostic explainer, was incorporated. This helped to identify and rank the attributes at each facility which impact its market share. The approach thus amalgamates two popular and efficient modeling practices, viz., machine learning with graphs and tree-based regression techniques, to reduce bias and help drive strategic business decisions.
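Permutation feature importance, the overall-level ranking step described above, measures how much a fitted model's error grows when one feature's values are randomly shuffled. A minimal model-agnostic sketch; the linear "model" and toy data stand in for the fitted Random Forest and the facility attributes, which are not reproduced here:

```python
import random

def permutation_importance(predict, X, y, col, n_repeats=30, seed=0):
    # Mean increase in MSE when column `col` is shuffled: a model-agnostic
    # measure of how much the model relies on that feature.
    rng = random.Random(seed)

    def mse(y_hat):
        return sum((a - b) ** 2 for a, b in zip(y_hat, y)) / len(y)

    base = mse([predict(row) for row in X])
    increases = []
    for _ in range(n_repeats):
        shuffled = [row[col] for row in X]
        rng.shuffle(shuffled)
        Xp = [row[:col] + [v] + row[col + 1:] for row, v in zip(X, shuffled)]
        increases.append(mse([predict(row) for row in Xp]) - base)
    return sum(increases) / n_repeats

# Toy data: "market share" depends strongly on feature 0, weakly on feature 1.
rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [3.0 * a + 0.1 * b for a, b in X]
model = lambda row: 3.0 * row[0] + 0.1 * row[1]  # stand-in for the fitted forest
imp_strong = permutation_importance(model, X, y, col=0)
imp_weak = permutation_importance(model, X, y, col=1)
```

Shuffling the strong driver degrades predictions far more than shuffling the weak one, which is how the framework ranks attributes; SHAP then refines this ranking per facility.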

Keywords: competition, DAGs, facility, healthcare, machine learning, market share, random forest, SHAP

Procedia PDF Downloads 85
433 A Modified Estimating Equations in Derivation of the Causal Effect on the Survival Time with Time-Varying Covariates

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

A systematic observation from a defined time origin up to a failure or censoring event is known as survival data. Survival analysis is a major area of interest in biostatistics and biomedical research. At the heart of most scientific and medical research inquiries lies a question of causality. Thus, the main concern of this study is to investigate the causal effect of treatment on survival time conditional on possibly time-varying covariates. The theory of causality differs from the simple association between the response variable and predictors: causal estimation is a scientific concept for comparing the pragmatic effect of two or more experimental arms. To evaluate the average treatment effect on the survival outcome, the estimating equation was adjusted for time-varying covariates under semiparametric transformation models. The proposed model yields consistent estimators for the unknown parameters and the unspecified monotone transformation functions. In this article, the proposed method estimates an unbiased average causal effect of treatment on the survival time of interest. The modified estimating equations of semiparametric transformation models have the advantage of including time-varying effects in the model. Finally, the finite-sample performance of the estimators is demonstrated through simulation and the Stanford heart transplant data. To this end, the average effect of a treatment on survival time is estimated after adjusting for biases arising from the high correlation of left-truncation and possibly time-varying covariates. The bias in the covariates is corrected by estimating the density function of the left-truncation time. Besides, to relax the independence assumption between failure time and truncation time, the model incorporates the left-truncation variable as a covariate.
Moreover, an expectation-maximization (EM) algorithm iteratively obtains the unknown parameters and the unspecified monotone transformation functions. To summarize the idea, the ratio of the cumulative hazard functions between the treated and untreated experimental groups captures the average causal effect for the entire population.
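The cumulative hazard functions compared above can be estimated nonparametrically from right-censored data with the Nelson-Aalen estimator, H(t) = Σ d_i/n_i over event times. A minimal sketch with invented event times, handling ties one at a time and ignoring the left-truncation adjustment that is central to the paper's method:

```python
def nelson_aalen(times, events):
    # Nelson-Aalen cumulative hazard: at each failure time add 1/(number at risk).
    # events[i] = 1 for a failure, 0 for a censored observation.
    pairs = sorted(zip(times, events))
    at_risk, H, curve = len(pairs), 0.0, {}
    for t, d in pairs:
        if d:
            H += 1.0 / at_risk
            curve[t] = H
        at_risk -= 1
    return curve  # {event time: H(t)}

# Invented follow-up data (months): the control arm fails faster.
treated = nelson_aalen([2, 3, 5, 7, 9, 11], [1, 0, 1, 1, 0, 1])
control = nelson_aalen([1, 2, 2, 4, 5, 6], [1, 1, 1, 1, 1, 0])
```

In this toy data the control arm accumulates hazard faster over the shared follow-up (e.g., at month 5), so the control/treated cumulative hazard ratio exceeds one, the kind of contrast the paper reads as an average causal effect once truncation and time-varying covariates are properly adjusted for.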

Keywords: a modified estimation equation, causal effect, semiparametric transformation models, survival analysis, time-varying covariate

Procedia PDF Downloads 169
432 FMCW Doppler Radar Measurements with Microstrip Tx-Rx Antennas

Authors: Yusuf Ulaş Kabukçu, Si̇nan Çeli̇k, Onur Salan, Mai̇de Altuntaş, Mert Can Dalkiran, Gökseni̇n Bozdağ, Metehan Bulut, Fati̇h Yaman

Abstract:

This study presents a more compact implementation of the 2.4 GHz MIT Coffee Can Doppler Radar at a 2.6 GHz operating frequency. The main difference of our prototype lies in the use of microstrip antennas, which makes it possible to transport the radar on a small robotic vehicle. We have designed our radar system with two different channels, Tx and Rx. The system mainly consists of a Voltage Controlled Oscillator (VCO) source, low noise amplifiers, microstrip antennas, a splitter, a mixer, a low pass filter, and the necessary RF connectors and cables. The two microstrip antennas, a single element for the transmitter and an array for the receiver channel, were designed, fabricated and verified by experiments. The system has two operation modes: speed detection and range detection. If the operation mode switch is 'Off', only a CW signal is transmitted, for speed measurement. When the switch is 'On', the CW signal is frequency-modulated and range detection is possible. In speed detection mode, the high-frequency (2.6 GHz) signal is generated by the VCO and then amplified to reach a reasonable level of transmit power. Before the amplified signal is transmitted through a microstrip patch antenna, a splitter is used in order to compare the frequencies of the transmitted and received signals. Half of the amplified signal (LO) is forwarded to a mixer, which compares the frequencies of the transmitted and received (RF) signals and produces the IF output, in other words the Doppler frequency information. The IF output is then filtered and amplified so that the signal can be processed digitally. The filtered and amplified signal carrying the Doppler frequency is fed to the audio input of a computer, and the Doppler frequency is displayed as a speed change in a figure via a Matlab script. According to experimental field measurements, the accuracy of the speed measurement is approximately 90%. In range detection mode, a chirp signal is used to form an FM chirp.
This FM chirp makes it possible to determine the range of the target, since the Doppler frequency measured with CW alone is not enough for range detection. Such an FMCW Doppler radar may be used in border security applications, since it is capable of both speed and range detection.
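The two modes rest on two standard relations: the CW Doppler shift f_d = 2·v·f0/c, and the FMCW range from the beat frequency, R = c·f_b·T/(2·B) for a chirp of bandwidth B and sweep time T. A short sketch using the prototype's 2.6 GHz carrier; the target speed, beat frequency, and chirp parameters below are example values, not measurements from the paper:

```python
C = 3.0e8   # speed of light (m/s)
F0 = 2.6e9  # carrier frequency (Hz), as in the prototype

def doppler_shift_hz(speed_mps):
    # CW mode: f_d = 2 * v * f0 / c
    return 2.0 * speed_mps * F0 / C

def fmcw_range_m(beat_hz, sweep_bw_hz, sweep_time_s):
    # FMCW mode: R = c * f_b * T / (2 * B)
    return C * beat_hz * sweep_time_s / (2.0 * sweep_bw_hz)

fd = doppler_shift_hz(20.0)               # ~347 Hz for a 20 m/s target
r = fmcw_range_m(1000.0, 50e6, 1e-3)      # 3 m for a 1 kHz beat, 50 MHz / 1 ms chirp
```

A Doppler shift of a few hundred hertz is conveniently within a sound card's audio band, which is why the IF output can be digitized through the computer's audio input.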

Keywords: doppler radar, FMCW, range detection, speed detection

Procedia PDF Downloads 388
431 Fine Needle Aspiration Biopsy of Thyroid Nodules

Authors: Ilirian Laçi, Alketa Spahiu

Abstract:

Large goiters of the thyroid gland can be observed in everyday life by simple inspection. Medical practices see patients with palpable nodules of the thyroid gland, mainly nodules of about 10 millimeters in size. Furthermore, many cases that are negative on palpation prove positive on ultrasound examination. The use of ultrasound for diagnosis has therefore increased the number of patients diagnosed with thyroid nodules over the last couple of decades in all countries, Albania included. Thus, an increased number of patients affected by this pathology has been recorded, among whom female patients dominate. Demographically, the capital shows high numbers due to its large population, but of interest is the high incidence in areas distant from the sea. While no significant link to related pathologies was evidenced, a hereditary element was evident in thyroid nodules. When we speak of thyroid nodules, we should consider the hyperplasia, neoplasia, and inflammatory diseases that cause them. This increase parallels the worldwide rise in the incidence of thyroid nodules; malignant cases account for about 5% and do not depend on nodule size. Given that most thyroid nodules are benign, the main objective of their examination is to distinguish benign from malignant cases in order to avoid undue surgery. The subjects of this study were 212 patients who underwent fine-needle aspiration (FNA) under ultrasound guidance at the Medical University Center of Tirana. All the patients came to the Mother Teresa University Hospital from public and private hospitals and other polyclinics. These patients had an ultrasound examination before visiting the Center of Nuclear Medicine for a scintigraphy of the thyroid gland between September 2016 and September 2017.
For correlation, all patients had been examined by ultrasound of the thyroid gland prior to scintigraphy. The ultrasound evaluation included the number of nodules, their size, their solid, cystic, or solid-cystic structure, echogenicity according to the gray scale, the presence of calcification, the presence of lymph nodes, and the presence of adenopathy, together with the correlation of the cytology results from the Laboratory of Pathological Anatomy of the Medical University Center of Tirana.

Keywords: thyroid nodes, fine needle aspiration, ultrasound, scintigraphy

Procedia PDF Downloads 95
430 Presenting of 'Local Wishes Map' as a Tool for Promoting Dialogue and Developing Healthy Cities

Authors: Ana Maria G. Sperandio, Murilo U. Malek-Zadeh, João Luiz de S. Areas, Jussara C. Guarnieri

Abstract:

Intersectoral governance is a requirement for developing healthy cities. However, this is difficult to achieve, especially in regions with low resources. Therefore, a low-cost investigative procedure was developed to diagnose sectoral wishes related to urban planning and health promotion. The procedure is composed of two phases, which can be applied to different groups in order to compare the results. The first phase is a conversation guided by a list of questions. Some of those questions aim to gather information about how individuals understand concepts such as the healthy city or health promotion and what they believe constitutes the relation between urban planning and urban health. Other questions investigate local issues and how citizens would like to promote dialogue between sectors. In the second phase, individuals stand around a map of the investigated city (or city region) and are asked to represent their wishes on it. They can do so by writing text notes or placing icons on it, with each icon representing a city element, for example, some trees, a square, a playground, a hospital, or a cycle track. After the groups have represented their wishes, the map can be photographed, and the results from distinct groups can be compared. This procedure was conducted in a small city in Brazil (Holambra) in 2017, the first of the four years of the mayor's term. The prefecture asked for this tool in order to make Holambra part of the Potential Healthy Municipalities Network in Brazil. Two sectors were investigated: the government and the urban population. At the end of the investigation, the intersection of the maps of the two groups (i.e., population and government) was used to create a map of common wishes. The material produced can therefore be used as a guide for promoting dialogue between sectors and as a tool for monitoring policy progress.
The report of this procedure was directed to public managers, so they could see the wishes they share with local populations and use this tool as a guide for creating urban policies intended to enhance health promotion and to develop a healthy city, even under low-resource conditions.

Keywords: governance, health promotion, intersectorality, urban planning

Procedia PDF Downloads 132
429 Malpractice, Even in Conditions of Compliance With the Rules of Dental Ethics

Authors: Saimir Heta, Kers Kapaj, Rialda Xhizdari, Ilma Robo

Abstract:

Despite the existence of different dental specialties, the dentist-patient relationship is unique in the very fact that the treatment is performed by one doctor, and the patient identifies any malpractice as part of that doctor's practice; this is in complete contrast to medical treatments, where the patient may be treated by a team of doctors for a specific pathology. The rules of dental ethics are almost the same as the rules of medical ethics. The appearance of dental malpractice affects exactly this two-party relationship, created on the basis of professionalism, between the dentist and the patient, but within much narrower individual boundaries than in cases of medical malpractice. Main text: Malpractice can have different causes, ranging from professional negligence to a lack of professional knowledge on the part of the dentist who undertakes the treatment. It should always be kept in perspective that we are not talking about a dentist who goes to work with the intention of harming patients. Malpractice can also be a consequence of the impossibility, for anatomical or physiological reasons of the tooth under treatment, of realizing the predetermined dental treatment plan. On the other hand, the dentist is an individual who can be affected by health conditions, or have vices that affect his or her systemic health, which under these conditions can cause malpractice. So, depending on the reason that led to the malpractice, the legal treatment of the dentist who committed it also varies, assessing in particular whether the malpractice occurred under conditions of compliance with the rules of dental ethics.
Conclusions: Deviation from the predetermined treatment plan is the minimal sign of malpractice, and the latter should not be linked definitively only to cases of difficult dental treatments. Identifying the reason for the malpractice is the initial element that makes the difference in its legal treatment, and the assessment of the malpractice committed by the dentist must be based on the legislation in force, which, it must be said, has its specific variations in different states. Malpractice should be addressed in lectures and in the continuing education of professionals, because this serves as a way of building professional experience so that the same mistake is not repeated several times by different professionals.

Keywords: dental ethics, malpractice, negligence, legal basis, continuing education, dental treatments

Procedia PDF Downloads 54
428 Regional Flood Frequency Analysis in Narmada Basin: A Case Study

Authors: Ankit Shah, R. K. Shrivastava

Abstract:

Floods and droughts are two main features of hydrology which affect human life. Floods are natural disasters which cause millions of rupees' worth of damage each year in India and the whole world, bringing destruction of life and property. An accurate estimate of the flood damage potential is a key element of an effective, nationwide flood damage abatement program. Also, the increase in the demand for water due to growth in population, industry and agriculture has made it clear that, though renewable, water cannot be taken for granted. We have to optimize the use of water according to circumstances and conditions, and need to harness it, which can be done by the construction of hydraulic structures. For the safe and proper functioning of hydraulic structures, we need to predict the flood magnitude and its impact. Hydraulic structures play a key role in harnessing and optimizing flood water, which in turn results in the safe and maximum use of the water available. Hydraulic structures are mainly constructed at ungauged sites. There are two methods by which we can estimate floods, viz., the generation of unit hydrographs and flood frequency analysis. In this study, regional flood frequency analysis has been employed. There are many methods for regional flood frequency analysis, viz., the Index Flood Method, Natural Environment Research Council (NERC) methods, the Multiple Regression Method, etc. However, none of the methods can be considered universal for every situation and location. The Narmada basin is located in Central India and is drained by many tributaries, most of which are ungauged. Therefore it is very difficult to estimate floods on these tributaries and in the main river. In this study, Artificial Neural Networks (ANNs) and the Multiple Regression Method are used for the determination of regional flood frequency.
The annual peak flood data of 20 sites gauging sites of Narmada Basin is used in the present study to determine the Regional Flood relationships. Homogeneity of the considered sites is determined by using the Index Flood Method. Flood relationships obtained by both the methods are compared with each other, and it is found that ANN is more reliable than Multiple Regression Method for the present study area.
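As a rough illustration of the regression approach described above, the sketch below fits a log-log regression of mean annual peak flood on catchment area for a few gauged sites and applies it at an ungauged site. The areas, discharges, and single-predictor form are illustrative assumptions only, not the study's actual Narmada data or regressors.

```python
import numpy as np

# Hypothetical catchment areas (km^2) and mean annual peak floods (m^3/s)
# for a few gauged sites; the values are illustrative only.
area = np.array([120.0, 450.0, 980.0, 2100.0, 5600.0])
qmean = np.array([85.0, 240.0, 410.0, 760.0, 1650.0])

# Fit the log-log regression log(Q) = log(c) + m*log(A), i.e. Q = c * A^m,
# by ordinary least squares.
X = np.column_stack([np.ones_like(area), np.log(area)])
coef, *_ = np.linalg.lstsq(X, np.log(qmean), rcond=None)
c, m = np.exp(coef[0]), coef[1]

def predict_flood(ungauged_area_km2):
    """Estimate the mean annual peak flood at an ungauged site (m^3/s)."""
    return c * ungauged_area_km2 ** m

print(round(predict_flood(1500.0), 1))
```

In a full regional analysis, further catchment descriptors (rainfall, slope, drainage density) would enter the regression, and an ANN would be trained on the same predictors for comparison.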

Keywords: artificial neural network, index flood method, multi-layer perceptrons, multiple regression, Narmada basin, regional flood frequency

Procedia PDF Downloads 415
427 Ethical Decision-Making in AI and Robotics Research: A Proposed Model

Authors: Sylvie Michel, Emmanuelle Gagnou, Joanne Hamet

Abstract:

Researchers in the fields of AI and Robotics frequently encounter ethical dilemmas throughout their research endeavors. Various ethical challenges have been pinpointed in the existing literature, including biases and discriminatory outcomes, diffusion of responsibility, and a deficit in transparency within AI operations. This research aims to pinpoint these ethical quandaries faced by researchers and shed light on the mechanisms behind ethical decision-making in the research process. By synthesizing insights from existing literature and acknowledging prevalent shortcomings, such as overlooking the heterogeneous nature of decision-making, non-accumulative results, and a lack of consensus on numerous factors due to limited empirical research, the objective is to conceptualize and validate a model. This model will incorporate influences from individual perspectives and situational contexts, considering potential moderating factors in the ethical decision-making process. Qualitative analyses were conducted based on direct observation of an AI/Robotics research team focusing on collaborative robotics for several months. Subsequently, semi-structured interviews with 16 team members were conducted. The entire process took place during the first semester of 2023. Observations were analyzed using an analysis grid, and the interviews underwent thematic analysis using Nvivo software. An initial finding involves identifying the ethical challenges that AI/robotics researchers confront, underlining a disparity between practical applications and theoretical considerations regarding ethical dilemmas in the realm of AI. Notably, researchers in AI prioritize the publication and recognition of their work, sparking the genesis of these ethical inquiries. 
Furthermore, this article illustrates that researchers tend to embrace a consequentialist ethical framework concerning safety (for humans engaging with robots/AI), worker autonomy in relation to robots, and the societal implications of labor (can robots displace jobs?). A second significant contribution entails proposing a model for ethical decision-making within the AI/Robotics research sphere. The proposed model adopts a process-oriented approach, delineating various research stages (topic proposal, hypothesis formulation, experimentation, conclusion, and valorization). Across these stages and the ethical queries they entail, a comprehensive four-point account of ethical decision-making is presented: recognition of the moral quandary; moral judgment, signifying the decision-maker's aptitude to discern the morally righteous course of action; moral intention, reflecting the ability to prioritize moral values above others; and moral behavior, denoting the application of moral intention to the situation. Variables such as political inclinations ((anti-)capitalism, environmentalism, veganism) appear to wield significant influence, and age emerges as a noteworthy moderating factor. AI and robotics researchers are continually confronted with ethical dilemmas during their research endeavors, necessitating thoughtful decision-making. The contribution involves introducing a contextually tailored model, derived from meticulous observations and insightful interviews, enabling the identification of factors that shape ethical decision-making at different stages of the research process.

Keywords: ethical decision making, artificial intelligence, robotics, research

Procedia PDF Downloads 72
426 Using Optical Character Recognition to Manage Unstructured Disaster Data in a Smart Disaster Management System

Authors: Dong Seop Lee, Byung Sik Kim

Abstract:

In the 4th Industrial Revolution, various intelligent technologies have been developed in many fields, and these artificial intelligence technologies are applied in a range of services, including disaster management. Disaster information management does not merely support disaster work; it is the foundation of smart disaster management, which also draws on historical disaster information through artificial intelligence technology. Disaster information is one of the important elements of the entire disaster cycle. Disaster information management refers to the act of managing and processing electronic data about the disaster cycle, from occurrence through progress, response, and planning. However, information on status control, response, and recovery from natural and social disaster events is mainly managed in structured and unstructured reports that exist as handouts or hard copies. Such unstructured data are often lost or destroyed due to inefficient management, so it is necessary to manage unstructured data as disaster information. In this paper, an Optical Character Recognition (OCR) approach is used to convert handouts, hard copies, and images of reports, whether printed or generated by scanners, into electronic documents. The converted disaster data are then organized into the disaster code system as disaster information and stored in the disaster database system. Gathering and creating disaster information from unstructured data through OCR is an important element of smart disaster management. In this work, a character recognition rate of over 90% for Korean characters was achieved using an upgraded OCR engine; since the recognition rate depends on the font, size, and special symbols of the characters, it was improved through a machine learning algorithm. The converted structured data are managed in a standardized disaster information form connected to the disaster code system, which allows structured information to be stored and retrieved across the entire disaster cycle, including historical disaster progress, damages, response, and recovery. The expected outcome of this research is its application to smart disaster management and decision making by combining artificial intelligence technologies with historical big data.
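The step of organizing OCR output into a disaster code system can be pictured as keyword-to-code mapping over recognized text lines. The sketch below is a minimal stand-in: the code table and sample lines are hypothetical, since the paper's actual Korean disaster code system is not reproduced in the abstract.

```python
import re

# Hypothetical mapping from disaster keywords to codes; the real code
# system in the paper differs and is illustrative here only.
DISASTER_CODES = {
    "flood": "NAT-01",
    "typhoon": "NAT-02",
    "earthquake": "NAT-03",
    "fire": "SOC-01",
}

def classify_ocr_line(line):
    """Return (code, cleaned line) for the first disaster keyword found,
    or None if the line carries no disaster content."""
    for keyword, code in DISASTER_CODES.items():
        if re.search(keyword, line, flags=re.IGNORECASE):
            return (code, line.strip())
    return None

# Sample OCR output lines, as might come from scanned report pages.
ocr_output = [
    "2019-07-12 Typhoon damage report: 3 bridges closed",
    "Budget memo (no disaster content)",
    "Flood recovery status, Nakdong river basin",
]
records = [r for r in map(classify_ocr_line, ocr_output) if r]
print(records)
```

A production pipeline would replace the keyword table with the standardized code system and feed the records into the disaster database.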

Keywords: disaster information management, unstructured data, optical character recognition, machine learning

Procedia PDF Downloads 120
425 Laminar Periodic Vortex Shedding over a Square Cylinder in Pseudoplastic Fluid Flow

Authors: Shubham Kumar, Chaitanya Goswami, Sudipto Sarkar

Abstract:

Pseudoplastic fluid flow (n < 1, n being the power-law index) is found in the food, pharmaceutical, and process industries and has a very complex flow nature. To our knowledge, little research has been done on this kind of flow, even at very low Reynolds numbers. In the present computation, we consider unsteady laminar flow over a square cylinder in a pseudoplastic flow environment. For a Newtonian fluid, laminar vortex shedding occurs in the range Re = 47-180; in this problem, we consider Re = 100 (Re = U∞a/ν, where U∞ is the free-stream velocity, a is the side of the cylinder, and ν is the kinematic viscosity of the fluid). The pseudoplastic range is chosen from close to Newtonian (n = 0.8) to very high pseudoplasticity (n = 0.1). The flow domain is constructed using Gambit 2.2.30, which is also used to generate the mesh and impose the boundary conditions. In all cases, the domain size is 36a × 16a with 280 × 192 grid points in the streamwise and flow-normal directions, respectively; the domain and grid were selected after a thorough grid-independence study at n = 1.0. Fine, uniform grid spacing is used close to the square cylinder to capture the upper and lower shear layers shed from it; away from the cylinder, the grid is non-uniform and stretched in all directions. A velocity inlet (u = U∞), a pressure outlet (Neumann condition), and symmetry conditions (free-slip: du/dy = 0, v = 0) at the upper and lower domain boundaries are used for this simulation, with a no-slip wall (u = v = 0) on the square cylinder surface. The fully conservative 2-D unsteady Navier-Stokes equations are discretized and solved with Ansys Fluent 14.5, using the SIMPLE algorithm within the finite volume method, the default pressure-velocity coupling scheme in Fluent. The results obtained for Newtonian fluid flow agree well with previous work, supporting Fluent's usefulness in academic research. A detailed analysis of the instantaneous and time-averaged flow fields is carried out for both Newtonian and pseudoplastic fluid flows. It is observed that the drag coefficient increases continuously as n is reduced, and the vortex shedding phenomenon changes at n = 0.4 due to flow instability. These are some of the notable findings for the laminar periodic vortex shedding regime in a pseudoplastic flow environment.
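The shear-thinning behavior underlying these results follows from the Ostwald-de Waele power-law model, in which the apparent viscosity falls with shear rate for n < 1. The sketch below evaluates this trend at a characteristic shear rate U∞/a; the consistency index K and the flow scales are illustrative assumptions, not the paper's simulation parameters.

```python
def apparent_viscosity(K, n, shear_rate):
    """Ostwald-de Waele power-law model: mu_app = K * gamma_dot**(n - 1).
    For n = 1 this reduces to a Newtonian fluid with viscosity K."""
    return K * shear_rate ** (n - 1.0)

# Illustrative values: consistency index K (Pa.s^n), free-stream
# velocity U (m/s), cylinder side a (m).
K, U, a = 0.01, 1.0, 0.01
char_shear = U / a  # characteristic shear rate, 1/s

# Apparent viscosity drops as n decreases at shear rates above 1/s,
# i.e. the fluid becomes increasingly shear-thinning.
for n in (1.0, 0.8, 0.4, 0.1):
    print(n, apparent_viscosity(K, n, char_shear))
```

In the simulations this variable viscosity enters the stress term of the Navier-Stokes equations, which is why the shear layers and the shedding dynamics change with n.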

Keywords: Ansys Fluent, CFD, periodic vortex shedding, pseudoplastic fluid flow

Procedia PDF Downloads 190
424 Research on the Conservation Strategy of Territorial Landscape Based on Characteristics: The Case of Fujian, China

Authors: Tingting Huang, Sha Li, Geoffrey Griffiths, Martin Lukac, Jianning Zhu

Abstract:

Territorial landscapes have gradually lost their typical characteristics during long-term human activity. In order to protect the integrity of regional landscapes, it is necessary to characterize, evaluate, and protect them in a graded manner. Taking Fujian, China, as an example, the study classifies the landscape characters of the site at the regional, middle, and detailed scales. A multi-scale approach combining parametric and holistic methods is used to classify and partition the landscape character types (LCTs) and landscape character areas (LCAs) at different scales, and a multi-element landscape assessment approach is adopted to explore conservation strategies for the landscape character. First, multiple elements of geography, nature, and the humanities were selected as the basis of assessment according to the scale. Second, the study takes a parametric approach to the classification and partitioning of landscape character, applying Principal Component Analysis and a two-stage cluster analysis (K-means and GMM) in MATLAB to obtain the LCTs, combined with the Canny edge detection algorithm to obtain landscape character contours; the LCTs and LCAs are then corrected through field surveys and manual identification. Finally, the study adopts a landscape sensitivity assessment method to analyze landscape character conservation and formulates five strategies for different LCAs: conservation, enhancement, restoration, creation, and combination. This multi-scale identification approach can efficiently integrate multiple types of landscape character elements, reduce the difficulty of broad-scale operations in landscape character conservation, and provide a basis for conservation strategies. Grounded in the natural background and the restoration of regional characteristics, the results of the landscape character assessment are scientific and objective and can provide a strong reference for regional- and national-scale territorial spatial planning.
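The parametric stage (dimensionality reduction followed by clustering) can be sketched in a few lines. The study ran PCA and K-means/GMM in MATLAB; the Python version below, on synthetic stand-in data for per-cell landscape attributes, only illustrates the shape of the pipeline, not the paper's variables or cluster counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-cell landscape attributes (e.g. elevation,
# slope, land cover); two well-separated groups so the sketch is easy
# to verify.
cells = np.vstack([
    rng.normal(0.0, 0.3, size=(50, 4)),
    rng.normal(3.0, 0.3, size=(50, 4)),
])

# Stage 1: PCA via SVD, keeping the first two principal components.
centered = cells - cells.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T

# Stage 2: a minimal K-means (k = 2) on the PCA scores.
centers = scores[[0, -1]]  # crude init: one point from each group
for _ in range(10):
    labels = np.argmin(
        ((scores[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([scores[labels == k].mean(axis=0) for k in range(2)])

print(sorted(np.bincount(labels)))  # cluster sizes
```

In the actual workflow the cluster labels correspond to LCTs, which are then delineated into LCAs with edge detection and corrected in the field.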

Keywords: parameterization, multi-scale, landscape character identify, landscape character assessment

Procedia PDF Downloads 90
423 Standard Essential Patents for Artificial Intelligence Hardware and the Implications for Intellectual Property Rights

Authors: Wendy de Gomez

Abstract:

Standardization is a critical element in the ability of a society to reduce uncertainty, subjectivity, misrepresentation, and interpretation while simultaneously contributing to innovation. Technological standardization codifies specific operationalization through legal instruments that provide rules of development, expectation, and use. In the current emerging technology landscape, Artificial Intelligence (AI) hardware as a general-use technology has seen incredible growth, as evidenced by AI technology patents filed between 2012 and 2018 in the United States Patent and Trademark Office (USPTO) AI dataset. However, as outlined in the 2023 United States Government National Standards Strategy for Critical and Emerging Technology, the codification of emerging technologies such as AI through standardization has not kept pace with their actual proliferation. This gap has the potential to cause significantly divergent downstream outcomes for AI in both the short and long term. This original empirical research provides an overview of standardization efforts around AI in different geographies and gives background on standardization law. It quantifies the longitudinal trend of AI hardware patents in the USPTO AI dataset. It seeks evidence of existing Standard Essential Patents (SEPs) among these AI hardware patents through a text analysis of the statement of patent history and the field of the invention in Patent Vector, and it examines their determination as SEPs and their inclusion in existing AI technology standards across the four main AI standards bodies: the European Telecommunications Standards Institute (ETSI); the International Telecommunication Union Telecommunication Standardization Sector (ITU-T); the Institute of Electrical and Electronics Engineers (IEEE); and the International Organization for Standardization (ISO). Once the analysis is complete, the paper discusses both the theoretical and operational implications of FRAND licensing agreements for the owners of these Standard Essential Patents in the United States court and administrative systems. It concludes with an evaluation of how Standard Setting Organizations (SSOs) can work more effectively with SEP owners through various intellectual property mechanisms, such as patent pools.
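The text-analysis step (screening patent fields for standards-body references as evidence of SEP status) can be sketched as a keyword scan. The records below are hypothetical; the real analysis ran over USPTO AI-dataset fields in Patent Vector, which are not reproduced in the abstract.

```python
import re

# Hypothetical patent records standing in for USPTO AI-dataset entries.
patents = [
    {"id": "US-1", "field": "AI accelerator conforming to IEEE floating-point standards"},
    {"id": "US-2", "field": "Neural processing unit for edge inference"},
    {"id": "US-3", "field": "Codec hardware per ETSI and ITU-T standards"},
]

# The four standards bodies named in the study.
SDO_PATTERN = re.compile(r"\b(ETSI|ITU-T|IEEE|ISO)\b")

def sep_candidates(records):
    """Flag patents whose field-of-invention text cites a standards body,
    as candidate Standard Essential Patents for closer review."""
    return [p["id"] for p in records if SDO_PATTERN.search(p["field"])]

print(sep_candidates(patents))
```

Matches would then be checked manually against the bodies' published standards and declared-SEP databases, since a keyword hit alone does not establish essentiality.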

Keywords: patents, artificial intelligence, standards, FRAND agreements

Procedia PDF Downloads 78
422 The Relationship between Moral Hazard, Corporate Governance, and Earnings Forecast Quality in the Tehran Stock Exchange

Authors: Fatemeh Rouhi, Hadi Nassiri

Abstract:

Earnings forecasts are a key element in economic decisions. However, conflicts of interest in financial reporting, complexity, and a lack of direct access to information have led to information asymmetry between individuals within the organization and external investors and creditors. This asymmetry gives rise to adverse selection and moral hazard in investors' decisions and makes it difficult for users to assess the underlying data directly. In this regard, corporate governance plays a crucial role: it encompasses controls and procedures to ensure that management does not act against the interests of the company and instead works to maximize shareholder and company value. This study therefore attempts to establish the relationship between moral hazard and corporate governance and the earnings forecast quality of companies operating in the capital market. Drawing on the theoretical basis of the research, two main hypotheses and several sub-hypotheses are presented and examined using the panel-data method; conclusions are drawn at a 95% confidence level according to the significance of the model and of each independent variable. In examining the models, the Chow test was first used to determine whether the panel-data or the pooled method should be applied; the Hausman test was then used to choose between random effects and fixed effects. The findings show that most of the moral hazard variables are positively associated with earnings forecast quality: as moral hazard increases, the earnings forecast quality of companies listed on the Tehran Stock Exchange increases. Among the corporate governance variables, board independence has a significant relationship with earnings forecast accuracy and earnings forecast bias, but the relationship between board size and earnings forecast quality is not statistically significant.
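The Hausman test mentioned above compares the fixed-effects and random-effects estimates: H = (b_FE - b_RE)' (V_FE - V_RE)^(-1) (b_FE - b_RE), distributed chi-squared under the null that random effects are consistent. The sketch below computes the statistic for made-up coefficient vectors and covariance matrices, not the paper's Tehran Stock Exchange estimates.

```python
import numpy as np

def hausman(b_fe, b_re, v_fe, v_re):
    """Hausman statistic H = d' (V_fe - V_re)^-1 d, with d = b_fe - b_re.
    Under H0 (random effects consistent and efficient), H is chi-squared
    with len(d) degrees of freedom."""
    d = b_fe - b_re
    return float(d @ np.linalg.inv(v_fe - v_re) @ d)

# Illustrative fixed- and random-effects estimates for two regressors.
b_fe = np.array([0.52, -0.18])
b_re = np.array([0.47, -0.15])
v_fe = np.array([[0.0040, 0.0005], [0.0005, 0.0030]])
v_re = np.array([[0.0025, 0.0003], [0.0003, 0.0020]])

H = hausman(b_fe, b_re, v_fe, v_re)
# Compare H to the chi-squared critical value (5.99 at 5%, dof = 2):
# below it, random effects are not rejected; above it, use fixed effects.
print(round(H, 2))
```

With these illustrative numbers the statistic falls below the 5% critical value, so the random-effects specification would be retained.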

Keywords: corporate governance, earnings forecast quality, moral hazard, financial sciences

Procedia PDF Downloads 315