Search results for: zernike moment feature descriptor

431 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting such inherently non-normal behavior is considered. This distribution has fatter tails than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to the maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but also substantially more efficient than the commonly used moment estimates or the least square estimates, which are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally and, hence, the well-known least square method is considered to be a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is carried out for multiple linear regression models with random errors following a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation, etc.
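
The linearization step described above can be written out explicitly. The following is a minimal, generic sketch of the idea, using placeholder symbols (g, z_(i), t_(i), alpha_i, beta_i) rather than the authors' exact notation.

```latex
% Generic sketch of the MML linearization; symbols are placeholders, not the authors' notation.
\[
  g\!\left(z_{(i)}\right) \;\approx\; \alpha_i + \beta_i\, z_{(i)},
  \qquad
  \alpha_i = g\!\left(t_{(i)}\right) - t_{(i)}\, g'\!\left(t_{(i)}\right),
  \qquad
  \beta_i  = g'\!\left(t_{(i)}\right),
\]
where $z_{(i)}$ are the standardized ordered variates, $t_{(i)}$ are the corresponding population
quantiles, and $g$ is the intractable term appearing in the likelihood equations. Substituting the
linear form makes the equations linear in the location and scale parameters, so the modified
maximum likelihood estimators follow in closed form.
```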

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 397
430 Creating a Multilevel ESL Learning Community for Adults

Authors: Gloria Chen

Abstract:

When offering conventional level-appropriate ESL classes for adults is not feasible, a multilevel adult ESL class can be formed to benefit those who need to learn English for daily function. This paper examines the rationale, the process, the contents, and the outcomes of a multilevel ESL class for adults. The action research discusses a variety of assessments, lesson plans, and teaching strategies that facilitate lifelong language learning. In small towns where adult ESL learners are only a handful, advanced students and inexperienced students often have to be placed in one class. Such a class might not be viewed as desirable, but with ongoing assessments, careful lesson plans, and purposeful strategies, a multilevel ESL class for adults can overcome the obstacles and help learners to reach a higher level of English proficiency. This research explores some hands-on strategies, such as group rotation, cooperative learning, and modifying textbook contents for practical purposes, and evaluates their effectiveness. The data collected in this research include a Needs Assessment (beginning of class term), a Mid-term Self-Assessment (5 months into the class term), End-of-term Student Reflections (10 months into the class), and an End-of-term Assessment from the Instructor (10 months into the class). A descriptive analysis of the data explains the practice of this particular learning community and reveals the areas for improvement and enrichment. This research answers the following questions: (1) How do the assessments positively help both learners and instructors? (2) How do the learning strategies prepare students to become independent, lifelong English learners? (3) How do materials, grouping, and class schedule enhance the learning? The result of the research contributes to the field of language teaching and learning, not limited to English, by (a) examining strategies for conducting a multilevel adult class, (b) involving adult language learners with various backgrounds and learning styles in reflection and feedback, and (c) improving teaching and learning strategies based on research methods and results. One unique feature of this research is how students can work together with the instructor to form a learning community, seeking and exploring resources available to them, to become lifelong language learners.

Keywords: adult language learning, assessment, multilevel, teaching strategies

Procedia PDF Downloads 352
429 Capacity of Cold-Formed Steel Warping-Restrained Members Subjected to Combined Axial Compressive Load and Bending

Authors: Maryam Hasanali, Syed Mohammad Mojtabaei, Iman Hajirasouliha, G. Charles Clifton, James B. P. Lim

Abstract:

Cold-formed steel (CFS) elements are increasingly being used as main load-bearing components in the modern construction industry, including low- to mid-rise buildings. In typical multi-storey buildings, CFS structural members act as beam-column elements since they are exposed to combined axial compression and bending actions, both in moment-resisting frames and stud wall systems. Current design specifications, including the American Iron and Steel Institute (AISI S100) and the Australian/New Zealand Standard (AS/NZS 4600), neglect the beneficial effects of warping-restrained boundary conditions in the design of beam-column elements. Furthermore, while a non-linear relationship governs the interaction of axial compression and bending, the combined effect of these actions is taken into account through a simplified linear expression combining pure axial and flexural strengths. This paper aims to evaluate the reliability of the well-known Direct Strength Method (DSM) as well as design proposals found in the literature, to provide a better understanding of the efficiency of the code-prescribed linear interaction equation in the strength predictions of CFS beam-columns and of the effects of warping-restrained boundary conditions on their behavior. To this end, experimentally validated finite element (FE) models of CFS elements under compression and bending were developed in ABAQUS, accounting for both non-linear material properties and geometric imperfections. The validated models were then used for a comprehensive parametric study containing 270 FE models, covering a wide range of key design parameters, such as length (i.e., 0.5, 1.5, and 3 m), thickness (i.e., 1, 2, and 4 mm) and cross-sectional dimensions, under ten different load eccentricity levels. The results of this parametric study demonstrated that using the DSM led to strength predictions for beam-column members that were conservative by up to 55%, depending on the element's length and thickness. This can be attributed to the errors associated with (i) the absence of warping-restrained boundary condition effects, (ii) the equations for the calculation of buckling loads, and (iii) the linear interaction equation. While the influence of warping restraint is generally less than 6%, the code-suggested interaction equation led to average errors of 4% to 22%, depending on the element length. This paper highlights the need to provide more reliable design solutions for CFS beam-column elements for practical design purposes.
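
To make the code-prescribed linear interaction check concrete, the sketch below implements the simple unity form P/Pn + M/Mn <= 1 in Python. It is illustrative only: the actual AISI S100 / AS/NZS 4600 clauses include amplification and resistance factors that are omitted here, and the numbers in the usage example are invented.

```python
# Schematic linear axial-bending interaction check; amplification and resistance
# factors from the design codes are intentionally omitted in this sketch.

def linear_interaction_utilisation(P: float, P_n: float, M: float, M_n: float) -> float:
    """Return the utilisation ratio P/Pn + M/Mn for a beam-column.

    P, M     : applied axial compression and bending moment
    P_n, M_n : pure compressive and flexural member strengths (e.g. from the DSM)
    A ratio <= 1.0 means the member passes this simplified check.
    """
    return P / P_n + M / M_n


if __name__ == "__main__":
    # Hypothetical demand and capacity values (kN, kN*m) for illustration only.
    ratio = linear_interaction_utilisation(P=45.0, P_n=120.0, M=2.1, M_n=4.8)
    print(f"utilisation = {ratio:.2f}", "OK" if ratio <= 1.0 else "FAILS")
```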

Keywords: beam-columns, cold-formed steel, finite element model, interaction equation, warping-restrained boundary conditions

Procedia PDF Downloads 104
428 Modeling and Simulating Productivity Loss Due to Project Changes

Authors: Robert Pellerin, Michel Gamache, Remi Trudeau, Nathalie Perrier

Abstract:

The context of large engineering projects is particularly favorable to the appearance of engineering changes and contractual modifications. These elements are potential causes for claims. In this paper, we investigate one of the critical components of the claim management process: the calculation of the impacts of changes in terms of losses of productivity due to the need to accelerate some project activities. When project changes are initiated, delays can arise. Indeed, project activities are often executed in fast-tracking mode in an attempt to meet the completion date. But the acceleration of project execution and the resulting rework can entail important costs as well as induce productivity losses. In the past, numerous methods have been proposed to quantify the duration of delays, the gains achieved by project acceleration, and the loss of productivity. The calculations related to those changes can be divided into two categories: direct costs and indirect costs. The direct cost is easily quantifiable, as opposed to indirect costs, which are rarely taken into account when calculating the cost of an engineering change or contract modification, even though several research projects have addressed this subject. However, the proposed models have not yet been accepted by companies, nor have they been accepted in court. Those models require extensive data and are often seen as too specific to be used for all projects. These techniques also ignore the resource constraints and the interdependencies between the causes of delays and the delays themselves. To resolve this issue, this research proposes a simulation model that mimics how major engineering changes or contract modifications are handled in large construction projects. The model replicates the use of overtime in a reactive scheduling mode in order to simulate the loss of productivity present when a project change occurs. Multiple tests were conducted to compare the results of the proposed simulation model with statistical analyses conducted by other researchers. Different scenarios were also run in order to determine the impact of the number of activities, the time of occurrence of the change, the availability of resources, and the type of project change on productivity loss. Our results demonstrate that the number of activities in the project is a critical variable influencing the productivity of a project. When changes occur, the presence of a large number of activities leads to a much lower productivity loss than a small number of activities. The speed of reducing productivity for 30-job projects is about 25 percent faster than the reduction speed for 120-job projects. The moment of occurrence of a change also has a significant impact on productivity. Indeed, the sooner the change occurs, the lower the productivity of the labor force. The availability of resources also impacts the productivity of a project when a change is implemented. There is a higher loss of productivity when the amount of resources is restricted.
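
The reactive-overtime mechanism described above can be illustrated with a deliberately simplified toy simulation. The sketch below is not the authors' model: the activity counts, rework factor, and fatigue penalty are invented parameters chosen only to show how a change that propagates rework and overtime through the remaining activities inflates labour hours relative to a baseline run.

```python
# Toy illustration only: a change adds rework to remaining activities, which are
# accelerated with overtime, and sustained overtime degrades labour productivity.
import random

def simulate(n_activities=120, change_at=None, rework_factor=0.3, overtime_hours=2):
    """Return total labour hours consumed by the project in one toy run."""
    random.seed(42)                                     # same work draws for both runs
    productivity = 1.0                                  # output units per labour hour
    total_labour_hours = 0.0
    for i in range(n_activities):
        work = random.uniform(24, 48)                   # output units required by activity i
        hours_per_day = 8.0
        if change_at is not None and i >= change_at:    # activities affected by the change
            work *= 1 + rework_factor                   # rework introduced by the change
            hours_per_day += overtime_hours             # reactive acceleration via overtime
            productivity = max(0.6, productivity - 0.02)  # fatigue from sustained overtime
        days = work / (hours_per_day * productivity)    # calendar days for this activity
        total_labour_hours += days * hours_per_day
    return total_labour_hours

baseline = simulate(change_at=None)
with_change = simulate(change_at=40)
print(f"extra labour hours due to the change: {(with_change - baseline) / baseline:.1%}")
```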

Keywords: engineering changes, indirect costs, overtime, productivity, scheduling, simulation

Procedia PDF Downloads 238
427 Oligarchic Transitions within the Tunisian Autocratic Authoritarian System and the Struggle for Democratic Transformation: Before and beyond the 2010 Jasmine Revolution

Authors: M. Moncef Khaddar

Abstract:

This paper focuses mainly on a contextualized understanding of ‘autocratic authoritarianism’ in Tunisia, without approaching its peculiarities in reference to the ideal type of capitalist-liberal democracy but rather analysing it as a Tunisian ‘civilian dictatorship’. This is reminiscent, to some extent, of the French ‘colonial authoritarianism’ in parallel with the legacy of traditional formal monarchic absolutism. The Tunisian autocratic political system is here construed as a state-manufactured nationalist-populist authoritarianism associated with a de facto presidential single party, two successive autocratic presidents and their subservient autocratic elites who ruled with an iron fist the decolonized ‘liberated nation’ that came to be subjected to large-scale oppression and domination under the new Tunisian Republic. The diachronic survey of Tunisia’s autocratic authoritarian system covers the early years of autocracy, under the first autocratic president, Bourguiba, 1957-1987, as well as the different stages of its consolidation into a police-security state under the second autocratic president, Ben Ali, 1987-2011. Comparing the policies of authoritarian regimes, within what is identified synchronically as a bi-cephalous autocratic system, entails an in-depth study of the two autocrats, who ruled Tunisia for more than half a century, as modern adaptable autocrats. This is further supported by an exploration of the ruling authoritarian autocratic elites who played a decisive role in shaping the undemocratic state-society relations, under the 1st and 2nd President, and left an indelible mark, structurally and ideologically, on Tunisian polity. Emphasis is also put on the members of the governmental and state-party institutions and apparatuses who kept circulating and recycling from one authoritarian regime to another, and from the first ‘founding’ autocrat to his putschist successor who consolidated authoritarian stability, political continuity and autocratic governance. The reconfiguration of Tunisian political life in the post-autocratic era, since 2011, will be analysed. This will be scrutinized especially in light of the unexpected return of many high-profile figures and old guards of the autocratic authoritarian apparatchiks. How and why were these public figures from an autocratic era able to return in a supposedly post-revolutionary moment? Finally, while some continue to celebrate the putative exceptional success of ‘democratic transition’ in Tunisia, within a context of ‘unfinished revolution’, others remain perplexed in the face of a creeping ‘oligarchic transition’ to a ‘hybrid regime’, characterized by an elitist reformist tradition rather than a genuine bottom-up democratic ‘change’. The latter falls far short of answering ordinary people’s 2010 ‘uprisings’ and aspirations for ‘Dignity, Liberty and Social Justice’.

Keywords: authoritarianism, autocracy, democratization, democracy, populism, transition, Tunisia

Procedia PDF Downloads 147
426 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks

Authors: Ahmed Abdullah Ahmed

Abstract:

The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although a great effort has been made in previous studies to come up with various methods, their performance, especially in terms of accuracy, falls short, and room for improvement is still wide open. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of the writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create the beginning cluster, and similarly, the ending strokes are grouped to create the ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every group of similar fragments. Writings under study are then represented by computing the probability of occurrence of the codebook patterns. The probability distribution is used to characterize each writer. Two writings are then compared by computing distances between their respective probability distributions. The evaluations were carried out on the ICFHR standard dataset of 206 writers using the beginning and ending codebooks separately. The ending codebook achieved the highest identification rate, 98.23%, which is the best result so far on the ICFHR dataset.
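
The probability-of-occurrence representation and distance comparison described above can be sketched compactly. In the snippet below, the contour extraction, fragmentation, and clustering steps are assumed to have already been run, so each fragment is simply the index of its nearest codebook entry; the codebook size, fragment counts, and the chi-square distance are illustrative choices rather than the paper's exact settings.

```python
# Sketch of the codebook-histogram characterization and writer comparison step.
import numpy as np

def codebook_histogram(fragment_indices, codebook_size):
    """Probability of occurrence of each codebook pattern in one writing sample."""
    counts = np.bincount(fragment_indices, minlength=codebook_size).astype(float)
    return counts / counts.sum()

def chi2_distance(p, q, eps=1e-12):
    """Chi-square distance between two occurrence distributions (smaller = more similar)."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

# Hypothetical usage: each entry is the index of the nearest codebook pattern for a fragment.
rng = np.random.default_rng(0)
sample_a = rng.integers(0, 100, size=400)    # writing sample A, 100-entry codebook
sample_b = rng.integers(0, 100, size=350)    # writing sample B
d = chi2_distance(codebook_histogram(sample_a, 100), codebook_histogram(sample_b, 100))
print(f"dissimilarity between the two writings: {d:.4f}")
```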

Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments

Procedia PDF Downloads 512
425 Atomic Decomposition Audio Data Compression and Denoising Using Sparse Dictionary Feature Learning

Authors: T. Bryan, V. Kepuska, I. Kostnaic

Abstract:

A method of data compression and denoising is introduced that is based on atomic decomposition of audio data using “basis vectors” that are learned from the audio data itself. The basis vectors are shown to provide higher data compression and better signal-to-noise enhancement than the Gabor and gammatone “seed atoms” that were used to generate them. The basis vectors are the input weights of a Sparse AutoEncoder (SAE) that is trained using “envelope samples” of windowed segments of the audio data. The envelope samples are extracted from the audio data by performing atomic decomposition with Gabor or gammatone seed atoms via matching pursuit, a process that identifies segments of audio data that are locally coherent with the seed atoms. The envelope samples are then formed by taking the Kronecker products of the atomic envelopes with the locally coherent data segments. Oracle signal-to-noise ratio (SNR) versus data compression curves are generated for the seed atoms as well as for the basis vectors learned from Gabor and gammatone seed atoms. SNR data compression curves are generated for speech signals as well as early American music recordings. The basis vectors are shown to have higher denoising capability for data compression rates ranging from 90% to 99.84% for speech as well as music. Envelope samples are displayed as images by folding the time series into column vectors. This display method is used to compare the output of the SAE with the envelope samples that produced it. The basis vectors are also displayed as images. Sparsity is shown to play an important role in producing the basis vectors with the highest denoising capability.
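
The atomic-decomposition step relies on matching pursuit, which the snippet below sketches in a generic form. The dictionary is a random unit-norm matrix standing in for Gabor or gammatone seed atoms; the envelope extraction, Kronecker products, and SAE training stages of the method are not shown.

```python
# Minimal matching-pursuit sketch: greedily pick the atom most coherent with the
# residual, subtract its contribution, and repeat.
import numpy as np

def matching_pursuit(signal, dictionary, n_iters=50):
    """Greedy decomposition: signal ~ sum_k coeffs[k] * dictionary[:, atoms[k]]."""
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    for _ in range(n_iters):
        projections = dictionary.T @ residual          # correlation with every atom
        k = int(np.argmax(np.abs(projections)))        # most locally coherent atom
        c = projections[k]
        residual -= c * dictionary[:, k]               # remove its contribution
        atoms.append(k)
        coeffs.append(c)
    return atoms, coeffs, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((256, 512))
D /= np.linalg.norm(D, axis=0)                         # unit-norm "seed atoms"
x = rng.standard_normal(256)                           # placeholder audio window
atoms, coeffs, r = matching_pursuit(x, D, n_iters=40)
print("residual energy fraction:", np.linalg.norm(r) / np.linalg.norm(x))
```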

Keywords: sparse dictionary learning, autoencoder, sparse autoencoder, basis vectors, atomic decomposition, envelope sampling, envelope samples, Gabor, gammatone, matching pursuit

Procedia PDF Downloads 252
424 Public Functions of Kazakh Modern Literature

Authors: Erkingul Soltanaeva, Omyrkhan Abdimanuly, Alua Temirbolat

Abstract:

In this article, the public and social functions of literature and art in the Republic of Kazakhstan were analyzed on the basis of formal and informal literary organizations. The external and internal, subjective and objective factors which influence the modern literary process were determined. The literary forces, their consolidation, and the types of organization in the art of the word were examined. The stages of the literary process (planning, organization, promotion, and evaluation) and their leading forces and approaches were analyzed. A sound perspective on the language and mentality of society influences the literary process. The Ministry of Culture, the Writers' Union of the Republic of Kazakhstan and various non-governmental organizations hold various events to promote the literary process and to celebrate literary personalities throughout the territory of Kazakhstan. According to the cultural plans of different state administrations, there have been large programs to publish regional literary encyclopedias and to promote and distribute books by local poets and writers across the country. All of these official measures increase readers' interest in books, contribute to patriotic education, and improve the status of the native language. Materials published in 2013-2015 in professional literary publications such as the newspaper ‘Kazakh Literature’, the magazine ‘Zhuldyz’, and the journal ‘Zhalyn’ were analyzed statistically; the topical issues and thematic fields of Kazakh literature are identified, and their level of connection with public situations is defined. Creative freedom, relations between society and the individual, the state of literature, and the related advantages and disadvantages were taken into consideration in the same articles. The level of these functions was determined through the public role of literature, its social features, and personal peculiarities. At present, the stages of literature management (planning, organization, motivation, and evaluation) are forming and developing in Kazakhstan. However, literature management still needs further development to satisfy the actual requirements of today's agenda.

Keywords: literature management, material, literary process, social functions

Procedia PDF Downloads 384
423 A Real Time Set Up for Retrieval of Emotional States from Human Neural Responses

Authors: Rashima Mahajan, Dipali Bansal, Shweta Singh

Abstract:

Real time non-invasive Brain Computer Interfaces have a significant progressive role in restoring or maintaining a quality life for medically challenged people. This manuscript provides a comprehensive review of emerging research in the field of cognitive/affective computing in the context of human neural responses. The perspectives of different emotion assessment modalities like face expressions, speech, text, gestures, and human physiological responses have also been discussed. Focus has been placed on exploring the ability of EEG (electroencephalogram) signals to portray thoughts, feelings, and unspoken words. An automated workflow-based protocol to design an EEG-based real time Brain Computer Interface system for analysis and classification of human emotions elicited by external audio/visual stimuli has been proposed. The front-end hardware includes a cost-effective and portable Emotive EEG Neuroheadset unit, a personal computer and a set of external stimulators. Primary analysis and processing of the EEG acquired in real time shall be performed using the MATLAB-based advanced brain mapping toolboxes EEGLab/BCILab. This shall be followed by the development of a MATLAB-based self-defined algorithm to capture and characterize temporal and spectral variations in EEG under emotional stimulation. The extracted hybrid feature set shall be used to classify emotional states using artificial intelligence tools like Artificial Neural Networks. The final system would result in an inexpensive, portable and more intuitive Brain Computer Interface in a real time scenario to control prosthetic devices by translating different brain states into operative control signals.
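
As an illustration of the kind of spectral feature extraction and neural-network classification described above, the sketch below uses Python (the proposal itself targets MATLAB/EEGLab/BCILab). The sampling rate, channel count, frequency bands, epoch data, and labels are synthetic placeholders, not recorded EEG.

```python
# Illustrative band-power feature extraction plus a small neural-network classifier.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

FS = 128                                              # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch):
    """epoch: (channels, samples) -> concatenated mean band powers per channel."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))
    return np.concatenate(feats)

rng = np.random.default_rng(1)
epochs = rng.standard_normal((200, 14, 2 * FS))       # 200 two-second, 14-channel epochs
labels = rng.integers(0, 2, size=200)                 # e.g. two emotional states
X = np.array([band_power_features(e) for e in epochs])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```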

Keywords: brain computer interface, electroencephalogram, EEGLab, BCILab, emotive, emotions, interval features, spectral features, artificial neural network, control applications

Procedia PDF Downloads 317
422 “Post-Industrial” Journalism as a Creative Industry

Authors: Lynette Sheridan Burns, Benjamin J. Matthews

Abstract:

The context of post-industrial journalism is one in which the material circumstances of mechanical publication have been displaced by digital technologies, increasing the distance between the orthodoxy of the newsroom and the culture of journalistic writing. Content is, with growing frequency, created for delivery via the internet, publication on web-based ‘platforms’ and consumption on screen media. In this environment, the question is not ‘who is a journalist?’ but ‘what is journalism?’ today. The changes bring into sharp relief new distinctions between journalistic work and journalistic labor, providing a key insight into the current transition between the industrial journalism of the 20th century, and the post-industrial journalism of the present. In the 20th century, the work of journalists and journalistic labor went hand-in-hand as most journalists were employees of news organizations, whilst in the 21st century evidence of a decoupling of ‘acts of journalism’ (work) and journalistic employment (labor) is beginning to appear. This 'decoupling' of the work and labor that underpins journalism practice is far reaching in its implications, not least for institutional structures. Under these conditions we are witnessing the emergence of expanded ‘entrepreneurial’ journalism, based on smaller, more independent and agile - if less stable - enterprise constructs that are a feature of creative industries. Entrepreneurial journalism is realized in a range of organizational forms from social enterprise, through to profit driven start-ups and hybrids of the two. In all instances, however, the primary motif of the organization is an ideological definition of journalism. An example is the Scoop Foundation for Public Interest Journalism in New Zealand, which owns and operates Scoop Publishing Limited, a not for profit company and social enterprise that publishes an independent news site that claims to have over 500,000 monthly users. Our paper demonstrates that this journalistic work meets the ideological definition of journalism; conducted within the creative industries using an innovative organizational structure that offers a new, viable post-industrial future for journalism.

Keywords: creative industries, digital communication, journalism, post industrial

Procedia PDF Downloads 280
421 An Interactive Online Academic Writing Resource for Research Students in Engineering

Authors: Eleanor K. P. Kwan

Abstract:

English academic writing, it has been argued, is an acquired language even for English speakers. For research students whose first language is not English, however, the acquisition process is often more challenging. Instead of hoping that students will acquire the conventions themselves through extensive reading, there is a need for the explicit teaching of linguistic conventions in academic writing, as explicit teaching can help students become more aware of the different generic conventions in different disciplines in science. This paper presents an interuniversity effort to develop an online academic writing resource for research students in five subdisciplines in engineering, upon the completion of a needs analysis which indicates that students and faculty members are more concerned about students' ability to organize an extended text than about grammatical accuracy per se. In particular, this paper focuses on the materials developed for thesis writing (also called dissertation writing in some tertiary institutions), as theses form an essential graduation requirement for all research students and this genre is also expected to demonstrate the writer's competence in research and contributions to the research community. Drawing on Swalesian move analysis of research articles, this online resource includes authentic materials written by students and faculty members from the participating institutes. Highlights will be given to several aspects and challenges of developing this online resource. First, as the online resource aims at moving beyond providing instructions on academic writing, a range of interactive activities needs to be designed to engage the users, a feature which differentiates this online resource from other equally informative websites on academic writing. Second, it will also include discussion of divergent textual practices in different subdisciplines, which helps to illustrate the different practices among these subdisciplines. Third, since theses, probably among the most extended texts a research student will complete, require effective use of signposting devices to facilitate readers' understanding, this online resource will also provide both explanations and activities on different components that contribute to text coherence. Finally, results from piloting will also be included to shed light on the effectiveness of the materials, which could be useful for future development.

Keywords: academic writing, English for academic purposes, online language learning materials, scientific writing

Procedia PDF Downloads 269
420 Analytical Solutions of Josephson Junctions Dynamics in a Resonant Cavity for Extended Dicke Model

Authors: S.I.Mukhin, S. Seidov, A. Mukherjee

Abstract:

The Dicke model is a key tool for the description of correlated states of quantum atomic systems, excited by resonant photon absorption and subsequently emitting spontaneous coherent radiation in the superradiant state. The Dicke Hamiltonian (DH) is successfully used for the description of the dynamics of a Josephson Junction (JJ) array in a resonant cavity under applied current. In this work, we have investigated a generalized model, which is described by the DH with a frustrating interaction term. This frustrating interaction term is an infinitely coordinated interaction among all the spin-1/2 degrees of freedom in the system. In this work, we consider an array of N superconducting islands, each divided into two sub-islands by a Josephson Junction, taken in the charge qubit / Cooper Pair Box (CPB) regime. The array is placed inside the resonant cavity. One important aspect of the problem lies in the dynamical nature of the physical observables involved in the system, such as the condensed electric field and the dipole moment. It is important to understand how these quantities behave with time in order to define the quantum phase of the system. The Dicke model without the frustrating term is solved to find the dynamical solutions of the physical observables in analytic form. We have used Heisenberg's dynamical equations for the operators and, on applying a newly developed rotating Holstein-Primakoff (HP) transformation to the DH, we have arrived at four coupled non-linear dynamical differential equations for the momentum and spin-component operators. It is possible to solve the system analytically using two time scales. The analytical solutions are expressed in terms of Jacobi's elliptic functions for the metastable ‘bound luminosity’ dynamic state with the periodic coherent beating of the dipoles that connects the two doubly degenerate dipolar-ordered phases discovered previously. In this work, we have extended the analysis to the DH with a frustrating interaction term. Inclusion of the frustrating term adds complexity to the system of differential equations, which becomes difficult to solve analytically. We have therefore solved the semi-classical dynamic equations using a perturbation technique for small values of the Josephson energy EJ. Because the Hamiltonian possesses parity symmetry, a phase transition can be found if this symmetry is broken. Introducing a spontaneous symmetry-breaking term in the DH, we have derived solutions which show the occurrence of a finite condensate, indicating a quantum phase transition. Our results match the existing results in this scientific field.
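
For orientation, a generic form of a Dicke Hamiltonian extended by an infinitely coordinated (frustrating) spin-spin term can be written as below. The symbols ω, ε, λ, and g are generic placeholders, and the precise form may differ from the Hamiltonian used in this work.

```latex
% Generic extended Dicke Hamiltonian: one cavity mode coupled to N spin-1/2 qubits,
% plus an infinitely coordinated spin-spin ("frustrating") term. Notation is illustrative.
\[
  \hat{H} \;=\; \hbar\omega\, \hat{a}^{\dagger}\hat{a}
  \;+\; \epsilon \sum_{i=1}^{N} \hat{S}^{z}_{i}
  \;+\; \frac{\lambda}{\sqrt{N}}\left(\hat{a} + \hat{a}^{\dagger}\right)\sum_{i=1}^{N} \hat{S}^{x}_{i}
  \;+\; \frac{g}{N}\left(\sum_{i=1}^{N} \hat{S}^{x}_{i}\right)^{\!2},
\]
where the last term couples every pair of spins with equal strength; setting $g = 0$ recovers
the ordinary Dicke Hamiltonian treated analytically in the first part of the work.
```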

Keywords: Dicke Model, nonlinear dynamics, perturbation theory, superconductivity

Procedia PDF Downloads 134
419 Transgender Practices as Queer Politics: An African Variant

Authors: Adekeye Joshua Temitope

Abstract:

“Transgender” presents a complexion of ambiguity in the African context, and it remains a contested topography in the discourse of sexual identity. The casts and stigmatisations directed towards transgender people unveil vital facts and intricacies often ignored in academic communities: the problems and oppressions of the given sex/gender system, the constraint of monogamy and the ignorance of the fluidity of human sexuality, thereby generating the dual discords of the “enforced heterosexual” and the “unavoidable homosexual.” African culture rejects transgender movements and perceives same-sex sexual behavior as “taboo or bad habits”, and this provides reasonable explanations for the failure to assert sexual rights in the GLBT movement in most discourse on sexuality in the African context. However, we cannot deny the real existence of the active flow and fluidity of human sexuality, even though its variants may be latent. The incessant consciousness of the existence of transgender practices in Africa, either in the form of bisexual desire or bisexual behavior, with or without a sexual identity, including people who identify themselves as bisexual, opens up the vision for us to reconsider and reexamine what constitutes such ambiguity and controversy of transgender identity at the present time. The notion of identity politics in the gay, lesbian, and transgender community has its complexity and debates in its historical development. This paper analyses the representation of the historical trajectory of transgender practices by presenting the dynamic transition of how people cognize transgender practices under different historical conditions, since an understanding of the historical transition of bisexual practices is crucial and meaningful for the gender/sexuality liberation movement at the present time and in the future. The paper juxtaposes the trajectories of bisexual practices in the Anglo-American world and in Africa, which show certain similarities and differences within diverse historical complexities. The similar condition is the emergence of gay identity under the influence of capitalism, but within different cultural contexts. Therefore, the political economy of each cultural context plays a very important role in understanding the formation of sexual identities historically and their development and influence on the GLBT movement afterwards and in the future. By reexamining Kinsey’s categorization and applying Klein’s argument on individuals’ sexual orientation, this paper is poised, on the one hand, to break the given and fixed connection among sexual behavior, sexual orientation and sexual identity, and on the other hand, to present the potential fluidity of human sexuality by reconsidering and reexamining the present given sex/gender system in our world. The paper concludes that the essentialist and exclusionary trend is unavoidable at this historical moment, since gay and lesbian communities in Africa need to clearly demonstrate and voice themselves under the nuances of gender/sexuality liberation.

Keywords: heterosexual, homosexual, identity politics, queer politics, transgender

Procedia PDF Downloads 305
418 Development of Pre-Mitigation Measures and Its Impact on Life-Cycle Cost of Facilities: Indian Scenario

Authors: Mahima Shrivastava, Soumya Kar, B. Swetha Malika, Lalu Saheb, M. Muthu Kumar, P. V. Ponambala Moorthi

Abstract:

Natural hazards and manmade destruction cause both economic and societal losses. Generalized pre-mitigation strategies introduced and adopted for the prevention of disasters all over the world are capable of augmenting resiliency and optimizing the life-cycle cost of facilities. In countries like India, where varied topographical features exist, location-specific mitigation measures and strategies need to be followed for better enhancement through event-driven and code-driven approaches. The present state of the mitigation measures followed and adopted lags behind in accomplishing the required development. In addition, serious concern and debate over climate change play a vital role in enhancing the need and requirement for the development of time-bound adaptive mitigation measures. For the development of long-term sustainable policies, the incorporation of future climatic variation is inevitable. This will further assist in assessing the impact brought about by climate change on the life-cycle cost of facilities. This paper develops more definite region-specific and time-bound pre-mitigation measures by reviewing the present state of mitigation measures in India and all over the world for improving the life-cycle cost of facilities. For the development of region-specific adaptive measures, Indian regions were divided based on multiple-calamity-prone regions, and geo-referencing tools were used to incorporate the effect of climate change on life-cycle cost assessment. This study puts forward significant effort in establishing sustainable policies and helps decision makers in planning pre-mitigation measures for different regions. It will further contribute towards evaluating the life-cycle cost of facilities by adopting the developed measures.

Keywords: climate change, geo-referencing tools, life-cycle cost, multiple-calamity prone regions, pre-mitigation strategies, sustainable policies

Procedia PDF Downloads 379
417 Personalizing Human Physical Life Routines Recognition over Cloud-based Sensor Data via AI and Machine Learning

Authors: Kaushik Sathupadi, Sandesh Achar

Abstract:

Pervasive computing is a growing research field that aims to recognize human physical life routines (HPLR) based on body-worn sensors such as MEMS-based technologies. The use of these technologies for human activity recognition is progressively increasing. On the other hand, personalizing human life routines using numerous machine-learning techniques has always been an intriguing topic. Various methods have demonstrated the ability to recognize basic movement patterns; however, they still need to be improved to anticipate the dynamics of human living patterns. This study introduces state-of-the-art techniques for recognizing static and dynamic patterns and forecasting those challenging activities from multi-fused sensors. Furthermore, numerous MEMS signals are extracted from one self-annotated IM-WSHA dataset and two benchmarked datasets. First, the acquired raw data are filtered with z-normalization and denoising methods. Then, statistical, local binary pattern, autoregressive model, and intrinsic time-scale decomposition features are extracted from different domains. Next, the acquired features are optimized using maximum relevance and minimum redundancy (mRMR). Finally, an artificial neural network is applied to analyze the whole system's performance. As a result, we attained a 90.27% recognition rate for the self-annotated dataset, while HARTH and KU-HAR achieved 83% on nine living activities and 90.94% on 18 static and dynamic routines, respectively. Thus, the proposed HPLR system outperformed other state-of-the-art systems when evaluated against other methods in the literature.
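
The pre-processing and per-window feature extraction steps described above can be sketched as follows. Only z-normalisation and simple statistical plus autoregressive (AR) features are shown; the LBP and intrinsic time-scale decomposition features, the mRMR selection, and the final neural network are omitted, and the window length and AR order are illustrative choices.

```python
# Sketch of z-normalisation and statistical + autoregressive features for one sensor window.
import numpy as np

def z_normalise(x):
    return (x - x.mean()) / (x.std() + 1e-12)

def ar_coefficients(x, order=4):
    """Least-squares AR(order) coefficients of a 1-D signal."""
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def window_features(window):
    stats = [window.mean(), window.std(), window.min(), window.max(),
             np.mean(np.abs(np.diff(window)))]            # simple statistical features
    return np.concatenate([stats, ar_coefficients(z_normalise(window))])

accel_window = np.random.default_rng(2).standard_normal(128)   # placeholder MEMS window
print(window_features(accel_window).round(3))
```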

Keywords: artificial intelligence, machine learning, gait analysis, local binary pattern (LBP), statistical features, micro-electro-mechanical systems (MEMS), maximum relevance and minimum redundancy (mRMR)

Procedia PDF Downloads 20
416 De Novo Design of Functional Metalloproteins for Biocatalytic Reactions

Authors: Ketaki D. Belsare, Nicholas F. Polizzi, Lior Shtayer, William F. DeGrado

Abstract:

Nature utilizes metalloproteins to perform chemical transformations with activities and selectivities that have long been the inspiration for design principles in synthetic and biological systems. The chemical reactivities of metalloproteins are directly linked to local environment effects produced by the protein matrix around the metal cofactor. A complete understanding of how the protein matrix provides these interactions would allow for the design of functional metalloproteins. The de novo computational design of proteins has been successfully used to design active sites that bind metals such as di-iron, zinc, and copper-containing cofactors; however, precisely designing active sites that can bind small-molecule ligands (e.g., substrates) along with metal cofactors is still a challenge in the field. The de novo computational design of a functional metalloprotein that contains a purposefully designed substrate binding site would allow for precise control of chemical function and reactivity. Our research strategy seeks to elucidate the design features necessary to bind the cofactor protoporphyrin IX (hemin) in close proximity to a substrate binding pocket in a four-helix bundle. First- and second-shell interactions are computationally designed to control the orientation, electronic structure, and reaction pathway of the cofactor and substrate. The design began with a parameterized helical backbone that positioned a single histidine residue (as an axial ligand) to receive a second-shell H-bond from a threonine on the neighboring helix. The metallocofactor, hemin, was then manually placed in the binding site. A structural feature, a pi-bulge, was introduced to give the substrate access to the protoporphyrin IX. These de novo metalloproteins are currently being tested for their activity towards hydroxylation and epoxidation. The de novo designed protein shows hydroxylation of aniline to 4-aminophenol. This study will help provide structural information of utmost importance in understanding the de novo computational design variables impacting the functional activities of a protein.

Keywords: metalloproteins, protein design, de novo protein, biocatalysis

Procedia PDF Downloads 151
415 Mucoadhesive Chitosan-Coated Nanostructured Lipid Carriers for Oral Delivery of Amphotericin B

Authors: S. L. J. Tan, N. Billa, C. J. Roberts

Abstract:

Oral delivery of amphotericin B (AmpB) potentially eliminates constraints and side effects associated with intravenous administration, but remains challenging due to the physicochemical properties of the drug such that it results in meagre bioavailability (0.3%). In an advanced formulation, 1) nanostructured lipid carriers (NLC) were formulated as they can accommodate higher levels of cargoes and restrict drug expulsion and 2) a mucoadhesion feature was incorporated so as to impart sluggish transit of the NLC along the gastrointestinal tract and hence, maximize uptake and improve bioavailability of AmpB. The AmpB-loaded NLC formulation was successfully formulated via high shear homogenisation and ultrasonication. A chitosan coating was adsorbed onto the formed NLC. Physical properties of the formulations; particle size, zeta potential, encapsulation efficiency (%EE), aggregation states and mucoadhesion as well as the effect of the variable pH on the integrity of the formulations were examined. The particle size of the freshly prepared AmpB-loaded NLC was 163.1 ± 0.7 nm, with a negative surface charge and remained essentially stable over 120 days. Adsorption of chitosan caused a significant increase in particle size to 348.0 ± 12 nm with the zeta potential change towards positivity. Interestingly, the chitosan-coated AmpB-loaded NLC (ChiAmpB NLC) showed significant decrease in particle size upon storage, suggesting 'anti-Ostwald' ripening effect. AmpB-loaded NLC formulation showed %EE of 94.3 ± 0.02 % and incorporation of chitosan increased the %EE significantly, to 99.3 ± 0.15 %. This suggests that the addition of chitosan renders stability to the NLC formulation, interacting with the anionic segment of the NLC and preventing the drug leakage. AmpB in both NLC and ChiAmpB NLC showed polyaggregation which is the non-toxic conformation. The mucoadhesiveness of the ChiAmpB NLC formulation was observed in both acidic pH (pH 5.8) and near-neutral pH (pH 6.8) conditions as opposed to AmpB-loaded NLC formulation. Hence, the incorporation of chitosan into the NLC formulation did not only impart mucoadhesive property but also protected against the expulsion of AmpB which makes it well-primed as a potential oral delivery system for AmpB.
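
For reference, encapsulation efficiency values such as those reported above are commonly obtained with the indirect method summarized below; this general formula is given for context only, since the abstract does not state the exact protocol used.

```latex
% Encapsulation efficiency by the commonly used indirect method (stated for context only).
\[
  \%EE \;=\; \frac{W_{\mathrm{total}} - W_{\mathrm{free}}}{W_{\mathrm{total}}} \times 100\%,
\]
where $W_{\mathrm{total}}$ is the total amphotericin B added to the formulation and
$W_{\mathrm{free}}$ is the unentrapped drug recovered in the aqueous phase after separating
the lipid nanoparticles.
```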

Keywords: Amphotericin B, mucoadhesion, nanostructured lipid carriers, oral delivery

Procedia PDF Downloads 162
414 Growing Pains and Organizational Development in Growing Enterprises: Conceptual Model and Its Empirical Examination

Authors: Maciej Czarnecki

Abstract:

Even though growth is one of the most important strategic objectives for many enterprises, we know relatively little about this phenomenon. This research contributes to broadening our knowledge of the managerial consequences of growth. Scales for measuring organizational development and growing pains were developed. A conceptual model of the connections among growth, organizational development, growing pains, selected development factors and financial performance was examined. The research process comprised a literature review, 20 interviews with managers, an examination of 12 raters' opinions, pilot research, and a 7-point Likert-scale questionnaire survey of 138 Polish enterprises employing 50-249 people which had increased their employment by at least 50% within the last three years. Factor analysis, the Pearson product-moment correlation coefficient, Student's t-test and the chi-squared test were used to develop the scales. High Cronbach's alpha coefficients were obtained. The verification of correlations among the constructs was carried out with factor correlations, multiple regressions and path analysis. When an enterprise grows, it is necessary to implement changes in its structure, management practices, etc. (organizational development) to meet the challenges of growing complexity. In this paper, organizational development was defined as internal changes aiming to improve the quality of existing elements, or to introduce new elements, in the areas of processes, organizational structure and culture, and operational and management systems. Thus: H1: Growth has positive effects on organizational development. The main thesis of the research is that if organizational development does not catch up with the growing complexity of a growing enterprise, growing pains will arise (lower work comfort, conflicts, lack of control, etc.). They will exert a negative influence on financial performance and may result in a serious organizational crisis or even bankruptcy. Thus: H2: Growth has positive effects on growing pains, H3: Organizational development has negative effects on growing pains, H4: Growing pains have negative effects on financial performance, H5: Organizational development has positive effects on financial performance. Scholars have considered long lists of factors having potential influence on organizational development. The development of a comprehensive model taking into account all possible variables may be beyond the capacity of any researcher or even of the statistical software used. After the literature review, it was decided to increase the level of abstraction and to include the following constructs in the conceptual model: organizational learning (OL), positive organization (PO) and high performance factors (HPF). H1a/b/c: OL/PO/HPF has positive effect on organizational development, H2a/b/c: OL/PO/HPF has negative effect on growing pains. The results of hypothesis testing: H1: partly supported, H1a/b/c: supported/not supported/supported, H2: not supported, H2a/b/c: not supported/partly supported/not supported, H3: supported, H4: partly supported, H5: supported. The research seems to be of great value for both scholars and practitioners. It proved that OL and HPF matter for organizational development. Scales for measuring organizational development and growing pains were developed. Its main finding, though, is that organizational development is a good way of improving financial performance.
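
The internal-consistency check mentioned above (a high Cronbach's alpha for each scale) can be sketched as follows. The data below are random placeholders standing in for a respondents-by-items score matrix, not the study's questionnaire responses.

```python
# Cronbach's alpha for a k-item scale, computed from a (respondents x items) matrix.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = scale items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(138, 1))                    # 138 firms, one underlying construct
items = latent + 0.6 * rng.normal(size=(138, 7))      # 7 correlated items (continuous proxy)
print(f"alpha = {cronbach_alpha(items):.2f}")
```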

Keywords: organizational development, growth, growing pains, financial performance

Procedia PDF Downloads 219
413 Acquisition of Overt Pronoun Constraint in L2 Turkish by Adult Korean Speakers

Authors: Oktay Cinar

Abstract:

The aim of this study is to investigate the acquisition of the Overt Pronoun Constraint (OPC) by adult Korean L2 Turkish speakers in order to find out how the constraints regulating the syntax of null and overt subjects are acquired. The OPC is claimed to be a universal feature of all null subject languages, restricting the co-indexation between an overt embedded pronoun and quantified or wh-question antecedents. However, there is no such restriction when the embedded subject is null or the antecedent is a referential subject. Considered a principle of Universal Grammar (UG), OPC knowledge of L2 speakers has been widely tested with different language pairs. In the light of previous studies on the OPC, it can be argued that L2 learners display early sensitivity to OPC constraints during their interlanguage grammar development. Concerning this, the co-indexation between the overt embedded pronoun o (third person pronoun) and a referential matrix subject is claimed to be controversial in Turkish, which poses problems for the universality of the OPC. However, the current study argues against this claim by providing evidence from advanced Korean speakers that the OPC is universal to all null subject languages and that OPC knowledge can be accessed through direct access to UG. In other words, the performance of adult Korean speakers on the syntax of null and overt subjects is tested to support this claim. In order to test this, an OPC task is used. Fifteen advanced speakers and a control group of adult native Turkish participants are instructed to determine the co-reference relationship between the subject of the embedded clause, either the overt pronominal o or null, and the subject of the matrix clause, either a quantified pronoun or wh-question or a referential antecedent. They are asked to select the interpretation of the embedded subject, either as the same person as the matrix subject or as another person who is not the matrix subject. These relations are represented by four conditions, and each condition has four questions (16 questions in total). The results show that both the control group and the Korean L2 Turkish speakers display sensitivity to all the constraints of the OPC, which suggests that the OPC operates in Turkish as well.

Keywords: adult Korean speakers, binding theory, generative second language acquisition, overt pronoun constraint

Procedia PDF Downloads 309
412 The Development of Iranian Theatrical Performance through the Integration of Narrative Elements from Western Drama

Authors: Azadeh Abbasikangevari

Abstract:

Background and Objectives: Theatre and performance are two separate themes. What is presented in Iran as performance comprises the ritual and traditional forms of the play. Iranian performance has its roots in myth and ritual. Drama is essentially a Western phenomenon that has gradually entered Iran and influenced Iranian performance. Theatre is based on antagonism (axis) and protagonism (anti-axis), while performance has a monotonous and steady motion. The elements of Iranian performance include the field, performance on the stage, and magnification in performance, all of which are based on narration. This type of narration has been present in Iranian modern drama. The objective of this study was to analyze the structure of drama according to narrative elements by comparing Western theatre and Iranian performance and determining the structural differences in the type of narrative. Materials and Methods: In this study, the elements of the drama were analyzed using the library method among the available library resources. The review of the literature included research articles and textbooks which focused on Iranian plays, as well as books and articles which encompassed narrative and drama elements. Data were analyzed using the comparative-descriptive method. Results: Examining and studying different kinds of Iranian performances showed that narrative has always been a characteristic feature of Iranian plays. Iranians have narrated stories and myths and have had a particular skill in oral literature. Over time, they slowly introduced narrative culture into their art, and this element is the most important structural element in Iran's dramatic art. Narration in traditional Iranian plays, such as Ta'ziyeh and Naghali, was oral and, consequently, it was slowly forgotten and excluded from written theatrical texts. Since drama entered Iran in its Western form, the plays written by Iranian authors have been influenced by the narrative elements existing in Western plays. Conclusions: The narrative element has undoubtedly had an impact on modern and contemporary Iranian drama. Therefore, the element of narration is an integral part of the traditional Iranian play structure.

Keywords: drama methodology, Iranian performance, Iranian modern drama, narration

Procedia PDF Downloads 129
411 A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection

Authors: Hritwik Ghosh, Irfan Sadiq Rahat, Sachi Nandan Mohanty, J. V. R. Ravindra

Abstract:

In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains.
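
The hybrid feature-extraction idea described above can be sketched in a few lines of Keras. Both ImageNet-pretrained backbones share one input, their pooled features are concatenated, and class weights counteract the class imbalance during training. The input size, head layers, and weight values are illustrative assumptions rather than the authors' exact configuration, and each backbone's own preprocess_input step is omitted for brevity.

```python
# Sketch of a VGG16 + ResNet50 hybrid feature extractor with a nine-class head.
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, ResNet50

inputs = layers.Input(shape=(224, 224, 3))
vgg = VGG16(include_top=False, weights="imagenet")
res = ResNet50(include_top=False, weights="imagenet")
vgg.trainable = res.trainable = False                 # freeze the pretrained extractors

features = layers.concatenate([
    layers.GlobalAveragePooling2D()(vgg(inputs)),
    layers.GlobalAveragePooling2D()(res(inputs)),
])
x = layers.Dropout(0.3)(layers.Dense(256, activation="relu")(features))
outputs = layers.Dense(9, activation="softmax")(x)    # nine skin conditions
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Class weights give under-represented conditions more influence during training, e.g.:
# model.fit(train_ds, epochs=20, class_weight={0: 1.0, 1: 2.5, ...})
```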

Keywords: artificial intelligence, machine learning, deep learning, skin cancer, dermatology, convolutional neural networks, image classification, computer vision, healthcare technology, cancer detection, medical imaging

Procedia PDF Downloads 86
410 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data

Authors: Sara Bonetti

Abstract:

The use of quantitative data to support policies and justify investments has become imperative in many fields including the field of education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. In California, within the ECE workforce, there is a group of professionals working in policy and advocacy that use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study aimed at analyzing these experiences in accessing, using, and producing quantitative data. This study utilized semi-structured interviews to capture the differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a State Department to allow for a broader perspective at systems level. The study followed Núñez’s multilevel model of intersectionality. The key in Núñez’s model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of different influences at individual, organizational, and system levels that might intersect and affect ECE professionals’ experiences with quantitative data. At the individual level, an important element identified was the participants’ educational background, as it was possible to observe a relationship between that and their positionality, both with respect to working with data and also with respect to their power within an organization and at the policy table. For example, those with a background in child development were aware of how their formal education failed to train them in the skills that are necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants’ position within the organization and their organization’s position with respect to others and their degree of access to quantitative data. This in turn affected their sense of empowerment and agency in dealing with data, such as shaping what data is collected and available. These differences reflected on the interviewees’ perceptions and expectations for the ECE workforce. For example, one of the interviewees pointed out that many ECE professionals happen to use data out of the necessity of the moment. This lack of intentionality is a cause for, and at the same time translates into missed training opportunities. Another interviewee pointed out issues related to the professionalism of the ECE workforce by remarking the inadequacy of ECE students’ training in working with data. In conclusion, Núñez’s model helped understand the different elements that affect ECE professionals’ experiences with quantitative data. In particular, what was clear is that these professionals are not being provided with the necessary support and that we are not being intentional in creating data literacy skills for them, despite what is asked of them and their work.

Keywords: data literacy, early childhood professionals, intersectionality, quantitative data

Procedia PDF Downloads 252
409 Use of Social Media in Political Communications: Example of Facebook

Authors: Havva Nur Tarakci, Bahar Urhan Torun

Abstract:

The transformation that technology, especially internet technology, has brought to every area of life also changes the structure of political communication. The Internet, foremost among the new communication technologies, affects political communication in a way that no traditional communication tool ever has: it enables interaction and a direct channel between receiver and sender, and it has become one of the most effective and preferred tools in political communication practice. As a result of technological convergence, the Internet has become an indispensable venue for political communication campaigns. Political communication, meaning every kind of communication strategy that political parties, the 'actors of political communication', use to convey their opinions and party programmes to their present and potential voters, is today conducted extensively through social media tools. An electorate with diverse structures is informed, directed, and managed through these tools. Political parties easily reach their electorate without limitations of time or place and can gather voters' opinions and reactions thanks to the interactivity that characterizes social media. In this context, Facebook, the social media platform most used by political parties, is a communication network that has been part of daily life since 2004. As one of the most popular social networks today, it is among the most-visited websites on a global scale. Accordingly, the research is based on the question, "How do political parties use Facebook in the campaigns they conduct during election periods to inform their voters?", and it aims to clarify the Facebook usage practices of political parties. Toward this objective, the official Facebook accounts of four political parties (JDP-AKParti, PDP-BDP, RPP-CHP, NMP-MHP), which reach their voters through social media in addition to other communication tools, are examined, and the analysis is framed within Turkish politics. The examination period is restricted to two weeks in total, one week before and one week after the mayoral elections, when the political parties are assumed to use their Facebook accounts most intensively. Content analysis is used as the research method, and the collected texts and visual elements are interpreted on this basis.

Keywords: Facebook, political communications, social media, electorate

Procedia PDF Downloads 383
408 Clinical and Structural Differences in Knee Osteoarthritis with/without Synovial Hypertrophy

Authors: Gi-Young Park, Dong Rak Kwon, Sung Cheol Cho

Abstract:

Objective: The synovium is known to be involved in many characteristic pathological processes, and synovitis is common in advanced osteoarthritis. We aimed to evaluate the clinical, radiographic, and ultrasound findings in patients with knee osteoarthritis and to compare the clinical and imaging findings between knee osteoarthritis with and without synovial hypertrophy confirmed by ultrasound. Methods: One hundred knees (54 left, 46 right) in 95 patients (64 women, 31 men; mean age, 65.9 years; range, 43-85 years) with knee osteoarthritis were recruited. The Visual Analogue Scale (VAS) was used to assess the intensity of knee pain. The severity of knee osteoarthritis was classified according to the Kellgren and Lawrence (K-L) grade on radiographs. Ultrasound examination was performed by a physiatrist with 24 years of experience in musculoskeletal ultrasound. Ultrasound findings, including the thickness of joint effusion in the suprapatellar pouch, synovial hypertrophy, infrapatellar tendinosis, meniscal tear or extrusion, and Baker cyst, were measured or detected. The thickness of knee joint effusion was measured at the maximal anterior-posterior diameter of fluid collection in the suprapatellar pouch. Synovial hypertrophy was identified as soft tissue of variable echogenicity that is poorly compressible and cannot be displaced by compression with the ultrasound transducer. The knees were divided into two groups according to the presence of synovial hypertrophy. Differences in clinical and imaging findings between the two groups were evaluated by independent t-test and chi-square test. Results: Synovial hypertrophy was detected in 48 of the 100 knees on ultrasound. There were no significant differences in demographic parameters or VAS score between the two groups, except for sex (P<0.05). Medial meniscal extrusion and tear were significantly more frequent in knees with synovial hypertrophy than in knees without synovial hypertrophy. K-L grade and joint effusion thickness were greater in patients with synovial hypertrophy than in patients without synovial hypertrophy (P<0.05). Conclusion: Synovial hypertrophy in knee osteoarthritis was associated with greater suprapatellar joint effusion and higher K-L grade and may be a characteristic ultrasound feature of late knee osteoarthritis. These results suggest that synovial hypertrophy on ultrasound can be regarded as a predictor of rapid progression in patients with knee osteoarthritis.
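For readers who want to reproduce this style of between-group comparison, the sketch below illustrates an independent t-test for continuous measures and a chi-square test for categorical ones, as described in the abstract; the data frame and column names are hypothetical and not drawn from the study.

```python
# Minimal sketch (not the authors' code) of the group comparisons described above,
# using hypothetical column names in a pandas DataFrame.
import pandas as pd
from scipy import stats

def compare_groups(df: pd.DataFrame) -> None:
    """Compare knees with vs. without synovial hypertrophy (column 'hypertrophy', 0/1)."""
    with_h = df[df["hypertrophy"] == 1]
    without_h = df[df["hypertrophy"] == 0]

    # Independent t-test for continuous variables (e.g., effusion thickness, VAS, age)
    for col in ["effusion_thickness_mm", "vas_score", "age"]:
        t, p = stats.ttest_ind(with_h[col], without_h[col], equal_var=False)
        print(f"{col}: t = {t:.2f}, p = {p:.3f}")

    # Chi-square test for categorical variables (e.g., sex, medial meniscal tear)
    for col in ["sex", "medial_meniscal_tear"]:
        table = pd.crosstab(df["hypertrophy"], df[col])
        chi2, p, dof, _ = stats.chi2_contingency(table)
        print(f"{col}: chi2 = {chi2:.2f}, p = {p:.3f}")
```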

Keywords: knee osteoarthritis, synovial hypertrophy, ultrasound, K-L grade

Procedia PDF Downloads 75
407 Applying Image Schemas and Cognitive Metaphors to Teaching/Learning Italian Preposition a in Foreign/Second Language Context

Authors: Andrea Fiorista

Abstract:

The learning of prepositions is a particularly problematic aspect of foreign language instruction, and Italian is certainly not an exception. In their prototypical function, prepositions express schematic relations between two entities in a highly abstract, typically image-schematic way. In other words, prepositions encode concepts such as directionality, the location of objects in space and time and, in Cognitive Linguistics terms, the position of a trajector with respect to a landmark. Learners with different native languages may conceptualize them differently, implying that they must recategorize (or create new categories) to fit the target language. However, most current Italian Foreign/Second Language handbooks and didactic grammars do not help learners carry out this task, as they tend to provide partial and idiosyncratic descriptions, leaving learners to memorize them, usually without success. In their prototypical meaning, prepositions specify precise topographical positions in the physical environment, which become less and less accurate as they radiate out from what might be termed a concrete prototype. Accordingly, the present study aims to elaborate a cognitive and conceptually well-grounded analysis of some extended uses of the Italian preposition a, in order to propose effective pedagogical solutions for the teaching/learning process. Image schemas, cognitive metaphors, and embodiment are efficient cognitive tools for such a task. While learning the purely spatial use of the preposition a (e.g. Sono a Roma = I am in Rome; vado a Roma = I am going to Rome,…) is quite straightforward, it is more complex when a appears in constructions such as verbs of motion + a + infinitive (e.g. Vado a studiare = I am going to study), the inchoative periphrasis (e.g. Tra poco mi metto a leggere = In a moment I will start reading), or the causative construction (e.g. Lui mi ha mandato a lavorare = He sent me to work). The study reports data from a Focus on Form teaching intervention in which a basic cognitive schema is used to help teachers explain, and students understand, the extended uses of a. The educational material translates Cognitive Linguistics' theoretical constructs, such as image schemas and cognitive metaphors, into simple images or proto-scenes that learners can easily grasp; illustrative material is expected to make metalinguistic content more accessible. Moreover, the concept of embodiment is pedagogically applied through activities involving motion and learners' bodily involvement. It is expected that replacing rote learning with a methodology that gives grammatical elements a proper meaning makes the learning process more effective in both the short and long term.

Keywords: cognitive approaches to language teaching, image schemas, embodiment, Italian as FL/SL

Procedia PDF Downloads 87
406 The Determinants of Co-Production for Value Co-Creation: Quadratic Effects

Authors: Li-Wei Wu, Chung-Yu Wang

Abstract:

Recently, interest has been generated in the search for a new reference framework for value creation that is centered on the co-creation process. Co-creation implies cooperative value creation between service firms and customers and requires the building of experiences as well as the resolution of problems through the combined effort of the parties in the relationship. For customers, value is always co-created through their participation in services, and customers ultimately determine the value of the service in use. This new approach emphasizes that a customer’s participation in the service process is indispensable to value co-creation. An important feature of service in the context of exchange is co-production, which implies that a certain amount of participation is needed from customers to co-produce a service and hence co-create value. Co-production no doubt helps customers better understand and take charge of their own roles in the service process. This study therefore proposes encouraging co-production, thereby facilitating value co-creation that benefits both customers and service firms. Four determinants of co-production are identified in this study, namely, commitment, trust, asset specificity, and decision-making uncertainty. Commitment is an essential dimension that directly results in successful cooperative behaviors. Trust helps establish a relational environment that is fundamental to cross-border cooperation. Asset specificity motivates co-production because this determinant may enhance the return on asset investment. Decision-making uncertainty prompts customers to collaborate with service firms in making decisions. In other words, customers adjust their roles and are increasingly engaged in co-production when commitment, trust, asset specificity, and decision-making uncertainty are enhanced. When these determinants are excessive, however, customers will not engage in the co-production process. In brief, we suggest that the relationships of commitment, trust, asset specificity, and decision-making uncertainty with co-production are curvilinear, taking an inverted U-shape. Although studies have examined the preceding effects, to the best of our knowledge none has empirically examined all of these curvilinear relationships simultaneously in a single study. These new forms of curvilinear relationships have not been identified in the existing literature on co-production; therefore, they complement extant linear approaches. Most importantly, we aim to consider both the bright and the dark sides of the determinants of co-production.
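As a rough illustration of how such an inverted U-shaped relationship can be tested, the sketch below fits a regression that includes a squared term; the variable names and simulated data are hypothetical and not the authors' analysis. A negative, significant coefficient on the quadratic term would be consistent with the proposed curvilinear effect.

```python
# Hypothetical sketch: testing an inverted U-shaped relationship between trust
# and co-production with a quadratic regression term (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
trust = rng.uniform(1, 7, size=300)                      # 7-point scale, simulated
co_production = 2 + 1.5 * trust - 0.15 * trust**2 + rng.normal(0, 0.5, 300)

X = sm.add_constant(np.column_stack([trust, trust**2]))  # intercept, linear, quadratic
model = sm.OLS(co_production, X).fit()
print(model.summary())
# An inverted U-shape is indicated when the coefficient on trust**2 is negative
# and statistically significant; the turning point lies at -b_linear / (2 * b_quadratic).
```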

Keywords: co-production, commitment, trust, asset specificity, decision-making uncertainty

Procedia PDF Downloads 188
405 Broadband Platinum Disulfide Based Saturable Absorber Used for Optical Fiber Mode Locking Lasers

Authors: Hui Long, Chun Yin Tang, Ping Kwong Cheng, Xin Yu Wang, Wayesh Qarony, Yuen Hong Tsang

Abstract:

Two-dimensional (2D) materials have attracted substantial research interest since the discovery of graphene. However, graphene's zero bandgap limits its nonlinear optical applications, e.g., saturable absorption, because these applications require strong light-matter interaction. The excellent optoelectronic properties of Group 10 transition metal dichalcogenide 2D materials such as PtS2, including a broadly tunable bandgap and high carrier mobility, introduce new degrees of freedom for optoelectronic applications. This work reports our recent findings on the saturable absorption of layered 2D PtS2 and its potential use as a saturable absorber (SA) for ultrafast mode-locked fiber lasers. The demonstration of mode-locking operation using the fabricated PtS2 SA will be discussed. The PtS2/PVA SA used in this experiment is made of few-layer PtS2 nanosheets fabricated via simple ultrasonic liquid-phase exfoliation. Operation at a wavelength of ~1 micron is demonstrated in a Yb-doped mode-locked fiber laser ring cavity using the PtS2 SA. The fabricated PtS2 saturable absorber exhibits strong nonlinearity and produces regular mode-locked laser pulses with a pulse-to-pulse separation matching the cavity round-trip time. The results confirm successful mode-locking operation with the fabricated PtS2 material, opening new opportunities for PtS2 materials in ultrafast laser generation. Acknowledgments: This work is financially supported by Shenzhen Science and Technology Innovation Commission (JCYJ20170303160136888) and the Research Grants Council of Hong Kong, China (GRF 152109/16E, PolyU code: B-Q52T).
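The statement that the pulse-to-pulse separation matches the cavity round-trip time can be illustrated with a short calculation; the cavity length and refractive index below are illustrative assumptions, not values reported in the paper.

```python
# Illustrative calculation (assumed values, not from the paper): in a mode-locked
# ring cavity the pulse-to-pulse separation equals the round-trip time T = n*L/c,
# so the fundamental repetition rate is f_rep = 1/T.
c = 2.998e8          # speed of light in vacuum, m/s
n = 1.45             # approximate effective refractive index of silica fiber
L = 10.0             # assumed total ring-cavity fiber length, m

round_trip_time = n * L / c          # seconds
repetition_rate = 1.0 / round_trip_time

print(f"Round-trip time: {round_trip_time * 1e9:.2f} ns")   # ~48 ns for this example
print(f"Repetition rate: {repetition_rate / 1e6:.2f} MHz")  # ~20.7 MHz for this example
```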

Keywords: platinum disulfide, PtS2, saturable absorption, saturable absorber, mode locking laser

Procedia PDF Downloads 188
404 Development of a Triangular Evaluation Protocol in a Multidisciplinary Design Process of an Ergometric Step

Authors: M. B. Ricardo De Oliveira, A. Borghi-Silva, E. Paravizo, F. Lizarelli, L. Di Thomazzo, D. Braatz

Abstract:

Prototypes are a critical feature in the product development process, as they help the project team visualize early concept flaws, communicate ideas and introduce initial product testing. Involving stakeholders, such as consumers and users, in prototype tests allows the gathering of valuable feedback, contributing to a better product and making the design process more participatory. Even though recent studies have shown that user evaluation of prototypes is valuable, few articles provide a method or protocol on how designers should conduct it. This multidisciplinary study (involving the areas of physiotherapy, engineering and computer science) aims to develop an evaluation protocol, using an ergometric step prototype as the product to be assessed. The protocol consisted of performing two tests (the 2 Minute Step Test and the Portability Test) to allow users (patients) and consumers (physiotherapists) to gain experience with the prototype. Furthermore, the protocol contained four Likert-scale questionnaires (one for users and three for consumers) that asked participants how they perceived the design characteristics of the product (performance, safety, materials, maintenance, portability, usability and ergonomics) during their use of the prototype. Additionally, the protocol indicated the need to conduct interviews with the product designers, in order to link their feedback to that from consumers and users. Both tests and interviews were recorded for further analysis. The participation criteria for the study were gender and age for patients, gender and experience with the 2 Minute Step Test for physiotherapists, and level of involvement in the product development project for designers. Questionnaire reliability was assessed using Cronbach's alpha, and the quantitative questionnaire data were analyzed using non-parametric hypothesis tests at a significance level of 0.05 (p < 0.05) and descriptive statistics. As a result, this study provides a concise evaluation protocol which can assist designers in their development process, collecting quantitative feedback from consumers and users, and qualitative feedback from designers.
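The two quantitative analyses named above can be sketched as follows; the data, group labels, and item counts are simulated for illustration and are not the study's data.

```python
# Minimal sketch (hypothetical data) of Cronbach's alpha for questionnaire
# reliability and a non-parametric (Mann-Whitney U) comparison between two groups.
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2D array, rows = respondents, columns = Likert items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
likert = rng.integers(1, 6, size=(20, 7)).astype(float)   # 20 respondents, 7 items (simulated)
print("Cronbach's alpha:", round(cronbach_alpha(likert), 3))

# Non-parametric comparison of ratings between two simulated participant groups
group_a = rng.integers(1, 6, size=15)
group_b = rng.integers(2, 6, size=15)
u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.3f}")
```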

Keywords: product design, product evaluation, prototypes, step

Procedia PDF Downloads 118
403 Collaborative Approaches in Achieving Sustainable Private-Public Transportation Services in Inner-City Areas: A Case of Durban Minibus Taxis

Authors: Lonna Mabandla, Godfrey Musvoto

Abstract:

Transportation is a catalytic feature in cities. Transport and land use activity are interdependent, with a feedback loop between how land is developed and how transportation systems are designed and used. This recursive relationship between land use and transportation is reflected in how public transportation routes within the inner city enhance accessibility, thereby creating spaces conducive to business activity, while business activity in turn informs public transportation routes. It is for this reason that the focus of this research is on public transportation within inner-city areas, where this dynamic is evident. Durban is the chosen case study, where the dominant form of public transportation within the central business district (CBD) is the minibus taxi. The paradox here is that minibus taxis still form part of the informal economy even though they are the leading form of public transportation in South Africa. There have been many attempts to formalise this industry and bring it under regulatory practice, but minibus taxis are privately owned, which complicates any proposed intervention. The argument of this study is that the application of collaborative planning through a sustainable partnership between the public and private sectors will improve the social and environmental sustainability of public transportation. One of the major challenges in such collaborative endeavors is power dynamics. As a result, a key focus of the study is on power relations. Ideally, power relations should be observed over an extended period, specifically when the different stakeholders engage with each other, to yield valid data. However, such a lengthy observation process was not feasible during the data collection phase of this research. Instead, interviews were conducted focusing on existing procedural planning practices between the inner-city minibus taxi association (South and North Beach Taxi Association), the eThekwini Transport Authority (ETA), and the eThekwini Town Planning Department. Conclusions and recommendations were then generated based on these data.

Keywords: collaborative planning, sustainability, public transport, minibus taxis

Procedia PDF Downloads 59
402 Correlation between Body Mass Dynamics and Weaning in Eurasian Lynx (Lynx lynx L, 1758)

Authors: A. S. Fetisova, M. N. Erofeeva, G. S. Alekseeva, K. A. Volobueva, M. D. Kim, S. V. Naidenko

Abstract:

Weaning is characterized by the transition from milk to solid food. In some species this dietary change is rapid, while in others it is gradual. The reasons why weaning begins are well understood: changes in milk composition and a decrease in maternal behavior push cubs to search for additional sources of nutrients. In nature, females have many opportunities to wean offspring when resources are scarce. In controlled conditions, by contrast, weaning may be delayed, and delayed weaning can lead to an overspending of maternal resources. In addition, the main causes of the end of weaning are less obvious. Toward the end of weaning, offspring behavior depends on many factors: the intensity of maternal behavior, the reduction in milk abundance, litter size, physiological status, and body mass. During the pre-weaning period, body mass dynamics are strongly connected with milk intake. Given this, could body mass be one of the signals for the end of milk feeding? It is known that some animals wean their offspring when juveniles have reached a certain proportion of adult body mass. We therefore put forward the hypothesis that a decrease in growth rate delays weaning in Eurasian lynx (Lynx lynx). To explore the hypothesis, we compared body mass dynamics with the duration of suckling. First, to obtain information on suckling duration, we visually observed 8 lynx litters from 30 to 120 days postpartum. During each 4-hour observation, we recorded the start and end of suckling bouts and then calculated the total duration of this behavior. To track body mass dynamics, kittens were weighed once a week. Suckling duration varied from 3076.19 ± 1408.60 to 422.54 ± 285.38 seconds, while body mass gain changed from 247.35 ± 26.49 to 289.41 ± 122.35 grams. A Kendall tau correlation test (N = 96; p < 0.05) showed a negative correlation (τ = -0.36) between suckling duration and the body mass of lynx kittens. In general, suckling duration increased in response to a decrease in body mass gain, with a slight delay. In early weaning, from 30 to 58 days, suckling duration decreased gradually, as did body mass gain. During the weaning period, the negative correlation between suckling time and body mass became tighter. Although solid food consumption came to prevail over milk intake during weaning, the correlation persisted until the end of weaning (90-105 days) and beyond. Thus, weaning in Eurasian lynx is not a part of ontogenesis controlled only by maternal behavior; it appears to be a flexible process influenced by various factors, including changes in growth rate. Further investigation is needed to determine the critical body mass that marks a safe moment to stop milk feeding. Understanding such details of ontogeny is important for organizing ex situ breeding programs for mammals and for the conservation of endangered species.
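The correlation analysis reported above can be sketched as follows; the simulated values are purely illustrative and are not the study's measurements.

```python
# Hypothetical sketch of the Kendall tau correlation between suckling duration
# and body mass gain described in the abstract (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
body_mass_gain = rng.normal(270, 60, size=96)                            # grams per week, simulated
suckling_duration = 2000 - 4 * body_mass_gain + rng.normal(0, 300, 96)   # seconds, simulated

tau, p_value = stats.kendalltau(suckling_duration, body_mass_gain)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.4f}")
# A negative tau, as in the study (tau = -0.36), indicates that suckling time
# tends to increase when body mass gain decreases.
```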

Keywords: body mass, lynx, milk feeding, weaning

Procedia PDF Downloads 18